Golden Case CLAPNQ
Goal
Benchmark our retrieval system against a publicly available dataset with well-structured reference texts, evaluating the retrieval component of our RAG system in isolation.

Data
A subset of the CLAPNQ dataset: 104 questions and 724 chunks. Only documents with at most 10 chunks were selected, and a random sample was drawn from them; the sampled documents contain between 1 and 10 chunks each. The chunk containing the answer to each question is always included.

Method/Approach
Retrieval was evaluated with three vector embedding models (see the retrieval sketch below):
• sBERT
• OpenAI text-embedding-3-small
• OpenAI text-embedding-3-large

Results
The best nDCG was 79%, reached at k=5 and k=6, while CG peaked at a very high 96% at k=6. This gap is expected, since nDCG is the stricter measure of quality. No large differences were observed between the sBERT and OpenAI embedding models on this dataset.

Evaluation Metrics
Ranking quality: nDCG (normalized Discounted Cumulative Gain), which compares the retrieved ranking against the ideal ranking and rewards relevant chunks that appear earlier.
Retrieval quality: CG (Cumulative Gain), which measures the total relevance of the retrieved chunks without penalizing lower-ranked results.
(A minimal metric sketch follows the conclusions below.)

Conclusions
The experiment produced the expected results. Cumulative Gain reached a high of 96% at k=6, showing strong overall retrieval performance across all models. Although OpenAI's embeddings (both small and large) showed marginally better ranking quality than sBERT, the difference was not large enough to justify switching from our current sBERT model.
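Retrieval sketch
A minimal illustration of the retrieval step, assuming the sBERT embedder is served via sentence-transformers and chunks are ranked by cosine similarity. The model name and the toy data (`chunks`, `questions`) are placeholders, not the exact production pipeline; the OpenAI models would be swapped in at the embedding step.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in for the sBERT embedder; replace with the OpenAI embedding call
# (text-embedding-3-small / -large) for the other two runs.
model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = ["chunk text 1", "chunk text 2", "chunk text 3"]  # the 724 CLAPNQ chunks
questions = ["example question?"]                          # the 104 questions

# Normalized embeddings so that a dot product equals cosine similarity.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)
question_vecs = model.encode(questions, normalize_embeddings=True)

def retrieve_top_k(q_vec, chunk_vecs, k=6):
    """Rank all chunks by cosine similarity and return the indices of the top k."""
    scores = chunk_vecs @ q_vec
    return np.argsort(-scores)[:k]

top_k = retrieve_top_k(question_vecs[0], chunk_vecs, k=6)
print(top_k)  # indices of the 6 highest-scoring chunks for the first question
```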
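Metric sketch
How CG@k and nDCG@k can be computed for a single question, assuming binary relevance with exactly one gold chunk per question (as the data description implies); the reported 79% and 96% figures would then be averages of these per-question scores, presumably over all 104 questions.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def cg_at_k(relevances, k):
    """Cumulative gain: total relevance within the top k, ignoring rank order."""
    return sum(relevances[:k])

# Example with binary relevance: the single gold chunk is retrieved at rank 3.
ranked_relevance = [0, 0, 1, 0, 0, 0]
print(cg_at_k(ranked_relevance, 6))    # 1.0 -> the gold chunk was retrieved
print(ndcg_at_k(ranked_relevance, 6))  # 0.5 -> penalized for ranking it third
```

This illustrates why CG reads higher than nDCG in the results: CG only asks whether the gold chunk is among the top k, while nDCG additionally discounts it for appearing lower in the ranking.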