Golden Case CLAPNQ

Goal

A benchmark for our retrieval system: evaluate the retrieval part of our RAG system against a publicly available dataset with ideally structured texts.

Data

A subset of the CLAPNQ dataset: 104 questions and 724 chunks. Only documents with fewer than 10 chunks were selected, and a random sample was drawn from them, so each document contains between 1 and 10 chunks. The chunk containing the answer to the question is always included.

Method/Approach

Retrieval was evaluated for three different vector embedding models (see the sketch below):
• sBERT
• OpenAI text-embedding-3-small
• OpenAI text-embedding-3-large

Results

The best performance measured by nDCG was 79% at k=5 and k=6, while CG reached a very high 96% at k=6. These results were as expected, since nDCG is the stricter measure of quality. No large differences were observed between the sBERT and OpenAI embedding models on this dataset.

Evaluation Metrics

Ranking quality: nDCG (normalized Discounted Cumulative Gain), which measures how close the retrieved ranking comes to the ideal ordering of relevant chunks.
Retrieval quality: CG (Cumulative Gain), which measures total relevance without penalizing lower-ranked results. Both metrics are sketched below.

Conclusions

The experiment produced the expected results. Cumulative Gain (CG) reached a high of 96% at k=6, showing strong overall retrieval performance across all models. Although OpenAI's embeddings (both small and large) showed marginally better ranking quality than sBERT, the difference was not significant enough to justify switching from our current sBERT model.
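For illustration, the retrieval step described under Method/Approach amounts to embedding the question and the candidate chunks and ranking the chunks by cosine similarity. The sketch below uses sBERT via the sentence-transformers library; the checkpoint name (all-MiniLM-L6-v2) and the helper rank_chunks are assumptions for the example, not the project's actual pipeline, and the OpenAI models were scored the same way via their respective embedding endpoints.

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: any sBERT-family model is used the same way; this checkpoint
# is only an example, not necessarily the one used in the experiment.
model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_chunks(question: str, chunks: list[str], k: int = 6):
    """Return (chunk index, score) pairs for the top-k chunks of one document,
    ranked by cosine similarity to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    c_embs = model.encode(chunks, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, c_embs)[0]      # similarity of the question to every chunk
    top = scores.topk(k=min(k, len(chunks)))
    return [(int(i), float(s)) for s, i in zip(top.values, top.indices)]

# Usage (illustrative): rank the chunks of one CLAPNQ document for one question.
# ranked = rank_chunks("When was the treaty signed?", document_chunks, k=6)
```

nDCG@k and CG@k are then computed per question from the ranks of the answer-bearing chunk(s) in this list and averaged over the 104 questions.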

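The two metrics themselves are compact. The snippet below is a minimal sketch assuming binary relevance (1 if a retrieved chunk contains the answer, 0 otherwise); the function names are illustrative and not the evaluation code used in the experiment.

```python
import numpy as np

def cg_at_k(relevances: list[int], k: int) -> float:
    """Cumulative Gain: total relevance of the top-k results, ignoring rank order."""
    return float(np.sum(relevances[:k]))

def dcg_at_k(relevances: list[int], k: int) -> float:
    """Discounted Cumulative Gain: relevance discounted by log2 of the rank position."""
    rels = np.asarray(relevances[:k], dtype=float)
    discounts = np.log2(np.arange(2, rels.size + 2))  # ranks 1..k -> log2(2)..log2(k+1)
    return float(np.sum(rels / discounts))

def ndcg_at_k(relevances: list[int], k: int) -> float:
    """Normalized DCG: DCG divided by the DCG of the ideal (relevance-sorted) ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: answer-bearing chunks are retrieved at ranks 2 and 5 out of 6.
retrieved = [0, 1, 0, 0, 1, 0]
print(cg_at_k(retrieved, 6))    # 2.0
print(ndcg_at_k(retrieved, 6))  # ~0.62
```

This also explains the gap in the results: CG only asks whether the answer chunk was retrieved at all, while nDCG additionally penalizes answer chunks that are retrieved but ranked low, which is why it peaks at 79% rather than 96%.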
