EduPLEx_API
Recommendation, reporting & analytics
  • Experiments report
    • Key concepts
    • Data sources
    • First demonstrator: ESCO ontologies and semantic matching
    • Software design
      • Endpoints Sbert_eduplex
      • Setup Sbert_eduplex
    • AI Applications
    • Conclusions
    • Recommendation
    • Bibliography
  • Recommendation Engine
  • Reporting and predictive analytics
  • LRS User Journey Visualizer
  • AI Tutor - RAG system
    • LLM-augmented Retrieval and Ranking for Course Recommendations
    • Retrieval of course candidates when searching via title.
    • Answer Generation Evaluation
    • Chunk Size and Retrieval Evaluation
    • Chunking Techniques – Splitters
    • Golden Case CLAPNQ
    • Comparative Retrieval Performance: Modules vs Golden Case
    • LLM-based Evaluator for Context Relevance
    • Retrieval Performance: Indexing pdf vs xapi, and Keywords vs Questions

Retrieval Performance: Indexing pdf vs xapi, and Keywords vs Questions

Goal

Compare retrieval performance across two indexing methods (pdf and xapi) and two query methods (keywords vs questions).

Data

Future Skills Module with 20 test questions. Keywords were generated from the questions using GPT-4 (gpt-4-1106-preview). Documents were indexed in OpenSearch from both the pdf and the xapi structure.

Method/Approach

Retrieval was evaluated using the two indexing formats in OpenSearch (pdf and xapi) and the two query methods (keywords vs full questions). Vectorization was done with SBERT. Relevance was evaluated using GPT-4 as a relevance grader (prompt based on the trulens-eval library) to score the similarity between each query and its retrieved chunks.

Results

Average relevance score for the xapi index: 0.45; for the pdf index: 0.30. Full questions as queries achieved a higher average relevance score (0.47) than keywords (0.34).

Evaluation Metrics

Mean Context Relevance Score (LLM-based): a score from 0 to 1, averaged over 2 runs for each query.

Conclusions

Indexing based on the xapi structure produced better retrieval relevance scores than pdf indexing. Using full questions as queries provides more accurate retrieval than keywords, so question-based retrieval is preferable for higher context relevance.
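The Mean Context Relevance aggregation described above (per-query scores averaged over 2 grader runs, then averaged across queries) can be sketched as follows. The score values and the function name are illustrative assumptions, not the actual experiment data or code:

```python
from statistics import mean

def mean_context_relevance(per_query_runs):
    """Average each query's relevance scores over its grader runs,
    then average across queries. Scores are in [0, 1]."""
    return mean(mean(run_scores) for run_scores in per_query_runs)

# Hypothetical per-query scores: one inner list per query,
# holding that query's scores from the 2 grader runs.
conditions = {
    ("xapi", "question"): [[0.5, 0.6], [0.4, 0.5]],
    ("pdf", "question"):  [[0.3, 0.4], [0.2, 0.3]],
}

for (index_type, query_type), scores in conditions.items():
    print(index_type, query_type, round(mean_context_relevance(scores), 3))
```

In the actual pipeline, the inner scores would come from the GPT-4 relevance grader applied to each (query, retrieved chunk) pair, with retrieval done against the OpenSearch index after SBERT vectorization.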

