GraphRAG methods to create optimized LLM context windows for Retrieval — Jonathan Larson, Microsoft
AI Engineer · 2025-06-27 09:48
Graph RAG Applications & Performance

- Graph RAG is a key enabler for building effective AI applications, especially when paired with agents [1]
- Graph RAG excels at semantic understanding and can answer global queries over an entire code repository [2][3]
- Graph RAG can drive code translation from Python to Rust, outperforming direct LLM translation [4][9]
- Graph RAG can be applied to large codebases such as Doom (100,000 lines of code across 231 files) for documentation and feature development [10][12][13]
- Graph RAG, combined with the GitHub Copilot coding agent, enables complex multi-file modifications, such as adding a jump capability to Doom [18][20]

Benchmark QED & Lazy Graph RAG

- Benchmark QED is a new open-source tool for measuring and evaluating Graph RAG systems, focusing on local and global quality metrics [21][22]
- Benchmark QED includes AutoQ (query generation), AutoE (evaluation using an LLM as a judge), and AutoD (dataset summarization and sampling) [22]
- Lazy Graph RAG dominates vector RAG on data-local questions, winning 92%, 90%, and 91% of the time against 8K, 120K, and 1-million-token context windows respectively [29][30]
- Lazy Graph RAG achieves this performance at roughly a tenth of the cost of a 1-million-token context window [32]
- Lazy Graph RAG is being incorporated into Azure AI and the Microsoft Discovery Platform [34]
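The AutoE component described above scores systems with an LLM-as-a-judge making pairwise comparisons, and the win rates quoted for Lazy Graph RAG come from exactly this kind of head-to-head tally. The following is a minimal sketch of that evaluation loop, not Benchmark QED's actual API: the `judge` function is a hypothetical stand-in (a real setup would prompt an LLM with both answers, typically in both orders to control for position bias), here replaced by a deterministic placeholder so the sketch runs without model access.

```python
def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Hypothetical LLM-as-judge stub: returns 'A', 'B', or 'tie'.

    A real AutoE-style judge would prompt an LLM with the question and
    both candidate answers and parse its verdict; this placeholder
    heuristic simply prefers the longer answer so the loop is runnable.
    """
    if len(answer_a) > len(answer_b):
        return "A"
    if len(answer_b) > len(answer_a):
        return "B"
    return "tie"


def win_rate(questions, system_a, system_b) -> float:
    """Fraction of questions won by system A (ties count as half a win)."""
    score = 0.0
    for q in questions:
        verdict = judge(q, system_a(q), system_b(q))
        if verdict == "A":
            score += 1.0
        elif verdict == "tie":
            score += 0.5
    return score / len(questions)


# Demo with two stub "systems": A pads its answers, so the length-based
# judge awards it every comparison.
questions = ["What are the main themes?", "Who is affected most?"]
rate = win_rate(questions, lambda q: q + " (detailed answer)", lambda q: q)
```

A claim like "wins 92% of the time against an 8K context window" corresponds to `rate == 0.92` over the generated query set; AutoQ's role is to produce that query set systematically rather than by hand.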