Workflow
Avi Chawla
X @Avi Chawla
Avi Chawla· 2025-11-14 12:37
Industry Trends
- The industry is converging on three open protocols that work across all frameworks for solving complex tasks [1]
- The focus is shifting from selecting the "best" framework to adopting protocols that provide interoperability [1]

Frameworks & Tools
- LangGraph, CrewAI, and Agno are named as the relevant frameworks in this space [1]
Avi Chawla· 2025-11-14 07:06
Free PDF on Agent Protocol Landscape: https://t.co/GqGCJZduQk ...
Avi Chawla· 2025-11-14 07:06
Agent Protocol Landscape
- The industry is converging on three open protocols for agent interoperability: AG-UI (Agent-User Interaction), MCP (Model Context Protocol), and A2A (Agent-to-Agent) [1][2]
- These protocols are complementary layers of one stack, not competing standards, giving agents a universal language [2]
- The protocols let frameworks like LangGraph, CrewAI, and Agno plug into the same frontend without rewriting UI logic [3]

Protocol Functionality
- AG-UI provides a bidirectional connection between agentic backends and frontends, enabling interactive agents within applications [1][2]
- MCP standardizes how agents connect to tools, data, and workflows [2]
- A2A handles multi-agent coordination, enabling task delegation and intent sharing across systems [2][5]

Framework Integration
- CopilotKit unifies the entire protocol stack into one framework, providing generative UI support and production-ready infrastructure [3][4]
- Example workflow: a LangGraph agent pulls data via MCP, delegates analysis to a CrewAI agent via A2A, and streams results to a React app via AG-UI [6]

Development Focus
- Protocols allow developers to focus on building agent capabilities instead of integration mechanics, since interoperability is handled by the protocol layer [3]
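The example workflow above (MCP for data access, A2A for delegation, AG-UI for streaming to the frontend) can be sketched with toy Python stubs. Every class and method name here (`MCPTool`, `A2APeer`, `AGUIChannel`) is a hypothetical illustration of each layer's responsibility, not a real SDK API; the actual protocols each have their own specifications and client libraries.

```python
# Conceptual sketch of the three-layer protocol stack.
# All names below are hypothetical stand-ins, not real protocol SDKs.

class MCPTool:
    """MCP layer: standardized agent-to-tool/data connection."""
    def fetch(self, resource: str) -> str:
        return f"data from {resource}"

class A2APeer:
    """A2A layer: task delegation between agents."""
    def delegate(self, task: str, payload: str) -> str:
        return f"analysis of ({payload}) for task '{task}'"

class AGUIChannel:
    """AG-UI layer: bidirectional agent <-> frontend event stream."""
    def __init__(self):
        self.events = []
    def stream(self, event: str) -> None:
        self.events.append(event)

def run_workflow() -> list[str]:
    tool, peer, ui = MCPTool(), A2APeer(), AGUIChannel()
    data = tool.fetch("sales_db")            # agent pulls data via MCP
    result = peer.delegate("analyze", data)  # delegates analysis via A2A
    ui.stream(result)                        # streams the result via AG-UI
    return ui.events
```

The point of the sketch is the separation of concerns: each layer can be swapped (a different backend framework, a different peer agent, a different frontend) without touching the other two.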
Avi Chawla· 2025-11-13 19:16
RAG Challenges & HyDE Solution
- Traditional RAG struggles because questions are not semantically similar to their answers, so retrieval can surface irrelevant context [1]
- HyDE addresses this by generating a hypothetical answer with an LLM, embedding it, and using that embedding to retrieve relevant context [2]
- HyDE leverages contriever models trained with contrastive learning to filter out hallucinated details in the hypothetical answer [3]

HyDE Performance & Trade-offs
- Studies indicate HyDE improves retrieval performance compared to traditional embedding models [4]
- The improvement comes at the cost of increased latency and higher LLM usage [4]

HyDE Implementation
- An LLM generates a hypothetical answer (H) for the query (Q) [2]
- The hypothetical answer is embedded using a contriever model to obtain embedding (E) [2]
- Embedding (E) is used to query the vector database and retrieve relevant context (C) [2]
- The hypothetical answer (H), retrieved context (C), and query (Q) are passed to the LLM to produce a final answer [3]
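The Q → H → E → C pipeline above can be sketched as a minimal, self-contained Python example. The LLM and the embedder here are toy stubs (a canned answer and a bag-of-words vector); in a real HyDE setup you would call an actual LLM and a contriever-style embedding model. All function names and the vocabulary are illustrative assumptions, not a library API.

```python
# Minimal HyDE sketch: generate a hypothetical answer, embed it,
# and retrieve the most similar document from a toy corpus.
import math

def generate_hypothetical_answer(query: str) -> str:
    """Stub LLM: returns a canned hypothetical answer (H) for the query (Q)."""
    return "Machine learning is a field where models learn patterns from data."

def embed(text: str) -> list[float]:
    """Stub embedder: toy bag-of-words counts over a fixed vocabulary."""
    vocab = ["machine", "learning", "models", "data", "patterns", "field"]
    words = text.lower().replace(".", "").replace("?", "").split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    hypothetical = generate_hypothetical_answer(query)  # Q -> H
    h_emb = embed(hypothetical)                         # H -> E
    ranked = sorted(corpus, key=lambda d: cosine(embed(d), h_emb), reverse=True)
    return ranked[:k]                                   # E -> C

corpus = [
    "Machine learning models learn patterns from data.",
    "Paris is the capital of France.",
]
context = hyde_retrieve("What is ML?", corpus)
# The retrieved context (C), together with H and Q, would then be passed
# back to the LLM to produce the final answer.
```

Note how the ranking is driven by the embedding of the hypothetical *answer*, not the question, which is the core HyDE idea: answer-shaped text lands closer to answer-shaped documents in embedding space.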
Avi Chawla· 2025-11-13 13:03
If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs. https://t.co/DzDoKaIVcZ
Avi Chawla (@_avichawla): Traditional RAG vs. HyDE, visually explained! RAG is great, but it has a major problem: questions are not semantically similar to their answers. Consider an example where you want to find context similar to "What is ML?" It is likely that "What is AI?" will appear more https://t.co/oZ7lttsZbG ...
Avi Chawla· 2025-11-13 06:31
HyDE hands-on guide: https://t.co/iVF79aAAnQ ...
Avi Chawla· 2025-11-13 06:31
RAG Challenges & HyDE Solution
- Traditional RAG struggles because questions are not semantically similar to their answers, leading to irrelevant context retrieval [1]
- HyDE addresses this by generating a hypothetical answer to the query and embedding it to retrieve relevant context [2]
- HyDE leverages contriever models trained with contrastive learning to filter out hallucinated details in the hypothetical answer [3]

HyDE Performance & Trade-offs
- Studies indicate HyDE improves retrieval performance compared to traditional embedding models [4]
- The improvement comes at the cost of increased latency and higher LLM usage [4]

HyDE Implementation
- HyDE uses an LLM to generate a hypothetical answer, embeds that answer with a contriever model, queries the vector database with the embedding, and passes the hypothetical answer, retrieved context, and query to the LLM for the final answer [2]
Avi Chawla· 2025-11-12 20:08
Karpathy said: "Agents don't have continual learning." Finally, someone's fixing this limitation in Agents. Composio provides the entire infra that acts as a "skill layer" for Agents, helping them evolve with experience like humans. Learn why it matters for your Agents below: https://t.co/1Qv9Dx6ewt
Avi Chawla (@_avichawla): First tools, then memory... and now there's another key layer for Agents. Karpathy talked about it in his recent podcast. Tools help Agents connect to the external world, and memory helps them ...
Avi Chawla· 2025-11-12 11:57
Agent Key Layers
- Tools help Agents connect to the external world [1]
- Memory helps Agents remember [1]
- Agents still can't learn from experience [1]

Learning Gap
- Karpathy highlighted this gap, Agents' inability to learn from experience, in his recent podcast [1]
Avi Chawla· 2025-11-12 06:31
GitHub repo: https://t.co/r9Y8dKjtaX (don't forget to star it ⭐) ...