LLMs

X @Avi Chawla
Avi Chawla· 2025-07-08 06:34
That's a wrap! If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs. Avi Chawla (@_avichawla): How LLMs work, clearly explained (with visuals): ...
X @Ansem
Ansem 🧸💸· 2025-07-06 12:35
RT sankalp (@dejavucoder) Fixing three things that I think will make learning with LLMs a much better and easier-to-trust experience: 1. hallucination in long-context scenarios; 2. reducing the agreeableness so the LLM can call out my bullshit and other models' bullshit (correlated with hallucination); 3. better intent detection, where the LLM asks follow-up questions if the intent was not clear or if it just wants to understand the preferences of the user ...
X @Avi Chawla
Avi Chawla· 2025-07-04 06:47
6 no-code LLM, Agent, and RAG builder tools for AI engineers (open-source and production-grade) ...
Context Engineering for Agents
LangChain· 2025-07-02 15:54
Context Engineering Overview
- Context engineering is defined as the art and science of filling the context window with the right information at each step of an agent's trajectory [2][4]
- The industry categorizes context engineering strategies into writing context, selecting context, compressing context, and isolating context [2][12]
- Context engineering is critical for building agents because they typically handle longer contexts [10]
Context Writing and Selection
- Writing context involves saving information outside the context window, such as using scratchpads for note-taking or memory for retaining information across sessions [13][16][17]
- Selecting context means pulling relevant context into the context window, including instructions, facts, and tools [12][19][20]
- Retrieval-augmented generation (RAG) is used to augment the knowledge base of LLMs, with code agents being a large-scale application [27]
Context Compression and Isolation
- Compressing context involves retaining only the most relevant tokens, often through summarization or trimming [12][30]
- Isolating context involves splitting up context to help an agent perform a task, with multi-agent systems being a primary example [12][35]
- Sandboxing can isolate token-heavy objects from the LLM context window [39]
LangGraph Support for Context Engineering
- LangGraph, a low-level orchestration framework, supports context engineering through features like state objects for scratchpads and built-in long-term memory [44][45][48]
- LangGraph facilitates context selection from state or long-term memory and offers utilities for summarizing and trimming message history [50][53]
- LangGraph supports context isolation through multi-agent implementations and integration with sandboxes [55][56]
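To make the four strategies concrete, here is a minimal, framework-agnostic Python sketch of the write / select / compress / isolate pattern described above. The Scratchpad class, the character budget in compress(), and the isolated_subagent() stub are illustrative assumptions for this digest, not LangGraph's actual API.

```python
# Sketch of the four context-engineering moves: write, select, compress, isolate.
# All names here are hypothetical; a real agent would back these with LangGraph
# state, long-term memory, and sandbox integrations.
from dataclasses import dataclass, field


@dataclass
class Scratchpad:
    """'Write' context: notes kept outside the model's context window."""
    notes: list[str] = field(default_factory=list)

    def write(self, note: str) -> None:
        self.notes.append(note)

    def select(self, query: str, k: int = 3) -> list[str]:
        """'Select' context: pull back only the notes most relevant to the query."""
        scored = sorted(self.notes, key=lambda n: -sum(w in n for w in query.split()))
        return scored[:k]


def compress(messages: list[str], max_chars: int = 2000) -> list[str]:
    """'Compress' context: keep the most recent turns within a fixed budget."""
    kept, total = [], 0
    for msg in reversed(messages):
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))


def isolated_subagent(task: str, context: list[str]) -> str:
    """'Isolate' context: a sub-agent sees only the slice of context it needs."""
    prompt = "\n".join(context + [f"Task: {task}"])
    return f"<LLM call with {len(prompt)} chars of isolated context>"


if __name__ == "__main__":
    pad = Scratchpad()
    pad.write("User prefers concise answers.")
    pad.write("API key lives in the KEYS env var.")
    history = [f"turn {i}: ..." for i in range(50)]
    selected = pad.select("answer style preferences")
    print(isolated_subagent("draft a reply", compress(history) + selected))
```

In a LangGraph agent, the scratchpad would typically live in the graph's state object, and compression would run as a summarization or trimming step before each model call, but the division of labor is the same.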
X @mert | helius.dev
mert | helius.dev· 2025-06-30 20:11
RT Helius (@heliuslabs) Our docs are now enhanced with AI, thanks to @mintlify. You can ask questions around Solana/Helius RPCs, APIs, and streaming to a smart AI model. You can also make & test calls interactively and even copy/download the pages for your LLMs. And yes, we have llms.txt files! ...
The Future Of Education In An AI-First World
ARK Invest· 2025-06-30 16:58
Industry Shift
- AI and LLMs now command the world's knowledge, and the education industry faces a major transformation [1]
- The role of schools and universities will shift from delivering information to creating spaces for human connection and emotional growth [1]
AI Disruption
- AI agents are disrupting education [1]
X @The Economist
The Economist· 2025-06-29 16:37
Influencing LLMs means influencing their sources. That involves building a loyal online following. To the relief of the big agency holding companies, it also means an enduring role for old-school PR https://t.co/giXPqyi4bG ...