Workflow
Context Engineering
X @Avi Chawla
Avi Chawla· 2025-11-25 06:30
To summarise, I'll leave you with the context engineering graphic. Also, here's an open-source stack for context engineering:
- Memory: @zep_ai
- Knowledge base: @milvusio
- Agent orchestration: @crewAIInc
- Observability & tracing: @deepeval
https://t.co/qseVsjlXBr ...
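As a rough illustration of how those four roles fit together, here is a minimal, hypothetical sketch; the classes below are stand-ins for the memory, knowledge-base, orchestration, and evaluation layers, not the real Zep, Milvus, CrewAI, or DeepEval APIs.

```python
# Hypothetical stand-ins for the stack above; the real zep_ai, pymilvus,
# crewai, and deepeval SDKs each have their own client APIs.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:                      # role played by Zep: conversational memory
    turns: list = field(default_factory=list)
    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))
    def recall(self, k: int = 3) -> list:
        return self.turns[-k:]

@dataclass
class KnowledgeBase:                    # role played by Milvus: retrieval over documents
    docs: list = field(default_factory=list)
    def search(self, query: str, k: int = 2) -> list:
        # placeholder keyword match; a real knowledge base would do vector search
        words = query.lower().split()
        return [d for d in self.docs if any(w in d.lower() for w in words)][:k]

def orchestrate(task: str, memory: MemoryStore, kb: KnowledgeBase, llm) -> str:
    # role played by CrewAI: assemble context and hand it to the model
    context = {"recent_turns": memory.recall(), "retrieved": kb.search(task)}
    return llm(task, context)

def trace(task: str, answer: str) -> None:
    # role played by DeepEval: log and score each run (placeholder metric)
    print(f"task={task!r} answer_chars={len(answer)}")

# usage with a dummy LLM callable
llm = lambda task, ctx: f"Answer to '{task}' grounded in {len(ctx['retrieved'])} retrieved docs"
mem, kb = MemoryStore(), KnowledgeBase(docs=["Context engineering basics", "Vector databases 101"])
answer = orchestrate("explain context engineering", mem, kb, llm)
trace("explain context engineering", answer)
mem.add("explain context engineering", answer)
```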
X @Avi Chawla
Avi Chawla· 2025-11-25 06:30
Context engineering, clearly explained (with visuals): (an illustrated guide below) https://t.co/am97ee4vTA ...
The Unbearable Lightness of Agent Optimization — Alberto Romero, Jointly
AI Engineer· 2025-11-24 20:16
Right. Hello everyone. Today I will present meta adaptive context engineering, or Meta-AC for short, which is a new framework designed to optimize AI agents beyond single-dimension approaches. We will explore how orchestrating multiple adaptation strategies can overcome the limitations of existing context engineering methods. Now, a little introduction about myself: I'm Alberto Romero, co-founder and CEO at Jointly. For context, at Jointly we build domain-specialized agents for regulated in ...
Context Engineering: Connecting the Dots with Graphs — Stephen Chin, Neo4j
AI Engineer· 2025-11-24 20:16
Context Engineering & AI
- Context engineering is evolving from simple prompt engineering to a dynamic approach that feeds AI with wider context for better results [3]
- Context engineering enables selective curation of information relevant to specific domains, which is especially important in enterprise environments [4]
- Structuring input in context engineering improves signal over noise, addressing a major problem with current AI models [5]
- Memory, both short-term and long-term, is crucial for AI, enabling collaboration, remembering conversation history, and effective long-term operation [10][11][12]

Knowledge Graphs & Graph RAG
- Knowledge graphs provide structured information that complements AI's ability to create and pull from different sources [17]
- Graph RAG, which uses graphs as part of the retrieval process, returns more relevant results than vector similarity search alone by incorporating relationships, nodes, and community groupings (see the retrieval sketch after this summary) [22][23]
- Graph RAG enables explainable AI and allows role-based access control, ensuring that only authorized individuals can access specific information [25]

Neo4j Solutions & Resources
- Neo4j offers a knowledge graph builder, a web application that lets users upload files and generate knowledge graphs [28]
- Neo4j's MCP server is an open-source extension that enables querying knowledge graphs using Cypher, a graph query language [46]
- Neo4j provides resources such as GraphAcademy (free learning resources) and NODES (a virtual conference) for learning about graph technology and AI applications [53][54]
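A minimal Graph RAG retrieval sketch in Python using the official neo4j driver; the vector index name, node labels, and relationship types below are assumptions for illustration, not the schema from the talk.

```python
# Minimal Graph RAG sketch with the official neo4j Python driver (v5+).
# Index name, labels, and relationship types are assumed, not the talk's schema.
from neo4j import GraphDatabase

URI, AUTH = "neo4j://localhost:7687", ("neo4j", "password")

CYPHER = """
CALL db.index.vector.queryNodes('chunk_embeddings', $k, $embedding)
YIELD node AS chunk, score
MATCH (chunk)-[:MENTIONS]->(e:Entity)
OPTIONAL MATCH (e)-[:RELATED_TO]->(neighbor:Entity)
RETURN chunk.text AS text, score,
       collect(DISTINCT e.name) + collect(DISTINCT neighbor.name) AS entities
ORDER BY score DESC
"""

def graph_rag_retrieve(query_embedding: list[float], k: int = 5) -> list[dict]:
    """Vector search for seed chunks, then expand through graph relationships."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(
            CYPHER, k=k, embedding=query_embedding, database_="neo4j"
        )
        # each row carries the matched chunk plus its graph neighbourhood,
        # which is what makes the retrieval explainable
        return [r.data() for r in records]
```

The same Cypher could also be issued through Neo4j's MCP server, which exposes graph queries to an agent as a tool.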
拆解Gemini 3:Scaling Law的极致执行与“全模态”的威力
3 6 Ke· 2025-11-24 03:55
Core Insights
- Google's Gemini 3 has transformed the AI landscape in Silicon Valley, positioning the company as a leader rather than a follower in the AI race against OpenAI and Anthropic [1][3]
- Gemini 3 is recognized for its significant advancements in multimodal capabilities and is seen as a prime example of executing Scaling Law effectively [1][3]

Performance Evaluation
- Within 48 hours of its release, Gemini 3 topped various performance rankings, showcasing its true multimodal native model capabilities [4][6]
- Users reported that Gemini 3 provides a more integrated development experience, particularly with tools like Google AntiGravity, which enhances coding efficiency by allowing simultaneous visual and coding tasks [6][7]

Technical Innovations
- The model achieved a notable improvement in few-shot learning, reaching over 30% on the ARC-AGI-2 benchmark, indicating a qualitative leap in its reasoning capabilities [10][11]
- Gemini 3 employs a tree-based thought process and self-rewarding mechanisms, allowing it to explore multiple reasoning paths simultaneously (a toy sketch follows this summary) [19][20]

Developer Ecosystem
- The release of Gemini 3 and AntiGravity has led to discussions about the end of the coding-tool competition, as Google's ecosystem may create significant barriers for startups like Cursor [22][23]
- Despite AntiGravity's strong capabilities, it still faces challenges in backend deployment and complex system architecture, suggesting that independent developers may still find opportunities in niche areas [25][26]

Future Trends in AI
- The focus is shifting towards new AI paradigms beyond LLMs, with emerging labs like NeoLab attracting significant venture capital [27][28]
- There is growing interest in developing world models that understand physical laws, indicating a potential shift in AI research directions [31][32]

Conclusion
- The launch of Gemini 3 serves as a robust counter to the "AI bubble" narrative, demonstrating that with sufficient computational power and engineering optimization, Scaling Law remains a viable path for AI advancement [32][33]
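The article gives no implementation details, but a toy sketch of what tree-style reasoning with a self-reward score could look like is below; `propose` and `score` are hypothetical stand-ins for model calls, not Gemini 3's actual mechanism.

```python
# Toy beam search over reasoning paths with a self-assigned reward.
# `propose` and `score` are placeholders for model calls; this illustrates the
# idea only and is not Gemini 3's implementation.
import heapq

def propose(state: str, n: int = 3) -> list[str]:
    # hypothetical: ask the model for n candidate next reasoning steps
    return [f"{state} -> step{i}" for i in range(n)]

def score(state: str) -> float:
    # hypothetical self-reward: the model rates its own partial reasoning
    return -len(state)  # placeholder heuristic

def tree_search(question: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [(-score(question), question)]
    for _ in range(depth):
        candidates = []
        for _, state in frontier:
            for nxt in propose(state):
                candidates.append((-score(nxt), nxt))
        # keep only the highest-scoring partial paths (smallest negated score)
        frontier = heapq.nsmallest(beam, candidates)
    return frontier[0][1]

print(tree_search("Why is the sky blue?"))
```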
Finally, TRAE SOLO Is Fully Rolled Out, and We Used It to Recreate PewDiePie's LLM Brain Trust
机器之心 (Jiqizhixin)· 2025-11-13 04:12
Core Viewpoint
- TRAE SOLO has officially launched, marking a significant advancement in AI coding tools, particularly for complex project development in the AI IDE sector [1][6][49]

Group 1: Product Features and Enhancements
- The SOLO official version introduces several core capabilities, including the built-in intelligent agent SOLO Coder, multi-task lists, context compression, and code-change functionality, enhancing its ability to handle complex tasks (a generic compression sketch follows this summary) [6][10]
- The new positioning of SOLO as "The Responsive Coding Agent" emphasizes its capabilities in real-time perception, task management, and multi-tasking [6][49]
- A limited-time free trial for all TRAE international-version users is available until November 15, allowing users to experience SOLO Coder and SOLO Builder [7][8]

Group 2: Context Management and User Experience
- The "Responsive Context" feature allows developers to maintain control over the development process by ensuring that context is trackable, retrievable, and uninterrupted, addressing common frustrations with AI programming [11][13]
- The updated Plan function provides clear task planning before coding begins, allowing for alignment between the developer and the AI model [13][41]
- The "Responsive Review" feature enhances transparency in the development process, allowing developers to see task progress and understand AI actions in real time [16][20]

Group 3: Multi-Tasking and Collaboration
- SOLO supports genuine multi-tasking, enabling developers to work on multiple projects or sub-tasks simultaneously without losing context [23][25]
- The integration of Sub-Agents allows for specialized tasks, reducing the need for manual handling and improving efficiency [25][40]

Group 4: Testing and Iteration
- Testing of SOLO Coder demonstrated its ability to handle complex scenarios, such as recreating a chatbot project, showcasing its rapid development capabilities [27][28]
- The iterative process allows for continuous improvement, with SOLO Coder capable of understanding feedback and autonomously correcting issues [39][41]

Group 5: Industry Trends and Future Outlook
- The evolution of TRAE from a simple AI coding assistant to a comprehensive coding agent reflects a broader industry trend towards intelligent systems that can manage complex projects [48][50]
- The future of AI programming tools is expected to focus on enhancing the capabilities of intelligent agents, allowing developers to shift from coding to architectural roles [56][57]
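Context compression of the kind mentioned above is often implemented by summarizing older turns once the transcript exceeds a budget. The sketch below is a generic illustration under that assumption, not TRAE SOLO's mechanism; `summarize` stands in for a model call.

```python
# Generic context-compression sketch; `summarize` is a placeholder for an LLM
# call, and the mechanism is illustrative, not TRAE SOLO's implementation.
def summarize(messages: list[dict]) -> str:
    return "Summary of earlier work: " + "; ".join(m["content"][:40] for m in messages)

def compress_context(history: list[dict], max_chars: int = 4000, keep_recent: int = 6) -> list[dict]:
    """Replace old turns with a single summary once the transcript exceeds the budget."""
    if sum(len(m["content"]) for m in history) <= max_chars:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

# usage
history = [{"role": "user", "content": f"message {i}: " + "x" * 500} for i in range(12)]
compressed = compress_context(history)
print(len(history), "->", len(compressed), "messages")
```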
How Agents Use Context Engineering
LangChain· 2025-11-12 16:36
Context Engineering Principles for AI Agents
- The industry recognizes the increasing task length AI agents can perform, with task length doubling approximately every seven months [2]
- The industry faces challenges related to context rot, where performance degrades with longer context lengths, impacting cost and latency [3][4]
- Context engineering, involving offloading, reducing, and isolating context, is crucial for managing context rot in AI agents [8][9][10]

Context Offloading
- Giving agents access to a file system is beneficial for saving and recalling information during long-running tasks and across different agent invocations [11][15][18]
- Offloading actions from tools to scripts in a file system expands the agent's action space while minimizing the number of tools and instructions [19][22]
- Progressive disclosure of actions, such as with Claude skills, saves tokens by selectively loading skill information only when needed [26][30]

Context Reduction
- Compaction, summarization, and filtering are techniques used to reduce context size and prevent excessively large tool results from being passed to the language model [32][33][39]
- Manus compacts old tool results by saving them to a file and referencing the file in the message history (see the sketch after this summary) [34]
- The deepagents package applies summarization after a threshold of 170,000 tokens [38]

Context Isolation
- Context isolation, using separate context windows or sub-agents for individual tasks, helps manage context and improve performance [10][39][40]
- Sub-agents can share context with the parent agent, such as access to the same file system [42]

Tool Usage
- Agent harnesses often employ a minimal number of general, atomic tools to save tokens and minimize decision-making complexity [44]
- Claude Code uses around a dozen tools, Manus uses fewer than 20, and the deep agent CLI uses 11 [24][25][44]
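A schematic version of the offload-and-compact pattern described above, assuming tool messages are plain dicts with a `tool_call_id`; this is an illustration, not the Manus or deepagents implementation.

```python
# Schematic offload/compact pattern: large or old tool results are written to
# the file system and replaced in the message history by a short pointer.
# Illustrative only; not the Manus or deepagents code.
import pathlib

WORKSPACE = pathlib.Path("agent_workspace")
WORKSPACE.mkdir(exist_ok=True)

def offload(msg: dict, inline_limit: int = 2000) -> dict:
    """Write a large tool result to disk; keep only a preview plus the path inline."""
    if len(msg["content"]) <= inline_limit:
        return msg
    path = WORKSPACE / f"{msg['tool_call_id']}.txt"
    path.write_text(msg["content"])
    preview = msg["content"][:200]
    return {**msg, "content": f"[full result saved to {path}]\n{preview}"}

def compact_history(messages: list[dict], keep_last: int = 10) -> list[dict]:
    """Offload every tool result except the most recent ones."""
    cutoff = len(messages) - keep_last
    return [
        offload(m) if m.get("role") == "tool" and i < cutoff else m
        for i, m in enumerate(messages)
    ]
```

Because the full result lives on disk, the agent can read it back later with its file-system tools if it is needed again.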
X @Decrypt
Decrypt· 2025-11-03 22:35
A Smarter Way to Talk to AI: Here's How to ‘Context Engineer’ Your Prompts ► https://t.co/MZXESEAivM ...
X @Avi Chawla
Avi Chawla· 2025-10-27 06:32
AI Engineering Skill
- Context engineering is rapidly becoming a crucial skill for AI engineers [1]
- It's about the systematic orchestration of context, not just clever prompting [1]

Context Engineering Workflow
- The demo provides more information about what context engineering actually means [1]
- Let's build a context engineering workflow, step by step [1]
Building LangChain and LangGraph 1.0
LangChain· 2025-10-22 14:57
LangChain Evolution & Strategy
- LangChain started as a single open-source package and has evolved into Python and TypeScript packages for both LangChain and LangGraph [1][2]
- The industry focus has shifted from easy prototyping to production-ready solutions, leading to the launch of LangGraph [7]
- LangChain 1.0 is built on top of LangGraph, combining ease of use with a production-ready runtime [16]

LangGraph Features & Benefits
- LangGraph was launched to provide more controllability and customization for users transitioning to production [8][9]
- LangGraph includes utilities such as durable execution environments, error recovery from checkpoints, and streaming capabilities [13][14]
- LangGraph allows for deterministic steps and workflows, making it suitable for complex applications [39]

LangChain 1.0 & Create Agent Abstraction
- LangChain 1.0 aims to be the easiest way to get started with generative AI, specifically building agents [17]
- The create agent abstraction simplifies agent creation to a few lines of code, leveraging a battle-tested pattern (see the sketch after this summary) [18][19]
- Middleware allows developers to add custom logic at any point in the agent loop, enabling extensibility [23]

Models & Content Blocks
- Dynamic model middleware enables dynamic selection of models based on context, allowing builders to stay on the bleeding edge [27][29]
- Content blocks are introduced as a standard representation for message content, addressing the issue of varying formats across model providers [31][32]

LangChain vs LangGraph
- LangChain is recommended for getting started due to its ease of use, while LangGraph is suitable for extremely custom workflows [36][37]
- LangGraph is ideal for workflows that require both deterministic components and agentic components [37]
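A minimal sketch of the create agent abstraction described in the talk, assuming LangChain 1.0; exact parameter names and the model identifier format may differ between releases.

```python
# Minimal create_agent sketch, assuming LangChain 1.0; parameter names and the
# model identifier may differ between releases.
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."

agent = create_agent(
    model="openai:gpt-4o-mini",      # a chat-model instance also works here
    tools=[get_weather],
    system_prompt="You are a concise weather assistant.",
)

result = agent.invoke({"messages": [{"role": "user", "content": "Weather in Paris?"}]})
print(result["messages"][-1].content)
```

Middleware and dynamic model selection hook into this same loop, which is why the talk frames create agent as the starting point and LangGraph as the option for fully custom, deterministic-plus-agentic workflows.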