Workflow
Avi Chawla
X @Avi Chawla
Avi Chawla · 2025-11-12 20:08
Karpathy said: "Agents don't have continual learning." Finally, someone's fixing this limitation in Agents. Composio provides the entire infra that acts as a "skill layer" for Agents to help them evolve with experience like humans. Learn why it matters for your Agents below: https://t.co/1Qv9Dx6ewt
Avi Chawla (@_avichawla): First tools, then memory... ...and now there's another key layer for Agents. Karpathy talked about it in his recent podcast. Tools help Agents connect to the external world, and memory helps them ...
X @Avi Chawla
Avi Chawla · 2025-11-12 11:57
If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs. https://t.co/rdRk4KxxWf
Avi Chawla (@_avichawla): First tools, then memory... ...and now there's another key layer for Agents. Karpathy talked about it in his recent podcast. Tools help Agents connect to the external world, and memory helps them remember, but they still can't learn from experience. He said that one key gap https://t.co/gWg5y80UxI ...
X @Avi Chawla
Avi Chawla · 2025-11-12 06:31
GitHub repo: https://t.co/r9Y8dKjtaX (don't forget to star it ⭐) ...
X @Avi Chawla
Avi Chawla · 2025-11-12 06:31
First tools, then memory... ...and now there's another key layer for Agents. Karpathy talked about it in his recent podcast. Tools help Agents connect to the external world, and memory helps them remember, but they still can't learn from experience. He said that one key gap in building Agents today is that: "They don't have continual learning. You can't just tell them something and they'll remember it." This isn't about storing facts in memory, but rather about building intuition. For instance, when a human master ...
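The post is truncated, but the "skill layer" idea it describes (a layer alongside tools and memory where an agent distills experience into reusable procedures) can be sketched in plain Python. The snippet below is only an illustrative sketch: the Skill and SkillStore names, the JSON file, and the keyword-based recall are invented for this example and are not Composio's actual API.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

# Hypothetical sketch of a "skill layer": unlike raw memory (which stores facts),
# a skill records a reusable procedure the agent distilled from past experience.

@dataclass
class Skill:
    name: str          # short identifier, e.g. "summarize_pr"
    trigger: str       # description of when this skill applies
    steps: list[str]   # distilled procedure learned from prior runs

class SkillStore:
    """Store the agent reads before acting and appends to after acting."""

    def __init__(self, path: str = "skills.json"):
        self.path = Path(path)
        self.skills = (
            [Skill(**s) for s in json.loads(self.path.read_text())]
            if self.path.exists() else []
        )

    def recall(self, task: str) -> list[Skill]:
        # Naive keyword match; a real system would use embeddings/retrieval.
        return [s for s in self.skills
                if any(w in task.lower() for w in s.trigger.lower().split())]

    def learn(self, skill: Skill) -> None:
        self.skills.append(skill)
        self.path.write_text(json.dumps([asdict(s) for s in self.skills], indent=2))

# After finishing a task, the agent distills what worked into a skill,
# so the next run starts from that intuition instead of from scratch.
store = SkillStore()
store.learn(Skill(
    name="summarize_pr",
    trigger="pull request review",
    steps=["fetch diff", "group changes by module", "flag missing tests"],
))
print([s.name for s in store.recall("review this pull request")])
```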
X @Avi Chawla
Avi Chawla · 2025-11-11 20:14
RT Avi Chawla (@_avichawla): Transformer and Mixture of Experts in LLMs, explained visually! Mixture of Experts (MoE) is a popular architecture that uses different experts to improve Transformer models. Transformer and MoE differ in the decoder block:
- Transformer uses a feed-forward network.
- MoE uses experts, which are feed-forward networks, but smaller compared to those in the Transformer.
During inference, a subset of experts is selected. This makes inference faster in MoE. Also, since the network has multiple decod ...
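Since the post contrasts the Transformer's single feed-forward block with MoE's pool of smaller experts, a short sketch may help. This is an illustrative PyTorch implementation, not tied to any particular LLM; the class name MoEFeedForward and the 8-expert / top-2 routing defaults are assumptions. A router scores the experts per token and only the top-k experts run, which is what makes per-token inference cheaper than one large dense FFN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Replaces the decoder's feed-forward block: a router picks a small
    subset of expert FFNs per token instead of running one large FFN."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq, d_model)
        scores = self.router(x)                            # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)     # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                     # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Each token activates only top_k of the n_experts FFNs, so compute per token is
# roughly top_k / n_experts of a dense block with the same total parameter count.
x = torch.randn(2, 16, 64)
print(MoEFeedForward(d_model=64, d_hidden=256)(x).shape)  # torch.Size([2, 16, 64])
```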
X @Avi Chawla
Avi Chawla · 2025-11-10 06:31
25 most important mathematical definitions in data science. P.S. What else would you add here? https://t.co/iMNFip5kIC ...
X @Avi Chawla
Avi Chawla · 2025-11-09 06:33
GitHub repo: https://t.co/7ouPUT7SWN. Get a free visual guidebook to learn MCPs from scratch (with 11 projects): https://t.co/yzmieK4Z0c ...
X @Avi Chawla
Avi Chawla · 2025-11-09 06:33
An MCP server to control Jupyter notebooks from Claude. It lets you:
- Create code cells
- Execute code cells
- Create markdown cells
100% open-source! https://t.co/wcujD281Hf ...
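The linked repo is the actual implementation; the sketch below is only an approximation of what such a server can look like, built with the MCP Python SDK's FastMCP helper plus nbformat/nbclient on the notebook side. The tool names, the single file-backed notebook, and the run-all execution model are assumptions for illustration and will differ from the real project.

```python
# Illustrative sketch only (assumed tool names); see the linked repo for the real server.
# Requires: pip install "mcp[cli]" nbformat nbclient
import nbformat
from nbclient import NotebookClient
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jupyter-notebook")   # MCP server Claude can connect to over stdio
NOTEBOOK = "scratch.ipynb"          # assumed: one file-backed notebook

def _load() -> nbformat.NotebookNode:
    try:
        return nbformat.read(NOTEBOOK, as_version=4)
    except FileNotFoundError:
        return nbformat.v4.new_notebook()

def _save(nb: nbformat.NotebookNode) -> None:
    nbformat.write(nb, NOTEBOOK)

@mcp.tool()
def create_code_cell(source: str) -> str:
    """Append a code cell to the notebook."""
    nb = _load()
    nb.cells.append(nbformat.v4.new_code_cell(source))
    _save(nb)
    return f"Added code cell #{len(nb.cells) - 1}"

@mcp.tool()
def create_markdown_cell(source: str) -> str:
    """Append a markdown cell to the notebook."""
    nb = _load()
    nb.cells.append(nbformat.v4.new_markdown_cell(source))
    _save(nb)
    return f"Added markdown cell #{len(nb.cells) - 1}"

@mcp.tool()
def execute_cells() -> str:
    """Run all cells in a fresh kernel and return the last cell's raw outputs."""
    nb = _load()
    if not nb.cells:
        return "Notebook is empty"
    NotebookClient(nb, kernel_name="python3").execute()
    _save(nb)
    outputs = nb.cells[-1].get("outputs", [])
    return "\n".join(str(o.get("text", o)) for o in outputs) or "No output"

if __name__ == "__main__":
    mcp.run()   # serve over stdio so Claude can call the three tools above
```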