Avi Chawla
X @Avi Chawla
Avi Chawla· 2026-02-22 08:34
Watch here: https://t.co/Q1t0JSX3Ic

Get a free visual guidebook to learn MCPs from scratch (with 11 projects): https://t.co/v3cQWlQtR4 ...
X @Avi Chawla
Avi Chawla· 2026-02-22 08:34
You can watch this ML course with your grandma.

Making Friends with ML is one of the best non-technical intros to ML I’ve ever seen.

A 6.5-hour course that covers:
- Intro to ML
- ML in practice
- The 12 steps of AI
- Intro to ML algorithms

Requires zero technical background. https://t.co/8J8cDhNeBh ...
X @Avi Chawla
Avi Chawla· 2026-02-21 06:30
A layered overview of key Agentic AI concepts.

Let’s understand it layer by layer.

1) LLMs (foundation layer)

At the core, you have LLMs like GPT, DeepSeek, etc.

Core ideas here:
- Tokenization & inference parameters: how text is broken into tokens and processed by the model.
- Prompt engineering: designing inputs to get better outputs.
- LLM APIs: programmatic interfaces to interact with the model.

This is the engine that powers everything else.

2) AI Agents (built on LLMs)

Agents wrap around LLMs to give them the a ...
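The two layers the post describes (an LLM as the engine, an agent loop wrapped around it) can be sketched in a few lines of pure Python. Everything here is illustrative: `call_llm` is a stand-in for a real LLM API request, and the tool registry is hypothetical, not any particular framework's API.

```python
# Minimal sketch of the layered picture: a stubbed "LLM" (layer 1) wrapped
# by a tool-calling agent loop (layer 2). The model call is faked so the
# example runs offline; in practice it would be an LLM API request.

def call_llm(prompt: str) -> dict:
    """Stand-in for an LLM API call (hypothetical). Returns a structured
    'decision': either invoke a tool or give a final answer."""
    if "result:" not in prompt:
        return {"action": "tool", "tool": "add", "args": [2, 3]}
    return {"action": "final", "answer": prompt.split("result:")[-1].strip()}

TOOLS = {"add": lambda a, b: a + b}  # the agent's available tools

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        decision = call_llm(prompt)
        if decision["action"] == "final":
            return decision["answer"]
        # Execute the requested tool and feed its result back into the prompt
        result = TOOLS[decision["tool"]](*decision["args"])
        prompt += f" result: {result}"
    return "gave up"

print(run_agent("What is 2 + 3?"))  # → 5
```

The loop is the essence of an agent: the model decides, the runtime executes, and the observation flows back into the next model call.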
X @Avi Chawla
Avi Chawla· 2026-02-20 06:30
Here's a neural net optimization trick that leads to ~4x faster CPU to GPU transfers.

Imagine an image classification task.
- We define the network, load the data and transform it.
- In the training loop, we transfer the data to the GPU and train.

Here's the problem with this:

If you look at the profiler:
- Most of the time/resources will be allocated to the kernel (the actual training code).
- However, a significant amount of time will also be dedicated to data transfer from CPU to GPU (this appears under cudaMem ...
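The post is cut off before the fix, so here is one common version of this optimization (an assumption, not necessarily the exact trick the author shows): pin the host memory via the DataLoader and use non-blocking copies, so the CPU-to-GPU transfer can overlap with compute instead of stalling in `cudaMemcpy`. A minimal PyTorch sketch:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# pin_memory=True page-locks each batch on the CPU so the GPU can DMA it
# directly; non_blocking=True makes the copy asynchronous. Guarded so the
# sketch also runs on a CPU-only machine (where it is a no-op).
device = "cuda" if torch.cuda.is_available() else "cpu"

data = TensorDataset(torch.randn(256, 3, 32, 32),        # fake images
                     torch.randint(0, 10, (256,)))        # fake labels
loader = DataLoader(data, batch_size=64,
                    pin_memory=torch.cuda.is_available())

for images, labels in loader:
    # With pinned memory this copy overlaps with compute on the GPU;
    # on CPU it simply falls back to a synchronous (trivial) move.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass would go here ...
```

`pin_memory` and `non_blocking` are standard PyTorch knobs; the ~4x figure from the post will depend on batch size, hardware, and how much transfer the profiler showed in the first place.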
X @Avi Chawla
Avi Chawla· 2026-02-19 06:30
4 must-know model training paradigms for ML engineers: https://t.co/CPkh94fTlV ...
X @Avi Chawla
Avi Chawla· 2026-02-18 18:12
Avi Chawla (@_avichawla):

DeepSeek fixed one of AI's oldest problems.

(using a 60-year-old algorithm)

Here's the story:

When deep learning took off around 2012-2013, researchers hit a wall. They couldn't just stack layers endlessly because gradients either exploded or vanished. So training deep https://t.co/1BHGBoWHVG ...
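The post is truncated before the fix, but the wall it describes is easy to demonstrate numerically: in a deep chain of layers, the backpropagated gradient is a product of per-layer factors, so it shrinks or grows exponentially with depth. A small NumPy illustration (this shows the problem only, not DeepSeek's solution):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_norm(depth: int, scale: float) -> float:
    """Simulate backprop through `depth` linear layers: multiply `depth`
    random weight matrices and measure the resulting gradient's norm."""
    grad = np.eye(8)
    for _ in range(depth):
        # Entries scaled so each layer multiplies the norm by ~`scale`
        W = scale * rng.standard_normal((8, 8)) / np.sqrt(8)
        grad = W @ grad
    return float(np.linalg.norm(grad))

print(gradient_norm(50, 0.5))  # weights slightly "too small": gradient vanishes
print(gradient_norm(50, 2.0))  # weights slightly "too large": gradient explodes
```

With per-layer gain 0.5 the norm collapses by roughly 0.5^50, and with gain 2.0 it blows up by roughly 2^50 — which is why naive depth stacking stalled until architectural fixes arrived.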
X @Avi Chawla
Avi Chawla· 2026-02-18 06:30
paper: https://t.co/44aLGGkgli

If you want to learn AI/ML engineering, I have put together a free PDF (380+ pages) with 150+ core lessons. Download for free: https://t.co/sF1iVFFNNU ...