Avi Chawla
X @Avi Chawla
Avi Chawla· 2026-05-06 11:12
A time-complexity cheat sheet of 10 ML algorithms. What's the inference time-complexity of KMeans? https://t.co/HjwOnqDXfE ...
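The linked cheat sheet isn't reproduced here, but the question in the tweet has a concrete answer: KMeans inference is O(n · k · d) for n points, k centroids, and d dimensions, since each point is compared against every centroid. A minimal sketch (my own illustration, not from the cheat sheet):

```python
import numpy as np

def kmeans_predict(X, centroids):
    """Assign each point to its nearest centroid.

    For n points, k centroids, and d dimensions this is O(n * k * d):
    every point computes a distance to every centroid.
    """
    # (n, k) matrix of squared distances via broadcasting
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy example: two well-separated centroids
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
X = np.array([[0.5, -0.2], [9.8, 10.1], [0.1, 0.3]])
labels = kmeans_predict(X, centroids)
print(labels.tolist())  # → [0, 1, 0]
```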
Avi Chawla· 2026-05-05 21:10
RT Avi Chawla (@_avichawla): The most comprehensive RL overview I've ever seen. Kevin Murphy from Google DeepMind, who has over 128k citations, wrote this.

What makes this different from other RL resources:

→ It bridges classical RL with the modern LLM era. There's an entire chapter dedicated to "LLMs and RL" covering:
- RLHF, RLAIF, and reward modeling
- PPO, GRPO, DPO, RLOO, REINFORCE++
- Training reasoning models
- Multi-turn RL for agents
- Test-time compute scaling

→ The fundamentals are crystal clear. Every major a ...
Avi Chawla· 2026-05-05 20:33
If a 12M-token context window actually worked with full fidelity, it would be a money-printing machine. Here's a conservative example using only publicly available data.

Take every company in the S&P 500. For each one:

> Pull the latest earnings call transcript. These calls are public. A typical earnings call will be 45-60 mins with prepared remarks and analyst Q&A, producing a transcript of roughly 8k-12k tokens. For all 500 companies, that's about 5M tokens.
> Now add a year of daily price and volume data for ...
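The token arithmetic in the thread checks out; a quick sketch of the calculation, using the thread's own assumptions (500 companies, 8k-12k tokens per transcript):

```python
# Back-of-envelope check of the token budget in the thread.
# Figures are the thread's stated assumptions, not exact data.
n_companies = 500
tokens_low, tokens_high = 8_000, 12_000  # per earnings-call transcript

low = n_companies * tokens_low    # 4,000,000
high = n_companies * tokens_high  # 6,000,000
print(f"{low:,} - {high:,} tokens")  # midpoint ≈ 5M, as the thread says
```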