Avi Chawla · 2026-03-23 09:03
TinyLoRA: LoRA scaled down to 1 parameter.

Researchers from Meta, Cornell, and CMU just dropped a banger.

They turned an 8B parameter model into a math and reasoning powerhouse by tweaking just 13 of those parameters.

That's 26 bytes, which takes up less storage than this sentence.

The model hit 91% accuracy on GSM8K, up from 76% before the tweak.

The method is called TinyLoRA, and it pushes low-rank adaptation to its absolute extreme.

Some quick background on LoRA first:

When you finetune a large model, you're updating ...
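
To make the LoRA background concrete, here is a minimal illustrative sketch (my own code, not the paper's TinyLoRA implementation; the class name `LoRALinear` and the `rank`/`alpha` defaults are assumptions): instead of updating the full weight matrix, LoRA freezes it and learns two small matrices A and B whose product acts as a low-rank correction.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: freeze the base weight, learn a low-rank delta B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A: (rank x d_in), small random init; B: (d_out x rank), zero init so the delta starts at 0
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the scaled low-rank correction (B @ A) x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Trainable parameters: rank * (d_in + d_out) instead of d_in * d_out
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 65,536 vs ~16.7M
```

That gap, tens of thousands of trainable parameters instead of millions per layer, is the standard LoRA trade-off; TinyLoRA (per the post) pushes the same idea down to a handful of parameters for the whole model.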