X @Avi Chawla
Avi Chawla·2025-12-07 19:14

Model Training & Context Expansion - A Research Scientist interview question at OpenAI asks how to expand an LLM's context length from 2K to 128K tokens [1] - Simply fine-tuning on longer documents with a 128K context window is considered an insufficient answer [1]
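
The post does not name the stronger answer, but one widely used technique for this kind of context extension is RoPE positional interpolation: rescaling rotary position embeddings so extended positions map back into the range the model saw during pre-training, followed by a short fine-tune. A minimal sketch of the idea (function names and the NumPy formulation are illustrative, not from the post):

```python
import numpy as np

def rope_frequencies(head_dim, base=10000.0):
    # Inverse frequencies for rotary position embeddings (RoPE).
    return 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))

def rope_angles(positions, head_dim, scale=1.0, base=10000.0):
    # Linear position interpolation: squeeze positions from the
    # extended window back into the originally trained range by
    # dividing by the scale factor (e.g. 128K / 2K = 64).
    inv_freq = rope_frequencies(head_dim, base)
    return np.outer(np.asarray(positions) / scale, inv_freq)

# With scale = 64, position 128_000 yields the same rotation
# angles the model produced at position 2_000 when trained
# with a 2K window, so attention patterns stay in-distribution.
orig = rope_angles([2000], head_dim=64, scale=1.0)
interp = rope_angles([128000], head_dim=64, scale=64.0)
assert np.allclose(orig, interp)
```

In practice this rescaling is combined with a comparatively small amount of long-document fine-tuning, which is why fine-tuning alone, without adjusting the positional encoding, falls short.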