LLM
X @Avi Chawla
Avi Chawla· 2025-11-24 13:03
If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs. https://t.co/7ws1ucdG9H Avi Chawla (@_avichawla): A popular LLM interview question: "Explain the 4 stages of training LLMs from scratch." (step-by-step explanation below) https://t.co/43WiCQuJfc ...
X @Nick Szabo
Nick Szabo· 2025-11-22 15:54
RT Nick Szabo (@NickSzabo4) @BillAckman This is such a bizarre hallucination. Has Bill Ackman been replaced by an LLM? ...
The Tech Revolution of Modern Eras | Naima Boukhiar | TEDxUniversity of Boumerdes
TEDx Talks· 2025-11-20 17:39
Hi everyone, thanks Tok for having me. I hope everyone is doing well. I'm a PhD student in AI at Alers One University. I'm also a teaching assistant at Oran One University and Alers One, as well as a software developer, and I have been a networking engineer at URI. So I know it's not the most obvious story for a tech persona, but it has definitely not been a very stable career for someone who started as a networking engineer, then turned to the web, and then to AI. And yeah, the world of te ...
X @TechCrunch
TechCrunch· 2025-11-18 21:44
Hugging Face CEO says we’re in an ‘LLM bubble,’ not an ‘AI bubble’ https://t.co/mKirsu3wzV ...
X @Avi Chawla
Avi Chawla· 2025-11-16 19:15
RT Avi Chawla (@_avichawla) RAG vs. Graph RAG, explained visually! RAG has many issues. For instance, imagine you want to summarize a biography, and each chapter of the document covers a specific accomplishment of a person (P). This is difficult with naive RAG since it only retrieves the top-k relevant chunks, but this task needs the full context. Graph RAG solves this. The following visual depicts how it differs from naive RAG. The core idea is to:
- Create a graph (entities & relationships) from documents.
- Trave ...
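To make the core idea concrete, here is a minimal Python sketch of the Graph RAG side. It assumes a hypothetical extract_triples helper (typically LLM-backed) that returns (subject, relation, object) triples for a text chunk; the graph itself uses networkx. This is a sketch under those assumptions, not the author's exact pipeline.

```python
import networkx as nx

def build_graph(documents, extract_triples):
    """Build an entity-relationship graph from documents.
    `extract_triples` is a hypothetical helper returning
    (subject, relation, object) triples for a text chunk."""
    graph = nx.Graph()
    for doc in documents:
        for subj, rel, obj in extract_triples(doc):
            graph.add_edge(subj, obj, relation=rel, source=doc)
    return graph

def graph_context(graph, entity, hops=2):
    """Gather the multi-hop neighborhood around an entity, so a query
    about person P sees all linked accomplishments across chapters,
    rather than only the top-k most similar chunks."""
    nodes = nx.ego_graph(graph, entity, radius=hops).nodes
    facts = []
    for u, v, data in graph.edges(nodes, data=True):
        facts.append(f"{u} --{data['relation']}--> {v}")
    return facts
```

The traversal step is what distinguishes this from naive RAG: context is assembled by following relationships outward from the queried entity, not by embedding similarity alone.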
X @Tesla Owners Silicon Valley
Tesla Owners Silicon Valley· 2025-11-14 01:51
RT Tesla Owners Silicon Valley (@teslaownersSV) BREAKING: Grok Rankings Update, November 13. Grok is dominating every leaderboard:
#1 BlackBox AI
#1 Terminal-Bench Hard
#1 GPQA Diamond
#1 SciCode
#1 AAII Token Usage
#1 Roo Code
#1 KiloCode
#1 Cline
And on OpenRouter:
#1 Most popular LLM (English)
#1 Token usage (Top Today, Week, Month)
#1 Programming use case (Python, JS, Java, C++, SQL…)
#1 Market share for xAI
Grok is cooking. ...
X @Avi Chawla
Avi Chawla· 2025-11-13 19:16
RAG Challenges & HyDE Solution
- Traditional RAG faces challenges due to semantic dissimilarity between questions and answers, leading to irrelevant context retrieval [1]
- HyDE addresses this by generating a hypothetical answer using an LLM, embedding it, and using the embedding to retrieve relevant context [2]
- HyDE leverages contriever models trained with contrastive learning to filter out hallucinated details in the hypothetical answer [3]

HyDE Performance & Trade-offs
- Studies indicate HyDE improves retrieval performance compared to traditional embedding models [4]
- HyDE implementation results in increased latency and higher LLM usage [4]

HyDE Implementation
- HyDE involves using an LLM to generate a hypothetical answer (H) for the query (Q) [2]
- The hypothetical answer is embedded using a contriever model to obtain embedding (E) [2]
- Embedding (E) is used to query the vector database and retrieve relevant context (C) [2]
- The hypothetical answer (H), retrieved context (C), and query (Q) are passed to the LLM to produce a final answer [3]
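The implementation bullets above map directly to code. Below is a minimal sketch of the HyDE pipeline under stated assumptions: llm_generate, embed, and vector_db are hypothetical stand-ins for an LLM client, a contriever-style encoder, and a vector store, not any specific library's API.

```python
def hyde_answer(query: str, vector_db, llm_generate, embed, k: int = 5) -> str:
    """HyDE pipeline sketch: Q -> H -> E -> C -> final answer.
    `llm_generate`, `embed`, and `vector_db` are hypothetical
    stand-ins; swap in your own LLM client, encoder, and store."""
    # 1. Generate a hypothetical answer (H) for the query (Q).
    hypothetical = llm_generate(
        f"Write a short passage that plausibly answers: {query}"
    )

    # 2. Embed H with the contriever-style model to obtain embedding (E).
    embedding = embed(hypothetical)

    # 3. Use E to query the vector database for relevant context (C).
    #    Real documents near H in embedding space tend to wash out any
    #    hallucinated specifics in H itself.
    context = vector_db.search(embedding, top_k=k)

    # 4. Pass H, C, and Q back to the LLM to produce the final answer.
    prompt = (
        f"Question: {query}\n"
        f"Hypothetical answer: {hypothetical}\n"
        "Retrieved context:\n" + "\n".join(context) + "\n"
        "Using the retrieved context, give a grounded final answer."
    )
    return llm_generate(prompt)
```

Note the trade-off the summary mentions: this design makes two LLM calls per query (steps 1 and 4), which is where the added latency and higher LLM usage come from.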