Hallucination

X @Ansem
Ansem · 2025-07-06 12:35
RT sankalp (@dejavucoder): fixing three things that i think will make learning with LLMs a much better and easier to trust experience -
1. hallucination in long context scenarios
2. reducing the agreeableness so the LLM can call out my bullshit and other models' bullshit. correlated with hallucination
3. better intent detection where the LLM asks follow-up questions if the intent was not clear or if it just wants to understand the preference of the user ...
Taming Rogue AI Agents with Observability-Driven Evaluation – Jim Bennett, Galileo
AI Engineer · 2025-06-27 10:27
So I'm here to talk about taming rogue AI agents, but essentially I want to talk about evaluation-driven development, observability-driven development, but really why we need observability. So, who uses AI? Is that Jim's most stupid question of the day? Probably. Who trusts AI? Right. If you'd like to meet me after, I've got some snake oil you might be interested in buying. Yeah, we do not trust AI in the slightest. Now, different question: who reads books? That's reading books. If you want some recommenda ...