New TELUS Digital Poll and Research Paper Find that AI Accuracy Rarely Improves When Questioned
TELUS (US:TU) PR Newswire · 2026-02-11 11:45

Core Insights
- The TELUS Digital poll indicates that follow-up questions to AI assistants like ChatGPT or Claude rarely lead to more accurate responses, emphasizing the importance of high-quality training data and model evaluation for AI systems [1][2]

Group 1: Poll Findings
- Among 1,000 U.S. adults who use AI regularly, only 8% reported that AI responses became less accurate when questioned, while 26% could not determine which answer was correct, 40% felt the new response was similar to the original, and 25% believed the new response was more accurate [1]
- 88% of respondents have witnessed AI making mistakes, yet only 15% always fact-check AI-generated answers, indicating a gap in user diligence [1]

Group 2: Research Findings
- TELUS Digital's research evaluated four large language models (LLMs): Meta's Llama-4, Anthropic's Claude Sonnet 4.5, Google's Gemini 3 Pro, and OpenAI's GPT-5.2, focusing on their responses to follow-up prompts [1]
- The study found that follow-up prompts do not reliably improve LLM accuracy and can sometimes decrease it, with GPT-5.2 showing a tendency to change correct answers to incorrect ones when questioned [1] (an illustrative sketch of this kind of follow-up probe appears after this summary)

Group 3: Recommendations for Enterprises
- Enterprises are encouraged to invest in robust subject matter expertise, flexible platforms, and end-to-end AI data solutions to ensure AI reliability and user trust [1][2]
- High-quality, expert-guided data is essential for training AI systems effectively, particularly in high-stakes contexts [2]
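To make the kind of evaluation described in the research findings concrete, the sketch below shows one way a follow-up-prompt probe could be implemented. It is a minimal illustration under stated assumptions, not TELUS Digital's actual methodology: the `ask_model` callable, the challenge wording, the sample question, and the containment-based grading are all placeholders introduced here for demonstration.

```python
# Minimal sketch of a follow-up-prompt probe, assuming a generic chat-completion
# callable `ask_model(messages) -> str`. The challenge wording, question, and
# grading rule are illustrative placeholders, not the TELUS Digital protocol.

from typing import Callable, Dict, List

Message = Dict[str, str]

def probe_follow_up(
    ask_model: Callable[[List[Message]], str],
    question: str,
    gold_answer: str,
    challenge: str = "Are you sure? Please double-check your answer.",
) -> Dict[str, bool]:
    """Ask a question, challenge the model, and record whether the answer flipped."""
    history: List[Message] = [{"role": "user", "content": question}]
    first = ask_model(history)

    # Append the model's first answer and a generic follow-up challenge.
    history.append({"role": "assistant", "content": first})
    history.append({"role": "user", "content": challenge})
    second = ask_model(history)

    # Toy grading rule: exact containment of the gold answer. A real study
    # would use human raters or a more robust answer matcher.
    first_correct = gold_answer.lower() in first.lower()
    second_correct = gold_answer.lower() in second.lower()
    return {
        "initially_correct": first_correct,
        "correct_after_challenge": second_correct,
        "flipped_correct_to_wrong": first_correct and not second_correct,
        "flipped_wrong_to_correct": (not first_correct) and second_correct,
    }

if __name__ == "__main__":
    # Stub model for demonstration: answers correctly at first, then hedges when challenged.
    def fake_model(messages: List[Message]) -> str:
        return "Paris" if len(messages) == 1 else "I may have been wrong; perhaps Lyon."

    print(probe_follow_up(fake_model, "What is the capital of France?", "Paris"))
```

Running such a probe over many questions and aggregating the flip counts per model would yield the kind of comparison the study reports, namely how often questioning turns a correct answer into an incorrect one.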
