Saturday, March 7, 2026

Why AI hallucinates

Much like people.


Nav Toor
@heynavtoor
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
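To make the thread's "they guess with complete confidence" point concrete: the output step of a language model is just a probability distribution over next tokens, and standard greedy decoding emits the argmax whether the top probability is 0.99 or 0.29. Here's a minimal sketch in plain Python (toy vocabulary and made-up logits, not any real model's tokenizer or API):

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution over the vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and two situations: one confident, one genuinely uncertain.
vocab = ["1947", "1948", "1950", "unknown"]
confident_logits = [8.0, 1.0, 0.5, 0.2]   # model has seen this fact often
uncertain_logits = [1.1, 1.0, 0.9, 0.8]   # model is close to guessing

for name, logits in [("confident", confident_logits),
                     ("uncertain", uncertain_logits)]:
    probs = softmax(logits)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    # Greedy decoding emits the argmax token either way; the output text
    # looks identical whether the top probability was ~0.99 or ~0.29.
    print(f"{name}: emits {vocab[best]!r} (p={probs[best]:.2f})")
```

The uncertainty is sitting right there in the distribution, but nothing in ordinary decoding pauses on it or surfaces it to the user; both cases come out as one equally fluent token.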
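The benchmark claim is also just expected-value arithmetic, and it's worth running the numbers. Under binary grading, where a wrong answer and "I don't know" both score zero, guessing with any chance p of being right earns p points on average versus a guaranteed 0 for abstaining, so guessing always weakly dominates. Only a penalty for wrong answers flips that. A quick check with toy grading rules (my numbers, not any specific leaderboard's rubric):

```python
def expected_score(p_correct, right=1.0, wrong=0.0, abstain=0.0):
    """Expected benchmark score for guessing vs. abstaining."""
    guess = p_correct * right + (1 - p_correct) * wrong
    return guess, abstain

for p in [0.9, 0.5, 0.1]:
    # Binary grading: wrong answers and "I don't know" both score 0.
    g_bin, a_bin = expected_score(p)
    # Penalized grading: wrong answers cost -1, abstaining still scores 0.
    g_pen, a_pen = expected_score(p, wrong=-1.0)
    print(f"p={p:.1f}  binary: guess={g_bin:+.2f} vs abstain={a_bin:+.2f}   "
          f"penalized: guess={g_pen:+.2f} vs abstain={a_pen:+.2f}")
```

Under binary grading the guess column never loses, which is exactly the "always guess" strategy the thread describes. With a -1 penalty for wrong answers, abstaining wins whenever p < 0.5.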
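And the proposed fix, abstaining below a confidence threshold, is the same trade-off seen from the product side: raise the threshold and the no-answer rate climbs. A sketch with ten invented per-question confidence scores (the values are made up purely to illustrate the answer-rate cost):

```python
# Hypothetical per-question confidences (e.g. max next-token probability,
# or any calibrated score); these ten values are invented for illustration.
confidences = [0.95, 0.91, 0.88, 0.84, 0.80, 0.78, 0.76, 0.55, 0.40, 0.28]

for threshold in [0.0, 0.5, 0.75]:
    answered = [c for c in confidences if c >= threshold]
    rate = len(answered) / len(confidences)
    print(f"threshold={threshold:.2f}: answers {rate:.0%} of questions, "
          f"says 'I don't know' to the rest")
```

With these particular made-up scores, the 0.75 threshold leaves 30% of questions unanswered, roughly the no-answer rate the thread attributes to OpenAI's own math.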


 



No single AI eliminates hallucinations; per that OpenAI paper, they're baked into probabilistic next-token prediction. Benchmarks vary by task: Relum (Dec 2025) ranked Grok lowest at 8%; Vectara's summarization leaderboard puts frontier models around 4-10%; search-citation tests favor Perplexity (still wrong roughly 37% of the time, but the best of the tools tested). Claude shines on careful reasoning. For accuracy, use real-time tools and cross-check multiple models and sources. What's a specific query to test?
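The "cross-check multiple models" advice in that reply is easy to mechanize: send the same question to several models and only accept an answer a quorum agrees on. A minimal sketch, where ask() is a hypothetical stub standing in for real API clients (the model names and canned answers are invented):

```python
from collections import Counter

def ask(model: str, question: str) -> str:
    # Hypothetical stub: swap in real API calls for each provider.
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return canned[model]

def cross_check(question: str, models: list[str], quorum: float = 0.66) -> str:
    # In practice, normalize answers (case, whitespace, paraphrase)
    # before voting; exact string matching is a simplification here.
    answers = [ask(m, question) for m in models]
    top, count = Counter(answers).most_common(1)[0]
    # Only accept the majority answer if enough models agree on it.
    if count / len(models) >= quorum:
        return top
    return "DISAGREEMENT - verify against a primary source"

print(cross_check("Capital of France?", ["model-a", "model-b", "model-c"]))
```

Disagreement between models is a cheap hallucination flag, not proof of correctness: models trained on similar data can agree on the same wrong answer.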


Taya
@travelingflying
Anthropic’s Claude says Charlie Kirk wasn’t a good person, but convicted criminal George Floyd was. Claude is woke





