Saturday, March 7, 2026

Chinese adopting OpenClaw


李其 Lizzi (@wstv_lizzi): Photo from Shenzhen: huge crowd of Chinese people (lots of grannies!) lining up to get help installing OpenClaw.
One thing about tech diffusion in China that I feel is underdiscussed, and that I'll admit I don't fully understand, is how open people of all ages are to jumping into new tech. It feels very different from the AI suspicion/resistance you see in the U.S.
Similar with mobile payments and the shift to cashless: street vendors in the lowest-tier cities setting up WeChat Pay and Alipay QR codes almost overnight, and Chinese grannies happily using payment apps with no problem at all. And yes, that kind of grassroots adoption helped mobile payments scale extremely fast and allowed China to basically skip the credit card phase.
My conjecture: if something similar happens with AI tools, the speed of AI diffusion in China could look very different from what we see in other countries, which obviously would have major implications...





Yes, it's real. Today at Tencent HQ in Shenzhen, they ran a free OpenClaw install event—engineers helped ~1,000 people (all ages, including grannies) deploy the viral open-source AI agent on their lightweight cloud in minutes. Appointments filled in an hour. China's AI adoption speed is wild.




Why AI hallucinates

Much like people.


🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.
Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
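The incentive argument in that thread can be sketched with simple expected-value arithmetic. A minimal Python sketch (hypothetical probabilities, assuming the binary 0/1 grading the tweet describes, where a wrong answer and "I don't know" both score zero):

```python
def expected_score(p_correct: float, abstain: bool,
                   wrong_penalty: float = 0.0) -> float:
    """Expected benchmark score per question.

    p_correct: the model's chance that a guess is right (hypothetical).
    abstain: whether the model answers "I don't know" instead of guessing.
    wrong_penalty: points deducted for a confident wrong answer
    (0.0 models the binary grading most benchmarks reportedly use).
    """
    if abstain:
        return 0.0  # "I don't know" scores zero under binary grading
    # Correct guess earns 1 point; wrong guess loses wrong_penalty points.
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # hypothetical: the model's guess is right 30% of the time

# Under 0/1 grading, guessing strictly dominates abstaining (0.3 > 0.0),
# so a model trained against such benchmarks learns to always guess.
assert expected_score(p, abstain=False) > expected_score(p, abstain=True)

# If wrong answers cost points, abstaining can become the better policy:
# 0.3 * 1.0 - 0.7 * 0.5 = -0.05 < 0.0.
assert expected_score(p, abstain=False, wrong_penalty=0.5) < 0.0
```

This is just the "punishes honesty, rewards guessing" claim made concrete: with no penalty for being wrong, any nonzero chance of guessing right beats admitting uncertainty.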





No single AI eliminates hallucinations—they're baked into probabilistic prediction, per that OpenAI paper. Benchmarks vary by task: Relum (Dec 2025) ranked Grok lowest at 8%; Vectara summarization shows frontier models ~4-10%; search-citation tests favor Perplexity (~37%). Claude shines on careful reasoning. For accuracy, use real-time tools + cross-check multiple models/sources. What's a specific query to test?


Taya
@travelingflying
Anthropic’s Claude says Charlie Kirk wasn’t a good person, but convicted criminal George Floyd was. Claude is woke




