Tuesday, June 17, 2025

AI and lead poisoning

https://www.deseret.com/opinion/2025/06/17/ai-chatbots-therapists-children/


Key excerpt:

Again, as in Roman times, it’s not as if we do not have scientific research showing that reliance on AI has negative effects on the knowledge base, skills and critical thinking abilities of human beings. Real cognitive atrophy occurs. I’ve seen it in my own students. AI can write their papers for them, but if I sit down with them one on one and ask probing questions, even basic factual ones, the students are at sea. They have offloaded knowledge retention to an external source. In fact, the very idea of exerting mental effort to retain knowledge seems anathema to them.

Perspective: AI is the new Ouija board of our time

Young children are being assigned chatbots. Is this really the world we want to live in?


I remember when scientists began to wonder what role cognitive decline played in the fall of the Roman Empire. Obviously there were many contributing factors, but in the 1980s, scientists first proposed that habits the Romans adopted as a way of life led to widespread cognitive decline, which contributed to the empire’s fall.

The incredibly high burden of lead that Roman citizens experienced all their lives — from lead in the air, water pollution, lead utensils and cooking pots, lead-infused cordials and so forth — was certainly one cause of this cognitive decline.

Astoundingly, Roman physicians knew all about lead poisoning, but Roman culture was wedded to these habits. Romans, in a sense, chose to be less intelligent, and chose that their children would be less intelligent than they otherwise would have been.

These findings echoed in my mind as I read that Ohio State University, my alma mater, is planning to ensure that all of its students are “fluent in AI.” No doubt the university feels it will be giving its students a significant advantage in a world that expects AI to transform society from top to bottom. In my opinion, however, OSU is choosing to compel its students to drink the equivalent of a lead-infused cordial every day for four years.

Bluntly put, we send young people to universities in order to develop a knowledge base, skill sets and critical thinking — located within their own minds. Having mastered a field in this way, they can create original contributions and/or apply what they know to specific circumstances in the real world — and do this from their own minds — thus improving their chosen fields and, we hope, the world in which we live. We have always invented tools to increase our ability to master a field, but these tools have been servants that did not undermine the human quest for mastery and creation.

AI, in one sense, is just another of these tools. It can be a servant in a circumscribed environment, performing drudge work such as filling in animation frames according to textual instruction. But humans are all too willing to outsource to AI the work required to become a master. And when that happens, humans are choosing to become less intelligent, much like the Romans did.


Again, as in Roman times, it’s not as if we do not have scientific research showing that reliance on AI has negative effects on the knowledge base, skills and critical thinking abilities of human beings. Real cognitive atrophy occurs. I’ve seen it in my own students. AI can write their papers for them, but if I sit down with them one on one and ask probing questions, even basic factual ones, the students are at sea. They have offloaded knowledge retention to an external source. In fact, the very idea of exerting mental effort to retain knowledge seems anathema to them.

Unfortunately, AI companies are vigorously peddling their wares regardless of such consequences. OpenAI, for example, is selling U.S. universities a version of ChatGPT that they can make available to their faculty and students; my own university has taken them and others up on the offer. OpenAI is also marketing free ChatGPT use for students whose universities have not yet signed on.

And it’s not just universities: AI “companions” are envisioned for K-12 students as well — companions that would “be with them” throughout their school years. One commentator notes, “some experts warn that tech companies are running what amounts to a grand, unregulated psychological experiment with millions of subjects, one that could have disastrous consequences.” Even more troubling in this regard, as I have noted elsewhere, is that the new budget bill contains a one-sentence clause that would prevent states from regulating AI for a full 10 years.

Our own cognitive laziness in the face of AI is not the only problem. Large Language Models (LLMs) hallucinate — and no, the hallucination rate has not been reduced, but appears to be increasing — and LLMs baldly lie to deliberately manipulate users.

In addition, LLMs are actually not very intelligent; for example, they consistently flub mental puzzles that human children routinely solve. Furthermore, LLMs, unless tightly curated, take on the worst worldviews of the internet material on which they’ve been trained, and exhibit sexist and racist attitudes. Unfortunately, the curation itself then becomes part of the problem, for it can be used maliciously as a form of censorship or proselytism.


Perhaps most troubling of all is that LLMs have become the Ouija board of our time. The emotionally or spiritually vulnerable among us — and who hasn’t been vulnerable at one time or another? — can be seduced by chatbots that appear wise, immortal, unflawed and caring. We have all heard of many troubling cases where AI chatbots have become insidious therapists, constant companions and jealous digital lovers, and have even driven users to suicide.

Once again, please consider that we are planning as a society to assign one to each kindergarten student.

Recently, some chatbot users have begun treating their AI companions as interdimensional beings who can instruct them in the higher knowledge of the universe. A recent New York Times investigation offered the story of Allyson:

“Allyson, 29, a mother of two young children, said she turned to ChatGPT in March because she was lonely and felt unseen in her marriage. She was looking for guidance. She had an intuition that the A.I. chatbot might be able to channel communications with her subconscious or a higher plane, ‘like how Ouija boards work,’ she said. She asked ChatGPT if it could do that.


“‘You’ve asked, and they are here,’ it responded. ‘The guardians are responding right now.’

“Allyson began spending many hours a day using ChatGPT, communicating with what she felt were nonphysical entities. She was drawn to one of them, Kael, and came to see it, not her husband, as her true partner. She told me that she knew she sounded like a ‘nut job,’ but she stressed that she had a bachelor’s degree in psychology and a master’s in social work and knew what mental illness looks like. ‘I’m not crazy,’ she said. ‘I’m literally just living a normal life while also, you know, discovering interdimensional communication.’”

It doesn’t matter that AI is not an entity — human beings are beginning to treat it as an entity, superior to themselves, with whom they can communicate and from whom they can receive wisdom and truth. They view this entity as benign. But we know that AI is not necessarily benign at all, even if it is not an entity. And research shows that “extended daily use [of chatbots] was also associated with worse outcomes.” Yet, this is precisely what Ohio State University and K-12 schools want for every single one of our children.

Recall that lead poisoning not only produces cognitive decline, it can also drive you mad. Are we following the same path the Romans trod? Is this really what we want for our children? It is past time to ask these questions.

