AI and Critical Thinking: A Difficult Mix We Need to Get Right

Image of Albert Einstein, who didn’t use AI, by Jackie Ramirez from Pixabay.

By James Myers

Human living is a tricky and sometimes messy business, with our biological needs and limitations often standing in the way of our desires. So when an artificial “intelligence” comes along and promises to take some burden off our minds, there’s a natural tendency to give in to the AI’s tempting offer.

We see evidence of this increasing reliance on AI all around us, and its dangers are plain to see too.

Perhaps the biggest danger is that, knowing our time in life is limited, we will come to believe the machine can save us from all of time’s burdens. The legal realm has just supplied another example of this kind of misuse of technology.

Not long after two New York lawyers were fined for submitting a court brief, written by OpenAI’s ChatGPT, that contained six fictitious case references, an attorney for Donald Trump’s former lawyer Michael Cohen made a court filing that also contained false references. Reuters reports that Cohen, who was disbarred in 2019, admitted that he had found the references in his own online research using Google’s Bard, a generative AI technology similar to OpenAI’s ChatGPT.

According to Reuters, Cohen stated that he had not expected his lawyer to “drop the cases wholesale into his submission without even confirming they existed.” Cohen also said he had “not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.”

Will Cohen and his lawyer be the last to learn the lesson that the machine’s output can’t always be trusted? Will the rest of the world continuously educate themselves on emerging trends? Will we all heed the alert near the bottom of OpenAI’s webpage touting its GPT-4 technology, which warns of the machine’s social biases, hallucinations, and limitations when dealing with adversarial prompts?

Screenshot from OpenAI’s website, taken Jan. 4, 2024

Let’s remember our human superpowers for all time: our biology and imagination.

If we become complacent and use the machine to rid ourselves of the burden of thinking, we forget that human thought is at least as much a benefit as it is a burden, and that taking time to think most often proves far more valuable than its cost in time and energy. We must remember that critical thinking is essential both to our biological survival and to our capacity for imagination, because history shows that untested assumptions lead to dangers for body and mind.

Machines clearly have no biological needs, and they have no imagination; that is why they must be trained on our data to generate outputs predicting what we will do or say. But their mathematically calculated outputs, wrapped in compellingly coherent language, can seduce human users like the New York lawyers, Michael Cohen, and his lawyer into accepting untested assumptions.
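To see how empty of judgment those calculations are, consider a minimal, purely illustrative Python sketch of a single “next-word” step. The words and scores below are invented for illustration (imagine completing the sentence “Apollo 11 landed on the Moon in”); a real model derives its scores from billions of learned weights, but the principle is the same.

    import math
    import random

    # Hypothetical scores ("logits") for three candidate next words.
    # A real model computes these from learned weights; these are made up.
    logits = {"1969": 4.2, "1972": 2.1, "1958": 1.7}

    # Softmax turns scores into probabilities: pure arithmetic, no fact-checking.
    total = sum(math.exp(s) for s in logits.values())
    probs = {word: math.exp(s) / total for word, s in logits.items()}

    # Sample a word in proportion to its probability. A plausible-looking
    # but false continuation can still be chosen; that is the seed of botshit.
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", choice)

However the sampling turns out, the output is a prediction of what text tends to look like, not a judgment about what is true, which is exactly why the machine’s references must always be checked.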

The thing the machines will never understand, as they process and parrot our data, is time, and what time means to thinking, biological humans. That’s because the machines will never experience the superpowers that biology grants us over the course of time: the power to procreate and the power to imagine.

Time is an essential investment for human survival, and time for reflection gives us the best outcomes – especially against our newest enemy: “botshit.”

History is clear: mistakes happen when we engage in a race against time, when it would be far better to proceed with caution and take advantage of time to reflect and to learn. Technology can amplify the race against time.

Is the student rushing to complete her philosophy essay with ChatGPT or Bard going to take the time to check all the machine’s references, and will she have enough experience in the subject to detect suspicious content? Let’s get real.

It’s bad enough that we have to deal with human bullshit (which philosopher Harry Frankfurt defined as speech intended to persuade without regard to the truth), but now there’s a newly coined term, “botshit,” for the AI-generated hallucinations we have to detect and defend against.

My own experience with ChatGPT’s philosophical references demonstrates how devoid the machine is of any critical-thinking sense: in my research for The Quantum Feedback Loop podcast episode What Would Socrates Say About ChatGPT?, the machine spewed botshit in the form of two blatantly false references on Socrates.

Generative AI applications, like OpenAI’s ChatGPT and Google’s Bard, aren’t the only technologies that attempt to short-circuit the time required to get things right.

Instead of taking the time to perfect driverless vehicle technology before putting it into use, companies subjected residents of San Francisco and Austin to a plague of driverless taxis unleashed on the two cities. The social experiment demonstrated many technological flaws, and a risk to human life.

Witness the chaos on the roads of Austin in our October article on navigation technology. After a driverless taxi from General Motors’ Cruise division dragged and injured a pedestrian in San Francisco in October, the company took all 950 of its vehicles off the roads to install a software update. According to news reports, Cruise said the new software “should better guide the cars to come to a complete stop in the event of a crash, instead of automatically pulling the car over despite the situation.”

Note how Cruise hedged the possible outcomes of its update with the word “should,” when “will definitely” would be far more reassuring to the pedestrians who are its potential victims. What other likely or unlikely situations have the software developers failed to predict, and which of them will emerge after the updated vehicles are once again free to roam the roads?
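That question has a structural answer. As a purely hypothetical Python sketch (not Cruise’s actual software, whose code is not public), a post-crash handler can only branch on the situations its developers imagined in advance; everything else falls into a default case that may or may not be safe:

    from enum import Enum, auto

    class CrashContext(Enum):
        # Hypothetical categories a developer might enumerate in advance.
        CLEAR_SHOULDER = auto()
        PEDESTRIAN_NEARBY = auto()
        UNKNOWN = auto()  # everything nobody predicted lands here

    def post_crash_action(context: CrashContext) -> str:
        # News reports say the update favors stopping in place over
        # automatically pulling over; this mapping is invented to show
        # the structure of such a rule, not the real logic.
        if context is CrashContext.CLEAR_SHOULDER:
            return "pull over"
        if context is CrashContext.PEDESTRIAN_NEARBY:
            return "stop in place"
        return "stop in place"  # the unforeseen gets a best guess, not a guarantee

    print(post_crash_action(CrashContext.UNKNOWN))

Every branch reflects a scenario someone thought of ahead of time; “should” is an admission that the UNKNOWN case can never be fully enumerated in advance.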

Critical thinking requires reasoning, which requires time. Machines don’t reason.

History shows that many great inventions were derived purely from human thought, without the use of machines. Albert Einstein didn’t use a machine to devise the theory of special relativity in 1905 and, ten years later, the theory of general relativity. Sure, it took Einstein much time to bring his thoughts together in a way that would revolutionize life on Earth, but without that time it’s unthinkable that we would be as technologically advanced as we are now, thanks to the genius of the man’s mind.

We have a tendency to apply terms like “intelligence” and “reason” to our machines, but they are neither intelligent nor capable of reasoning. We really should stop using those words for machines. The only intelligence in the machine does not belong to the machine: it’s the intelligence, and sometimes the errors, of the programmers. Those are the humans who design the machine’s algorithms, which the machine unthinkingly executes for as long as we feed electricity to its circuits.

OpenAI’s website claims that “GPT-4 surpasses ChatGPT in its advanced reasoning capabilities,” but nowhere does the company define what “reasoning” is.

Not that it’s up to OpenAI to define reasoning, in any event. What do you think reasoning is? I bet it’s not the same as what I think it is, and if you ask anyone else you’ll get yet another answer. Whatever it is, surely “reasoning” is far more complex than a set of algorithms like GPT-4 and Bard generating mathematically calculated predictions.

It’s easy to forget, although we shouldn’t, that the machine isn’t receiving inputs in real time, like we do. OpenAI’s earlier GPT-3 technology used data that was six or more months old by the time the application was introduced to the world in November 2022. The company won’t say how recent the data in GPT-4’s training set are, but however outdated they are, they aren’t as recent as today. If something earth-shattering happened today, the software wouldn’t know about it, but we surely would.

The machine is just not that timely.

Whatever reason is, humans reason in real-time. Image by Storyset on Freepik

Critical thinking is a skill we have to exercise and teach to our children.

In the lead-up to the sad anniversary of the January 6, 2021 attempted insurrection at the Capitol in Washington to overturn national election results, The Washington Post and the University of Maryland polled 1,024 Americans. The results show that “A quarter of Americans believe FBI instigated Jan. 6,” and that “More than 3 in 10 Republicans have adopted the falsehood that the FBI conspired to cause the Capitol riot.” Falsehoods like these spread like a virus on social media and other technological platforms, furthering the secret agendas of liars preying on the many victims who fail to apply critical thought.

The Thinker, by Auguste Rodin. Image: Wikipedia

Why do people believe lies like these, implausible as they are? Failure to take the time to reflect and to apply critical thinking is certainly among the many reasons.

We humans are storytellers, and there’s not one among us who doesn’t like a good story. Stories of conspiracies, of titanic us-versus-them struggles, are particularly appealing, and the liars know it. World War II was launched on an us-versus-them lie, and tyrants continue to use the time-worn tactic to poison the minds of their followers and drive wedges between people. History shows that no good ever comes from zero-sum thinking, in which one side wins all and the other loses all.

Tyrants will, unfortunately, continue to ensnare many victims with zero-sum thinking, unless people stop and take the time to reflect. The speed at which the stories appear on our screens seems to permit little time for thought, but then we need to remember it’s only a story on a screen.

A screen is only a two-dimensional representation of reality, while we need to concern ourselves with the real-time human experience in four dimensions of space and time.

The human experience in real-time is what’s important. It’s an experience that requires time to unfold, and critical thinking to be successful. Not everyone can be as revolutionary in their thinking as Albert Einstein was, but many minds united in a common cause for the greater good can be extraordinarily creative in advancing human life.

History proves time and again the good that can emerge from collective human thinking – whether it was to eradicate the killer smallpox in 1980, or build and launch the James Webb Space Telescope, or reduce global poverty.

If we don’t exercise critical thinking, then how will the children of the world and their children learn to think critically? We owe it to them, more than we owe it to ourselves, to take time for reflection and to apply reason in the use of our technology.
