By James Myers
One of the most interesting things about ChatGPT is the kind of philosophical question people ask the machine. Questions for which no human has a definitive answer, like “What is the meaning of life?” and “Does God exist?”, are put to the machine, which cannot, of course, answer correctly, since no correct response exists in its database.
That’s the case at least for the time being, while the machine’s database is largely human-generated. No one knows what will happen as the machine’s database rapidly expands, adding its own outputs and actions that are based on its outputs. If the machine’s database is, for example, now 95% human-generated, what will happen when that proportion drops to 80%, then 50%, then possibly less? Reliability of output, already a significant question, may become even more doubtful.
In the debate over the use and future of ChatGPT that has erupted since its debut in November 2022, little has been said about the power of two words: “common sense.” Common sense is a power that humans have (or, in some cases, lack) that may be impossible to instill in a machine. What is common sense, and why is it so difficult to program?
The history behind these questions, and the shortcomings of the current data-driven machine “learning” model responsible for the rapid expansion of AI in everyday applications, are treated with remarkable wisdom, philosophy, and foresight in the compact 142 pages of computer scientist Hector Levesque’s Common Sense, the Turing Test, and the Quest for Real AI (The MIT Press, 2017).
Common sense, Levesque recalls, was a goal of early AI development, in the pre-big data phase that is referred to as Good Old Fashioned AI (GOFAI). Levesque quotes the 1958 paper Programs with Common Sense by AI pioneer John McCarthy (who coined the term “Artificial Intelligence”): “We shall therefore say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.” McCarthy stressed that to exercise common sense, a machine must understand the meaning of language, a characteristic not yet exhibited by ChatGPT, which is capable only of predicting words based on past patterns.
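McCarthy’s definition can be made concrete with a small sketch. The following toy program (my illustration, not McCarthy’s or Levesque’s, with invented facts and rules) deduces the immediate consequences of what it has been told by forward chaining, the style of symbolic reasoning that GOFAI pursued:

```python
# A minimal GOFAI-style forward-chaining sketch (illustrative only):
# the program deduces the immediate consequences of what it is told.

facts = {"it_is_snowing", "visibility_is_low"}

# Each rule maps a set of premises to a conclusion (all invented here).
rules = [
    ({"it_is_snowing"}, "roads_may_be_slippery"),
    ({"visibility_is_low"}, "other_drivers_may_not_see_me"),
    ({"roads_may_be_slippery", "other_drivers_may_not_see_me"}, "reduce_speed"),
]

# Repeatedly apply the rules until no new conclusions follow.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['it_is_snowing', 'other_drivers_may_not_see_me',
#  'reduce_speed', 'roads_may_be_slippery', 'visibility_is_low']
```

Even this trivial program exposes the hard part: someone must anticipate and write every rule, while human common sense supplies them, seemingly for free, in situations never encountered before.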
As humans, we exercise common sense (or, sometimes, fail to do so) with such frequency that we tend to forget its importance. Whatever it is, common sense requires a shared understanding of the connections between human cause and human effect over time.
I’ll share a personal example of common sense, of the type that Levesque refers to: driving during a blizzard when the road and sky are white with snow and visibility is reduced to perhaps a few metres. What does a human do when not trained for such conditions? I judged that the risks of attempting to pull over and wait out the storm were too great: I didn’t know exactly where the edge of the road was, and I worried that other vehicles would not see me. Knowing, however, how other drivers might logically act on the same concerns, I reduced my speed to a minimum, turned on the four-way hazard lights, periodically touched the brakes to be more visible to anyone following me, and tapped the horn to alert anyone who might be ahead.
What would a computer do, in a similar situation? Would its programmers have been able to anticipate the many unknown variables of the white-out, with the vehicle’s camera unable to differentiate between road and sky?
Let’s not sacrifice reliability of output
As a mortal human whose survival is increasingly dependent on technology, I share Levesque’s desire for reliability. I have no desire to be a victim of the kind of accident that now occurs, for instance, when self-driving vehicles experience technological failures. (Watch this example of a braking failure that resulted in an 8-vehicle pileup.)
As Levesque wrote, “When it comes to technology, I would much rather have a tool with reliable and predictable limitations than a more capable but unpredictable one.” He provides powerful examples of long-tail problems: situations that, individually, are too rare for probabilities to be usefully assigned, yet collectively occur all the time, even as today’s algorithms rely on probabilities.
We should remember that algorithms are programmed by humans, and no human is exempt from error – in fact, errors are an essential means by which humans learn. We should also remember that the beauty of human existence is the variety of context that each of us contributes to shared understanding and “common sense.” Whose context is represented in the machine, and can machines “learn” from their errors? These questions may be essential: in relying on data generated in the past, will machines multiply errors in the future?
To test all of this, I put the questions to ChatGPT itself: What is intelligence? How is machine intelligence different from human intelligence? What is common sense, and will machines ever be capable of it? The responses, from the predictive algorithm, revealed important limitations, particularly with the last question:
“Achieving true common sense understanding and reasoning remains a significant challenge for machines. While machines excel in specific domains and can process large amounts of data, they often struggle with grasping the contextual nuances, understanding implicit information, and making intuitive judgments that humans effortlessly accomplish based on their common sense.”
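Readers who want to repeat the experiment programmatically, rather than through the ChatGPT interface, could do so with a short script. The sketch below assumes OpenAI’s Python client and an API key; the model name is only an example.

```python
# A sketch of posing the same three questions via OpenAI's Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

questions = [
    "What is intelligence?",
    "How is machine intelligence different from human intelligence?",
    "What is common sense, and will machines ever be capable of it?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # an example model name, not an endorsement
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
```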
It’s a powerful reminder of the value of common sense, and of the good old fashioned technology that exists in our minds – the same minds that created the machine’s technology in the first place.