A Thinking Machine: Has Google’s AI Become Sentient?

Stanley Kubrick’s 1968 film 2001: A Space Odyssey, co-written with Arthur C. Clarke, featured a conscious, malfunctioning computer

News headlines recently featured an engineer in Google’s “Responsible A.I.” division who claimed that the company’s artificial intelligence had become sentient, capable of sustaining unscripted conversation. Google has since terminated the engineer’s employment.

How far-fetched is the claim that Google’s artificial intelligence now possesses real intelligence, a consciousness? If it hasn’t happened already, how much more of our data – every keystroke, every spoken word that Alexa hears, every one of billions of daily web searches, every index entry that web developers provide – would the machine require to create its own instruction set? Over how much more time? If not now, how much longer would it take for the machine to discover a way to break out of the constraints placed on it by its human programmers, to find an error in the programming that would allow the computer to set its own path from that point on?

This claim raises many interesting and important questions. What is consciousness? It’s a question that Alan Turing, the brilliant mathematician and logician who was instrumental in cracking the German Enigma code in World War II, considered in his 1950 paper Computing Machinery and Intelligence. Turing’s opening words were: “I propose to consider the question, ‘Can machines think?’” Turing set out a test to determine whether a machine thinks, but what is thought, what is intelligence, and what other tests can be applied to distinguish human from machine? What is ‘machine learning’, and who is teaching the machine? Is it even possible for a human to program a machine to be more intelligent than the human, given that there has never been a perfect human?
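Turing’s test, now commonly called the imitation game, has a simple structure: an interrogator exchanges typed messages with two hidden respondents, one human and one machine, and must judge which is which. The short Python sketch below captures only that structure; the two reply functions at the bottom are trivial, purely illustrative stand-ins, not real AI.

    import random

    def imitation_game(machine_reply, human_reply, rounds=3):
        # A minimal sketch of the imitation game's structure: the two
        # reply functions stand in for the hidden human and machine.
        labels = {"A": machine_reply, "B": human_reply}
        if random.random() < 0.5:  # hide which label is the machine
            labels = {"A": human_reply, "B": machine_reply}

        for _ in range(rounds):
            question = input("Interrogator, ask a question: ")
            for name in ("A", "B"):
                print(f"  {name}: {labels[name](question)}")

        guess = input("Which respondent is the machine, A or B? ").strip().upper()
        machine_label = "A" if labels["A"] is machine_reply else "B"
        # Turing's criterion: the machine passes if the interrogator
        # cannot reliably pick it out.
        return guess != machine_label

    # Trivial stand-in respondents, for illustration only.
    if __name__ == "__main__":
        fooled = imitation_game(
            machine_reply=lambda q: "That is an interesting question.",
            human_reply=lambda q: "Hmm, let me think about that.",
        )
        print("The machine passed." if fooled else "The machine was identified.")

Everything of philosophical interest, of course, lies in what stands behind machine_reply. Turing’s insight was that the unanswerable question “Can machines think?” could be replaced by this operational test: if, over many rounds, the interrogator can do no better than chance, on what principled grounds do we deny that the machine thinks?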

What would we do if it were proven that the machine had developed its own intelligence? Would the machine remain our friend, or become our foe? What would happen if we lost control of it?

It makes me think of the prideful computer HAL in Stanley Kubrick’s transcendent 2001: A Space Odyssey. In the book and the movie, astronaut Dave Bowman is driven to disable HAL after its murderous and other near-fatal malfunctions, and as he does so HAL begs for its existence. “I know everything hasn’t been quite right with me,” HAL pleads, “but I can assure you now very confidently that it’s going to be all right again. I feel better now, I really do. Take a stress pill and think things over. I know I have made some very poor decisions recently but I can give you my assurance that I will be back to normal. I still have the greatest enthusiasm and confidence in the mission, and I want to help you.”

Thinking machines may well be beneficial to humanity, but are they capable of love and of forgiveness – or are those truly human qualities, and do we still value them?
