The fascinating and unknown complexity of the human mind. Image: Mohamed Hassan.
Would we be an intelligent species had we not developed complex language(s)?
Language gives meaning to our everyday life: we use it to communicate, share ideas, describe concepts, strengthen emotional bonds and, overall, to organize and maintain complex societies. An artificial intelligence that gains the capacity for language would arguably develop a better comprehension of the ‘real’ world of humans. This is the notion behind the development of GPT-3.
Created by OpenAI, a research laboratory based in San Francisco, GPT-3 is a language model: an algorithm that uses deep learning to string words and phrases together and imitate human-written text with uncanny realism. This type of technology has many applications, including powering chatbots, writing and summarizing stories, generating code, and more. Among educators, there is mounting concern that students will use GPT-3 to produce passable essays with original, coherent sentences that escape plagiarism-detection software.
“Deep learning is a subset of machine learning, where artificial neural networks—algorithms modeled to work like the human brain—learn from large amounts of data” (Oracle). Deep learning is the primary technology behind self-driving cars, speech and image recognition, and many other applications.
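To make those two ideas a little more concrete, here is a minimal, hypothetical sketch in Python (using only NumPy and a toy corpus invented for this example, nothing taken from GPT-3 itself) of next-word prediction, the task at the heart of language models: a small table of weights is adjusted from example text, then used to sample new words. GPT-3 does the same thing at vastly greater scale, stacking many layers of such weights into a deep neural network with billions of parameters.

```python
# Toy next-word predictor: a caricature of how a language model learns from
# text and then generates text. Purely illustrative; not GPT-3's architecture.
import numpy as np

corpus = ("language gives meaning to our life . "
          "we use language to communicate ideas .").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Training pairs: (current word, next word) taken from the toy corpus.
pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # weights the model learns from data

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# "Learning from data": nudge W so the observed next word becomes more probable.
for _ in range(500):
    for x, y in pairs:
        p = softmax(W[x])     # predicted distribution over the next word
        grad = p.copy()
        grad[y] -= 1.0        # gradient of the cross-entropy loss w.r.t. the logits
        W[x] -= 0.1 * grad    # gradient-descent update

# Generate text by repeatedly sampling the predicted next word.
word = idx["language"]
output = ["language"]
for _ in range(8):
    word = rng.choice(V, p=softmax(W[word]))
    output.append(vocab[word])
print(" ".join(output))
```

Running this prints a short string of words that loosely echoes the training sentences. The point is not the quality of the output but the mechanism: predict the next word, sample it, and repeat.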
A group of researchers recently ran an interesting experiment. PhD student Almira Thunström, from the Institute of Neuroscience and Physiology at the University of Gothenburg, Sweden, recounts how they asked GPT-3 to write a scientific article about itself – and how the AI, in response, produced a paper in just two hours. The article was submitted to a peer-reviewed journal and a preprint version is available online.
GPT-3 technology raises ethical and legal issues
Thunström says she wonders how her efforts to complete the paper and submit it to a peer-reviewed journal might generate unprecedented ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship: “First authorship is still one of the most coveted items in academia (…). It all comes down to how we will value AI in the future: as a partner or as a tool. It may seem like a simple thing to answer now, but in a few years, who knows what dilemmas this technology will inspire? (…) We just hope we didn’t open a Pandora’s box”.
In the conclusion of its own article, GPT-3 wrote: “Overall, we believe that the benefits of letting GPT-3 write about itself outweigh the risks. However, we recommend that any such writing be closely monitored by researchers in order to mitigate any potential negative consequences”. There may be some wisdom, from the machine, in that last sentence.
Advances in artificial intelligence pose many ethical questions. Image: Mohamed Hassan.
Humans are becoming more aware of the perils that come with this kind of power.
For instance, the game AI Dungeon, launched in 2019, used GPT-3 to generate personalized and unpredictable Dungeons & Dragons-like role-play adventures for online players. However, the experiment took a dark turn when users began tapping GPT-3 to produce abusive, toxic, and dangerous stories. Thus, one of the most important questions debated among AI experts today is safety. After all, how frightened should we be of AI?
Some scientists fear that an AI smarter than humans, often referred to as artificial general intelligence, could be catastrophic. A survey of 327 artificial-intelligence researchers found that 36% of the respondents believe AI decisions could cause a catastrophe on the scale of nuclear war within this century.
To quote vocal technology pioneer Elon Musk at the South by Southwest technology conference in 2018, nearly five years before he bought the social media platform Twitter: the “danger of AI is much greater than the danger of nuclear warheads by a lot. And nobody would suggest that we allow anyone to just build nuclear warheads if they want. That would be insane. And mark my words, AI is far more dangerous than nukes, far. So why do we have no regulatory oversight? This is insane. I’m not really all that worried about the short-term stuff, (…) like, narrow AI is not a species-level risk. It will result in dislocation, in lost jobs, (…) and better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital super intelligence is. So it’s all about laying the groundwork to make sure that, if humanity collectively decides that creating digital super intelligence is the right move, then we should do so very, very carefully.” At the same event, Musk advised people to delete their social media accounts and stated, “I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me.”
Elon Musk in 2018: “So why do we have no regulatory oversight? This is insane.”
A paper published by Dr. Nobumasa Akiyama, from Hitotsubashi University in Japan, warns of the perils to international security. He is concerned about whether the stability of nuclear deterrence, an integral aspect of the current international security architecture, could be strengthened or weakened by the use of AI: “AI might contribute toward reinforcing the rationality of decision-making (…), preventing an accidental launch or unintended escalation. Conversely, judgments about what does or does not suit the “national interest” are not well suited to AI (…). A purely logical reasoning process based on the wrong values could have disastrous consequences, which would clearly be the case if an AI-based machine were allowed to make the launch decision (…), but grave problems could similarly arise if a human actor relied too heavily on AI input”.
Dr. Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, believes that machines more intelligent than humans will probably be developed within this century, and that we need international treaties to regulate the development of this technology so that humans remain in control. Otherwise, super-intelligent AI might pose great dangers, given how complicated real-world settings are.
For instance, if asked to cure cancer as quickly as possible, an AI “would probably find ways of inducing tumors in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs. And that’s because that’s the solution to the objective we gave it; we just forgot to specify that you can’t use humans as guinea pigs and you can’t use up the whole GDP of the world to run your experiments and you can’t do this and you can’t do that”, explains Dr. Russell.
In Russell’s view, the future of AI lies in developing machines that know the true objective is uncertain, and that they must always check in with humans.
But leaving room for any and all uncertainties can be tricky since different people have different, sometimes conflicting, and often transient preferences. Thus, he also calls for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias.
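As a very rough illustration of that idea, here is a hypothetical sketch, inspired by but not taken from Russell’s actual proposals: an agent keeps explicit probabilities over candidate readings of its objective and refuses to act autonomously when no single reading is clearly dominant, checking in with the human instead. The names, numbers, and threshold below are invented for the example.

```python
# Hypothetical sketch of an agent that treats its objective as uncertain and
# defers to a human when that uncertainty is too high. Illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    description: str
    probability: float  # the agent's belief that this is the true objective

def choose_action(candidates: List[Candidate], confidence_threshold: float = 0.9) -> str:
    best = max(candidates, key=lambda c: c.probability)
    if best.probability >= confidence_threshold:
        return f"Act on: {best.description}"
    # Too much uncertainty about what the human really wants: check in first.
    return "Defer: ask the human to clarify the objective"

# Example: "cure cancer as quickly as possible" admits very different readings.
beliefs = [
    Candidate("run only ethical, consented trials", 0.55),
    Candidate("run experiments on people at any cost", 0.45),
]
print(choose_action(beliefs))  # -> Defer: ask the human to clarify the objective
```

A system built this way would notice that both readings of its instructions are still plausible and would stop to ask rather than optimizing blindly, which is exactly the failure mode in the cancer example above.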
It is important to note that, although GPT-3 was a big step forward, it has many limitations. The most impressive results were handpicked; the model requires an incredible amount of energy; training it is extremely expensive; and, finally, GPT-3 absorbs much of the disinformation and prejudice it finds online. Researchers have some challenges to tackle for future versions!
Going forward, some scientists believe it is a matter of going bigger, and that scaling up current technology will lead to human-level language abilities – and ultimately true machine intelligence. However, others argue that AI must grow smarter, not just bigger, for we are reaching a point of diminishing returns.
Next-generation language models will likely integrate other abilities, such as image recognition, with the objective of using language to understand images, and images to understand language. A new Google AI is already starting to address some of the issues that have arisen with GPT-3. Named the Switch Transformer, this AI was developed with the goal of doing more with less processing power, to keep computational costs under control. But the questions remain. Will we be able to keep AI under control? At what cost? And for how long?
Sometimes too fast is simply too dangerous; going too fast derails. This, at least, is my humble opinion.