In an earlier TQR article, we discussed the philosophical implications of technology's increasing integration into our lives, arguing that the line between human and machine is blurring as technology advances.
We considered how we need to be more aware of the ways technology is changing us and our relationship with the world around us, and we examined some of the ethical issues raised by the integration of technology, including brain-machine interfaces (BMIs), into our lives.
In this article, we comment on recent developments at the intersection of AI and neuroscience, from algorithms that mimic conversations with ghosts, to mind-reading technologies and more. We also highlight warnings from experts, and how governments and the market are responding to new demands in the era of AI.
By Mariana Meneses
A recently released AI-powered chatbot allows you to simulate a conversation with Einstein.
Called the “Digital Einstein Experience”, the chatbot was developed by a startup called Aflorithmic in partnership with Uneeq, an AI company that develops autonomous digital human platforms designed for customer interactions.
Uneeq also partnered with the Hebrew University of Jerusalem to create the first-ever interactive digital version of Einstein – information that was given to The Quantum Record orally by Uneeq’s digital human, Sophie, on the company’s front page.
According to the project’s homepage, “Digital Einstein is a lovingly recreated version of his namesake, using cutting-edge CGI (Computer Generated Imagery) and animation to “clone” him down to the most subtle movements. As an AI, he can recount tales of his life and core works, give you a daily science quiz or tell you one of his favorite jokes. As always, he’s here to teach, inspire and engage.”
Video: Introducing Digital Einstein | A UneeQ AI companion
Another recent development has been the release of Sanctuary AI’s sixth-generation general-purpose robot, named Phoenix™, which is powered by Carbon™, an AI control system.
Phoenix is designed to possess “human-like general intelligence” and perform a wide range of tasks to address labor challenges across industries. With a maximum payload of 55 lbs. (25 kg) and “industry-leading robotic hands,” Phoenix aims to be a versatile and capable humanoid robot. Sanctuary AI says its emphasis on creating a “general-purpose” robot sets its product apart, focusing on the integration of physical work capabilities. The company has assembled a coalition of partners and investors and received substantial funding to support its mission of creating human-like intelligence in robots, although few would agree that artificial general intelligence is achievable in the near term.
“We designed Phoenix to be the most sensor-rich and physically capable humanoid ever built and to enable Carbon’s rapidly growing intelligence to perform the broadest set of work tasks possible,” said Geordie Rose, co-founder and CEO of Sanctuary AI. “We see a future where general-purpose robots are as ubiquitous as cars, helping people to do work that needs doing, in cases where there simply aren’t enough people to do that work.”
Oxford professor Michael Wooldridge, in a Big Think YouTube video, raises the question of whether conscious machines are even possible.
According to him, AI has the potential to bring to life machines that possess consciousness similar to humans, fulfilling the long-standing aspiration of creating living entities.
Wooldridge describes two broad approaches to building intelligent machines.
One, called symbolic AI, involves encoding human expertise into machines by translating knowledge and reasoning into computerized sentences. Machine learning is a different approach: instead of explicitly instructing machines, they are provided with examples from which they learn to generate predictions. For instance, in translating French to English, instead of encoding all the rules, we simply provide numerous examples of desired translations, which the machine can then apply by itself in the future to different word combinations. In the past 15 years, we have witnessed significant progress in machine learning due to the increased availability of computational power and vast amounts of data.
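The difference between the two approaches can be shown with a deliberately tiny sketch in Python. The vocabulary, example pairs, and function names below are invented for illustration; real translation systems learn statistical alignments from millions of sentence pairs, not a handful of words.

```python
# Toy contrast between symbolic AI and machine learning.
# All data here is made up for illustration.

# Symbolic approach: a human expert writes the rules explicitly.
SYMBOLIC_RULES = {"le": "the", "chat": "cat", "chien": "dog"}

def translate_symbolic(french_words):
    # The machine only knows what was hand-coded into it.
    return [SYMBOLIC_RULES[w] for w in french_words]

# Learning approach: the "rules" are induced from example pairs.
examples = [
    (["le", "chat"], ["the", "cat"]),
    (["un", "chien"], ["a", "dog"]),
]

def learn_word_map(pairs):
    # Extremely naive "training": assume the i-th French word maps
    # to the i-th English word in each aligned example.
    mapping = {}
    for french, english in pairs:
        for f, e in zip(french, english):
            mapping[f] = e
    return mapping

learned = learn_word_map(examples)
# The learned map now handles a combination never seen together
# in the training examples:
print([learned[w] for w in ["le", "chien"]])  # ['the', 'dog']
```

The point of the sketch is the one the article makes: the symbolic system can only ever do what was written into it, while the learned mapping generalizes, however crudely, to new word combinations drawn from its training examples.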
Present-day AI systems excel at narrow tasks, such as driving cars, with remarkable efficiency, although not flawlessly. They may outperform humans in specific areas, but they lack the broader range of capabilities that humans possess. The “grand dream of AI,” referred to as Artificial General Intelligence (AGI), aims to create machines that exhibit the same intellectual capabilities as humans. However, achieving AGI is a complex and ongoing challenge.
Wooldridge explains that recent AI research has focused on the social aspects of AI, such as cooperation, teamwork, and negotiation, to enable AI systems to work more effectively. However, while progress is being made in these areas, the concept of consciousness remains elusive. As discussed in an earlier TQR article, understanding how consciousness arises from the complex interactions of neural networks in the human brain is one of the great scientific mysteries.
The intersection of AI and neuroscience is booming.
For instance, according to Reuters, Elon Musk’s Neuralink has recently received approval from the US Food and Drug Administration (FDA) to conduct its first tests of implantable brain–computer interfaces (BCIs) on humans.
The company has the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than current devices. The implant aims to help patients with severe paralysis regain their ability to communicate by controlling external technologies using only neural signals. So far, the company has only tested the brain implants on animals, such as pigs and monkeys. The brain chips could allow people to complete tasks using only their minds.
Another important development is in the field of wearable brain imaging technologies.
According to Photonics Media, researchers at Washington University in St. Louis are developing an alternative to the current gold standard of brain imaging, which is functional magnetic resonance imaging (fMRI). While fMRI requires the patient to remain still in the MRI machine, researchers are developing a cap that can be worn while moving around normally that will generate, using the power of light, high-resolution images of the brain in action. The project is supported by the U.S. National Institutes of Health (NIH).
The understanding of neural representations from the brain is advancing rapidly.
A 2023 paper published in Nature reports a new method that combines behavioral and brain data to understand how the brain functions during adaptive actions. The approach could enable understanding of the correlation between actions and brain activity, reliable decoding of brain signals, testing of hypotheses, analysis of unlabeled data, insight into spatial representation in the brain, and accurate interpretation of natural videos from the visual cortex. Research like this opens new possibilities for advancing our knowledge of the brain’s functioning.
But will machines be able to listen to our thoughts?
Scientists working on non-invasive BCIs for language seem to hope so. In a 2023 paper published in Nature Neuroscience, scientists have developed a special technology that can understand and interpret language using non-invasive methods. The technology uses brain scans via fMRI to detect patterns in the brain that represent words and phrases. With this technology, the researchers were able to reconstruct complete sentences from these brain patterns, whether the words were heard, imagined, or read onscreen. In other words, they could translate thoughts into text. In doing so, they also discovered that different areas of the brain are involved in processing language.
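At its core, this kind of decoding compares observed brain activity against the activity each candidate word or phrase is predicted to evoke. The sketch below is only a schematic of that scoring idea, with invented toy vectors; the actual published method fits an encoding model on many hours of fMRI recordings and uses a language model to propose candidate continuations.

```python
# Schematic sketch of decoding-by-matching, with made-up data.
# Real fMRI decoders are vastly more sophisticated; this only
# illustrates the core idea: pick the candidate whose predicted
# brain response best matches the observed activity pattern.
import math

def cosine(a, b):
    # Cosine similarity between two activity vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "predicted brain responses" for candidate words.
predicted = {
    "dog":   [0.9, 0.1, 0.3],
    "music": [0.1, 0.8, 0.4],
    "run":   [0.2, 0.3, 0.9],
}

def decode(observed):
    # Return the candidate whose predicted response is most
    # similar to the observed activity pattern.
    return max(predicted, key=lambda w: cosine(predicted[w], observed))

print(decode([0.85, 0.15, 0.25]))  # closest to the "dog" pattern
```

A full decoder repeats this kind of comparison over sequences, letting a language model constrain which word sequences are plausible; the matching step shown here is only one ingredient.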
The mapping of the brain with AI is also helping human physical mobility, as this technology has just allowed a paralyzed man to walk.
According to the New York Times, a man who had been unable to walk for over a decade has regained control over his lower body. This is thanks to implants in his brain and spinal cord, creating a “digital bridge” that bypassed injured sections and enabled him to walk. The implants captured his thoughts and translated them into stimulation of the spinal cord, restoring voluntary movement. He was able to stand, walk, and even ascend a steep ramp with the assistance of a walker. Remarkably, he retained these abilities even when the implant was switched off and exhibited signs of neurological recovery.
And that’s not even close to all that AI is making possible for human biology, as another important development has been in gene research.
According to UC San Diego Today, AI is revolutionizing gene activation research, as scientists have harnessed machine learning to uncover rare and customized DNA sequences. Led by Professor James T. Kadonaga, the team used AI to identify “synthetic extreme” DNA sequences with specific functions in gene activation. “Synthetic extreme” DNA sequences refer to artificially created sequences of DNA that have been designed to have specific roles in controlling the activation of genes. These sequences are intended to have strong and precise effects on whether a gene is turned on or off.
By studying these sequences, scientists aim to understand how genes can be controlled and regulated in living organisms.
Through millions of DNA sequence comparisons between humans and fruit flies, they discovered rare sequences that are active in humans but not in fruit flies, and vice versa. These findings pave the way for practical applications in biotechnology and medicine, enabling the identification of synthetic DNA sequences tailored to activate genes selectively in different conditions or tissues. By leveraging AI, this groundbreaking approach opens new possibilities in designing customized DNA elements and signifies the early impact of AI technology in the field of biology.
One of the most widespread and rapid changes driven by AI is in the job market.
For instance, recent news reported IBM’s plans to replace 7,800 jobs with AI over time. According to Challenger, Gray & Christmas, AI caused around 4,000 job losses in the US in May. Although worries about AI-related job losses continue, experts also point to the possibility of job creation and economic growth in the expanding AI industry. The big question is: which one will happen more quickly?
Meanwhile, according to Reuters, five AI companies are responsible for the entire 2023 year-to-date growth of the S&P 500 stock market index. Despite concerns about AI stock bubbles, investors perceive AI as a game-changer with long-term earnings growth potential.
The economic importance of AI is also prompting governmental changes.
For instance, according to Technomancers, the Japanese government has declared that it will not enforce copyrights on data used in AI training, aiming to boost the nation’s progress in AI technology and propel economic growth. While some artists express concerns about the devaluation of their work, the government believes that the relaxed data laws will attract high-quality training data and help Japan achieve global AI dominance.
One important distinction to be made when discussing the latest advancements in AI is between narrow, general, and super-intelligent AI, as well as between prescriptive and holistic technologies.
Narrow AI is designed for specific tasks, while general AI aims to perform any intellectual task a human can, and super-intelligent AI would surpass human intelligence. Narrow AI, like Siri or Google Assistant, is commonly used today. General AI and super-intelligent AI remain largely theoretical and not yet achieved.
Prescriptive and holistic technologies are different approaches to technological applications.
Prescriptive technologies target specific outcomes with defined requirements that are not adaptable to evolving human needs, whereas human users retain the ability to adapt holistic technologies to future needs. Prescriptive technologies can be associated with narrow AI, following specific instructions, while general and super-intelligent AI have the potential to be designed for either prescriptive or holistic application.
The development of general AI could enhance holistic technologies, enabling advanced cognitive capabilities for comprehensive analysis and integrated solutions.
However, ethical concerns, potential misuse, and the need for safeguards arise with general AI, as it may lead to unintended consequences like loss of control, bias, privacy breaches, or existential risks.
Oxford Professor Nick Bostrom, like other experts, is particularly worried about achieving AI that can write its own code, and the possibility of a superintelligence overriding human civilization with its own value structures. AI pioneer Geoffrey Hinton, who recently quit his high-profile job at Google so he could speak freely about the risks of AI technology, said in an interview with The New York Times that he had been warning about the potential dangers of AI for years, but Google was reluctant to take his concerns seriously.
Many experts have warned that the technology could pose a serious threat to humanity if it is not developed and used responsibly.
Some of the risks associated with AI include job displacement, privacy violations, and the potential for autonomous weapons to be developed. For these and other reasons, it’s important for researchers and developers to work together to ensure that AI is developed in a way that is safe, beneficial for everyone, and sustainable.
The concerns are pressing.
For instance, with its digital humans project, Uneeq is offering its customers the ability to “Automate digital humans into your sales workflows and have face-to-face customer conversations in your sleep! When customers feel confident that they’re making the right decision, they buy more. This is what digital humans do well with friendly, warm conversations that build lasting customer relationships.”
A crucial ethical and potentially legal concern is whether companies utilizing these virtual beings in customer interactions will openly disclose their use, so that customers are aware when they are interacting with robots rather than humans.
OpenAI rival AI21 Labs released the results of a social experiment finding that a third of people already can’t tell a human from an AI.
The study found that people who are more familiar with AI are more likely to be able to distinguish between AI and humans, but they are also more likely to trust AI. A recent incident in which a lawyer used OpenAI’s ChatGPT to generate a court motion which, unknown to him, contained fictitious references to legal precedents on which his case relied, underscores how readily even a highly-trained professional can be fooled by the output of an AI.
If you’d like to test your own ability to distinguish, just go to Human or Not? A Social Turing Game: “Chat with someone for two minutes, and try to figure out if it was a fellow human or an AI bot. Think you can tell the difference?”.
Looking for more fascinating reads? Don’t forget to explore these TQR articles:
- Revolutionizing Human-AI Interaction: ChatGPT and NLP Technology
- Minding the Future: The State of Global AI Regulations
- Human Creativity in the Era of Generative AI