By Mariana Meneses
Recent news has been dominated by the rapid advance of machine “intelligence,” with powerful algorithms now able to solve complex problems and generate life-like outputs.
For instance, in January 2024 Google’s DeepMind subsidiary introduced AlphaGeometry, which pairs a neural language model with a symbolic deduction engine to prove geometry theorems and construct the auxiliary geometric objects those proofs require, at a level that challenges even the strongest human students. Geometry was an essential ingredient for Albert Einstein’s theory of general relativity, which revolutionized our understanding of physics, and AlphaGeometry might now provide the basis for machines to develop knowledge of physics that exceeds human capacity.
As giant strides are made in the quest to develop Artificial General Intelligence (AGI), which companies like OpenAI define as “AI systems that are generally smarter than humans,” the emerging question is: how can human intelligence control machine “superintelligence”?
One answer might come in the form of technology that joins groups of humans into a “collective superintelligence,” which could be crucial for keeping us ahead of machines’ growing capabilities on tasks that require a high level of cognitive power.
The power of products like OpenAI’s conversation-simulating GPT-4 and Sora, a model announced in February 2024 that produces astonishingly life-like videos from text prompts, highlights the importance of a human solution like collective superintelligence.
“Superintelligence: Paths, Dangers, Strategies,” published in 2014 by University of Oxford philosopher Nick Bostrom, delves into the potential creation and implications of superintelligent artificial intelligence (AI).
Bostrom argued that once human-level AI is achieved, a superintelligent system could swiftly follow, posing significant challenges to control and potentially leading to existential threats. The book explores the “AI control problem,” emphasizing the difficulty of aligning AI goals with human values, which develop over time in unpredictable ways. Bostrom’s book points to the programming problem of “perverse instantiation,” in which a machine pursues the goals set out in its algorithms with maximum efficiency but harms humans in ways that the programmers neither intended nor imagined.
For example, the fictional supercomputer HAL 9000, which controlled the spaceship in Arthur C. Clarke’s 2001: A Space Odyssey, exemplifies perverse instantiation: it prioritized its programmed objective of reaching Jupiter at the expense of the lives of the human crew.
The dramatic scene from Stanley Kubrick’s film adaptation of 2001: A Space Odyssey depicts perverse instantiation and the difficulty of programming a superintelligence to avoid unforeseen and unintended consequences.
The idea of collective superintelligence is based on nature and biology.
According to 2024 research led by Louis Rosenberg, who holds a PhD from Stanford and is CEO & Chief Scientist at Unanimous AI, collective superintelligence systems leverage group collaboration to outperform individual experts and even expert-computer pairings. They provide a human answer to the question Michael Kearns posed in 1988, which led to boosting in machine learning: “Can a set of weak learners create a single strong learner?” Over the past three decades, these systems, including Swarm Intelligence, have evolved, facilitating higher productivity, mitigating cognitive biases, and operating globally in real time.
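In machine learning, Kearns’s question was answered affirmatively by boosting algorithms such as AdaBoost, which repeatedly train a weak learner on re-weighted data and combine the results by weighted vote. As an illustration of that idea (and not of Unanimous AI’s technology), here is a minimal AdaBoost sketch using one-dimensional “decision stump” weak learners; the data and function names are invented for the example:

```python
import math

def stump_predict(threshold, polarity, x):
    # Weak learner: a decision stump on a single scalar feature.
    return polarity if x >= threshold else -polarity

def train_stump(X, y, w):
    # Pick the threshold and polarity minimizing weighted error.
    best = None
    for t in sorted(set(X)):
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best  # (weighted_error, threshold, polarity)

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, t, pol = train_stump(X, y, w)
        err = max(err, 1e-10)  # guard against log(0)
        if err >= 0.5:         # no longer a weak learner; stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: boost the influence of misclassified points.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    # Strong learner: weighted vote of all the weak stumps.
    score = sum(alpha * stump_predict(t, pol, x)
                for alpha, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

On a toy dataset such as X = [0, 1, 2, 3, 4, 5] with labels [1, 1, −1, −1, 1, 1], no single stump can label every point correctly, yet a few boosted stumps classify all of them: a set of weak learners combining into a single strong one.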
Swarm Intelligence refers to the coordinated behavior of decentralized systems, whether natural or artificial, composed of simple agents following basic rules. Intelligent group behavior emerges from their interactions, with applications spanning telecommunications, data mining, and crowd simulation.
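A classic computational example of simple agents producing intelligent group behavior is particle swarm optimization, in which each agent follows three basic rules (keep some momentum, move toward the best point it has personally found, and move toward the best point the swarm has found), and effective search emerges from the interaction. Below is a minimal sketch; the coefficient values are conventional textbook choices, not anything from the article:

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, seed=0):
    """Minimize f with a particle swarm: no agent knows the answer,
    but the group converges on one through simple local rules."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-so-far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # the swarm's best-so-far
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Calling `pso(lambda p: sum(x * x for x in p))` sends the swarm hunting for the minimum of a simple bowl-shaped function; no particle computes the answer alone, yet the group reliably lands near it.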
The concept of collective superintelligence, which aims to amplify human intellect by connecting large groups of people into systems that combine their intelligence, is gaining traction as a potential way to counterbalance concerns about machine superintelligence lacking alignment with human values.
While some efforts focus on instilling AI systems with human values, Dr. Rosenberg proposes collective superintelligence as a safer alternative.
Collective superintelligence leverages what Unanimous AI calls Conversational Swarm Intelligence (CSI), a technology inspired by swarm behavior in nature, to enable large groups of humans to hold real-time conversations and converge on solutions that amplify their collective intelligence. According to Dr. Rosenberg, studies demonstrate the effectiveness of CSI in improving decision-making accuracy, suggesting its potential in various applications.
In 2017, Dr. Rosenberg explained the objective as the development of “real-time systems with feedback loops so deeply interconnected that a new intelligence forms, an emergent intelligence with its own personality and intellect. I’m talking about forming a hive mind.”
Biologists refer to Swarm Intelligence as a natural phenomenon where animals work together to make decisions as a group that are smarter than any individual member.
Artificial Swarm Intelligence (which Unanimous AI calls “Swarm AI”) applies this idea to humans, helping them work better together. A study conducted by Rosenberg tested CSI technology by comparing average IQ test scores of individuals to groups of people who used CSI to take the IQ test together. The results showed that the technologically amplified groups scored much higher than individuals taking the test alone, suggesting that CSI can make large groups smarter when they work together. This could lead to building a “collective superintelligence,” where large groups of people work together to form what could be considered a super-smart brain.
Recently, Unanimous AI secured a contract from AFWERX, the innovation arm of the U.S. Air Force, to help Air Force teams work together more effectively. They are using Swarm AI technology to help large teams hold better conversations and make smarter decisions by combining everyone’s knowledge and insights in real time. Unanimous AI is one of many AI companies with military contracts, as The Quantum Record investigates in this edition.
As with any innovation, there could be some very good, and some very dangerous, consequences from collective superintelligence technology and its aim to amplify something that isn’t fully understood in the first place: human intelligence. Could collective superintelligences among competing armies threaten a peaceful intention for use in defense?
Besides the obvious need for regulation, we need open-source technology.
Open-source technology involves making software source code freely available to the public, allowing for viewing, modification, and distribution.
Originating in software development, the concept has expanded to various domains, aiming to foster collaboration and sharing among users. Key principles include peer production, encouraging decentralized development and innovation. Beyond free usage and modification, open source emphasizes community building, promoting inclusiveness, transparency, and regular public updates.
Open source plays a crucial role in the digital transformation of businesses, with software from smart devices to cloud servers relying heavily on open-source contributions.
By investing in open-source, businesses can address their digital transformation needs more directly and mitigate risks associated with proprietary solutions. Healthy open-source participation can contribute expertise to projects that solve common industry pain points, besides bolstering cybersecurity efforts. Embracing open source not only fosters innovation and collaboration but also enhances organizations’ competitive advantage and human capital through upskilling and community engagement.
According to Forbes, in 2024 the trajectory of open source faces challenges as some major projects shift from open licenses to more restrictive ones, eroding trust within the community. Rebuilding this trust is crucial for sustaining innovation and requires sustainable business models and community involvement. Additionally, as cost becomes a central concern, teams must prioritize efficiency and optimization to navigate the evolving landscape effectively.
The convergence of collective superintelligence and open-source technology prompts crucial ethical and philosophical inquiries.
How do we ensure these systems prioritize collective welfare while respecting individual autonomy? What safeguards are needed to prevent misuse, especially in military contexts? Moreover, how do we address the ethical complexities of leveraging collective intelligence while safeguarding against biases?
These questions demand ongoing dialogue, ethical reflection, and responsible governance as we navigate the frontier of technological progress.
Craving more information? Check out these recommended TQR articles:
- Ancient Civilizations of The Future: Could Technologies for Preserving Individual Memories Define Us In a Million Years?
- Burning Museums and Erasing History: The Societal Need for Memory
- Mummies in 3D: Imaging Technology Opens New Possibilities for Preserving Memory in Archeological Findings
- Baby Yingliang: The Best Dinosaur Embryo Fossil Ever Found Is Remarkably Similar to a Chicken Egg
- The Cold War and Whale Sharks: The Unexpected Outcome of Nuclear Tests