Four Questions for OpenAI’s Mission to Benefit All of Humanity

We can probably all agree from experience that, even with the best of intentions, unintended consequences are sometimes impossible to avoid.

Led by CEO Sam Altman, OpenAI is the hybrid for-profit and non-profit company that developed ChatGPT and most recently launched GPT-4 for public use.

GPT-4 is an AI model that responds to both text and image prompts, generating answers from the vast body of information on which it was trained. Unlike a search engine, it interacts with the user in language that emulates a human’s, and when prompted for its reasoning it can attempt to justify its responses.
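
For readers who want to see that interaction in practice, here is a minimal sketch using OpenAI’s Python client as it existed at GPT-4’s launch (the openai package). The prompts, the follow-up question, and the placeholder API key are illustrative only; the call assumes access to the gpt-4 model and a key issued by OpenAI.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; assumes a key issued by OpenAI

# A plain-language prompt, sent as a chat message.
messages = [
    {"role": "user", "content": "In two sentences, why does ice float on water?"},
]
reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# Ask the model to justify the answer it just gave.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "What reasoning led you to that answer?"})
follow_up = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(follow_up.choices[0].message.content)
```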

Some believe that a “large language model” (LLM) such as GPT-4, trained on publicly available data as well as data that OpenAI licenses from sources that may include data brokers, has the potential to overtake Google for web searches.  Google used its predominance as a web search provider to generate $283 billion in revenue in 2022, primarily from advertising.

The eventual uses of a technology like GPT-4 are not yet known, but OpenAI acknowledges its risks.

In the “Safety and Alignment” section of its website, OpenAI notes, “We incorporated more human feedback, including feedback submitted by ChatGPT users, to improve GPT-4’s behavior. We also worked with over 50 experts for early feedback in domains including AI safety and security.”  Errors are possible, but the company states, “GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

Mission of OpenAI

OpenAI’s stated mission, from the company’s website, is to ensure that artificial general intelligence benefits all of humanity.

While OpenAI’s mission to benefit “all of humanity” is positive, many are questioning the trustworthiness and reliability of GPT-4’s output, particularly for inexperienced users or those unfamiliar with the subject matter of the response generated by the AI.

Sam Altman acknowledges the issues as well as the need for continuous improvement.  He wrote in a tweet that GPT-4 “is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”  In an interview with ABC News, Altman referenced the technology’s potential to reshape society and, as a warning, said, “I’m particularly worried that these models could be used for large-scale disinformation.” He added, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”

In a further caution on its website, OpenAI states, “GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models.”

Protecting Humanity and OpenAI’s Mission

OpenAI operates as a for-profit limited partnership (LP) that is governed by the directors of a non-profit company.  Originally, the venture was operated entirely on a not-for-profit basis, but as Altman explains, it became necessary to create a for-profit operation to reward employees and attract needed investment capital.  Investors might receive a return of up to 100 times their investment. As OpenAI’s website explains with respect to the future profits of the LP, “economic returns for investors and employees are capped (with the cap negotiated in advance on a per-limited partner basis). Any excess returns go to OpenAI Nonprofit. Our goal is to ensure that most of the value (monetary or otherwise) we create if successful benefits everyone, so we think this is an important first step. Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.”

It’s worth noting that Microsoft has reportedly already invested $1 billion in OpenAI and is said to be committing a further $10 billion. If that combined $11 billion were subject to the full 100-times cap, Microsoft’s return could reach roughly $1.1 trillion, which would mean OpenAI must generate more than a trillion dollars in profit before any excess becomes available to the non-profit. That might be difficult to imagine, at least given the present risks.
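
To make the capped-return structure concrete, here is a minimal sketch in Python of the simplest reading of OpenAI’s description: an investor’s share of profit is capped at a fixed multiple of the amount invested, and anything above the cap flows to the non-profit. The function and all of the figures below are illustrative assumptions on my part; the actual terms are negotiated per limited partner and have not been made public.

```python
def capped_return(invested, cap_multiple, profit_available):
    """Split available profit between a capped investor and the non-profit.

    invested         -- amount the limited partner put in (dollars)
    cap_multiple     -- maximum return as a multiple of the investment (e.g. 100)
    profit_available -- profit attributable to this investor before the cap
    """
    cap = invested * cap_multiple                        # the most the investor can ever receive
    investor_share = min(profit_available, cap)          # investor is paid only up to the cap
    nonprofit_share = profit_available - investor_share  # any excess flows to OpenAI Nonprofit
    return investor_share, nonprofit_share


# Illustrative figures only: ~$11 billion invested, the first-round 100x cap,
# and a hypothetical $1.5 trillion of attributable profit.
investor, nonprofit = capped_return(11e9, 100, 1.5e12)
print(f"Investor: ${investor / 1e12:.2f} trillion, non-profit: ${nonprofit / 1e12:.2f} trillion")
# Prints: Investor: $1.10 trillion, non-profit: $0.40 trillion
```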

OpenAI’s mission seems born from noble intent, but I have four questions about its ability to resist the powerful impulses of an economic system whose goal is to capitalize and commercialize perceived value:

  1. Who is smart enough to define what is “smart” and what is “not smart”?
  2. How is it that a human could create something smarter than the human – would that mean that the human is outsmarting the human?
  3. Who determines what “benefits all of humanity” and what does not?
  4. Who is smart enough to gauge whether the outcomes of OpenAI’s mission benefit every human, and whether a benefit might be either temporary or permanent in its extent? After all, we have all experienced temporary benefits that turn out to be damaging in the long run.

Commercial considerations are already driving some of OpenAI’s actions.  The company has not disclosed the nature and scope of the data on which GPT-4 was trained.  In its technical report on GPT-4, OpenAI states, “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

The report further commits the company to a future audit of its technology and “to make further technical details available to additional third parties who can advise us on how to weigh the competitive and safety considerations above against the scientific value of further transparency.”

The timing and scope of any planned future audits are not disclosed.

The lure of profit can threaten the best of intentions

GPT-4 technology is already being commercialized.  Bloomberg reported that Microsoft has pledged a further $10 billion investment in OpenAI, in addition to the $1 billion it invested in 2019 and 2021, and that “Microsoft is competing with Alphabet Inc., Amazon.com Inc. and Meta Platforms Inc. to dominate the fast-growing technology that generates text, images and other media in response to a short prompt.”

Surely it is in the interest of all of humanity, as well as of investors, to ensure that a benefit-generating technology delivers its benefits not only in the present but without limit into the future.  Delivering that future benefit will require significant cooperation today, since none of us has a crystal ball to see into time, and since we are the ones building that future together.

I am sure readers of The Quantum Record have views on this technology and suggestions to keep it safe, as do so many among my LinkedIn contacts for whom ChatGPT and GPT-4 are a dominant concern. Will the technology be prescriptive, or will it be holistic?  The Quantum Record explored this fundamental distinction for technology last October from the perspective of Dr. Ursula Franklin, the physicist, humanist, and Holocaust survivor who advocated for technology that responds and adapts to the needs of human users for peaceful purposes.

If you would like to discuss this technology, drop me a line at jmyers@thequantumrecord.com. Together, let’s explore its potential, and figure out a way to control the risks.

