What Comes Next, After Profit-Driven OpenAI Cracks the Intelligence Code?

Recent Bloomberg opinion headline on OpenAI.

By James Myers

If I were ever to merit a few minutes of attention from Sam Altman, CEO of ChatGPT-maker OpenAI, I would ask him one question.

“Since part of the OpenAI mission statement is to create artificial general intelligence (AGI) that’s ‘generally smarter than humans’,” I would begin by way of preface to my question, “once you crack the intelligence code, how do you plan to fulfill the other part of the company’s mission, which is to ensure that AGI ‘benefits all of humanity’?”

If I were granted an opportunity for a follow-up question, it would be: “Sam, you have said (as you told Lex Fridman), ‘Like, we want society to have a huge degree of input here’ – so what are you doing to ensure that people like me, and the rest of society, have that huge input on AGI for our benefit, before your technology outsmarts us?”

 

Sam Altman on the Lex Fridman podcast.

 

So far, society seems to have had little to no input into OpenAI’s plans, no visibility into the company’s finances, and no say in the selection of the company’s directors (as became apparent last November, when the directors who fired Sam were themselves ousted within days and Sam was quickly reinstated by a friendlier board).

My questions would have been easier for Sam to answer before he ditched the non-profit principles on which OpenAI was founded in 2015 and turned the company into a profit-driven business. Even armed as he now is with a profit motive, Sam might still have been able to provide a plausible response to my question about the second part of the company’s mission statement – that is, before his latest announcement.

In his recent blog post on OpenAI’s GPT-4o release, which combines realistic voice conversation with text and images, Sam wrote:

“There are two things from our announcement today I wanted to highlight.

“First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that. 

“Our initial conception when we started OpenAI was that we’d create AI and use it to create all sorts of benefits for the world. Instead, it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from. 

“We are a business and will find plenty of things to charge for, and that will help us provide free, outstanding AI service to (hopefully) billions of people.”

We give credit to Sam for his enthusiasm about the potential of artificial intelligence to deliver “all sorts of amazing things that we all benefit from.” However, now that OpenAI has ceded application development to “other people” (meaning presumably other companies), how can the company possibly fulfill the second part of its mission statement – you know, that thing about benefiting all of humanity?

It’s not as easy as Sam might think to separate the benefits from the harms.

There’s a very subjective issue inherent in the benefits Sam imagines will naturally emerge, and it was one of four questions for Sam Altman that we asked in a previous editorial: who will decide what’s a benefit to the world and what’s not?

Short-term benefits often turn into long-term harms, so how far into the future does a benefit have to endure before it’s globally accepted? Are the benefits (for any length of time) going to be determined by a profit-driven company like OpenAI or, say, Google, whose idea of a global benefit isn’t necessarily yours or mine? Or would a tyrant who gains control of the technology declare a global benefit that’s the exact opposite of what you and I would consider good?

It seems that Sam is engaging in wishful, not practical, thinking that “other people will use it to create all sorts of amazing things that we all benefit from.” The world doesn’t always work that equitably, as all of us, it’s fair to say, have experienced at one point or another. Sam’s blog appears more than a bit naïve in not mentioning the harms that AI has already enabled, and the further harms that will doubtless ensue when powerful new applications make their way into the hands of the many humans less scrupulous than he is.

Those kinds of people – and it’s plain to see there are lots of them – don’t care much about creating benefits for the world when they have the means to benefit themselves, their friends, and their investors.

Even if such self-interested types were somehow kept under control, any powerful technology comes with risks. Although generative AI like ChatGPT may well have benefits, even while it remains in Sam’s well-intentioned hands we’re already witnessing its damage to human creators. Evidence of this is in the lawsuits against OpenAI by newspapers, authors, and artists for uncompensated use of their intellectual property. And, as a result of a recently announced agreement with Reddit, data freely provided by Reddit users will now be incorporated into OpenAI’s products.

 

To err is human, and none of us is exempt from errors – including human programmers. Bugs are inevitable, and OpenAI is not immune.

 

As with the AI we have already been using for years, there are also privacy risks still to be addressed when users share personal information with OpenAI’s chatbot. Then there are serious concerns about the technology’s effects in classrooms, where students are already proving less inclined to develop the skill of independent thought when ChatGPT effortlessly generates coherent summaries for them. That’s to say nothing of the rest of us, who might not exercise the critical thinking needed to detect “botshit” from the less-than-perfect technology, and who might place undue reliance on its outputs in matters whose consequences deserve serious reflection.

Elsewhere, Sam and OpenAI have acknowledged the serious risks of their undertaking. Take for example the following statements they have made:

  • “GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.” (From OpenAI’s website in January 2024)
  • In an interview with ABC News, Sam referenced the technology’s potential to reshape society and, as a warning, said, “I’m particularly worried that these models could be used for large-scale disinformation.” He added, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”
  • “We are likely to eventually need something like an IAEA [the International Atomic Energy Agency which has helped to safeguard humanity from nuclear war for the past 66 years] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.” (From Sam’s May 22, 2023 blog)

Particularly troubling in Sam’s latest blog is the admission that OpenAI is ceding control over application development to others who “will use it to create all sorts of amazing things that we all benefit from.”

Frankly, that’s a huge stretch. What guarantee does Sam Altman have that others will use OpenAI’s technology to create only things for the global benefit of humanity? Sam should recall that, in the capitalist system in which his company is now a major player, the primary goal of investors is to make money; altruism is not a natural feature of capitalism (in fact, it’s the rare exception).


Who are the “other people” Sam refers to, who will be creating technological wonders for our collective benefit? It’s not hard to imagine they will include OpenAI’s major investor Microsoft, whose exclusive data arrangement with OpenAI could yield significant returns on its $10 billion investment. The returns could be as enormous as $1 trillion because, after OpenAI abandoned its non-profit founding principle, it can now offer investors a return of up to 100 times their invested capital. This clearly gives OpenAI a motivation to ensure that Microsoft, on which OpenAI is dependent, earns a handsome return.
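To make the scale concrete, here is a minimal sketch of that arithmetic, taking at face value the publicly reported figures: Microsoft’s $10 billion investment and the 100-times cap on investor returns that OpenAI adopted with its capped-profit structure (neither figure is confirmed by published financial statements).

```python
# A minimal sketch of the capped-return arithmetic described above,
# assuming the reported figures: Microsoft's $10 billion investment
# and OpenAI's 100x cap on investor returns.
investment_usd = 10_000_000_000   # Microsoft's reported investment
return_cap_multiple = 100         # OpenAI's reported capped-profit multiple

max_capped_return = investment_usd * return_cap_multiple
print(f"Maximum capped return: ${max_capped_return:,}")
# Prints: Maximum capped return: $1,000,000,000,000 (one trillion dollars)
```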

In the capitalist system, when a business earns a handsome return, it’s at the expense of its customers. That’s basic accounting: money isn’t created out of thin air; it moves from the hands of buyers into the hands of sellers like Microsoft. In the capitalist ideal there should always be an equal trade, with the buyers receiving as much benefit from the product as the money they give up for the purchase. But as we all know, in human practice ideals rarely play out as intended, especially when the sellers become particularly powerful.

Last year, Google generated 77% of its revenue from advertising, monetizing data gleaned from its roughly 90% share of global web searches and other activities conducted on its platforms, without compensating users like you and me who provide the data. In our February 2024 feature story, we noted that the U.S. Government is now suing Google for monopolistic practices, in a case that will likely be decided this fall. In the meantime, Google’s US$307 billion of revenue roughly equals the annual revenue of the Government of Canada – a lot of power for one company.

This month, Google announced that a new feature called AI Overviews is being incorporated into its globally dominant search engine: searchers will be presented with an AI-generated summary of information in response to a query. While the company touts this as a time-saver for us (saying, “Google will do the googling for you”), websites (including The Quantum Record) are understandably deeply concerned that web searches will now begin and end on Google’s site. Will web searchers have any need to visit the independent sites that actually generated the information, when Google packages the details for them using algorithmic methods that only the company knows?

Microsoft made a profit of $72 billion in its last fiscal year, similar to Google’s 2023 profit of $74 billion. Meta, which owns Facebook and WhatsApp, made $39 billion last year, while Apple’s net income of $97 billion outpaced them all. The gross revenues of the same four companies totaled $1,037,000,000,000 last year: one trillion and 37 billion dollars. That’s a lot of economic power for four companies.
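As a rough check on that combined total, here is a minimal sketch summing the four companies’ publicly reported fiscal-2023 revenues. The individual figures are approximations rounded to the nearest billion, and the companies’ fiscal year-ends differ, so the total is indicative rather than exact.

```python
# A rough check of the $1,037 billion combined-revenue figure, using
# approximate publicly reported fiscal-2023 revenues (rounded to the
# nearest billion USD; fiscal year-ends differ by company).
revenues_billion_usd = {
    "Microsoft": 212,  # fiscal year ended June 2023
    "Alphabet": 307,   # calendar year 2023
    "Meta": 135,       # calendar year 2023
    "Apple": 383,      # fiscal year ended September 2023
}

total = sum(revenues_billion_usd.values())
print(f"Combined revenue: ${total} billion")
# Prints: Combined revenue: $1037 billion
```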

Has OpenAI given up on its ability to fulfill the second part of its mission statement, for an AGI that “benefits all of humanity”?

If the company still somehow intends to fulfill that promise, how will it do so now that it’s only planning to deliver the technology but not the applications?

Will Sam change the company’s mission statement, to reflect its now very different business model?

That’s what I would be interested in learning, if I had a few minutes of Sam’s time, or if I were lucky enough to be among those members of society with the “huge input” that Sam has called for.

