OpenAI’s Technology Tests Legal Limits

Robot judge and jury

Image generated by Microsoft’s DALL-E-powered generative AI chatbot in response to our prompt to “generate an image of an AI lawyer deciding on AI legislation.” Note the machine’s failure to spell “legislation” correctly, even after several prompts. DALL-E was created by OpenAI, whose major investor is Microsoft.


By James Myers

Generative AI became the focus of worldwide attention when OpenAI unleashed ChatGPT in November 2022. Since then, many commentators have expressed both optimism and concern about the future direction of the technology, which creates text, images, sound, and other media by correlating user prompts with patterns in the vast quantity of data on which the AI was trained.

The potential for generative AI, both good and bad, is evident in images like the one that headlines this editorial. Having little artistic ability, I certainly couldn’t draw anything that approaches the level of detail in the image, nor would I have thought to put headphones on the robot lawyer, as the machine did.

Whatever led the machine to insert the headphones may reflect the creative potential of generative AI, but it also highlights a concern: the machine operates as a “black box,” in ways that even its programmers might not understand. The AI’s persistent misspelling of “legislation” in the image, despite our attempts to correct it, is another example of this black-box operation.

Many factors will influence the future of generative AI, and legal challenges are emerging as a potentially significant constraint on the technology’s applications and training.

An early indication of legal concerns was the accusation by the Italian Data Protection Authority (GPDP) in March 2023 that OpenAI’s technology was unlawfully collecting user data and lacked controls for underage users. OpenAI was given twenty days to respond, a deadline that led to the company’s temporary suspension of ChatGPT in Italy. Service was restored in April, after OpenAI incorporated a method to verify the age of Italian users and a form allowing Italians to request removal of personal data under the terms of the European Union’s General Data Protection Regulation.

In April, the Privacy Commissioner of Canada launched an investigation into OpenAI and the effects of its ChatGPT technology on individual privacy. The investigation is ongoing.

On December 27, The New York Times announced that it is suing OpenAI and the company’s data partner Microsoft because “millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.” The lawsuit demands the destruction of chatbot models and training data that use copyrighted material from the news outlet, claiming that the “Defendants seek to free-ride on The Times’s massive investment in its journalism,” and that they are “using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it.”

The Authors Guild and numerous authors, including George R.R. Martin and John Grisham, are suing OpenAI in a class action alleging copyright infringement in the company’s for-profit training of its AI on their works. Actor, comedian, and author Sarah Silverman is part of a group suing OpenAI and Facebook parent company Meta for being “industrial-strength plagiarists that violate the rights of book authors,” in the words of the plaintiffs’ lawyers.

The same lawyers have launched a class action lawsuit against OpenAI, together with Microsoft and its subsidiary GitHub, claiming that GitHub’s coding tool Copilot replicates large portions of licensed software code without crediting the creators.

There are, of course, many other factors shaping the direction of generative AI. These include economic incentives that are already driving the technology’s production of a significant volume of news articles; as we have previously noted, News Corp. is using generative AI to produce 3,000 news articles each week in Australia. Schools around the world are creating policies to address student cheating through the use of generative AI on tests and written assignments. And algorithm designers are taking advantage of generative AI’s remarkable ability to exceed the coding output of human programmers, raising the prospect that a far greater proportion of future software will be generated by AI itself.

As the world grapples with striking a balance between the benefits of AI and its responsible development, the laws of many jurisdictions may prove to have the greatest effect on the future direction of generative AI.

Will AI operate within existing laws, or will laws be reformed to accommodate it? Time, and the attitudes of users, judges, and legislators, will decide these questions.


The Quantum Record is a non-profit journal of philosophy, science, technology, and time. The potential of the future is in the human mind and heart, and in the common ground that we all share on the road to tomorrow. Promoting reflection, discussion, and imagination, The Quantum Record highlights the good work of good people and aims to join many perspectives in shaping the best possible time to come. We would love to stay in touch with you, and add your voice to the dialogue.
