The increasing popularity of artificial intelligence systems that convert text to images might make us question what art is.
Trained on billions of images from the internet, these systems are advancing rapidly. Some are fearful about what this might mean for the future of artistry, while others believe that, as in other endeavors, AI is nothing more than an advanced (and increasingly available) tool. The technology raises many questions, among them: ‘what gives value to art?’
From cave paintings to street art and AI-artwork, visual art has been present throughout our history and across all cultures. Our creativity has a lot to do with the unique makeup of the human brain and how its 86 billion neurons are organized.
In a 2016 paper published in the journal Nature, Dr. Nicola de Pisapia, from the University of Trento, Italy, and co-authors showed that when humans are being creative, two brain networks typically viewed as being in opposition interact in a balanced way: the executive control network and the default mode network, which generates spontaneous thought.
“You know, flipping back and forth between these two, internally focused and generating new ideas, and externally focused, kind of monitoring the situation. (…) it’s not like a Jazz musician is playing random notes, … it has to make sense, it has to have a certain appeal, so you do have to monitor it at some level”, explains Dr. Heather Berlin, a neuroscientist at Mount Sinai Health System in New York.
Furthermore, when people are being creative, the dorsolateral prefrontal cortex, the part of our brains associated with self-awareness and the monitoring of our ongoing behavior, is turned down. In contrast, the medial prefrontal cortex, the part responsible for the internal generation of ideas, increases in activation.
“It’s coming from within, it’s stimulus-independent. (…) a similar pattern of brain activation happens during dreams, or during day-dreaming, or some types of meditation, or hypnosis, where you lose your sense of self and time and place, and it allows the filter to come off so that novel associations are ok” (Dr. Berlin).
Human potential for creativity is a defining characteristic of our species. “Your brain is curious. It’s constantly looking for the next new thing, and that desire for novelty has led us to innovate and create”, explains Australian television presenter, producer, and science communicator Vanessa Hill.
And as a defining feature of our time, human creativity has brought us robots. Today, we use artificial intelligence for personalized shopping, fraud prevention, autonomous vehicles, voice assistants, and meteorological forecasts, among many other applications. But can we use it to make art too?
With the continuing development of AI, our machines have moved beyond mere data analysis to the point where they can create new things.
This is the era of generative AI, machine learning algorithms that enable computers to use existing content to generate new things, such as text, images, videos, and audio files that could be mistaken for human-made.
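The core idea behind generative models can be illustrated, in miniature, with a toy far simpler than any modern image generator: a character-level Markov chain that learns transition statistics from existing text and then samples new text from them. Everything below (the function names, the tiny corpus) is hypothetical, invented for this sketch; real systems like GPT or Stable Diffusion use neural networks, but the principle of "learn from existing content, then generate something new" is the same.

```python
# Toy generative model: learn which character tends to follow each short
# context in a training text, then sample new text from those statistics.
import random
from collections import defaultdict

def train(text, order=2):
    """Record which character follows each `order`-length context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text one character at a time from the learned transitions."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    out = seed
    while len(out) < length:
        context = out[-len(seed):]
        choices = model.get(context)
        if not choices:  # dead end: this context never appeared in training
            break
        out += rng.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus, order=2)
print(generate(model, "th", length=20))
```

The output is novel in the sense that it need not appear verbatim in the training text, yet every local pattern in it was learned from that text, a crude analogue of how image generators recombine patterns absorbed from billions of pictures.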
Prominent examples of generative AI include DALL-E, named after artist Salvador Dalí and now in its second version: a deep learning model developed by OpenAI that uses a modified version of GPT technology to generate images from natural language descriptions. Also widely used are platforms offered by Midjourney, an independent research lab, and by Stability AI, a start-up that has become popular for its open-source image generator, Stable Diffusion.
The use of text-to-image artificial intelligence has already spurred many debates – for instance, when OpenAI’s GPT-3 authored an academic paper submitted to a peer-reviewed journal. Now, this technology worries some artists and makes many people question the fairness and value of AI-produced art.
Last year, game designer Jason Allen won first place in the annual Colorado State Fair fine arts competition with an AI-generated piece created using Midjourney’s algorithms. Titled “Théâtre D’Opéra Spatial”, the image shows three figures dressed in Renaissance-style robes staring out of a giant window. The AI-generated image combines traditional elements with science fiction, and has spurred curiosity and amazement among viewers.
Text-to-image tools allow users to generate simulations of people, objects, and locations, and to mimic entire visual styles on command. These systems have quickly grown in sophistication, and the technology becomes more powerful every day.
Critics have taken to Twitter and other platforms to argue that such technology could devalue art.
“Slop produced as cheaply and quickly as possible to be consumed in bursts of a few microseconds as it glides by on the infinite feed”, said one user. However, others argue that it is instead a natural process: “Even photography was not considered an art form for a long time; people said it was just pushing a button, and now we realize it’s about composition, color, light. Who are we to say that AI is not the same way?”
The fast-paced development of AI technologies and the increasing power of these tools pose many perils to the normal functioning of society. Such technology could, for instance, be used by students to cheat, or to produce fake scientific results.
In a 2016 paper, Dr. Elizabeth M. Bik and co-authors looked for inappropriate image duplication in scientific publications. They analyzed images from over 20,000 papers published in 40 scientific journals from 1995 to 2014 and found that, “overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation”. Furthermore, they concluded that these instances have “risen markedly during the past decade”.
Dr. Bik has since uncovered many more frauds like these, primarily by eye. It is possible, however, that free and easy access to generative AI could allow deceitful individuals to remain hidden, since frauds committed with generative AI would be even harder to spot, if not impossible, than those committed by humans alone.
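One simple way machines can assist this kind of by-eye screening is perceptual hashing: reduce each image to a compact fingerprint and flag pairs whose fingerprints nearly match for human review. The sketch below is a toy average-hash, assuming images are already loaded as 2-D lists of grayscale pixel values; the function names and tiny "images" are hypothetical, and a real pipeline would use a library such as Pillow to load actual files.

```python
# Toy duplicate-image screening via an average hash: 1 bit per pixel,
# set when that pixel is brighter than the image's mean brightness.

def average_hash(pixels):
    """Build a compact fingerprint from a 2-D grid of grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count the bits on which two fingerprints differ."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_duplicates(h1, h2, max_distance=2):
    """Flag near-identical fingerprints for human review."""
    return hamming(h1, h2) <= max_distance

img_a = [[10, 200], [190, 20]]   # original figure panel
img_b = [[12, 198], [188, 22]]   # same panel, slightly re-compressed
img_c = [[200, 10], [20, 190]]   # a genuinely different panel

ha, hb, hc = (average_hash(i) for i in (img_a, img_b, img_c))
print(likely_duplicates(ha, hb))  # True: near-identical panels match
print(likely_duplicates(ha, hc))  # False: different panels do not
```

The design choice matters for the threat Dr. Bik describes: a hash like this catches copies that survive re-compression or light editing, but an image synthesized from scratch by generative AI shares no fingerprint with any original, which is exactly why such fraud would be so much harder to detect.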
In dealing with (and preparing for) the perils of AI technology, many actors favor increasing national and international regulations.
Others argue instead that we need more transparency, and open-source code is one route to it. Stable Diffusion, from Stability AI, differs from its competitors in having open-source code and only a basic safety filter that users can disable. Despite critics who say the algorithms have been used to generate violent content later shared on the internet, the company’s founder and chief executive Emad Mostaque says he believes he is promoting the democratization of AI, as opposed to the “centralized, unelected entity,” as he calls big tech companies. In his view, transparency is what will keep AI from becoming ever more dangerous.
But many benefits can also arise from these tools, particularly in the medical field. For instance, generative AI has proven useful in healthcare for the early detection of brain tumors, and some believe that by 2025 more than 30% of new drugs and materials will be discovered using generative AI techniques.
For Shelly Kramer, principal analyst at Futurum Research, “Generative AI can help reduce bias in machine learning models, deliver higher quality outputs, and help make data analysts’ jobs easier by doing some of the heavy lifting”. She believes that funding organizations are only beginning to understand the value of this technology.
Creativity involves many parts of the brain, and many interacting processes.
Humans are so good at creating new things that we built artificial intelligence on algorithms that mimic our brain’s architecture. With that, our creativity has allowed us to develop technology with its own creative capacity. Where will it lead next? To John Koetsier, writing in Forbes, the future is AI writing its own code. And where could that take us?