Rise of Virtual Reality Tech Increases Risks of Entering AI’s Third Dimension, and the Need for Immersive Rights

Image by Gerd Altmann, of Pixabay

 

By James Myers

Seeing is believing, as the saying goes, but perception of the same shapes, colours, and objects can vary significantly from person to person. The identical visual image can produce sometimes major differences in belief, and the technological origins of clashing beliefs are becoming a focus for legislators.

The causes of visual differences, and the brain’s function of perception, are receiving increasing attention as generative AI and virtual reality (VR) technologies are combining to produce static and moving images so realistic that many viewers can’t distinguish between artificial and real.

Increasing knowledge of the causes of perceptual differences could help defend against bad actors who are exploiting misperceptions of “deepfake” images for criminal and manipulative purposes. The question, in the age of hyper-realistic AI and VR, is how to identify the bad actors.

The effects of deepfake technologies are pervasive and damaging.

Last August, The Quantum Record investigated the increasing capabilities of technologies for producing deepfake images, and considered the potential for the emerging power of quantum computers to magnify the problem.

 

Deepfake images of Pope Francis circulate on the internet. It is often impossible to determine the source of these manufactured images.

The term “deepfake” was coined in 2017 on a Reddit channel, and a 2025 European Parliament brief forecasts that 8 million deepfakes will be shared in 2025, a 16-fold increase from 2023. Citing the problems of fake news and reduced trust in digital media, the brief stresses that, “Deepfakes pose greater risks for children than adults, as children’s cognitive abilities are still developing and children have more difficulty identifying deepfakes. Children are also more susceptible to harmful online practices including grooming, cyberbullying and child sexual abuse material.”

Deepfakes are easily circulated on social media platforms like X, where pornographic AI-generated images of singing sensation Taylor Swift have proliferated. X is owned by the world’s wealthiest human and “free speech” advocate Elon Musk.

While children are most susceptible, adults can also be easily duped by deepfakes, sometimes at a significant financial cost. For example, in early 2024 Hong Kong police reported that a finance professional at a multinational firm was conned into transferring $25.6 million to fraudsters after participating in an online meeting with what he thought were human staff but were in fact AI fabrications.

Concerns about the proliferation of deepfakes have prompted Denmark to advance amendments to copyright laws that could soon give Danish citizens legal rights over images of their bodies, facial features, and voices. As the Danish Minister for Culture told The Guardian, “Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.”

The U.S. National Conference of State Legislatures has catalogued the growing number of state laws implemented to provide individual protection against impersonation. In 2023, for example, the State of Minnesota passed a law banning the use of deepfakes to influence elections. The law defines a deepfake as the technological “representation of speech or conduct […] that is so realistic that a reasonable person would believe it depicts speech or conduct of an individual who did not in fact engage in such speech or conduct.”

X challenged the 2023 Minnesota law as unconstitutional, on the basis of an amendment known as Section 230 that was appended in 1996 to the U.S. Communications Act of 1934. Enacted when the internet was in its infancy, Section 230 establishes that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The 26 words of Section 230 have been interpreted to give broad immunity to social media platforms like X for hosted content. (For more on Section 230, see our November 2024 feature Legal Perils and Protections for Online Consumers are Rapidly Evolving).

In its constitutional challenge, X stated, “While the law’s reference to banning ‘deep fakes’ might sound benign, in reality it would criminalize innocuous, election-related speech, including humor, and make social-media platforms criminally liable for censoring such speech.”

Laws against visual manipulation aren’t just a recent phenomenon. Bans on subliminal messages in advertising have been in place for decades, ever since market researcher James Vicary claimed in 1957 to have increased beverage sales by inserting the words “Drink Coca-Cola!” into film frames that flashed too quickly for the eye to register, supposedly influencing the audience’s subconscious.

 

 

June 8, 2024 headline from British broadcaster BBC.

 

Although X challenged the Minnesota law against deepfakes, on some occasions the social media platform has been made to act in the public interest.

For example, during the United Kingdom’s 2024 general election, BBC reporters uncovered a number of deepfake posts, including a doctored video of a male candidate’s televised interview. As the interviewer discussed a female candidate from the opposing party, the footage was manipulated to make it sound as though the male candidate muttered “silly woman” under his breath, although he never did. The deepfake elicited a chorus of outrage, some of it from fake X accounts that the platform subsequently deleted.

The interplay between privacy and impersonation: loss of the former causes the latter.

Ask anyone who isn’t famous whether they’re concerned about their digital privacy, and a common response is something like: “I don’t care, there’s nothing about me that they would be interested in anyway.”

To a limited extent this could be true; however, such responses don’t account for the effects of ceding privacy to companies like Meta, owner of Facebook and Instagram, and Alphabet, which owns Google, Chrome, and YouTube. In 2024, advertising generated 97.6% of Meta’s revenue, underpinning its $62.3 billion profit, while Alphabet earned over 72% of its revenue from advertisements on its platforms on the way to a global record-smashing net income of $100.1 billion.

 

When does the social fabric begin to imitate social media? Image by Gerd Altmann, of Pixabay

 

With their massive global reach, the giant tech companies freely use our data to target us with advertisements. Last year, a judgment was issued against Alphabet for operating an illegal monopoly with Google search, which is used for 90% of web searches globally. The court is now considering remedies to protect consumers, which could include requiring the company to divest its dominant web browser Chrome.

Freely donating our private data to advertising companies can result in the loss of our human agency.

Agency has many definitions. In the context of giving up one’s privacy to an advertising company, loss of agency could mean losing the ability to make our own choices as individual economic actors. The loss might be as minor as being influenced by a seller to buy something you wouldn’t otherwise have bought, but the repercussions don’t stop there. In the context of virtual reality, the risks of losing privacy and agency are clearly illustrated in this 3-minute video:

 

 

It’s not difficult to imagine coming under the influence of powerful visual simulations like those presented in the video, and falling prey to a sales pitch from a waiter. Many things beyond food, however, are being traded and sold online – including political influence and the people we associate with.

Data brokering, the buying and selling of personal data, is a growing industry worth some $200 billion per year, as The Quantum Record has previously addressed. There’s a reason data brokers don’t advertise their activities.

When immersed in virtual reality, a deepfake is particularly compelling. That’s because, among other emotional appeals (like sex appeal, as illustrated in the video), we have a natural affinity and willingness to believe what we see in space and time. It’s logical that we’re wired that way: if we had to divert a great deal of cognitive effort to questioning what our eyes tell us, we would have little remaining energy to exercise our agency.

Image by Gord Johnson, from Pixabay.

The natural bias to place credence in the visual has been the subject of many studies, including a 2021 paper by behavioural scientist Priska Breves entitled Biased by being there: The persuasive impact of spatial presence on cognitive processing. The paper assesses the results of an experiment in which people were exposed to differing levels of spatial presence in a particular context; for example, watching images on a two-dimensional screen involves less spatial presence than being physically in the three dimensions of those images.

The paper concludes, “The results indicated that individuals who experienced high levels of spatial presence evaluated the content more positively because they used heuristic processing.” Heuristic processing relies on mental shortcuts and surface cues, such as how vivid and present something feels, rather than on careful, systematic evaluation of the arguments themselves.

The author notes, “The positive evaluation consequently led to biased systematic processing, resulting in persuasive effects, even when the arguments were weak.”

The conclusion is notable because virtual reality shifts our algorithmic engagement from two dimensions to three, where we are naturally more susceptible to believing what we see.

Mounting concerns about deepfakes are prompting a renewed focus on the brain’s mechanisms for perception.

The eye is easily deceived, and producing images that aren’t what they seem has long been an art of magicians, illusionists, and manipulators. Technology has given the power of illusion to many, some of whom compete in events like the Best Illusion of the Year Contest, run by the non-profit Neural Correlate Society since 2005.

 

 

The Quantum Record’s podcast, The Quantum Feedback Loop, recently discussed virtual reality technology with VR pioneer Louis Rosenberg, founder of the company Unanimous AI, which is now developing technology to amplify human group intelligence. In his 2024 book, Our Next Reality, Rosenberg and co-author Alvin Graylin set out the case for legislative action over six categories of human immersive rights.

Principal among these rights is the ability to know what’s real and what’s not real. “If I don’t have that context, I lose my sense of cause and effect,” Rosenberg states. “I assume everybody’s seeing those things, and these systems could break that down.” He concludes, “Once I lose that, I lose a sense of agency and I lose a sense of autonomy and I just start to be a manipulated test subject in a Truman Show.”

The Truman Show is a 1998 film that depicts the story of Truman Burbank, a man who is unaware that he is living his entire life on a colossal soundstage, and that it is being filmed and broadcast as a reality television show with a huge international following.

Highlighting the fact that the commercialization of VR is being driven by large tech companies motivated to increase their already record-smashing profits, Rosenberg states, “We need to move away from worlds where selling influence is the currency, and instead go to worlds where the currency is providing value to consumers.”

Given that commercial social media has played a central role in radicalizing and polarizing societies and individuals (for more, see our February editorial An Urgent Appeal for Separation of Tech and State), it’s important to heed history’s lessons before unleashing VR technology that’s far more immersive than the words and images of social media on a two-dimensional screen.

 

 

With the scalability of VR, the addition of a realistic third dimension to AI-generated content is cause for regulatory action.

Rosenberg states, “We’re going into this new world where content is going to be very conversational and very immersive and that the AI systems are going to be able to adapt to each one of us individually. And so what I really push hard on is to try to get regulators and policymakers to realize the bigger problem is that these AI systems, if they’re allowed to close the loop around us – they’re allowed to adjust their pitch to us, in real time as we’re interacting – they will be able to optimize their ability to influence us, each one of us individually. And that’s just a whole other level of influence that we’re not prepared for, and we, in a lot of senses, we can’t even imagine what it will be like. But we will be in a place in the not so distant future where an AI will be able to persuade us at levels that exceed what any human could persuade.”

The challenge with legislation is that it evolves more slowly than technological innovations.

For the rate of legislation to match innovation, we appear to have two choices with increasingly powerful virtual reality technology.

One option, so far frustrated by industry lobbying, is for legislators to rapidly deepen their understanding and tighten their oversight. The other option is to withhold major three-dimensional immersive technologies from widespread use until society agrees on proper limits to their social consequences.

Either option would entail economic constraints on the big tech companies that are designing and marketing VR. The question is whether sparing those companies the economic consequences justifies incurring the social consequences for everyone else.


 



The Quantum Record is a non-profit journal of philosophy, science, technology, and time. The potential of the future is in the human mind and heart, and in the common ground that we all share on the road to tomorrow. Promoting reflection, discussion, and imagination, The Quantum Record highlights the good work of good people and aims to join many perspectives in shaping the best possible time to come. We would love to stay in touch with you, and add your voice to the dialogue.
