Who Is Pulling the Strings? Human Agency and Manipulation of Opinions on the Road to the Quantum Era

Image: a puppet theater featuring members of The Beatles.

 

By Mariana Meneses

This article is part of The Quantum Record’s ongoing series on Quantum Ethics, with six issues introduced in our May 2024 feature Quantum Ethics: There’s No Time Like the Present to Plan for the Human Future With Quantum Technology.

Throughout history, influencing others’ opinions has been integral to social interaction, whether for collaboration or competition. Today, with data-driven tools and complex predictive models, public and private actors have new means to shape public sentiment. This article explores how these tools may be used to sway opinions, and the potential impact of increasingly accurate predictions as technology evolves on the road to quantum computing.

Public opinion research has increasingly turned to predictive models fuelled by big data and machine learning to forecast trends in opinions and behaviour more accurately. 

Traditionally reliant on surveys, the field has expanded to include vast datasets generated from online behaviour, social media activity, and consumer habits. Machine learning algorithms can process and analyze these complex data patterns to predict shifts in public sentiment, voter preferences, and policy support with unprecedented precision. 

These models allow researchers and decision-makers to anticipate how opinions evolve, offering a deeper, data-driven understanding of societal dynamics. However, this growing reliance on predictive models also raises questions about the accuracy of predictions, the ethical use of personal data, and the potential for manipulation through targeted messaging.
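To make the mechanics concrete, here is a minimal sketch in Python of the kind of supervised model that sits underneath many opinion-forecasting pipelines, using scikit-learn and a handful of invented example posts; the texts, labels, and feature choices are illustrative assumptions rather than a description of any real system.

```python
# Minimal sketch: predicting stance on a policy from short social media posts.
# The example texts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "This policy will finally fix our schools",
    "Great news for families, long overdue reform",
    "Another tax grab dressed up as reform",
    "This will hurt small businesses badly",
    "Cautiously optimistic about the new plan",
    "No way I am supporting this mess",
]
labels = ["support", "support", "oppose", "oppose", "support", "oppose"]

# TF-IDF turns text into word-frequency features; logistic regression
# then estimates the probability of each stance from those features.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_posts = ["Reform is long overdue", "This plan is a tax grab"]
for text, probs in zip(new_posts, model.predict_proba(new_posts)):
    print(text, dict(zip(model.classes_, probs.round(2))))
```

Real pipelines differ mainly in scale and in the richness of the behavioural signals they add; the basic logic of fitting a model to labelled examples and scoring new data is the same.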

Predicting without manipulating is an option.

Prediction tools, such as electoral polls and data analytics, are designed to forecast trends and inform decision-making by providing a snapshot of public sentiment. When used transparently, these tools can serve legitimate purposes, offering insights for policymakers, researchers, and the public to make informed decisions. 

Manipulation occurs when these tools are employed for hidden agendas, shaping opinions through targeted misinformation or micro-targeted advertising that plays on individual biases.

Electoral polls play a key role in democratic processes when they are publicly available. 

Public polls can indeed influence voter behaviour by altering voting intentions, increasing turnout, and affecting voter expectations and preferences. For instance, in “How Election Polls Shape Voting Behaviour”, published in 2017 in the academic journal Scandinavian Political Studies, Professor Jens Jörn Dahlgaard, from Linköping University, and co-authors show that voters tend to shift their voting intentions toward a party that opinion polls show gaining support.

Another example (of many) comes from the Journal of the European Economic Association, in 2018. In “What Makes Voters Turn Out: The Effects of Polls and Beliefs”, Professor Marina Agranov, from Caltech, and co-authors show that people are more likely to vote when they think their preferred option is ahead, which leads to higher turnout among the expected majority and more landslide victories (elections in which one candidate or party wins by a very large margin). Others have shown that turnout falls when polls predict non-competitive elections.

Given this power of polls to mobilize or demobilize voters, and thereby affect electoral outcomes, many democratic countries have seen a rapid increase in the number of opinion polls made public during electoral campaigns. The variety of results, which may or may not stem from genuine differences in methodology, adds another layer of complexity to our understanding of how opinion polls affect elections.

A study published in 2022 in The International Journal of Press/Politics by Professor Stephen Dawson, from the University of Gothenburg, titled “Poll Wars: Perceptions of Poll Credibility and Voting Behaviour”, shows that opinion polls can have considerable demobilizing effects when polling environments are conflicting or deemed not credible, affecting voter behaviour.

The relationship between polls and electoral behaviour is well-studied and full of compelling evidence. However, even though public polls can influence voter behaviour, withholding them could lead to even greater information imbalances, as private actors—who already have access to sophisticated data—would still use these insights to sway opinions.

When such data is accessible to all, it limits the power of select groups to shape public sentiment in secrecy.

A key example is the scandal that erupted in 2018 when it was revealed that Cambridge Analytica had harvested personal data from millions of Facebook users without their consent to influence voter behaviour in political campaigns, including the 2016 U.S. presidential election and the Brexit referendum in the U.K. 

The Cambridge Analytica scandal involved the unauthorized collection and use of personal data from millions of Facebook users. Cambridge Analytica, a British consulting firm, harvested this data through a seemingly innocuous app called “This Is Your Digital Life,” which collected information not only from users who installed it but also from their Facebook friends. 

The data was then used to create detailed psychological profiles, primarily for political advertising purposes. By using detailed psychological profiles based on users’ online activity, Cambridge Analytica deployed micro-targeted ads designed to appeal to specific biases and emotions, swaying opinions and actions on a large scale.

The scandal raised significant concerns about data privacy, social media’s influence on politics, and the ethical use of personal information. It led to widespread criticism of Facebook, regulatory investigations, and fines for the company. The incident sparked public outrage, prompted the #DeleteFacebook movement, and ultimately contributed to Cambridge Analytica’s bankruptcy. The aftermath of the scandal has had lasting effects on discussions about data protection, digital privacy, and the role of social media in democratic processes.

The company’s techniques exemplified how targeted, personalized content could manipulate opinions and behaviors outside of the public’s awareness. 

 

Image: Microsoft Designer.

 

There is a lot that we don’t know.

University of Amsterdam Assistant Professor Dr. Tom Dobber, in the chapter “Microtargeting, Privacy, and the Need for Regulating Algorithms” of The Routledge Handbook of Privacy and Social Media (2023), makes explicit how algorithms used for micro-targeting are often “proprietary, opaque and ill-understood,” which means that even experts have limited insight into how these systems function and evolve. 

This secrecy increases uncertainty about how much control companies, governments, or malicious actors have over shaping public opinion. Additionally, while manipulation campaigns can be tracked post hoc, many go unnoticed in real time, leaving questions about how often they occur and how many remain undiscovered.

It is difficult to measure the long-term effects of manipulation tactics on public behaviour, given the complex interplay between psychological biases, social networks, and real-world events.

Also, the public’s susceptibility to manipulation varies, and not all individuals react the same way to targeted content, making it challenging to assess the full scope of influence. This variability, combined with the lack of transparency in the deployment of these tactics, leaves significant gaps in our understanding of how much power these tools wield over democratic processes and public discourse.

The rapid evolution of data-driven technologies, including the rise of machine learning and the potential of quantum computing, makes us wonder about the future capabilities of predictive models of opinion and behaviour. 

The use of AI models in public opinion research is increasing rapidly. A 2023 preprint titled “From Values to Opinions: Predicting Human Behaviors and Stances Using Value-Injected Large Language Models,” led by master’s student Dongjun Kang from Sungkyunkwan University in South Korea, is one example. Kang, who studies the use of large language models (LLMs) for persuasion, shows how value-injected LLMs—trained to understand human values—are more cost-effective than traditional survey methods for predicting opinions and behaviours.
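As a rough illustration of the general idea, and not of the authors’ actual method, the sketch below assembles a prompt that injects a respondent’s value profile before asking a language model to predict their stance. The function query_llm is a hypothetical placeholder, and the value names and scores are invented.

```python
# Illustrative sketch only: constructing a value-injected prompt for opinion
# prediction. `query_llm` is a hypothetical stand-in for a real LLM API call.

def build_prompt(value_profile: dict, question: str) -> str:
    # Describe the respondent's values (scores on broad human-value dimensions)
    # so the model can condition its prediction on them.
    values = ", ".join(f"{name}: {score}/10" for name, score in value_profile.items())
    return (
        "You are simulating a survey respondent with the following value profile:\n"
        f"{values}\n"
        f"Question: {question}\n"
        "Answer with 'agree' or 'disagree' and a one-sentence rationale."
    )

profile = {"tradition": 8, "self-direction": 3, "universalism": 5}
prompt = build_prompt(
    profile,
    "The government should prioritize economic growth over environmental protection.",
)

# response = query_llm(prompt)  # hypothetical call to whichever model one uses
print(prompt)
```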

There is also the issue of causality.

Researchers using observational data (such as data collected from social media) face challenges in distinguishing between correlation and causation due to the influence of confounding variables that can obscure true relationships. 

 

 

Confounding variables are factors, often unmeasured, that influence both what you want to explain (i.e., your response or outcome of interest) and what you are using to explain it (i.e., the explanatory factor), potentially leading to misleading conclusions about causal relationships.

For example, in public opinion research, the level of education may act as a confounding variable, as University of Sussex Professor Matthew J. Easterbrook and co-authors show in “The Education Effect: Higher Educational Qualifications are Robustly Associated with Beneficial Personal and Socio-political Outcomes”, published in Springer’s Social Indicators Research (2015).

The study analyzes how education influences a wide range of important outcomes, such as political interest, trust, health, and attitudes toward other groups, using data from three surveys covering 1986–2011. The results show that higher education, especially a university degree, is linked to greater political interest and trust, better health, and more positive views of immigrants, as well as lower political cynicism.

If researchers fail to account for education, they might incorrectly conclude that other factors, such as media consumption, drive political interest when, in reality, education may be influencing both media use and political engagement, obscuring the true relationships between the variables.
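A small simulation makes the pitfall concrete. In the sketch below, which uses synthetic data and invented effect sizes, education drives both media consumption and political interest while media use has no direct effect of its own; the raw correlation between media use and political interest nonetheless looks meaningful until education is controlled for.

```python
# Synthetic illustration of confounding: education influences both media use
# and political interest; media use itself has no direct effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

education = rng.normal(size=n)                      # confounder
media_use = 0.7 * education + rng.normal(size=n)    # driven partly by education
interest = 0.8 * education + rng.normal(size=n)     # driven by education only

# Naive view: media use and political interest appear related.
print("raw correlation:", round(np.corrcoef(media_use, interest)[0, 1], 2))

# Controlling for education: regress both variables on education and correlate
# the residuals (a partial correlation). The apparent link largely disappears.
def residuals(y, x):
    slope = np.cov(x, y)[0, 1] / np.var(x)
    return y - slope * x

partial = np.corrcoef(residuals(media_use, education),
                      residuals(interest, education))[0, 1]
print("partial correlation, education removed:", round(partial, 2))
```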

 

 

These overlapping influences make it challenging to determine whether changes in public sentiment stem from any single presumed cause. While statistical methods and machine learning can help, the complexity of human behavior and the dynamic nature of public opinion often limit the ability to isolate specific causes, complicating interpretations and conclusions drawn from the data.

Could quantum computing impact all of this?

Quantum computing has the potential to enhance the accuracy of predictions in various fields, public opinion research among them, by processing and analyzing large datasets more efficiently than classical computers.

Quantum computing uses quantum bits (qubits) that can exist in a superposition of multiple states at once, allowing certain computations to explore a vast number of possibilities, and the relationships between their probabilities, in parallel. As a result, the application of quantum computing could lead to more accurate predictions of public sentiment by allowing for more sophisticated modeling of human behavior.
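That intuition can be illustrated without any quantum hardware. The short Python sketch below is a toy classical simulation: the state of n qubits is described by 2^n complex amplitudes, so even a modest register requires tracking exponentially many numbers, and a uniform superposition spreads equal weight over every possible outcome.

```python
# Toy classical simulation of an n-qubit register in uniform superposition.
# This only illustrates why the description grows as 2**n; it is not a
# quantum algorithm or a claim about speedups.
import numpy as np

n_qubits = 10
dim = 2 ** n_qubits                      # number of amplitudes to track

# Uniform superposition: equal amplitude on every basis state |0...0> .. |1...1>
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

probabilities = np.abs(state) ** 2       # Born rule: probability = |amplitude|^2
print(f"{n_qubits} qubits -> {dim} amplitudes, each outcome has p = {probabilities[0]:.4f}")

# Sampling a measurement collapses the register to one classical outcome.
outcome = np.random.default_rng(1).choice(dim, p=probabilities)
print("measured bitstring:", format(int(outcome), f"0{n_qubits}b"))
```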

“There may be only one way a quantum computer could more accurately predict an election’s outcome than a professional poll: by knowing perhaps too much about you. Luckily, that’s a feasibility.” – Scott Fulton III, for ZDNet, in 2021.

Besides expanding data processing capacity, quantum mechanics may contribute to transforming these fields in other ways.

“Social sciences go quantum”?

In “Social sciences go quantum: explaining human decision-making, cognitive biases and Darwinian selection from a quantum perspective”, researchers elaborate on quantum decision theory by incorporating the idea of superposition.

This allows for the possibility that individuals can simultaneously consider multiple options and their associated probabilities, leading to interference effects similar to those observed in quantum mechanics. This framework can better account for cognitive biases and anomalies, as it recognizes that decision-makers may not always settle on a single, rational choice but instead navigate a complex landscape of competing possibilities. This perspective helps explain why people might make seemingly irrational decisions or exhibit behaviour that contradicts classical economic predictions, particularly in uncertain or ambiguous situations.
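The interference idea can be put into numbers. In a classical model, the probability of a decision reachable through two mutually exclusive mental paths is the sum of the two path probabilities; in a quantum-style model, amplitudes are added first and squared afterwards, so a cross term appears. The amplitudes and phases below are invented purely to show the arithmetic.

```python
# Toy arithmetic of quantum-style interference in a decision model.
# Amplitudes and phases are invented for illustration.
import numpy as np

# Two "paths" to the same decision (e.g., two framings the person entertains).
a = 0.5 * np.exp(1j * 0.0)          # amplitude via path 1
b = 0.5 * np.exp(1j * 2.5)          # amplitude via path 2 (different phase)

classical = abs(a) ** 2 + abs(b) ** 2        # paths treated as exclusive events
quantum = abs(a + b) ** 2                    # amplitudes interfere before squaring
interference = quantum - classical           # the cross term 2*Re(a*conj(b))

print(f"classical sum of path probabilities: {classical:.3f}")
print(f"quantum probability with interference: {quantum:.3f}")
print(f"interference term: {interference:+.3f}")
```

Depending on the relative phase, the cross term can suppress or amplify the overall probability, which is the formal device quantum decision theorists use to account for behaviour that violates classical probability rules.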

In another example, Matt Swayne, for the Quantum Insider, notes that quantum economics, as advocated by mathematician David Orrell, critiques traditional economic models that rely on equilibrium and fail to capture the complexities and uncertainties of real-world markets. Instead, it applies quantum models, which reflect a dualistic and probabilistic approach akin to quantum mechanics, to better understand economic behaviours. 

While quantum economics doesn’t imply direct connections to subatomic particles, Orrell argues that these models, with their ability to handle uncertainty, provide a more accurate and nuanced framework for predicting financial trends. This field, although niche, has a history dating back to 1978.

Although quantum computing promises transformative capabilities, practical applications are still in the early stages of development. 

It is essential to note that while quantum computing holds promise, it is still in the experimental stages, and practical applications in fields like social science have yet to be fully realized. There are few, if any, studies of its potential uses and effects in measuring public opinion.

While quantum computers have demonstrated impressive feats in specific tasks, they are not yet robust enough to be applied broadly in fields like public opinion research. Major hurdles, such as error rates and qubit stability, contribute to limiting their immediate use. 

As recently reported by Analytics Insight, the integration of quantum computing with traditional supercomputers and artificial intelligence offers innovative solutions to complex societal issues and enhances various sectors. However, constructing practical quantum computers poses significant challenges that require advancements in scalability, fidelity, speed, durability, and programmability. 

The global market size of quantum computing is projected to grow dramatically: according to one estimate, from USD 885.4 million in 2023 to USD 12,620.7 million by 2032. This projected growth highlights the need for ethical development and equitable distribution of benefits in a rapidly evolving technological landscape.
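For a sense of scale, that projection implies a compound annual growth rate of roughly 34 percent, as the quick calculation below shows (the figures are simply the ones cited above).

```python
# Quick check of the growth rate implied by the cited market projection.
start, end = 885.4, 12_620.7        # USD millions, 2023 and 2032
years = 2032 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"implied compound annual growth rate: {cagr:.1%}")   # roughly 34% per year
```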

However, progress is accelerating, with significant advancements in both hardware and algorithms. 

This pace of development allows some room for policymakers, researchers, and society to prepare for the technology’s potential impact, but it also underscores the urgency of developing ethical frameworks before these powerful tools become commonplace.

 

Image: Microsoft Designer.

 

There is also a lot of good that it could do.

The knowledge gained from analyzing large datasets using machine learning and AI offers powerful opportunities to accomplish goals that would otherwise be difficult, especially by increasing collaboration in society.

For instance, these technologies can help to optimize strategies for environmental and charitable campaigns, such as through targeted marketing. By understanding people’s values and preferences, AI can fine-tune outreach efforts to boost donations of all kinds—whether financial, material, or time-based—ultimately fostering greater community involvement and support for important causes.

The organization Nonprofit Tech for Good, a digital marketing and fundraising resource for nonprofits, recently published “The Use of AI in Fundraising: Maximizing Donations with Data-Driven Asks”, which highlights the critical role of AI in transforming fundraising strategies for nonprofit organizations.

The article emphasizes that AI is not just a trend but an essential tool for organizations aiming to enhance their impact. By leveraging AI and predictive analytics, nonprofits can analyze vast amounts of data to understand donor behaviour, personalize engagement, and optimize donation requests. This data-driven approach allows charities to identify potential donors more effectively, tailor their campaigns, and ultimately maximize donations, leading to stronger relationships with their donor base and greater overall success in advancing their missions.
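As a simplified illustration of what data-driven asks can mean in practice, the sketch below fits a propensity model on invented donor records and ranks supporters by their predicted likelihood of giving again; every field and coefficient is a made-up assumption, not a reflection of any real organization’s data.

```python
# Minimal sketch of donor-propensity scoring on invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Invented features: past gifts, months since last gift, email engagement rate.
past_gifts = rng.poisson(2, n)
months_since = rng.integers(1, 36, n)
engagement = rng.random(n)

X = np.column_stack([past_gifts, months_since, engagement])

# Synthetic "donated again" labels: more past gifts and engagement help,
# a long lapse hurts. Real labels would come from the organization's records.
logit = 0.8 * past_gifts - 0.05 * months_since + 1.5 * engagement - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Score everyone and surface the top prospects for a tailored appeal.
scores = model.predict_proba(X)[:, 1]
top = np.argsort(scores)[::-1][:5]
print("top prospects (index, score):",
      [(int(i), round(float(scores[i]), 2)) for i in top])
```

The ethical line discussed throughout this article runs through exactly this kind of pipeline: the same scoring logic that helps a charity approach the right supporters can, with different features and goals, be used to target people’s vulnerabilities.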

 

Image: Microsoft Designer.

 

Transparency is key, and we do need regulations.

In the EU, the European Artificial Intelligence Act (AI Act) entered into force on August 1, 2024. The act aims to foster responsible AI development and deployment across the EU and introduces a risk-based framework that addresses potential risks to citizens’ health, safety, and fundamental rights. It categorizes AI systems into different risk levels, with stricter rules for higher-risk applications.

In the US, the rapid advancement of AI is transforming various sectors and prompting widespread legislative action. In 2024, numerous states introduced bills related to AI, reflecting a growing recognition of its potential impact and the need for regulation. While many states are taking steps to address specific concerns, the National Institute of Standards and Technology is working “to develop federal standards for the creation of reliable, robust and trustworthy AI systems.”

In July, Brazil’s federal government launched the Brazilian Artificial Intelligence Plan (PBIA) 2024-2028, titled “AI for the Good of All,” which focuses on enhancing public services in health, education, public security, and energy through technological innovations. The plan outlines initiatives and includes the development of the Santos Dumont supercomputer. It emphasizes ethical AI use, training for marginalized groups, the establishment of a national sovereign cloud for data protection, and the creation of a robust AI governance framework to promote innovation and protect citizens’ rights.

In Australia, the National Framework for the Assurance of Artificial Intelligence in Government was released in June, establishing cornerstones and practices for AI assurance. It promotes a consistent approach to AI use, emphasizing the identification of benefits, risk mitigation, lawful use, and accountability. It aligns with Australia’s AI Ethics Principles and complements local and global initiatives aimed at ensuring safe and responsible AI deployment. Key frameworks include the NSW Artificial Intelligence Assurance Framework and international agreements like the Bletchley Declaration and the Seoul Declaration, which advocate for trustworthy and human-centric AI practices.

To navigate some of the challenges ahead, we need a culture of transparency and accountability in both the development and deployment of these technologies. 

This includes engaging a diverse range of stakeholders—policymakers, technologists, ethicists, and the public—in discussions about ethical guidelines and regulations that prioritize democratic integrity and protect individuals from manipulation. 

Furthermore, educational initiatives aimed at increasing public literacy around data privacy, algorithmic bias, and the implications of predictive modelling can empower citizens to critically assess the information they encounter. 

By fostering an informed and engaged society, we can harness the potential of these technologies while safeguarding democratic processes and ensuring that public opinion research serves the collective good, rather than the interests of a select few.


