
Image: Gerd Altmann, on Pixabay.
By Mariana Meneses
Digital identity is increasingly becoming part of the basic infrastructure of modern life. As more services move online, from banking to healthcare to government interactions, the ability to prove who you are in a secure and reliable way is becoming as essential as a physical ID. Governments are beginning to treat digital identity as a foundation for accessing rights, services, and participation in society.
At the same time, these systems are becoming politically contentious. In the United Kingdom, recent proposals to introduce digital IDs, linked in particular to employment and immigration checks, have triggered public backlash and raised concerns about privacy, surveillance, and state control. What is at stake is not only how identity is verified, but how it can be used to regulate access, monitor populations, and reshape the relationship between individuals and the state.
Against this backdrop, the European Union has developed a structured approach to digital identity, based on a layered system combining national tools, legal frameworks, and digital instruments.
Rob Hoeijmakers, Digital and AI Strategist, explains that understanding the EU’s system requires distinguishing between three core elements: electronic identities (eID), the regulatory framework (eIDAS), and the emerging European Digital Identity Wallet (EUDI).
The Electronic Identity (eID) functions as a digital version of a traditional ID card and is issued by individual EU member states. Countries have developed their own versions, which are used to access a wide range of services, from filing taxes to logging into banking platforms.
Electronic Identification, Authentication and Trust Services (eIDAS) is the regulatory framework that allows these national systems to function across the 27 member states of the European Union. Rather than creating a single identity system, it establishes common standards and legal validity for digital interactions. Under eIDAS, electronic signatures, digital certificates, and authentication processes gain legal recognition throughout the EU, enabling contracts and official documents to be executed entirely online.
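The core idea behind a legally recognized electronic signature is a verifiable cryptographic binding between a document and a signer. Real eIDAS signatures use asymmetric key pairs and certificates issued by qualified trust service providers; the sketch below is a deliberately simplified illustration of that binding using a shared-secret HMAC from Python's standard library, not the actual eIDAS mechanism.

```python
import hashlib
import hmac

# Much-simplified illustration: real eIDAS signatures rely on asymmetric
# key pairs and qualified certificates. HMAC (a shared-secret scheme)
# only demonstrates the underlying idea: a signature that verifies
# against the original document and fails against a tampered one.

def sign(document: bytes, key: bytes) -> str:
    """Produce a hex digest binding the document to the key holder."""
    return hmac.new(key, document, hashlib.sha256).hexdigest()

def verify(document: bytes, signature: str, key: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign(document, key), signature)

contract = b"Contract between A and B, executed online"
key = b"demo-secret"  # hypothetical; real systems never share a raw secret
sig = sign(contract, key)
```

A tampered document fails verification, which is what gives an electronic signature its evidentiary value: `verify(b"tampered contract", sig, key)` returns `False`.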
Building on this foundation, the European Digital Identity Wallet (EUDI) is designed to allow individuals to store and share multiple verified credentials in one place. These can include not only a national eID, but also documents such as driver’s licences, diplomas, and medical records. A key feature is user control: grounded in principles of self-sovereign identity, the wallet is intended to let individuals decide what information to share, with whom, and in what context.
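The selective-disclosure principle behind the wallet can be illustrated with a minimal sketch. All names, data, and structures here are hypothetical illustrations of the self-sovereign idea, not the actual EUDI wallet specification or API.

```python
# Illustrative sketch of selective disclosure: the wallet holds several
# verified credentials, and the user releases only the specific claims
# a service requests, and only with explicit consent. Hypothetical data.

WALLET = {
    "national_eid": {
        "name": "Ana Silva",
        "birth_date": "1990-04-12",
        "nationality": "PT",
    },
    "drivers_licence": {"licence_no": "X-123", "categories": ["B"]},
    "diploma": {"degree": "MSc", "institution": "Example University"},
}

def present(credential_id: str, requested: list[str], user_approves: bool) -> dict:
    """Return only the requested claims, and nothing without consent."""
    if not user_approves:
        return {}
    credential = WALLET[credential_id]
    return {claim: credential[claim] for claim in requested if claim in credential}

# A service asks for nationality only; name and birth date stay in the wallet.
disclosure = present("national_eid", ["nationality"], user_approves=True)
```

The design point is that the relying service never sees the full credential; it receives exactly the fields the user approved for that context.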
Hoeijmakers explains that the ambition is to move toward a system where identity verification becomes seamless, secure, and consistent across borders while also reducing reliance on private intermediaries such as large technology platforms. In that sense, the EU’s approach is not just technical but institutional: it treats digital identity as a matter of governance, trust, and control over how individuals exist and act in digital space.

Image: ar130405, on Pixabay.
But the EU member states are not alone in these efforts. Many other countries have also implemented digital identity systems, including Brazil.
In Brazil, digital identity is evolving through the digitization of a widely used national identifier into a more secure and interoperable system. The country is transitioning toward a fully digital version of this identifier, supported by a legal framework that enables both public and private entities to issue and verify digital identities while ensuring data security and user control. This approach focuses on improving efficiency, such as reducing fraud and simplifying access to services, by integrating identity verification into digital platforms rather than building an entirely new system from scratch.
The system combines a national identification database, largely based on biometric data (fingerprints) collected for electoral purposes, with a single digital portal that allows citizens to authenticate themselves and access services in one place. This model has rapidly expanded, reaching a large share of the population, but it also introduces important trade-offs: while it increases efficiency and visibility for the state, it raises concerns about large-scale data abuses and the risk of excluding individuals who cannot access or navigate digital systems.
The system has been in use since 2019 and has 153 million registered users performing 250 million authentications per month across 4,500 digital services from more than 1,000 public agencies. Citizens create an account and are classified into three levels (bronze, silver, and gold) depending on how strongly their identity is verified. Basic access (bronze) relies on validating personal data against existing government records, while higher levels introduce stronger authentication methods, such as facial biometrics linked to official databases or verification through banking systems and national digital certificates. This layered model allows broader access at lower levels while reserving more sensitive services for identities that have undergone more rigorous verification, reflecting an approach where trust is progressively built through additional data validation rather than assumed upfront.
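The progressive-trust logic of the tiered model can be sketched as follows. The level names come from the description above, but the specific mapping of verification methods to levels and the service requirements are illustrative assumptions, not the Brazilian system's actual rules.

```python
# Sketch of tiered identity assurance, as in a bronze/silver/gold model:
# each completed verification step can raise the account's level, and
# each service requires a minimum level. The mapping of checks to levels
# below is hypothetical, for illustration only.

LEVELS = {"bronze": 1, "silver": 2, "gold": 3}

CHECKS = {
    "records_match": "bronze",      # personal data validated against records
    "bank_verification": "silver",  # authenticated via a trusted bank
    "facial_biometrics": "gold",    # biometrics matched to official databases
}

def account_level(completed: set[str]) -> str:
    """Return the highest assurance level granted by completed checks."""
    best = max((LEVELS[CHECKS[check]] for check in completed), default=0)
    for name, rank in LEVELS.items():
        if rank == best:
            return name
    return "none"

def can_access(service_min_level: str, completed: set[str]) -> bool:
    """A service is accessible only at or above its required level."""
    return LEVELS.get(account_level(completed), 0) >= LEVELS[service_min_level]
```

The effect is that a newly created account can use basic services immediately, while sensitive services stay gated until stronger verification is completed.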
The UK’s proposal for digital IDs is controversial.
Following significant public backlash, the UK government shifted its approach to digital identity, moving from a more top-down proposal to a consultative process. As noted in the consultation materials, the initial plan announced in 2025 included making digital ID mandatory for right-to-work checks, but it quickly became controversial, with nearly three million people signing a petition opposing the scheme. In response, the government stepped back in early 2026, clarifying that digital IDs would not be legally required for citizens and launching a formal consultation to gather public input.
E-petition debate relating to digital ID – Monday 8 December 2025 | UK Parliament
The UK’s proposal to introduce mandatory digital identity cards is emerging at the intersection of technological, governance, and political pressure. As reported by Al Jazeera, the initiative is framed not only as a modernization of public services, but also as a response to rising concerns about immigration and shifting electoral dynamics. The policy reflects a broader pattern in which digital infrastructure is deployed to address political problems: identity systems become tools not just for authentication, but for regulating access to work, services, and, indirectly, borders.
The government argues that a unified digital ID could streamline everyday interactions with the state. By allowing individuals to verify their identity quickly through a smartphone-based system, the proposal aims to simplify access to services such as education, social assistance, banking, and even voting. It is also presented as a way to reduce identity fraud and administrative friction, replacing slow, document-heavy processes with near-instant verification.
However, Al Jazeera emphasizes that the proposal is equally tied to immigration enforcement. By making digital ID mandatory for employment, the government seeks to make it harder for undocumented migrants to find work, thereby reducing incentives to enter or remain in the country without authorization. UK government ministers argue that the country's relatively flexible access to work, compared with other European countries where identity systems are more established, has driven the proposal to make digital IDs mandatory for employment. The digital ID, in this framing, becomes a filtering mechanism embedded within the labor market itself.
The proposal has nevertheless triggered significant backlash. Civil liberties groups and political opponents have raised concerns about privacy, surveillance, and the expansion of state power. As cited by Al Jazeera, critics argue that requiring individuals to store and share personal data through a government-controlled system risks normalizing a form of everyday monitoring.
There are also concerns about unintended social consequences. Some organizations warn that digital ID systems may deepen exclusion, particularly for vulnerable populations such as individuals with limited access to technology. Rather than resolving structural issues, critics suggest the system could push already marginalized groups further into invisibility, increasing risks of exploitation and poverty.
One of the world’s largest market research firms, Ipsos, has been engaged by the UK government to facilitate a “People’s Panel” that will involve citizens directly in the design of a future digital ID framework. The goal is not only technical functionality, but legitimacy: a system that is trusted, useful, and inclusive must be built with input from those who will use it. Ipsos is responsible for structuring this process, guiding participants as they learn about the issue, deliberate on trade-offs, and formulate recommendations.
The structure of the People’s Panel reveals a deliberate attempt to treat digital identity not just as a technical rollout, but as a problem of collective decision-making. According to the UK government’s consultation materials, the panel will bring together around 120 randomly selected participants, chosen to approximate the diversity of the population and reduce biases in participation. Over a series of workshops, participants will be given balanced information, exposed to competing perspectives, and asked to deliberate on trade-offs before producing shared recommendations.
The process is overseen by institutions including the UK Cabinet Office, Ipsos, and the Sortition Foundation, with additional guidance from an expert oversight group spanning technology, civil liberties, and democratic practice.

Image: Gerd Altmann, on Pixabay.
Public fears about losing privacy are not without justification.
According to The Guardian, emerging AI-driven surveillance systems, such as those developed by companies like Palantir, are creating a largely invisible infrastructure capable of tracking, targeting, and influencing individuals at scale, with profound implications for civil liberties and human rights.
These systems integrate vast amounts of personal and behavioral data, from biometrics and location tracking to social networks, into analytical frameworks that can identify patterns and generate actionable “targets,” whether for law enforcement, migration control, or military operations. While their power lies in their ability to operate seamlessly and often unnoticed, this same invisibility raises serious concerns about accountability, consent, and the erosion of fundamental rights, as individuals become embedded in data ecosystems they neither fully see nor control.
According to The Conversation, platforms like Palantir’s Gotham are transforming how governments organize and act on information by integrating vast, previously fragmented datasets into unified systems that can map individuals, relationships, and behaviors in real time. While this dramatically increases efficiency, allowing analysts to connect data points across agencies in hours rather than weeks, it also enables forms of surveillance and profiling at an unprecedented scale. Because these systems rely on proprietary algorithms that are not publicly transparent, their conclusions, such as identifying someone as a risk or target, can be difficult to scrutinize or challenge, raising concerns about accountability, bias, and democratic oversight.
“Palantir Technologies, a private tech contractor with deep ties to military and intelligence agencies, has reportedly partnered with the White House to implement a sweeping data integration program called Foundry. At first glance, Foundry appears to be a neutral software platform designed to unify and streamline data across government agencies. But behind its unassuming branding lies a project that is dangerously out of step with American values and smacks of authoritarianism. Foundry is not just a tool. It’s more like a factory designed to melt down the raw material of private lives and recast them into profiles, scores, and state-controlled narratives.” – Clarkson Law Firm, July 8, 2025
More fundamentally, as The Conversation’s article argues, this marks a broader transition in governance: from decisions based on concrete evidence to decisions increasingly shaped by patterns detected in data, introducing a “preemptive” logic where individuals may be acted upon based on predicted risk rather than proven actions.
According to the American Immigration Council, U.S. Immigration and Customs Enforcement (ICE) is expanding its use of AI-driven surveillance through a new platform, ImmigrationOS, developed by Palantir. ImmigrationOS is designed to aggregate vast datasets and identify, track, and prioritize individuals for immigration enforcement. The system integrates information from multiple government sources, ranging from tax and social security records to biometric and location data, to generate profiles and flag individuals based on predefined criteria, effectively streamlining decisions about detention and deportation. While presented as a tool to increase efficiency and target high-priority cases, the Council argues that such systems blur the line between technology and policy: the way data is selected, weighted, and interpreted converts human judgments into automated processes.
According to Arthur Piper, Technology Correspondent for the International Bar Association, the growing integration of advanced data systems into government functions is reshaping the relationship between technology, state power, and the rule of law.
Using the example of companies like Palantir, Piper describes how software platforms designed to aggregate and analyze vast datasets are now embedded across domains ranging from healthcare to national security. These systems enable governments to process information at unprecedented scale and speed, supporting tasks such as military coordination, public health management, and immigration control, but they also concentrate decision-making power within opaque technical infrastructures. In this sense, digital identity systems can be understood as part of a broader shift: identity is no longer just verified, but continuously processed within interconnected data environments that shape how individuals are seen and acted upon by institutions.
According to Piper, this expansion raises unresolved legal and ethical tensions, particularly because regulatory frameworks are still catching up with technological capabilities. He notes that existing efforts to govern artificial intelligence, such as risk-based regulatory models, often exempt areas like national security, where some of the most powerful and least transparent applications are deployed. At the same time, public concerns over data use, surveillance, and “mission creep” are growing, especially when contracts and data practices lack transparency.
The result is a paradox: while these systems promise efficiency and improved public services, their legitimacy depends on trust that is difficult to sustain without clear accountability. If citizens begin to withdraw consent or resist participation, Piper suggests, even the most advanced data-driven systems may struggle to function effectively.

Image: Gerd Altmann, on Pixabay.
Digital identity systems are often presented as tools of convenience: ways to log in faster, access services more easily, and reduce fraud. But across different models, from the EU’s interoperable framework to Brazil’s centralized platform and the UK’s contested proposal, a broader pattern emerges: identity is becoming a foundational layer through which states organize access, participation, and decision-making. As these systems integrate with large-scale data infrastructures and AI-driven analytics, they begin to extend beyond verification into continuous monitoring, classification, and prediction.
The central challenge, therefore, is not only technical but institutional. Digital identity systems define how individuals are recognized by the state, and increasingly, how they are evaluated, included, or excluded. Their long-term impact will depend less on their efficiency than on how they are governed: what safeguards are in place, who controls the data, and whether transparency and accountability can keep pace with their expanding capabilities.
Craving more information? Check out these recommended TQR articles:
- Thinking in the Age of Machines: Global IQ Decline and the Rise of AI-Assisted Thinking
- Cleaning the Mirror: Increasing Concerns Over Data Quality, Distortion, and Decision-Making
- Not a Straight Line: What Ancient DNA Is Teaching Us About Migration, Contact, and Being Human
- Digital Sovereignty: Cutting Dependence on Dominant Tech Companies
We would appreciate your feedback on The Quantum Record and similar content.
Have we made any errors?
Please contact us at info@thequantumrecord.com so we can learn more and correct any unintended publication errors. Additionally, if you are an expert on this subject and would like to contribute to future content, please contact us. Our goal is to engage an ever-growing community of researchers to communicate and reflect on scientific and technological developments worldwide in plain language.

