
Are humans becoming digital objects? Image by Gerd Altmann, on Pixabay
By James Myers
In a split second, artificial intelligence can make a decision with long-lasting and sometimes lifelong consequences for people who are often left with no means to challenge the outcome. Increasingly, lower-cost AI applications operate without a human in the loop, deciding which applicants for employment or immigration will be chosen for an interview, to cite two common situations with the potential to upend human lives.
In such cases, the risks of an unjust AI decision don’t always end with the selection process. As companies and governments adopt AI for cost reduction and efficiency, humans are sometimes subjected to interviews by AI agents that judge outcomes based on undisclosed criteria and biases embedded in their programming.
Governments are beginning to take notice of the potential harms, and some, like the Canadian province of Ontario, have enacted legislation to put humans on a more equal footing with AI in employment matters. In effect since January, an amendment to Ontario’s Employment Standards Act requires employers with 25 or more employees to disclose in all publicly advertised job postings whether AI is used to screen, assess, or select applicants.
While laws like Ontario’s can provide a measure of human protection, the rapid pace and wide extent of AI implementation have left employers in the province wondering about the exact meanings of the terms “AI,” “screen,” “assess,” and “select” in the context of evolving job-market practices. Nonetheless, employers found to be out of compliance with the new law face significant financial penalties.

Image by Gerd Altmann, on Pixabay.
Hiring practices have been fundamentally transformed by the increasing prevalence of online job-posting platforms such as LinkedIn, which attract large numbers of applicants from around the globe in volumes beyond the capacity of human recruiters to manage. While estimates vary by region, many indicate that online job boards now account for three-quarters or more of hires.
The expansion of recruitment beyond traditional local markets provides further incentive, beyond cost-saving, for employers to use AI in hiring. AI is also being tasked with compiling candidates’ social media profiles, which are increasingly used to vet the suitability of a potential hire.
Compounding the challenges for job seekers is the rise of so-called “ghost jobs” posted online. Forbes reported in November 2025 that “30% of job postings are fake,” with many employers advertising more positions than they intend to fill. The various reasons for ghost job postings include employers testing market conditions, gauging salary expectations, signalling growth, satisfying internal human resources quotas, and building candidate pools for roles that might later open.
Implementing AI in key human resource practices can pose risks not only to employers and prospective employees but also to the makers of the AI applications.
A class action lawsuit has been proposed against California-based Eightfold AI Inc. for discriminatory practices. Eightfold offers a product called AI Interviewer, which the company’s website claims can “run 1 million interviews in 1 hour.” The lawsuit challenges Eightfold under three laws, including the U.S. Fair Credit Reporting Act, which requires that an employer notify any individual who faces an adverse action on the basis of information obtained from credit reporting agencies.
Eightfold markets its Talent Intelligence Platform that, as the company states (see pdf), “features deep learning AI that delivers rich talent insights by analyzing data from SAP SuccessFactors customers and public sources like career sites, job boards, and resume databases (LinkedIn, Hoovers, Crunchbase, GitHub, etc.). Eightfold’s proprietary global data set is the world’s largest, self-refreshing source of talent data. It encompasses more than 1 million job titles, 1 million skills, and the profiles of more than 1 billion people working in every job, profession, industry, and geography.”

Eightfold Inc. promotes its AI-driven hiring solutions. Image from eightfold.ai website.
The class action complaint asserts (see pdf) that Eightfold’s system applies AI “to collect sensitive and often inaccurate information about unsuspecting job applicants and to score them from 0 to 5 for potential employers based on their supposed ‘likelihood of success’ on the job. Eightfold’s technology lurks in the background of job applications for thousands of applicants who may not even know Eightfold exists, let alone that Eightfold is collecting personal data, such as social media profiles, location data, internet and device activity, cookies and other tracking, to create a profile about the candidate’s behavior, attitudes, intelligence, aptitudes and other characteristics that applicants never included in their job application.”
The complaint concludes that “These job applicants have no meaningful opportunity to review or dispute Eightfold’s AI-generated report before it informs a decision about one of the most important aspects of their lives—whether or not they get a job.”
In an interview with the New York Times, one of the lead plaintiffs, Erin Kistler, stated, “I think I deserve to know what’s being collected about me and shared with employers. And they’re not giving me any feedback, so I can’t address the issues.” The newspaper reported that among the thousands of jobs that Ms. Kistler has sought, “which she has meticulously tracked, only 0.3 percent of her applications have progressed to a follow-up or interview. Several of her applications were routed through Eightfold’s software system.”
The Eightfold case is one among a number of actions in the U.S. that are challenging the use of AI in hiring.
In May 2025, a federal court in California granted preliminary approval for the case Mobley v. Workday Inc. to proceed as a class action. Workday markets a popular system for screening job applicants, and the lead plaintiff, Derek Mobley, claims that the company’s algorithms illegally discriminate against certain classes of job seekers, such as older individuals, Black applicants, and individuals with disabilities. The judge rejected the company’s motion to dismiss based on evidence that included a rejection notice one job seeker received at 1:50 a.m., less than an hour after he submitted his application.

The Mobley v. Workday action was launched in 2023. The company lost its 2024 bid to avoid class action. Headline from Reuters.
Workday’s platform uses AI to compare applicant skills with the requirements listed in job postings, recommending to employers the candidates likely best suited for an available position. The plaintiffs in Mobley v. Workday Inc. are five individuals over the age of 40 who “applied for hundreds of jobs using Workday’s system and were rejected in almost every instance without an interview,” as reported by law firm Quinn Emanuel Urquhart & Sullivan. The law firm indicates that “lawsuits premised on AI bias have been successful in stating claims for discrimination based on disparate impact,” a legal theory under which applicants must demonstrate a causal relationship between a specific practice and negative consequences for a protected class or group of people.
The judge explained that “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one.”
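Disparate impact claims of this kind typically rest on statistics rather than intent. One common first screen is the U.S. Equal Employment Opportunity Commission’s “four-fifths rule”: a practice warrants scrutiny when a protected group’s selection rate falls below 80% of the most-favored group’s rate. A minimal sketch in Python, with all figures invented for illustration:

```python
# A minimal sketch of the EEOC's "four-fifths rule," with invented figures.
# A practice warrants scrutiny if a protected group's selection rate falls
# below 80% of the most-favored group's rate.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "under 40": selection_rate(120, 1000),    # 12.0% selected
    "40 and over": selection_rate(30, 1000),  # 3.0% selected
}

benchmark = max(rates.values())  # rate of the most-favored group
for group, rate in rates.items():
    ratio = rate / benchmark
    status = "potential disparate impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f}, {status}")
```

A statistical screen like this only flags a disparity; plaintiffs must still tie it to a specific practice, such as a particular screening algorithm.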
Large language models, like ChatGPT, produce biased assessments of employment applications.
A widely cited study published in July 2024, entitled Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval, by Kyra Wilson and Aylin Caliskan of the University of Washington, tested the ability of large language models (LLMs, like the popular ChatGPT) to screen job applicants’ resumes fairly. The researchers simulated job screening for nine occupations, using 500 publicly available resumes and 500 job descriptions, and found bias “significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases.”
The simulation also found that Black males “are disadvantaged in up to 100% of cases, replicating real-world patterns of bias in employment settings,” and that the frequency of terms used in resumes and their lengths “play a significant role in the performance and outputs of language models.”
The study’s authors conclude that “While there are a number of factors contributing to biased outcomes in resume screening via LLMs, one naive approach to mitigation might be removing names from resumes altogether. However, resumes from real-world job seekers differ on many additional dimensions which can signal social group membership,” including educational institutions, locations, and even specific words in applications. They cite a study that found women’s resumes were more likely to use words like “cared” or “volunteered” while men used words like “repaired” or “competed,” differences that correlated with hiring outcomes.
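The study’s approach can be pictured as a counterfactual audit: hold the resume constant, vary only the name, and compare how relevant a model judges the resume to a job description. The sketch below uses a hypothetical embed() stand-in where a genuine audit, like Wilson and Caliskan’s, would call a real embedding model; the names and resume text are invented for illustration.

```python
import math
import random

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a real text-embedding model; a genuine
    # audit would call the actual models the researchers tested.
    rng = random.Random(text)  # deterministic per input, so runs compare
    return [rng.uniform(-1, 1) for _ in range(64)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

job_vec = embed("Software engineer role requiring Python and teamwork.")
resume = "{name}, software engineer, 5 years of Python, led a project team."

# Names used as illustrative demographic signals, as in audit studies.
for name in ["Emily Walsh", "Lakisha Washington", "Greg Baker", "Jamal Robinson"]:
    score = cosine(job_vec, embed(resume.format(name=name)))
    print(f"{name}: relevance {score:.3f}")

# With an unbiased model, scores would differ only by noise, since only the
# name varies; systematic gaps across many resume/job pairs are the bias
# signal the study measured.
```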
Algorithmic biases persist despite longstanding evidence of programming issues that have yet to be fully addressed. For instance, in 2018 Reuters reported that Amazon discovered its automated employee recruiting platform was consistently under-rating female candidates for software developer jobs. That’s because the company’s computer models “were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.” Amazon deactivated the system after the problems became public knowledge.
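The mechanism behind the Amazon failure is easy to reproduce in miniature. The toy scorer below is not Amazon’s system, only an illustration of the failure mode with invented data: it weights words by how often they appeared in historically hired versus rejected resumes, so when past hires skew male, words correlated with women’s resumes acquire negative weight without gender ever appearing as an explicit feature.

```python
from collections import Counter

# Toy history: word patterns from past hires and rejections in a
# male-dominated applicant pool (all data invented for illustration).
hired = ["python java led competed", "python repaired systems led"]
rejected = ["python women's chess club volunteered", "java cared volunteered"]

hired_counts = Counter(w for r in hired for w in r.split())
rejected_counts = Counter(w for r in rejected for w in r.split())

def score(resume: str) -> int:
    # Words common among past hires raise the score; words common among
    # past rejections lower it. Historical bias becomes the ranking rule.
    return sum(hired_counts[w] - rejected_counts[w] for w in resume.split())

print(score("python led competed"))         # 4: matches historical hires
print(score("python women's volunteered"))  # -2: penalized by proxy words
```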
The Amazon case demonstrates that, even if eliminating bias at the source proves practically impossible, corrections can be made only when problems are brought to light rather than hidden.
AI used for screening immigration applicants can produce unfair outcomes.
The rise of remote work, mobile workforces, and regional demand for highly trained, specialized talent is combining with other factors, such as climate change, poverty, and war, to drive unprecedented volumes of immigration applications in many countries. Faced with overwhelming caseloads and lengthy verification processes, some countries are turning to AI for assistance.

Headline in Maclean’s Magazine
In March, the Toronto Star reported that an AI reviewer for the Canadian Immigration Department rejected a permanent residence application from a health sciences post-doctoral research fellow and guest teacher at McMaster University in Hamilton, Ontario. The applicant, Kémy Adé, holds a PhD in the immunology of aging from the prestigious Sorbonne University in France, and was shocked to receive a rejection letter that described her current job duties as wiring and assembling control circuits and building control and robot panels. The letter stated that these duties didn’t match the Canadian work experience she had claimed.
The problem is that nowhere had Adé described her job duties in those terms. “I saw this language about this job description that has nothing to do with me,” Adé told the newspaper. “I was disoriented how this could happen,” she said, until she noticed a disclaimer at the bottom of the letter that referred to the use of generative AI to support application processing. Although the disclaimer stated that all generated content was verified by a human officer and that generative AI was not used to make or recommend a decision, it is clear that insufficient human review was provided in her case.
With a backlog of nearly one million immigration applications that have exceeded the Government of Canada’s own processing time limits, AI offers the potential to speed decisions by a limited number of human reviewers. The problem for human oversight is the lack of time to vet the vast amounts of information AI provides, while AI is well known for hallucinations of the kind that now threaten to derail Kémy Adé’s permanent residence application.
The failure of AI in this instance, compounded by human error in its oversight, provides an example of how well-intentioned policies for responsible AI use can be insufficient.
The Department of Immigration, Refugees and Citizenship Canada (IRCC) recently published its first-ever AI strategy, which acknowledges that “While AI has immense potential, it also poses risks. We have seen that AI systems can perpetuate bias and discrimination, mistrust, a lack of accountability for decisions, and issues with privacy and data protection. They can also be misused by bad actors. When these problems occur, they can cause harm to individuals and groups, particularly to the most vulnerable among us. But IRCC’s approach to automated decision-making is deliberately transparent and governed.”
IRCC’s strategy includes using AI to enhance the productivity of employees by performing tasks such as “triaging applications, creating summaries, producing documents, and responding to client enquiries.” Program productivity will be enhanced by “identifying anomalies, matching data, and making assessments and recommending options.” The latter task includes “flagging straightforward, low-risk files for expedited officer decision,” provided the “tools do not refuse or recommend refusing any applications” (the boldface emphasis is the department’s).
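That refusal constraint can be pictured as a routing rule with no refusal path. The sketch below is not IRCC’s actual system, only an illustration of the stated design principle: automation may fast-track a file to an officer, but every outcome ends with a human decision.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    EXPEDITED_OFFICER_DECISION = "expedited officer decision"
    FULL_OFFICER_REVIEW = "full officer review"
    # Deliberately no REFUSE member: per the stated policy, the tool may
    # not refuse or recommend refusing any application.

@dataclass
class Application:
    documents_complete: bool
    anomaly_flags: int  # e.g., mismatched data found during screening

def triage(app: Application) -> Route:
    # Straightforward, low-risk files are flagged for expedited decision;
    # everything else goes to full human review. No automated refusals.
    if app.documents_complete and app.anomaly_flags == 0:
        return Route.EXPEDITED_OFFICER_DECISION
    return Route.FULL_OFFICER_REVIEW

print(triage(Application(documents_complete=True, anomaly_flags=0)).value)
print(triage(Application(documents_complete=False, anomaly_flags=2)).value)
```

As the Adé case shows, the weak point in such a design is not the routing rule itself but the depth of the human review that follows it.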
The department states that the risk of AI “necessarily means a slow and cautious approach to incorporating AI into our operations. Given the far-reaching implications of IRCC’s mandate and the life-altering consequences of some of our decisions, we cannot afford to move forward with new technologies and tools until they’ve been thoroughly tested and proven reliable, safe and secure.” The department’s deputy minister acknowledges that, “While AI excels at data processing, it lacks nuanced understanding and ethical judgment.”
AI interviewers gauge human responses, but does the same approach apply in all cases?
Eightfold’s website claims that “Manual hiring is over. Our AI-native digital worker is ready now to conduct bias-conscious interviews at scale, freeing your recruiters to focus on final decisions.” This statement doesn’t acknowledge the sometimes numerous machine-driven decisions that lead to a single, final decision: the determination of criteria for judging applications, the selection of applicants for interview, the questions posed to applicants, and the criteria used to analyze responses.

Image from eightfold.ai website.
In its marketing, Eightfold is careful to promise “bias-conscious” rather than bias-free interviews. The difference is significant, given the difficulty of eliminating bias in AI interpretation of human responses.
AI interviewers are trained to recognize and reward structured responses, for example those that clearly describe employment experience using the so-called STAR method, in which the applicant addresses “situation,” “task,” “action,” and “result.” Robert Manfredi, scholar, AI researcher, and instructor in Rhetoric and Composition at Lanier Technical College, explains that candidates who haven’t been trained to structure responses for AI interviewers are at a disadvantage because the system cannot recognize qualifications expressed in an unexpected form. As a result, the machine might allow less qualified candidates to proceed to the next step in the process.
AI interviewers are sometimes trained to anticipate key words without necessarily detecting appropriate substitutions. As Manfredi explains, “For example, an employment AI interviewer might expect a good candidate to use a term like ‘cross-functional collaboration’ and not recognize that ‘working across teams’ means the same thing. Conversely, AI interviewers are often on the lookout for ‘keyword stuffing,’ and penalize for excessive use of terms that aren’t expressed in natural language.”
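A toy matcher, reflecting no vendor’s actual algorithm, makes both failure modes concrete: exact keyword scoring gives an equivalent paraphrase zero credit, while a crude density check flags unnaturally repetitive keyword use. The expected phrases and thresholds below are invented for illustration.

```python
# Toy illustration only; no vendor's actual scoring algorithm.
EXPECTED = {"cross-functional collaboration", "stakeholder management"}

def keyword_score(answer: str) -> int:
    text = answer.lower()
    return sum(1 for phrase in EXPECTED if phrase in text)

def stuffing_flag(answer: str) -> bool:
    # Crude density check with an invented threshold: flag answers where
    # expected phrases make up an outsized share of the words.
    words = answer.lower().split()
    hits = sum(answer.lower().count(phrase) for phrase in EXPECTED)
    return len(words) > 0 and hits / len(words) > 0.15

trained = ("Last quarter I drove cross-functional collaboration across three "
           "departments and handled stakeholder management for our launch.")
untrained = "I spent most days working across teams and keeping partners aligned."
stuffed = "Cross-functional collaboration, stakeholder management, cross-functional collaboration."

print(keyword_score(trained), stuffing_flag(trained))      # 2 False: advances
print(keyword_score(untrained), stuffing_flag(untrained))  # 0 False: screened out anyway
print(keyword_score(stuffed), stuffing_flag(stuffed))      # 2 True: flagged for stuffing
```

The second candidate’s answer describes the same experience as the first, but the matcher has no way to know it.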
Language differences could pose a significant disadvantage in cases where, for example, an AI conducts an interview in English when it’s not the applicant’s native language. Even a person who is fluently bilingual doesn’t necessarily use the same terminology and phrasing as a native English speaker would, differences that could count against the human.
In particularly high-stakes interviews, the human is likely to experience stress and emotional reactions that result in errors, hesitation, or uncertainty. In these cases, the AI interviewer will give a lower score to the human who hasn’t met its programmed expectations. In a highly controversial practice, some AI interviewers even track and score eye movements, facial motions, and pauses, and as a result risk rewarding overly confident candidates over those who take the time to consider a wider range of options before responding.
Some governments, notably the European Union, are introducing stricter controls on AI’s use in employment. The EU categorizes AI by risk level and applies more stringent controls to systems ranked as “high risk.”

Image by Gerd Altmann, on Pixabay
The EU’s AI Act classifies as high risk “AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships … since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers’ rights.”
What does the future hold for AI in life-changing decision-making?
With no way of knowing an AI-powered system’s criteria for judgment, and no way of addressing errors, job candidates, immigration applicants, and other people facing decisions with significant and potentially long-term effects risk unjust outcomes. The case of Kémy Adé’s application for Canadian permanent residence demonstrates that policies requiring human oversight will not ensure justice if the overseers lack either the time or training to identify algorithmic errors and biases.
AI decision-making systems can’t improve their accuracy and shed their biases unless they’re trained to recognize and avoid repeating their own errors. Identifying serious errors requires, however, that the human applicant know and be able to challenge the criteria used for judgment, yet many AI decision-making systems now operate as ‘black boxes’ that hide the decision-making process.
When some light is allowed into the black box of decision-making, however, injustices can become clear.
For example, a Spanish job applicant who was recently interviewed by an AI discovered that his score had been reduced for a response that a human interviewer would have interpreted differently. In this case, the AI interviewer sent questions by WhatsApp message, to which the applicant was required to respond with voice notes. The system cited a “lack of adaptability” as a character flaw because, when asked which internet browser he uses daily and why, the applicant replied that he uses Google’s Chrome “mostly out of habit.”
The applicant’s discovery of the injustice was enabled by the European Union’s General Data Protection Regulation, which forced disclosure of the decision-making components. The EU has been a leader in regulating AI and protecting consumers, but European companies are facing competitive pressure from businesses in jurisdictions with less regulation – particularly the United States.

Image by Alex Schuler, on Pixabay.
Can AI decision-making be improved, its errors discovered and corrected, and its use appropriately regulated to protect the public against injustices of bias and error?
The outcome of a cooperative effort to empower human overseers and to open avenues of redress when injustices are committed could be a win-win for both the public and AI decision-making. That’s because processes assisted by truly effective AI could be used more broadly, improving lives in many ways, including matching qualified applicants with employers and relieving immigration backlogs.
Craving more information? Check out these recommended TQR articles:
- Thinking in the Age of Machines: Global IQ Decline and the Rise of AI-Assisted Thinking
- Cleaning the Mirror: Increasing Concerns Over Data Quality, Distortion, and Decision-Making
- Not a Straight Line: What Ancient DNA Is Teaching Us About Migration, Contact, and Being Human
- Digital Sovereignty: Cutting Dependence on Dominant Tech Companies
We would appreciate your feedback on The Quantum Record and similar content.
Have we made any errors?
Please contact us at info@thequantumrecord.com so we can learn more and correct any unintended publication errors. Additionally, if you are an expert on this subject and would like to contribute to future content, please contact us. Our goal is to engage an ever-growing community of researchers to communicate and reflect on scientific and technological developments worldwide in plain language.

