An Increasingly Digital Future Raises Urgency for Rooting Out Algorithmic Biases in Software Development


The process of software coding embeds the programmer’s conscious and unconscious biases in the algorithms, with real-life consequences for human users and subjects. Image by Gerd Altmann, on Pixabay.

 

By James Myers

“We want the world to remember that who codes matters, how we code matters, and that we can code a better future.”

This statement by the Algorithmic Justice League spotlights important questions that are often overlooked as companies rush to develop and commercialize powerful AI applications. It’s not just about the code: it’s about who writes the code and how the code is written. Because software is now typically evaluated by users only after it has gone to market, the Algorithmic Justice League aims to raise public understanding of the issues and choices we face in the development phase, before applications are released for sale to individuals, businesses, and governments.

 

Computer scientist Joy Buolamwini is the founder of the Algorithmic Justice League and author of Unmasking AI. Image: Algorithmic Justice League.

 

The four core principles of the Algorithmic Justice League, which was formed in 2016 by computer scientist Dr. Joy Buolamwini, focus on user consent, transparency in software development, accountability and oversight, and continuous research for public engagement. “Everyone, especially those who are most impacted, must have access to redress from AI harms,” the organization’s website states. “Moreover, institutions and decision makers that utilize AI technologies must be subject to accountability that goes beyond self-regulation.”

The story of Dr. Buolamwini and the League is told in the documentary Coded Bias, which was an official selection of the 2020 Sundance Film Festival and six other festivals that year. The documentary sets out examples of racial and gender bias coded into the AI systems of big tech companies and warns of the risks of unchecked artificial intelligence.

Examples of coded bias include facial recognition algorithms that are notoriously error-prone in identifying people of colour, because the training data is dominated by white faces and the resulting models are biased toward recognizing white skin. The problems of daily living under an algorithm with a skin-colour bias are evident in many examples, such as when a building management company in Brooklyn planned to implement facial recognition to control residents’ access to their homes. Many widely publicized cases of facial recognition errors have resulted in criminal misidentification and wrongful imprisonment, a fact that helped the tenants in their action against implementation of the algorithms.
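Bias of this kind is typically surfaced through disaggregated evaluation, the approach Dr. Buolamwini’s research helped popularize: measuring error rates separately for each demographic group rather than reporting a single aggregate accuracy. Below is a minimal Python sketch of the idea; the group labels and records are hypothetical illustrations, not data from any real benchmark.

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Compute the misidentification rate separately for each group.

    Each record is (group, predicted_id, true_id). A single aggregate
    accuracy figure can hide a much higher error rate for one group.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical results: near-perfect aggregate accuracy can mask
# a concentrated failure on an under-represented group.
records = [
    ("lighter-skinned", "A", "A"), ("lighter-skinned", "B", "B"),
    ("lighter-skinned", "C", "C"), ("lighter-skinned", "D", "D"),
    ("darker-skinned", "E", "E"), ("darker-skinned", "X", "F"),
]
print(disaggregated_error_rates(records))
# {'lighter-skinned': 0.0, 'darker-skinned': 0.5}
```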

 

 

As public awareness increases, facial recognition software is continuing to improve, although not quickly enough for potential victims of errors.

The U.S. National Institute of Standards and Technology (NIST) has provided larger and more diversified datasets for use by academic and industrial researchers, and improved recognition is being enabled by faster processing speeds, increasing camera resolution, and other software and hardware enhancements.

University at Buffalo computer science associate professor Ifeoma Nwogu notes that “In 2022, the biometrics and cryptography company Idemia correctly matched 99.88% of 12 million faces in the mugshot category tested by NIST,” an error rate of 0.12%, far less than the 4% error rate that existed in 2014. The error rate on larger and more diverse datasets could be higher, but even a 0.12% error rate among 12 million faces means roughly 14,400 people misidentified. Applied to the global population of about 8 billion, the same error rate would leave some 9.6 million people unrecognized.
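The scale of even a small error rate is easy to verify. A back-of-the-envelope sketch in Python, using the figures above:

```python
def expected_misidentifications(error_rate, population):
    """People misidentified at a given error rate, rounded to the nearest person."""
    return round(error_rate * population)

# A 99.88% correct-match rate leaves a 0.12% error rate.
error_rate = 1 - 0.9988

print(expected_misidentifications(error_rate, 12_000_000))    # 14,400 of 12 million mugshots
print(expected_misidentifications(error_rate, 8_000_000_000)) # 9.6 million of 8 billion people
```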

The risk of facial recognition software errors is so high that it is one of the reasons the European Union’s newly enacted AI Act prohibits “placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.”

 

Cameras for facial recognition and surveillance are in widespread use. Image by Peggy Marco, on Pixabay.

The potential for facial recognition algorithms to misidentify human targets in law enforcement and in battle is a growing concern.

As Dr. Buolamwini explained to The Markup, “One of the reasons I continue to resist biometric surveillance technologies is because of how easily face data can be incorporated into weapons of policing, weapons of war. A few months ago, I came across a video of a quadruped robot with a gun mounted to its back. These systems can be equipped with facial recognition and other biometric technologies, raising grave questions. What happens when civilians are confused for combatants? Which civilians are more likely to be labeled suspicious?”

There is also a risk that facial images will be harvested for commercial use without permission from, or compensation to, the people they depict, while criminals and other unscrupulous actors can misuse those images with highly realistic deepfake technology. As we noted in our article Rise of Virtual Reality Tech Increases Risks of Entering AI’s Third Dimension, and the Need for Immersive Rights, these heightened risks have moved Denmark to give its citizens copyright protection over images of their bodies, facial features, and voices. Other countries may follow.

The risk of rapidly multiplying coding errors increases with new practices like vibe coding.

The risk extends to any application that is developed quickly, if speed comes at the expense of the cost and time required to put the product through rigorous tests for safety, accuracy, and absence of bias. With investors rewarding higher revenues and lower costs in a globally competitive market, companies have cut back on user-experience and usability testing and are now moving to replace human testers with AI.

 

The word “Error” displayed on a person. Image by Dominic Swain, on Unsplash.

 

While it can come with risks, vibe coding attracts businesses because of the significant cost savings and speed enabled by reducing human programming.

The attraction of increased profitability is so great that SoftBank CEO Masayoshi Son, who foresees AI that will be “ten thousand times more intelligent than human wisdom,” is planning to use AI agents for all of the company’s coding and programming, doing away with the need for human programmers. (For more, see our August article A Deep Dive Into Machine Superintelligence: Why are Companies Racing for It, and What Would Motivate a Machine that Outsmarts the Brain?)

Vibe coding is the name of a recent trend that speeds application development by prompting AI models to generate code from natural-language descriptions, with the output often accepted and assembled with little human review. The rise of vibe coding is becoming a concern for a number of security specialists. As Alex Zenla, chief technology officer of the cloud security firm Edera, told Wired, “We’re hitting the point right now where AI is about to lose its grace period on security. And AI is its own worst enemy in terms of generating code that’s insecure. If AI is being trained in part on old, vulnerable, or low-quality software that’s available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues.”

If vibe coders rely on code generated by large language models (LLMs) like ChatGPT, which are increasingly used for their coding speed, that code will require continuous monitoring over its life cycle. The outputs of LLMs can vary, sometimes significantly, when they are asked to repeat a task, so vibe-coded software may unintentionally incorporate a particular variation that produces unanticipated errors at some point in the future.
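What that life-cycle monitoring might look like in practice: pin each accepted snippet, and when a model regenerates it, compare fingerprints and re-run the same tests before adopting the new version. The Python sketch below is a hypothetical illustration of the idea, not an established tool; the check_regeneration helper and the convention of passing tests as callables are assumptions for the example.

```python
import hashlib

def fingerprint(code: str) -> str:
    """Stable hash of a generated snippet, so later regenerations can be compared."""
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

def check_regeneration(pinned_code: str, regenerated_code: str, test_suite) -> bool:
    """Accept a regenerated snippet only if it is byte-identical to the
    pinned version, or still passes the same behavioural tests.
    LLM output can drift between runs, so identity alone is too strict a gate."""
    if fingerprint(regenerated_code) == fingerprint(pinned_code):
        return True  # byte-for-byte identical: no drift
    # Output drifted: run behavioural checks before accepting the new version.
    return all(test(regenerated_code) for test in test_suite)

# Hypothetical usage: each test is a callable that inspects or exercises the code.
tests = [lambda code: "eval(" not in code]
print(check_regeneration("x = 1", "x = 1", tests))   # True: identical
print(check_regeneration("x = 1", "x = eval(s)", tests))  # False: drifted and failed a test
```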

Can bias be rooted out of software code?

The question is at the heart of the Algorithmic Justice League’s mission. A commentator in Coded Bias notes that “Everybody has unconscious biases and people embed their own biases into technology,” so it may be that bias can never be fully eliminated at the outset. How much of our subconscious drives our decisions? Vibe coding amplifies both the risk that biases will multiply and the difficulty of rooting out biases, consciously or unconsciously embedded, before code goes to market and potentially inflicts harm and injustice.

In raising public awareness of the choices inherent in software coding, and of the potential for coding done by AI agents to multiply errors rapidly, the Algorithmic Justice League and other AI safety advocates are fighting an uphill battle. They confront large, well-funded technology companies at a time when regulations like the EU’s AI Act are only just coming into force, and then for only a fraction of the planet’s roughly 8 billion people.

 

Joy Buolamwini received the 2024 NAACP-Archewell Foundation Digital Civil Rights Award, presented by the Duke and Duchess of Sussex. Image: Algorithmic Justice League.

 

For the time being, at least, consumers can still exercise choice in selecting software applications, and that choice is enhanced by knowledge of the risks that the League and others continue to research and bring to public awareness. As the features and complexities of applications rapidly multiply, consumers armed with that knowledge stand a fighting chance on a level playing field. Appreciating that who codes matters, that how we code matters, and that we can code a better future may be the key to designing that future.


