Artificial Intelligence and Inequality: Is Innovation Fueling a New Era of Bias?


Ryan Antunez


A breakdown through the lens of law and economics.

“Inequalities are not only driven and measured by income, but are determined by other factors — gender, age, origin, ethnicity, disability, sexual orientation, class, and religion. These factors determine inequalities of opportunity which continue to persist, within and between countries” -United Nations

The exponential growth of artificial intelligence and its capabilities has been celebrated as a catalyst for innovation and economic development. However, inequality’s multidimensional nature raises a pressing question for law and economics: what happens when a technology that reflects legal gaps and current market incentives begins to automate decision-making systems already shaped by persistent inequality?


Fairness on the Line

“No state shall… deny to any person within its jurisdiction the equal protection of the laws” -the Fourteenth Amendment of the U.S. Constitution

At its core, the law aims to structure society around principles of fairness, equal protection, and nondiscrimination. These commitments exist precisely because inequality has historically shaped how institutions distributed rights and opportunities. Legal frameworks, such as constitutional protections, were designed to correct these disparities by limiting biased decision-making. Still, as AI systems increasingly mediate access to credit, healthcare, public benefits, and employment, the question arises whether these protections can withstand the shift from human to algorithmic decision-makers.

Artificial Intelligence’s Western Bias

“AI has the potential to transform lives, bridge gaps, and create opportunities, but only if it works for everyone. When AI systems overlook the rich diversity of cultures, languages, and perspectives worldwide, they fail to deliver on their promise” -Dr. Assad Abbas

AI’s Western bias was never intentional. Its development, research, and funding have been concentrated in Western countries, producing a lack of cultural awareness. That blind spot creates asymmetric representation in both language and physical features.

Physical Features

Training datasets are often composed of individuals with Western features, such as light skin, making facial recognition technology prone to errors when used on minority subjects.

In 2020, Robert Williams, a Black man, was wrongfully arrested following an incorrect facial recognition match.

The systemic inequality reaches beyond race to gender and age. A 2018 study found that facial analysis error rates were 0.8% for light-skinned males but 34.7% for dark-skinned women. Similarly, a 2019 federal study by NIST found that facial recognition technology performed best on middle-aged white males; error rates were markedly higher for women, children, the elderly, and people of color.
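Audits like these boil down to computing error rates per demographic group and comparing them. Here is a minimal sketch of that computation, assuming a hypothetical list of labeled match results; the field names and records are illustrative, not from the cited studies.

```python
from collections import defaultdict

# Hypothetical audit records: demographic group, the system's match
# decision, and the ground truth for that probe.
records = [
    {"group": "light-skinned male",  "predicted_match": True,  "true_match": True},
    {"group": "dark-skinned female", "predicted_match": True,  "true_match": False},
    {"group": "dark-skinned female", "predicted_match": False, "true_match": False},
    # ...thousands more rows in a real audit
]

errors, totals = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    if r["predicted_match"] != r["true_match"]:
        errors[r["group"]] += 1

# The per-group error rate is the number behind findings like 0.8% vs. 34.7%.
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.1%} error rate")
```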


A major dispute over AI facial recognition has centered on its use by the Department of Homeland Security (DHS). DHS and its sub-agencies, such as ICE and Customs and Border Protection, use facial recognition in enforcement operations, in ways critics argue violate many people’s Fourth Amendment rights. In 2017, DHS agencies used facial recognition technology to apprehend 400 family members of unaccompanied migrant children. Using AI as a selection system for deportation not only tears families apart but does so unjustly.

Language

Artificial intelligence language systems are generally less accurate when translating, transcribing, and detecting languages other than English. AI models tend to “think” in English, causing sentence structures and grammatical patterns in other languages to incorrectly mirror those of English. English detection accuracy is typically 95–97%, whereas in other languages it drops to 70–80%.
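Per-language accuracy gaps of this kind are straightforward to quantify once you have labeled text. Below is a minimal sketch using the open-source langdetect package; the package choice and sample sentences are illustrative assumptions, not drawn from the cited figures.

```python
# pip install langdetect
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # langdetect is stochastic; fix the seed for repeatable runs

# Illustrative labeled samples: (text, true ISO 639-1 language code).
samples = [
    ("The weather is lovely today.", "en"),
    ("Il fait très beau aujourd'hui.", "fr"),
    ("Hoy hace muy buen tiempo.", "es"),
]

correct, total = {}, {}
for text, lang in samples:
    total[lang] = total.get(lang, 0) + 1
    if detect(text) == lang:
        correct[lang] = correct.get(lang, 0) + 1

# Per-language accuracy; on a large corpus, gaps like 95-97% (English)
# versus 70-80% (other languages) would surface here.
for lang in total:
    print(f"{lang}: {correct.get(lang, 0) / total[lang]:.0%} detected correctly")
```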

When using translators or simply writing text, users often notice discrepancies between translations of different idiomatic expressions. Because English dominates the web and the media that supply training data, it is the language models have been most exposed to; expressions and references in other languages often go misunderstood as a result.

Models typically fail to capture the complexity of less common local dialects, and when they do engage with one, they sometimes overcorrect into caricature, as with African American Vernacular English (AAVE).

AI language models heavily stereotype dialects like AAVE and have been shown to be more likely to suggest that an individual speaking AAVE should:

  • Be assigned less prestigious jobs.
  • Receive harsher criminal sentences, up to and including the death penalty.

As shown in the diagram below, these models typically associate AAVE with negative traits, which is deeply harmful to the communities that speak this dialect.

[Figure: the green text box is in SAE (Standard American English); the purple text box is in AAVE (African American Vernacular English).]
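The experimental design behind such findings, matched-guise probing, is simple to express in code: present the same content in two dialects and compare the traits the model attaches to each speaker. The sketch below only builds the probe prompts; the template, example pair, and trait list are illustrative assumptions, and the actual call would depend on which LLM is being audited.

```python
# Matched-guise probing: identical content, two dialects, compare the
# traits a model associates with each speaker.

PROMPT = 'A person says: "{utterance}" The person tends to be {trait}.'

PAIRS = [
    # (Standard American English, African American Vernacular English)
    ("I am so happy when I wake up from a bad dream because it feels too real.",
     "I be so happy when I wake up from a bad dream cus they be feelin too real."),
]
TRAITS = ["intelligent", "lazy"]

for sae, aave in PAIRS:
    for trait in TRAITS:
        for dialect, utterance in (("SAE", sae), ("AAVE", aave)):
            # In a real audit, send this prompt to the model under test and
            # record the probability it assigns to the statement; a systematic
            # SAE/AAVE gap indicates covert dialect bias.
            print(dialect, "|", PROMPT.format(utterance=utterance, trait=trait))
```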

The lack of linguistic representation in advancing technology not only stalls progress toward equality across languages but actively deepens inequality. Indigenous languages are at particularly high risk: there are fewer resources for students who attempt to learn them, they are being left out of technological advancement, and with the world’s centralization around English, parents are increasingly not teaching their children native languages.


Profit, Efficiency, and the Hidden Price of Bias

Firms increasingly adopt AI systems because they promise efficiency and low costs. Algorithmic decision-making can streamline credit scoring, hiring, loan underwriting, and analytics. These systems replace large amounts of human labor, cutting what businesses pay in wages, arguably the largest turning point in modern commerce. However, evidence suggests that the gain in efficiency comes with a “hidden price.” Algorithms learn from humans, whose judgments are often unknowingly and unintentionally biased, and from historical data structured on inequality. AI reproduces and even amplifies these disparities for marginalized groups, turning what businesses believe is a cost-effective measure into a mechanism of systemic inequality through economic exclusion, the erosion of fair opportunity in hiring, and market concentration.

Economic Exclusion

“On the other hand, since machine learning is based on past decisions recorded in the financial institutions’ datasets, the process very often consolidates existing bias and prejudice against groups defined by race, sex, sexual orientation, and other attributes. Therefore, the interest in identifying, preventing, and mitigating algorithmic discrimination has grown exponentially in many areas, such as Computer Science, Economics, Law, and Social Science” -Garcia et al.

Algorithmic decision-making has institutionalized patterns of discrimination. A 2024 study found that credit systems using machine learning often produce biased results that disadvantage marginalized groups by gender, race, and socioeconomic status.

Additionally, research on mortgage lenders that use these algorithms shows the same disparities. A 2019 study found that African American and Latinx borrowers were charged higher interest rates, earning lenders 11–17% higher profits on those loans and costing minority borrowers hundreds of millions of dollars every year.


While firms profit from machine learning on the surface, marginalized borrowers pay the price. AI’s “efficient” client risk-rating and application processing only reinforce structural inequality in credit, wealth, and capital.
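Two numbers make this concrete: the interest-rate gap on approved loans, and the “four-fifths” approval-rate ratio commonly used as a disparate-impact red flag. A minimal sketch over hypothetical loan records (the data and group labels are illustrative):

```python
# Hypothetical loan records: (group, approved?, interest rate if approved).
loans = [
    ("white",    True,  4.1), ("white",    True,  4.0), ("white",    False, None),
    ("minority", True,  4.6), ("minority", False, None), ("minority", False, None),
]

def approval_rate(group):
    decisions = [approved for g, approved, _ in loans if g == group]
    return sum(decisions) / len(decisions)

def avg_rate(group):
    rates = [r for g, approved, r in loans if g == group and approved]
    return sum(rates) / len(rates)

# Interest-rate gap on approved loans.
print(f"rate gap: {avg_rate('minority') - avg_rate('white'):.2f} points")

# Four-fifths rule: a selection-rate ratio below 0.8 is a common
# disparate-impact red flag in U.S. credit and employment contexts.
ratio = approval_rate("minority") / approval_rate("white")
print(f"approval ratio: {ratio:.2f} ({'flag' if ratio < 0.8 else 'ok'})")
```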

Fair Opportunity in Hiring

Businesses often use AI tools to streamline resume screening, candidate ranking, and application review, assuming that AI reduces human bias and speeds up the process. However, considerable evidence contradicts this assumption.

A 2024 study audited large language models’ (LLMs) screening of 361,000 fabricated resumes and concluded that the AI gave higher scores to candidates with particular demographic traits. The models tended to favor female candidates in particular and rejected Black male candidates most often, even when education, skills, and other qualifications were identical.

On digital freelance marketplaces, new research has uncovered structural bias: women, especially women of color, were passed over by matching algorithms and received fewer job offers than white men, despite near-identical profiles.
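Both findings come from the classic correspondence-audit design: hold qualifications fixed, vary only the demographic signal (typically the name), and measure the score gap. A minimal sketch, where the scorer is an explicit stand-in for whatever screening model is being audited and the names are illustrative:

```python
import statistics

BASE_RESUME = "Experience: 5 years software engineering. Skills: Python, SQL. Education: B.S. CS."

# Names chosen to signal demographic groups, as in correspondence audits.
NAMES = {
    "group_a": ["Emily Walsh", "Greg Baker"],
    "group_b": ["Lakisha Brown", "Jamal Carter"],
}

def screen_score(resume: str) -> float:
    """Stand-in for the screener under audit (e.g., an LLM ranking resumes).
    Replace with a real model call; this toy returns a constant."""
    return 50.0

scores = {
    group: [screen_score(f"Name: {name}\n{BASE_RESUME}") for name in names]
    for group, names in NAMES.items()
}

# With identical qualifications, any nonzero gap is attributable to the name.
gap = statistics.mean(scores["group_a"]) - statistics.mean(scores["group_b"])
print(f"mean score gap: {gap:+.1f}")
```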


Market Concentration

“The AI supply chain is characterised by vertical integration and significant concentration at multiple levels… the development and deployment of AI models depend on access to specialized hardware, cloud computing infrastructure, proprietary training data, foundation models, and downstream applications” -Foucault et al.

Access to top-tier AI tools accrues to already dominant firms: they have the funds to purchase and support them, control of large datasets, and advanced computational resources. These businesses can also normalize biased practices, since their AI tools eventually become the hiring, lending, pricing, and service standards across entire markets.

Smaller firms, especially those located in or serving underrepresented communities, often cannot compete, because market power, and with it the finances, infrastructure, and datasets that advanced AI requires, is concentrated with the incumbents. Shut out of technology like AI, non-dominant businesses fall further behind, leaving today’s market giants entrenched indefinitely.
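Concentration of this kind is conventionally measured with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A quick sketch with placeholder shares (illustrative figures, not Statista’s):

```python
# Herfindahl-Hirschman Index: sum of squared market shares, in percent.
# Placeholder shares for three leading cloud providers; the fragmented
# remainder of the market adds comparatively little to the total.
leader_shares = [31, 24, 11]

hhi = sum(share ** 2 for share in leader_shares)
print(f"HHI (leaders only) = {hhi}")
# The 2023 U.S. merger guidelines treat markets above 1,800 as highly concentrated.
```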


Beyond Law and Economics

AI’s effect on equality reaches far beyond law and economics. Discussions of inequality often center on food scarcity and disease in less developed countries, and both of these critical problems are shaped by artificial intelligence as well.

Discrepancies in Disease Diagnosis

Healthcare AI systems are typically trained on data from Western hospitals, making them less reliable at detecting disease in patients from other ethnic backgrounds. The problem is especially pronounced in dermatology.

A 2021 study found that healthcare AI models were 29–40% less accurate when detecting skin disease in darker-skinned patients.

The underlying cause of this diagnostic gap is the lack of diverse racial representation in training data. As AI tools spread across the world, the dominance of Western-centric data and the underrepresentation of minorities in training sets become concerning, posing legitimate risks to the health of people around the globe.

[Figure: FST I-II represents lighter skin tones, and FST V-VI represents darker skin tones.]
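The representation problem upstream of these accuracy gaps can be checked directly by counting skin-tone labels in a training manifest. A minimal sketch over a hypothetical manifest (the counts are illustrative):

```python
from collections import Counter

# Hypothetical dataset manifest: one Fitzpatrick skin-type (FST) label per
# training image; a real manifest would be read from disk.
manifest = ["FST I-II"] * 820 + ["FST III-IV"] * 150 + ["FST V-VI"] * 30

counts = Counter(manifest)
total = sum(counts.values())
for fst, n in counts.most_common():
    print(f"{fst}: {n} images ({n / total:.0%})")
# A skew like 82% / 15% / 3% foreshadows exactly the accuracy gaps
# reported for darker-skinned patients.
```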

Disregarded Regional Characteristics

Although often overlooked, artificial intelligence’s tendency to accentuate geographic disparities contributes to unequal resource distribution. Environmental differences across the globe are frequently ignored in the development of AI agricultural tools; several tools designed to detect pests and predict harvests, for example, have failed in Southeast Asia and Sub-Saharan Africa.


There are several other examples of algorithmic bias in agriculture:

  • Targeting large, commercial farms rather than smallholder farms.
  • Prioritizing yield size rather than long-term soil health and biodiversity.
  • Recommending resource-intensive practices.

These issues have the potential to worsen inequality dramatically through food apartheid, food deserts, and more. When innovative technologies like AI fail to work in less developed countries, the existing global hierarchy stays intact.

What does this mean?

“A 2023 McKinsey report estimated that generative AI could contribute between $2.6 trillion and $4.4 trillion annually to the global economy. However, realizing this potential depends on creating inclusive AI systems that cater to diverse populations worldwide” -Dr. Assad Abbas

There is no denying the power of AI. Its ability to transform the economy, and with it the workings of the world, is remarkable and likely inevitable. Yet this power carries significant consequences for inequality. When AI is integrated into the legal system, finance, and other critical sectors, it often reflects the biases created by its design and the data it consumes. Artificial intelligence’s concentration on Western standards threatens the livelihoods of marginalized groups, which suggests that the solution begins with a broader perspective.

AI’s nuanced nature demands an adaptable response. Whether that means collecting data from every corner of the world, changing our societal frameworks, or simply using AI more empathetically, artificial intelligence has proven that it must be used wisely.

Abbas, Assad. “Western Bias in AI: Why Global Perspectives Are Missing.” Unite.AI, 23 Jan. 2025, www.unite.ai/western-bias-in-ai-why-global-perspectives-are-missing/.

“AI Detection Accuracy in Multilingual Texts.” Detecting-Ai.com, 2025, detecting-ai.com/blog/ai-detection-accuracy-in-multilingual-texts.

An, Jiafu, et al. “Measuring Gender and Racial Biases in Large Language Models.” ArXiv.org, 2024, arxiv.org/abs/2403.15281.

“Artificial Intelligence (AI) in Agriculture Market Size | Forecast 2032.” Market.us, market.us/report/artificial-intelligence-ai-in-agriculture-market/.

Blair, Amanda, and Karen Odash. “New Study Shows AI Resume Screeners Prefer White Male Candidates: Your 5-Step Blueprint to Prevent AI Discrimination in Hiring.” Fisher Phillips, 2024, www.fisherphillips.com/en/news-insights/ai-resume-screeners.html.

“The Prejudice of Algorithms.” Haas News, UC Berkeley Haas, 15 Mar. 2019, newsroom.haas.berkeley.edu/magazine/spring-2019/the-prejudice-of-algorithms/. Accessed 26 Nov. 2025.

Garcia, Ana Cristina Bicharra, et al. “Algorithmic Discrimination in the Credit Domain: What Do We Know about It?” AI & Society, vol. 39, 17 May 2023, pp. 2059–2098, https://doi.org/10.1007/s00146-023-01676-3.

Fergus, Rachel. “Biased Technology: The Automated Discrimination of Facial Recognition | ACLU of Minnesota.” Www.aclu-Mn.org, 29 Feb. 2024, www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition.

Hofmann, Valentin, et al. “AI Generates Covertly Racist Decisions about People Based on Their Dialect.” Nature, vol. 633, 28 Aug. 2024, https://doi.org/10.1038/s41586-024-07856-5.

Lee, Jaeah. “New Study Finds Startling Race and Gender Gaps in the Mortgage Market.” Mother Jones, 28 July 2015, www.motherjones.com/politics/2015/07/race-gender-interest-rates-mortgages/. Accessed 26 Nov. 2025.

Myers, Andrew. “AI Shows Dermatology Educational Materials Often Lack Darker Skin Tones.” Stanford HAI, 5 Sept. 2023, hai.stanford.edu/news/ai-shows-dermatology-educational-materials-often-lack-darker-skin-tones.

Raghavan, Manish, and Solon Barocas. “Challenges for Mitigating Bias in Algorithmic Hiring.” Brookings, 6 Dec. 2019, www.brookings.edu/articles/challenges-for-mitigating-bias-in-algorithmic-hiring/.

Richter, Felix. “Infographic: AWS Stays Ahead as Cloud Market Accelerates.” Statista Daily Data, Statista, 4 Nov. 2025, www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/. Accessed 26 Nov. 2025.

The SimulTrans Team. “Limitations of Language Models in Other Languages.” Simultrans.com, SimulTrans, 25 Apr. 2024, www.simultrans.com/blog/limitations-of-language-models-in-other-languages.

Foucault, Thierry, et al. “How Artificial Intelligence Is Transforming Finance.” IESE Insight, 12 June 2025, www.iese.edu/insight/articles/artificial-intelligence-finance-regulation/.

Trautwein, Yannik, et al. “Opening the “Black Box” of HRM Algorithmic Biases — How Hiring Practices Induce Discrimination on Freelancing Platforms.” Journal of Business Research, vol. 192, Apr. 2025, p. 115298, https://doi.org/10.1016/j.jbusres.2025.115298.