Artificial intelligence exacerbates inequity

Health care? An algorithm decides health care access for over 200 million Americans. National security? An algorithm decides who should be flagged as a suspicious individual. Economic equality? Algorithms now help decide loans of all kinds across the banking system. Employment? Algorithms have become the first filter in many companies' hiring processes.

Everything from our Google searches to our Instagram ads is controlled by algorithms. Algorithms have massive implications for nearly every major issue in the U.S. today. And while they have the potential to benefit society, studies have also begun shedding light on their potentially biased nature.

The problem with many algorithms is that they are built to learn from data and adjust their own behavior over time, an approach known as machine learning, a branch of artificial intelligence (AI). Problematically, when algorithms learn to replicate the social biases embedded in the data and communities around them, they can exacerbate existing discrimination.

This is especially true when algorithms are used in sectors of American life with histories of discrimination against certain subgroups. Today, the U.S. justice system uses a risk assessment algorithm known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to estimate the likelihood that an individual will re-offend, expressed as a recidivism score. The score a defendant receives then becomes instrumental in life-changing decisions the judge makes, such as whether and to what extent the defendant is offered rehabilitation and how severe their sentence will be.

Karen Hao explains in a January 2019 MIT Technology Review article that machine learning algorithms aim to identify patterns in the data they are given. However, these patterns are usually correlations within the data, not causal relationships.

Hao takes the example of the justice system to contextualize this: “If an algorithm found that low income was correlated with high recidivism, it would leave you none the wiser about whether low income actually caused crime … they turn correlative insights into scoring mechanisms.” 

COMPAS is trained on decades of data about which individuals went on to re-offend. Analyzing that data, COMPAS found correlations between low income, minority status and higher recorded rates of recidivism. But as Hao explained, these are only correlations; the algorithm nonetheless treats them as if they were predictive, folding them into the criteria that determine an individual's recidivism score. As a result, COMPAS has been recorded giving low-income individuals and minorities higher recidivism scores.
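To see how a correlation becomes a score, consider the minimal sketch below. It is purely illustrative: COMPAS is proprietary, so the model, the single "income" feature and the numbers here are hypothetical stand-ins, not the actual system. In the synthetic data, everyone re-offends at the same underlying rate, but low-income individuals are re-arrested more often, and a standard model trained on those records duly assigns them higher risk scores.

```python
# Illustrative sketch only: COMPAS is proprietary, so this is a generic
# risk model trained on synthetic "historical" data, not the real thing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic history: everyone re-offends at the same true rate, but
# low-income neighborhoods are policed more heavily, so their
# re-offenses are *recorded* (re-arrests) more often.
low_income = rng.integers(0, 2, n)                 # 1 = low income
true_reoffense = rng.random(n) < 0.30              # same base rate for everyone
recorded = true_reoffense & (rng.random(n) < np.where(low_income == 1, 0.9, 0.5))

# A model trained on the recorded outcomes learns the correlation...
X = low_income.reshape(-1, 1)
model = LogisticRegression().fit(X, recorded)

# ...and turns it into a scoring mechanism: higher "risk" for low income,
# even though underlying behavior was identical by construction.
print("predicted risk, low income: ", model.predict_proba([[1]])[0, 1])
print("predicted risk, high income:", model.predict_proba([[0]])[0, 1])
```

The point of the toy example is not the particular numbers but the mechanism: the model faithfully reproduces whatever the historical records show, including the distortions in how those records were produced.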

But this is not an accurate measurement. Researchers have attributed these communities' higher recorded recidivism rates not to their income or race but to the fact that they have historically been policed, and re-arrested, at higher rates than their wealthier or white counterparts. As a result, minority and low-income individuals have received more severe sentences for similar crimes when judges rely on COMPAS scores, fueling the vicious cycle that places these individuals in the justice system in the first place.

While technological innovation is necessary and has the potential to alleviate some of the leading issues in the world today, the U.S. justice system cannot become a testing ground for poorly monitored algorithms. Neither can job searches, loan decisions, health care or any of the hundreds of other places where algorithms are already being used.

Ironically, many developers view algorithms as the saving grace of public programs. They believe that using machines can help eliminate race, gender, sexuality and other characteristics as factors in decisions, reducing the implicit bias that has long plagued nearly every public program in the U.S.

Inadequate oversight and testing have produced quite the opposite. Rather than empowering oppressed communities, algorithmic bias has amplified existing discrimination.

Ultimately, the true problem lies less in the concept of algorithms and more in the lack of oversight and testing needed to improve AI. The rise of algorithms has not been matched by commensurate legislative action, and that gap is preventing them from becoming assets to equality rather than barriers to it.

Rayid Ghani, a computer scientist at Carnegie Mellon University, has conducted numerous studies on algorithmic bias across many industries. In an interview with Nature, he concluded that while machine learning systems still exhibit bias, they are less biased than humans. By reducing implicit and even explicit bias, machine learning technology has the potential to uplift communities rather than oppress them.

The main solution lies in conducting more audits and studies. While such audits have become more common as concerns about algorithmic bias have grown, they need to become federally mandated. Just as foods and drugs undergo rigorous testing before they are made available to the public, mandatory testing of algorithms and the data sets they are trained on should occur before an algorithm begins deciding who gains access to something as life-changing as health care.
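To give a sense of what such an audit might check, here is a small, hypothetical sketch. It is not a real regulatory standard: the 5 percent gap threshold, the group labels and the toy data are all assumptions made for illustration. The audit compares false positive rates, meaning how often people who did not re-offend were still flagged as high risk, across groups, the kind of disparity that investigations of COMPAS have focused on.

```python
# Illustrative audit sketch: compares false positive rates across groups.
# Assumes we have each person's group label, the model's yes/no risk flag,
# and whether they actually re-offended; the threshold is an arbitrary example.
import numpy as np

def false_positive_rate(flagged, reoffended):
    """Share of people who did NOT re-offend but were still flagged high-risk."""
    did_not = ~reoffended
    return (flagged & did_not).sum() / did_not.sum()

def audit(groups, flagged, reoffended, max_gap=0.05):
    rates = {g: false_positive_rate(flagged[groups == g], reoffended[groups == g])
             for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Toy data standing in for an algorithm's real decision log.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=5_000)
reoffended = rng.random(5_000) < 0.30
flagged = rng.random(5_000) < np.where(groups == "B", 0.55, 0.35)  # group B flagged more

rates, gap, passed = audit(groups, flagged, reoffended)
print(rates, f"gap={gap:.2f}", "PASS" if passed else "FAIL: disparate error rates")
```

A mandated audit along these lines would run on an algorithm's actual decision records before deployment, with regulators, not the vendor, choosing which disparities count as failure.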

In the case of the justice system algorithm, such testing would have revealed this bias and could have prompted changes to the program much sooner, before the algorithm had already affected millions of people.

This idea of mandated testing gained legislative backing when Senators Cory Booker and Ron Wyden introduced the Algorithmic Accountability Act earlier this year, which would require companies to test their algorithms for bias.

Another solution that is gaining increasing traction in academia and legislative bodies is an AI “bill of rights.” In “A Human’s Guide to Machine Intelligence,” Kartik Hosanagar of the University of Pennsylvania proposes this bill of rights as a method of protection for citizens. 

This bill of rights would prioritize transparency, not just in how an algorithm operates but also in when it is used. If a company intends to use an algorithm to determine who qualifies for certain programs, jobs or services, then consumers must be informed of the algorithm's existence, the factors it weighs, the data sets it is trained on and the personal information it uses to reach its final decision. Ideally, the bill of rights would also include a consent clause giving citizens the opportunity to opt out of algorithmic decision-making without repercussions.

Organizations such as the Algorithmic Justice League and Data for Black Lives have already had success in raising awareness of these issues. But as they continue to fight for meaningful legislative reform, it is the job of citizens to remain cognizant not just of the presence of algorithms, but also of the bias within them.

The ubiquity of algorithms means that health care, mortgages, jobs and information are increasingly dictated by AI. Regardless of your age, gender, race, geographic location or any other factor, algorithms are likely to begin shaping your life if they haven't already. Whether the introduction of AI into our lives acts as an equalizer or exacerbates existing disparities rests on our ability to acknowledge, combat and overcome bias.