Data-driven applications are deeply rooted in today’s society. We are flooded with targeted advertisements, some of which seem to know exactly what products we discussed in our last conversation. In many industries, it is common to use chatbot support rather than talking directly to another human. Robo-advisors are forecast to become a $1.4 trillion industry by the end of 2020.
Machine learning (ML) and artificial intelligence (AI) are everywhere, but many people are unaware of the more impactful use cases. The same technology that fuels the targeted advertisements we see every day is prolific in higher-stakes industries such as healthcare, finance, criminal justice, and safety. Cutting-edge AI can predict genetic diseases beyond human capability, enabling earlier diagnosis and treatment. ML has improved cancer screening and diagnosis while accelerating cancer drug discovery. Fields like computer vision have allowed us to build toward self-driving cars. Undoubtedly, these technologies are advancing society in countless remarkable ways.
Unfortunately, if we aren’t cognizant of the implications, there can be significant adverse effects. Improper or careless design and implementation of this technology can damage the fabric of society, for example by propagating systemic racism. Systemic racism is defined as “systems and structures that have procedures or processes that disadvantage BIPOC (Black, Indigenous, and People of Color).” This blog post will discuss how AI/ML propagates systemic racism today, examine bias in AI, and highlight techniques we can use to ensure this behavior does not continue.
How Can a Computer Be Racist?
How can this technology that predicts diseases and automates driving propagate systemic racism? The uncomfortable answer is that it happens in many ways. Self-driving cars are more likely to recognize white pedestrians than Black pedestrians, resulting in decreased safety for darker-skinned individuals as the technology becomes more widely adopted. Criminal risk assessment technology has led to Black individuals receiving harsher criminal sentences. A major healthcare company used an algorithm that deemed Black patients less worthy of critical healthcare than others with similar medical conditions. Financial technology companies have been shown to discriminate against Black and Latinx households via higher mortgage interest rates.
These examples show astonishing negative impacts on the treatment and equality of BIPOC, prolonging inequality. Automated decisions that disproportionately affect BIPOC fuel systemic racism. The intent behind the algorithms may not be inherently discriminatory: the scientists and engineers building these systems likely do not have racist intentions and may not actively consider race when constructing algorithms. Unfortunately, the repercussions of not actively designing and testing a system for racial equality can have life-changing negative impacts on the affected parties.
Implicit and Unconscious Bias in AI and ML
The phrase “I don’t see color” seems nice in theory, meaning that one values all people equally regardless of skin color. In reality, however, not seeing color is naïve. Harvard has an insightful racial implicit bias test that has produced some powerful insights on the prevalence of implicit bias in the United States. The example of self-driving cars being more likely to recognize white pedestrians is implicit bias that results in disparate safety between skin colors. More frequently misclassifying Black defendants as likely to re-offend is implicit bias that results in harsher criminal sentences.
To show why “not seeing color” is not effective, let’s dig into the details of the criminal justice example a bit more. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an algorithm that predicts risk scores for pretrial release, general re-offense, and violent re-offense of defendants, and it has been used in New York, Wisconsin, California, Florida, and other areas. COMPAS is based on 137 survey questions covering topics including criminal background, family history, mood and behavior, money, and more. It is important to recognize that COMPAS does not have access to any direct racial information; the disparities in its output are therefore a product of implicit bias in AI and ML.
In 2016, ProPublica, an independent, nonprofit investigative journalism newsroom, published an extensive analysis of the COMPAS algorithm. This blog post will go through highlights of the general re-offense predictions, but you can read the full, in-depth analysis and methodology here. Along with the analysis, you will find a link to the Jupyter notebook that can be used to follow along with the analytics discussed in this post. COMPAS produces risk scores on a scale of 1 to 10, with 10 being the most likely to re-offend. Plotting a distribution of the risk scores broken down by race shows some surprising information.
(Source: ProPublica; can also be found in cell 14 of the referenced Jupyter notebook)
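If you would like to recreate these histograms yourself, a minimal sketch is shown below. It assumes ProPublica’s published compas-scores-two-years.csv file and its decile_score and race columns; it is a starting point rather than an exact reproduction of the notebook’s figure.

```python
# Minimal sketch: COMPAS decile score distributions by race.
# Assumes ProPublica's published compas-scores-two-years.csv and its column names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("compas-scores-two-years.csv")
subset = df[df["race"].isin(["African-American", "Caucasian"])]

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, race in zip(axes, ["African-American", "Caucasian"]):
    subset.loc[subset["race"] == race, "decile_score"].hist(bins=10, ax=ax)
    ax.set_title(f"{race} defendants")
    ax.set_xlabel("COMPAS decile score")
    ax.set_ylabel("Count")
plt.tight_layout()
plt.show()
```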
These graphs do not look alike. The histogram of Black defendants’ scores is relatively flat across scores, while the histogram of white defendants’ scores is heavily concentrated at the lower end. While this disparity is suspicious on a surface level, there could be other underlying factors. A logistic regression model predicts the probability of an outcome by fitting a coefficient to each feature variable. A logistic regression trained to predict the COMPAS score from features that may introduce bias is shown in cell 17 of the notebook. The model’s race coefficient corresponds to an odds ratio of roughly 1.45: Black defendants are 45% more likely to receive a higher score than white defendants with similar criminal severity and history.
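To give a rough sense of how that regression works, here is a simplified sketch in Python. It assumes the column names in ProPublica’s compas-scores-two-years.csv and mirrors their basic filtering, so treat it as an approximation of the notebook’s model rather than an exact reproduction.

```python
# Simplified sketch of a logistic regression on COMPAS scores.
# Column names and filters assume ProPublica's compas-scores-two-years.csv.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compas-scores-two-years.csv")

# Basic filtering along the lines of ProPublica's methodology.
df = df[(df.days_b_screening_arrest.abs() <= 30)
        & (df.is_recid != -1)
        & (df.c_charge_degree != "O")
        & (df.score_text != "N/A")]

# Treat "Medium" and "High" decile scores as a high-risk label.
df["high_score"] = (df.score_text != "Low").astype(int)

model = smf.logit(
    "high_score ~ C(race, Treatment('Caucasian')) + C(sex) + age_cat"
    " + priors_count + c_charge_degree + two_year_recid",
    data=df,
).fit()

# exp(coefficient) gives the odds ratio for each term; the African-American
# term lands near 1.45 in ProPublica's published analysis.
print(np.exp(model.params))
```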
Through this analysis, it is clear that it is not enough to “not see color” or to simply not be explicitly racist. We must be proactive against implicit bias and be anti-racist. We must “see color,” recognize our bias, and actively work to mitigate it in order to act with true equality.
Mitigating Racial Bias in AI and ML
There are numerous methods for mitigating implicit bias in AI, ML, and other data-driven solutions. Unsurprisingly, one of the most crucial pieces is the quality of the data and ensuring that bias has been controlled for and quantified statistically. To better understand our path forward to equality, we must also better understand explicit bias.
Explicit Conscious Bias
In many countries, there are laws forbidding discrimination based on protected attributes such as sex, religion, and race. But as data becomes increasingly available, many algorithms have begun to use seemingly innocent characteristics such as home address and wealth as proxies to predict protected characteristics, thus skirting the intent of discrimination laws. For example, address and wealth can often be reliably correlated with race. If a neighborhood has a significant white majority, then it is probable that a given person from that neighborhood is also white. Similarly, wealth can be used as a proxy for race, as the average white family has about 10 times the wealth of the average Black family.
Wealth and address have strong predictive power in many cases, and excluding them can significantly hurt predictive performance. If we commit to building an anti-racist solution, however, these attributes can be used in an unbiased way. When designing an anti-racist solution, racial equality must be addressed beginning with the training dataset.
Quality of Training Data
A common phrase in computer science is “garbage in, garbage out,” referring to the quality of the data: poor-quality input data will produce poor-quality results. The same holds for racism or bias; without further diligence, biased input data will produce biased results. The best way to lessen the adverse outcomes of low-quality data is to ensure algorithms are trained on data that is equitably distributed across all racial groups. This task can be challenging and, in some cases, not feasible given the costs of data acquisition. When this occurs, sampling techniques should come into play to compensate for an imbalanced dataset.
Sampling Techniques
Sampling techniques provide a basic method to try to balance out our racial groups.
Oversampling and Undersampling
Oversampling randomly duplicates data points in the minority (orange) group until it matches the size of the majority (blue) group. Undersampling randomly removes data points from the majority group until it matches the size of the minority. The goal of these methods is to make the rate of each outcome similar across the groups defined by a given characteristic, in this case race. For the COMPAS example, the training set should contain equitable quantities of each racial group with a similar number of positive and negative outcomes (re-offending).
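As a concrete illustration, here is a minimal sketch using the imbalanced-learn package. The file name and the race and re_offended columns are hypothetical placeholders, not the real COMPAS schema.

```python
# Minimal sketch: randomized over- and undersampling with imbalanced-learn,
# balancing on the combination of race and outcome. Column names are illustrative.
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

df = pd.read_csv("training_data.csv")  # hypothetical training set with "race" and "re_offended"

# Resample on race + outcome so each racial group ends up with a similar
# number of positive and negative outcomes.
group_key = df["race"] + "_" + df["re_offended"].astype(str)

ros = RandomOverSampler(random_state=42)
df_over, _ = ros.fit_resample(df, group_key)    # duplicate rows from smaller groups

rus = RandomUnderSampler(random_state=42)
df_under, _ = rus.fit_resample(df, group_key)   # drop rows from larger groups

print(df_over.groupby(["race", "re_offended"]).size())
print(df_under.groupby(["race", "re_offended"]).size())
```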
Synthetic Minority Oversampling Technique
In addition to this basic approach of randomized oversampling and undersampling, there are many methods that augment the minority groups of our dataset with non-duplicated data. The most common is the Synthetic Minority Oversampling Technique (SMOTE). SMOTE interpolates between existing minority data points and their nearest neighbors to create new, realistic data points. In the figure below, the orange sample is a minority relative to the blue sample. SMOTE creates additional (green) data points from the knowledge it has of the existing orange points. These green points can then be added to the orange minority set to create a stronger balance.
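A minimal sketch with imbalanced-learn’s SMOTE implementation is shown below. The dataset and the race_minority label are hypothetical, and note that plain SMOTE expects numeric features (SMOTENC handles mixed types).

```python
# Minimal SMOTE sketch with imbalanced-learn. The dataset and the
# "race_minority" label are illustrative assumptions; features must be numeric.
import pandas as pd
from imblearn.over_sampling import SMOTE

df = pd.read_csv("training_data.csv")        # hypothetical numeric training set
X = df.drop(columns=["race_minority"])       # numeric features
y = df["race_minority"]                      # 1 = underrepresented group, 0 = otherwise

# SMOTE synthesizes new minority points by interpolating between each minority
# sample and its k nearest minority neighbors (k_neighbors=5 by default).
smote = SMOTE(k_neighbors=5, random_state=42)
X_balanced, y_balanced = smote.fit_resample(X, y)

print(y.value_counts())
print(y_balanced.value_counts())
```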
Generative Adversarial Networks
The final and most powerful method of data creation uses generative adversarial networks, or GANs. GANs are neural networks designed to generate new, authentic-looking data. A GAN consists of two neural networks, a generator and a discriminator, that compete to produce realistic data. The generator network creates new data, and the discriminator network predicts whether a given data point came from the original dataset or was created by the generator. The goal is for the generator to produce data that the discriminator cannot distinguish from the original dataset.
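To make the idea concrete, here is a heavily simplified GAN sketch in PyTorch for synthesizing tabular rows. The dimensions, architecture, and training loop are illustrative assumptions; practical tabular GANs (for example, CTGAN) involve considerably more machinery.

```python
# Heavily simplified GAN sketch for synthetic tabular data. All dimensions and
# the placeholder "real_data" are illustrative assumptions.
import torch
import torch.nn as nn

n_features, latent_dim = 10, 16   # assumed width of the (numeric, scaled) dataset

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, n_features),
)
discriminator = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 1),             # logit: real vs. generated
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(512, n_features)   # stand-in for real minority-group rows
real_labels = torch.ones(real_data.size(0), 1)
fake_labels = torch.zeros(real_data.size(0), 1)

for step in range(1000):
    # 1) Train the discriminator to separate real rows from generated rows.
    noise = torch.randn(real_data.size(0), latent_dim)
    fake = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_data), real_labels)
              + loss_fn(discriminator(fake), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(real_data.size(0), latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, sample synthetic rows to augment the underrepresented group.
synthetic_rows = generator(torch.randn(100, latent_dim)).detach()
```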
With all sampling techniques it is important that the variable distributions remain similar between the sampled dataset and the original dataset to preserve data quality.
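One simple way to sanity-check this, sketched below, is a two-sample Kolmogorov-Smirnov test on each numeric column before and after resampling. The function and column list are illustrative assumptions, not a prescribed workflow.

```python
# Sketch: compare each numeric column before and after resampling with a KS test.
import pandas as pd
from scipy.stats import ks_2samp

def check_distributions(original: pd.DataFrame, resampled: pd.DataFrame, columns: list) -> None:
    """Flag columns whose distribution drifted during resampling."""
    for col in columns:
        stat, p_value = ks_2samp(original[col], resampled[col])
        # A very small p-value suggests the resampled column no longer matches the original.
        print(f"{col}: KS statistic={stat:.3f}, p-value={p_value:.3f}")

# e.g., check_distributions(df, df_over, ["age", "priors_count"])  # from the sketch above
```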
Quantifying Fairness
How do we know if we’ve removed bias from our data? We can use statistical parity to quantify bias. Statistical parity uses conditional probability to show that bias exists when the outcome is not independent of a given attribute, in this case, race.
P(outcome | race) = P(outcome)
The above formula shows an example of an outcome that is unbiased with respect to race. The probability of the outcome is independent of race; the outcome is just as likely among all racial groups. If we do not have direct access to race it is advisable to substitute a proxy for race that is not otherwise related to the outcome.
We should strive for statistical parity in our input dataset, but the most important goal is to achieve unbiased predictions. Bias in predictions can be measured with the same statistical parity test. After training a model on our high-quality dataset, we should use it to make predictions on data the model has not seen during training. Equal parity in those predictions helps ensure fairness in our algorithm. There is extensive research in academia that goes into much greater technical detail on measuring and enforcing fairness in machine learning. For example, this comparative study of fairness-enhancing interventions in ML provides multiple granular examples.
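As a minimal sketch, a statistical parity check on held-out predictions can be as simple as comparing each group’s rate to the overall rate. The DataFrame and column names below are illustrative assumptions.

```python
# Sketch: statistical parity check on model predictions over held-out data.
# The DataFrame and column names are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "race": ["African-American", "Caucasian", "African-American", "Caucasian"],
    "predicted_high_risk": [1, 0, 0, 0],
})  # in practice: predictions on data the model never saw during training

overall_rate = results["predicted_high_risk"].mean()                   # P(outcome)
rate_by_race = results.groupby("race")["predicted_high_risk"].mean()   # P(outcome | race)

# Statistical parity holds when each group's rate matches the overall rate.
print(rate_by_race - overall_rate)
```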
Designing Solutions With Intentionality
Machine learning is not innately racist; data-driven applications are often only applying statistics to existing historical data. But unless the owners of these models take a closer look and identify the impacts that seemingly harmless applications of data can have on BIPOC, we will continue to see unjust systemic problems.
In this article, I have outlined a few ways this can be accomplished, supported by academic research and thought. With diligence and a commitment to designing solutions with an anti-racist mindset, remedying the bias in ML and AI can help reverse systemic racism for BIPOC. At Credera, we are committed to equality and to incorporating these principles into the way we design data applications for our clients. To learn more, email us at findoutmore@credera.com.