
Algorithmic Bias: What is it, and how to deal with it?

Algorithmic bias is a huge barrier to fully realizing the benefit of machine learning. We cover what it is, how it presents itself, and how to minimize it.

Jun 08, 2023 • 10 Minute Read

  • AI & Machine Learning

In this article, I break down what algorithmic bias is, how it presents itself in machine learning systems, and how it can be mitigated. I’ll get into the finer details of the different types of bias and the negative business impacts of bias.

And if you’re interested in the tools provided by Amazon Web Services (AWS) to help improve your machine learning models by detecting potential bias, I’ve got you covered too. 

What is algorithmic bias?

Machine learning algorithms are computer programs that analyze complex datasets to uncover trends and patterns. Each algorithm represents step-by-step instructions a machine follows to find meaning in data. There are many types of algorithms (classification, clustering, regression, anomaly detection, and more), often grouped by the machine learning technique used: supervised learning, unsupervised learning, reinforcement learning, and others.

These algorithms store the trends and patterns in a mathematical representation of their findings called a machine learning model (or simply a model). Once the model is trained to identify patterns, it can be provided with data it has never seen before to analyze, categorize, and make recommendations.
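To make this concrete, here is a minimal sketch of the train-then-infer cycle using scikit-learn. The tiny dataset and feature names are invented purely for illustration.

```python
# Minimal sketch of training a model and then asking it for an inference.
# The toy dataset below (age, drinks per week -> preferred drink) is invented.
from sklearn.tree import DecisionTreeClassifier

X_train = [[18, 5], [19, 7], [35, 2], [42, 3]]                    # data the algorithm studies
y_train = ["frappuccino", "frappuccino", "espresso", "espresso"]  # known outcomes

model = DecisionTreeClassifier().fit(X_train, y_train)  # the trained model

# Inference: the model categorizes a customer it has never seen before
print(model.predict([[25, 4]]))
```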

Machines aren’t human and, in theory, should not be guided by human prejudices and biases – how then can a machine exhibit algorithmic bias (or simply bias)? How can an algorithm produce a model that makes unfair predictions and recommendations due to bias? Let’s first take a step back to explore the use of machine learning in the world today. 

Machine learning is used by businesses and many federal organizations to analyze data to answer questions that are too complex for humans to analyze manually. Today, models make recommendations and predictions, called inferences, based on studying data. Inferences are made about people, including their identities, demographic attributes, preferences, and likely future behaviors. Inferences can even be made about a person’s health!

From an ethical standpoint, inferences should be fair, especially those predicting a person’s preferences and likely future behaviors. Algorithmic bias presents itself when inferences are systematically less favorable to individuals from a particular group where there is no relevant difference between the groups that justifies the recommendation. Oftentimes biased recommendations disenfranchise people by denying certain opportunities given to others. 
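One simple, hedged way to check for this kind of systematic unfairness is to compare how often each group receives a favorable inference. The groups and outcomes below are invented for illustration only.

```python
# Compare the rate of favorable inferences across groups.
# The groups and outcomes here are made up purely for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "favorable": [1,   1,   1,   0,   1,   0,   0,   0],   # 1 = favorable inference
})

rates = results.groupby("group")["favorable"].mean()
print(rates)                    # per-group favorable-outcome rates
print(rates["B"] / rates["A"])  # a ratio far below 1.0 flags a possible disparity
```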

What is a real-world example of algorithmic bias?

Let’s dig into some practical examples of algorithmic bias in machine learning. 

If you’re a caffeine addict who can’t start the day without a warm cup of premium roast coffee, an iced caramel frappuccino, or a hot chai tea latte, sometimes you don’t know what to try next. A great business strategy is to use machine learning, specifically reinforcement learning, to provide product recommendations that keep customers’ preferences top of mind while also introducing new products they might enjoy.

Let’s say a recommendation engine is developed that learns from the data that customers under 20 prefer frappuccinos over espresso beverages. With this newfound knowledge, the recommendation engine never recommends the new seasonal frappuccino flavor, pumpkin spice, to customers over the age of 20! The case can be made that the recommendation engine is biased against those over 20.

Incorrectly recommending (or not recommending) a drink doesn’t sound that bad, right? A person over 20 missing out on the new seasonal flavor isn’t the end of the world! But have you considered what happens when the same practices and techniques used to train the drink recommendation engine are applied to other parts of the business, like recruiting or hiring?

Imagine a recruiting tool driven by machine learning that reviews job applicants’ resumes to automate and speed up the search for top-tier software engineering talent. In this case, machine learning can sift through 1,000 resumes and quickly identify the top 10 candidates to hire.

Well, this recruiting tool is not imaginary; the tool was developed and used for hiring for several years by a well-known company. Unfortunately, though, the tool was found to be biased against women. It was eventually shut down because it downgraded resumes that included the word “women” as in “women in tech” or “women who code.” And it also penalized resumes from all-women’s colleges.

The recruiting tool learned that men make better software engineers by observing patterns in resumes previously submitted for software engineering positions. Most resumes came from men, which is a reflection of male dominance across the tech industry. This unchecked bias had unwanted implications because it perpetuated inequities that are already present in tech. 

What are the negative implications of bias on businesses?

In the pre-machine learning world, humans made hiring, advertising, lending, and criminal sentencing decisions. And these decisions were governed by federal, state, and local laws that regulated the decision-making process regarding fairness, transparency, and equity. Now machines either make those decisions or heavily influence them. To top that off, intellectual property (IP) laws, which protect and enforce the rights of the creators and owners of inventions, shield the inputs to the algorithmic decision-making process.

There are many negative implications of algorithmic bias: 

  • Unfairly allocated opportunities, resources, or information
  • Infringement on civil liberties
  • Putting the safety of individuals in question
  • Failure to provide the same quality of service to some people as others
  • Negative impact on a person’s well-being, such as experiences considered to be derogatory or offensive
  • Internal conflicts and employee demand for more ethical practices

What’s on the line?

Biased decisions can negatively impact an organization’s reputation, consumers’ trust, and future business and market opportunities. Today, governments are pursuing regulation and legislation for algorithmic decision-making. Organizations that don’t make addressing bias in machine learning a priority may be liable to incur penalties and high fines at some point in the future.

Organizations should begin to promote a culture of ethics and responsibility in machine learning now, before it’s too late, and use their voice to advance industry change and regulation for responsible AI practices. Developers of machine learning systems should flag ethical issues and recognize the moral and ethical responsibilities they have when developing machine learning systems. 

How does bias present itself in machine learning systems?

There are many ways that bias presents itself in machine learning systems. More often than not, the dataset studied by the algorithm to identify trends and patterns is the culprit. In the machine learning recruiting tool example, the algorithm studied 10 years of previously submitted resumes. It learned that resumes for software engineering positions were mostly submitted by men; therefore, mathematically, men would probably make better software engineers.

You can’t fault the machine, but you can fault the developers providing the data to the machine. The provided dataset was imbalanced, resulting in biased decisions. 

Here’s a quick summary of the three different types of data-related biases we’ll cover: 

  • Sampling bias: The collected data doesn’t reflect the environment the solution will run in, resulting in bias.
  • Exclusion bias: The action of inappropriately removing data points or records because they are deemed to be irrelevant results in bias.
  • Observer bias: The observer recording the data sees what they expect or want to see, which causes them to label the data incorrectly.

Sampling Bias

Sampling bias occurs during the data collection phase of the machine learning lifecycle. Returning to the recruiting tool example, the sample dataset contained more resumes from men than from women, resulting in recommendations skewed toward men. This is a prime example of sampling bias: the collected data doesn’t represent the environment the tool will run in, because both men and women apply for software engineering positions.
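A quick, hedged way to spot sampling bias before training is to compare the group distribution in your collected data against what you expect in production. The file and column names below are hypothetical placeholders.

```python
# Check how well the collected sample represents the population it will serve.
# "resumes.csv" and the "gender" column are hypothetical placeholders.
import pandas as pd

resumes = pd.read_csv("resumes.csv")

# If the real applicant pool contains both men and women but the training
# data is dominated by one group, that's a sign of sampling bias.
print(resumes["gender"].value_counts(normalize=True))
```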

Exclusion Bias

Exclusion bias usually occurs during the data preparation phase of the machine learning lifecycle, when some features (i.e., data points) or records are inappropriately excluded from the dataset, typically under the umbrella of cleaning and preparing data.

Let’s take the case of a machine learning-driven prediction engine used to predict a new frappuccino’s popularity (around the globe) during the lunch hour. Systems like this are often used to help inform projected revenue and marketing decisions when bringing a new product to market. The dataset that will be analyzed by the machine learning algorithm includes past purchases of similar frappuccinos during the lunch rush.

While reviewing the dataset, the developer notices frappuccino purchases between the hours of 2pm and 4pm. The developer removes those seemingly erroneous records (or observations) because the typical lunchtime runs from 11:30am to 2pm. What the developer doesn’t realize is that lunchtime in Spain tends to run from 2pm to 3pm, and the Spanish siesta from 2pm to 5pm also affects when people eat lunch.

In this case, the developer deleted records thinking they were irrelevant based on their pre-existing beliefs, resulting in a system that was biased toward US-based lunch times.
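Here is a hedged sketch of how that kind of exclusion creeps in during data cleaning. The file and column names are hypothetical placeholders.

```python
# A cleaning step that quietly introduces exclusion bias.
# "lunch_purchases.csv" and its columns are hypothetical placeholders.
import pandas as pd

purchases = pd.read_csv("lunch_purchases.csv")

# Biased filter: keeps only the developer's idea of lunchtime (11:30am-2pm)
# and silently drops the 2pm-3pm records from Spain.
us_lunch_only = purchases[(purchases["hour"] >= 11.5) & (purchases["hour"] < 14)]

# Safer habit: inspect what you are about to drop before dropping it.
dropped = purchases[(purchases["hour"] >= 14) & (purchases["hour"] <= 16)]
print(dropped.groupby("country").size())  # would reveal the "erroneous" records are Spanish
```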

The prediction engine inherited the beliefs and biases of the developer. When you have petabytes or more of data, it's tempting to select a small sample to use for training, but you may inadvertently exclude important data, resulting in bias.

Observer Bias

Observer bias, sometimes called confirmation bias, typically occurs during the data labeling phase of the machine learning lifecycle and results from the observer recording what they want or expect to see in data.

Let’s take the case of an image classification system that identifies the frappuccino flavor in a provided image. The dataset that the algorithm analyzes includes many images of frappuccinos labeled with the name of the flavor. And someone first had to label (or identify) the drink flavor shown in each image, which is a very manual process.

Imagine the observer responsible for labeling the images tends to mislabel the Caramel Cocoa Cluster flavor as Cinnamon Roll solely because Cinnamon Roll is their favorite flavor. In this case, they see what they want in the data. This manual labeling process can be prone to human error resulting in a model that incorrectly identifies flavors and is biased toward the Cinnamon Roll frappuccino.

While there are many other types of bias, these three are the most prevalent data-related ones. Now that you understand bias, let’s explore ways to mitigate it.

How do you mitigate bias?

The best way to mitigate sampling, exclusion, or observer bias is to practice responsible dataset development before data is turned over to an algorithm for analysis.

Mitigating sampling bias

To mitigate sampling bias, make sure to collect data for all the cases your model will be exposed to. Ensuring fair class representation and evenly distributing data before training goes a long way in reducing bias. Several techniques can fix an imbalanced dataset, such as oversampling and undersampling. SMOTE (synthetic minority oversampling technique) is an oversampling approach that generates synthetic data to solve the problem.
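As a rough illustration, here is a minimal sketch of rebalancing a dataset with SMOTE from the imbalanced-learn library; the dataset is synthetic and stands in for your real features and labels.

```python
# Rebalance a deliberately skewed dataset with SMOTE (pip install imbalanced-learn).
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Synthetic stand-in data: roughly 90% of one class, 10% of the other
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# SMOTE generates synthetic minority-class samples until the classes are balanced
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_resampled))
```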

Mitigating exclusion bias

To mitigate exclusion bias, conduct sufficient analysis of features and observations before removing them, and examine your own biases before deleting data. There are techniques that help you with the analysis and assist with calculating the importance of features (a short sketch follows the list):

  • Examine the coefficient score for each feature: This is a number assigned to each input value to determine how important the feature is; the higher the number, the more influence on the recommendation or prediction.
  • Use the feature_importances_ attribute available for tree-based models: After training a tree-based model (e.g., XGBoost), examine feature importance visually by plotting a bar chart.
  • Use Principal Component Analysis (PCA) scores: This is known for dimensionality reduction but can also be used to determine feature importance.
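Here is that sketch, showing two of the techniques above on synthetic data: coefficient scores from a linear model and the feature_importances_ attribute of a tree-based model (XGBoost is used as one example and must be installed separately).

```python
# Inspect feature importance before deciding what to exclude. Data is synthetic.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Coefficient scores from a linear model: larger magnitude = more influence
linear = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(feature_names, linear.coef_[0].round(3))))

# feature_importances_ from a tree-based model, examined visually as a bar chart
tree = XGBClassifier(n_estimators=50).fit(X, y)
plt.bar(feature_names, tree.feature_importances_)
plt.title("Feature importance (XGBoost)")
plt.show()
```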

Mitigating observer bias

To mitigate observer bias, ensure the observers doing the data labeling are well trained and screened for potential biases. This requires knowing your data and training observers on what to look out for.
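One common screening practice (not specific to this article, but widely used) is to have two labelers annotate the same sample and measure their agreement, for example with Cohen’s kappa from scikit-learn. The flavor labels below are invented.

```python
# Measure agreement between two labelers on the same set of images.
# The flavor labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

labeler_a = ["caramel", "cinnamon", "caramel", "mocha", "cinnamon", "caramel"]
labeler_b = ["caramel", "cinnamon", "cinnamon", "mocha", "cinnamon", "cinnamon"]

# Kappa near 1.0 means strong agreement; low values flag labels worth re-reviewing
print(cohen_kappa_score(labeler_a, labeler_b))
```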

What are some tools to help identify bias?

AWS not only provides tools to manage the end-to-end lifecycle of a machine learning system, but it also provides tools to help detect bias. Amazon SageMaker Clarify is a tool that expands the capabilities of Amazon SageMaker by detecting and mitigating bias in datasets and models. This tool allows you to evaluate bias at every stage of the development process for a machine learning model – during data analysis, training, and inference. I was super excited when this tool was announced at AWS re:Invent 2020.
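For a sense of what that looks like in practice, here is a rough sketch of requesting a pre-training bias report with the SageMaker Python SDK’s Clarify classes. The S3 paths, column names, and facet values are placeholders, and exact arguments may vary by SDK version.

```python
# Sketch of a pre-training bias check with SageMaker Clarify (SageMaker Python SDK).
# S3 paths, column names, and facet values below are placeholders.
from sagemaker import Session, get_execution_role, clarify

session = Session()
role = get_execution_role()

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://your-bucket/resumes/train.csv",  # placeholder
    s3_output_path="s3://your-bucket/clarify-output/",        # placeholder
    label="hired",
    headers=["hired", "gender", "years_experience", "degree"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # 1 = the favorable "hired" outcome
    facet_name="gender",             # the attribute to check for bias
)

# Computes pre-training metrics such as class imbalance (CI) and
# difference in positive proportions in labels (DPL).
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```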

As machine learning moves from the research lab to the enterprise, algorithmic bias is a huge impediment to fully realizing the benefits of machine learning. We’ve seen that bias can present itself when data is collected and prepared, impacting the final model, which leads to inaccurate and discriminatory predictions.

If you are aware of the bias in your data, proven mitigation strategies can be implemented. AWS even provides tools that help detect and mitigate bias in datasets and trained models.

It’s up to us as developers to ensure we’re practicing responsible dataset development and preventing bias from making it to production. 

About the Author

Kesha Williams is an award-winning technology leader. She’s also an AWS Machine Learning Hero, HackerRank All-Star, and Alexa Champion.


Learn more about machine learning

A Cloud Guru offers a dedicated learning path on AWS machine learning. No matter where you are in your machine learning journey - from novice to seasoned professional - we've got courses to take you to the next level. More than just watching videos, you gain practical experience through hands-on labs, as well as the opportunity to earn several industry-recognized AWS certifications.

You can also access Pluralsight's wide range of machine learning paths and courses. Learn in your own time from experts with real-world experience in machine learning.


Kesha Williams is an Atlanta-based AWS Machine Learning Hero, Alexa Champion, and Director of Cloud Engineering who leads engineering teams building cloud-native solutions with a focus on growing early career technologists into cloud professionals and engineering leaders. She holds multiple AWS certifications and has leadership training from Harvard Business School. Find her on Topmate at https://topmate.io/kesha_williams.
