What is Bias in Machine Learning? Real-World Examples That Show the Impact of AI Bias

Omar Trejo
Senior Data Scientist
Originally published on May 4, 2020 · Last updated on Mar 11, 2024

Key Takeaways

Why is AI bias unethical?

AI bias is unethical because it can violate individuals' rights to meaningful explanations in automated decision-making, perpetuate human prejudices, and undermine fairness and trust in AI systems. Addressing unwanted bias and upholding fairness requires a thoughtful focus on data, diverse teams, and empathy, as both an ethical imperative and a legal responsibility.

What are some famous examples of AI bias?

There are a few famous examples of AI bias, including the COMPAS system, a risk-assessment tool used in criminal justice that was found to assess African American defendants unfairly and potentially contribute to unjust sentencing. Google Translate has also faced criticism for perpetuating gender stereotypes in translations, reflecting societal biases present in its training data. Additionally, Google Photos has been known to mislabel photos of African Americans, highlighting racial bias in image-classification algorithms.
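One way unfairness of the COMPAS kind is quantified in practice is by comparing error rates across demographic groups, for example the false positive rate (how often people who did not reoffend were flagged as high risk). The sketch below is purely illustrative and not from the article: the group names and label values are hypothetical, and real audits use far larger datasets and multiple metrics.

```python
# Illustrative sketch (hypothetical data): comparing the false positive
# rate of a binary "high risk" classifier across two demographic groups.

def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives (y_true == 0) predicted positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# Hypothetical labels: 1 = flagged high risk / reoffended, 0 = not.
group_a_true = [0, 0, 1, 0, 1, 0]
group_a_pred = [1, 0, 1, 1, 1, 0]  # 2 false positives out of 4 negatives
group_b_true = [0, 0, 1, 0, 1, 0]
group_b_pred = [0, 0, 1, 0, 1, 1]  # 1 false positive out of 4 negatives

fpr_a = false_positive_rate(group_a_true, group_a_pred)
fpr_b = false_positive_rate(group_b_true, group_b_pred)
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}, gap: {abs(fpr_a - fpr_b):.2f}")
```

A large gap between the two rates is one signal that a model's errors fall disproportionately on one group, which is essentially the pattern investigators reported for COMPAS.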