Bias in AI Performance
- AI models exhibit bias based on gender, accent, and race, impacting their effectiveness for different users [2]
- Speech recognition models show a significant disparity: a 35% word error rate for African American speakers versus 19% for white speakers [4]
- Facial recognition models have error rates above 34% for darker-skinned women, compared with under 1% for light-skinned men [5]

Real-World Consequences
- Biased facial recognition models used by law enforcement can lead to disproportionate misidentification and wrongful arrests of Black and brown individuals [7][8]
- AI hiring tools can unintentionally downgrade resumes containing gendered terms, perpetuating historical gender biases [9][10]
- In the medical field, AI models can underestimate the severity of illness in Black patients due to biases in training data [15]

Mitigation Strategies
- Algorithmic auditing, involving rigorous testing on diverse datasets, is crucial for identifying and addressing bias in AI models [18]
- Transparency is essential: corporations should disclose the demographics of their training data and justify their choice of fairness metrics [19]
- Creating diverse, inclusive datasets by including underrepresented voices in the building process is necessary to combat bias at the source [20]
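The auditing idea above can be made concrete with a small sketch: compute a speech model's word error rate (WER) separately for each demographic group and compare the results, as in the 35%-vs-19% disparity cited in the talk. This is a minimal illustration, not the speaker's method; the `audit_by_group` helper and its sample data are hypothetical.

```python
# Minimal sketch of an algorithmic audit: compare word error rate
# (WER) across demographic groups. Sample data is hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def audit_by_group(samples):
    """samples: (group, reference, hypothesis) triples.
    Returns average WER per group and the worst/best disparity ratio."""
    buckets = {}
    for group, ref, hyp in samples:
        buckets.setdefault(group, []).append(word_error_rate(ref, hyp))
    per_group = {g: sum(v) / len(v) for g, v in buckets.items()}
    rates = sorted(per_group.values())
    disparity = rates[-1] / rates[0] if rates[0] > 0 else float("inf")
    return per_group, disparity

# Hypothetical audit data: two groups' transcriptions of the same phrase.
samples = [
    ("group_a", "turn the lights off", "turn the lights off"),
    ("group_b", "turn the lights off", "turn the light of"),
]
per_group, disparity = audit_by_group(samples)
print(per_group)  # group_b's WER is higher, flagging a disparity
```

A real audit would run thousands of utterances per group on held-out benchmark data and report confidence intervals, but the structure (per-group metric plus a disparity summary) is the same.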
When AI Gets It Wrong: The Hidden Bias in Our Algorithms | Charan Sridhar | TEDxBISV Youth
TEDx Talks·2025-09-11 15:21