Biased AI is Already Deciding Your Future | Chioma Onyekpere | TEDxWinnipeg
TEDx Talks · 2025-07-24 15:31

AI Bias & Fairness
- AI systems mimic human intelligence, learning from data to make classifications and predictions [6][8]
- Bias in AI arises from non-diverse or incomplete data, leading to unfair or discriminatory outcomes [3][4]
- AI amplifies existing inequalities by reflecting and perpetuating biases present in the data it's trained on [18]
- Assumptions about identity are not valid data, yet AI learns from these assumptions if they are included in the training data [3][4]

Examples of AI Bias
- Applicant tracking systems can penalize resumes based on gendered language due to biased training data [12][13]
- Facial recognition systems misidentify African and Asian faces more frequently than white faces [14]
- Voice assistants misunderstand non-white speakers nearly twice as often as white speakers, with error rates up to 35% [16]
- Insurance algorithms may charge higher premiums to certain demographics based on biased risk models [16][17]

Addressing AI Bias
- It is crucial to question the data used to train AI, ensuring it represents the entire population [19]
- Building diverse teams can help recognize and identify biases that others might miss [19]
- Organizations should set ethical guidelines and audit AI systems for bias, making fairness a performance metric [19][20]
- Transparency in AI systems is essential, with models providing citations and reasoning for their decisions [21]
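The idea of "making fairness a performance metric" can be made concrete with a small audit: compare how often a model gives a positive outcome to each demographic group. Below is a minimal sketch, not the talk's method; the function names, the sample data, and the 80%-rule threshold are illustrative assumptions.

```python
# Sketch of a simple bias audit: compute each group's selection rate
# (share of positive predictions) and check them against the classic
# "four-fifths rule". All names and data here are hypothetical.

def selection_rates(predictions, groups):
    """Fraction of positive predictions (1s) for each group label."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates, threshold=0.8):
    """True if every group's selection rate is at least `threshold`
    times the highest group's rate."""
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Hypothetical hiring-model outputs: 1 = advance applicant, 0 = reject.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                           # {'A': 0.6, 'B': 0.4}
print(passes_four_fifths_rule(rates))  # False: group B is under-selected
```

Tracking a number like this alongside accuracy is one way to turn fairness from a slogan into a metric that can fail a release; libraries such as Fairlearn offer more complete versions of the same idea.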