AI Trustworthiness Framework
- The industry emphasizes a three-pillar framework for trustworthy AI systems: fairness, explainability, and accountability [5]
- Fairness means the AI operates without bias or preference, which requires de-biasing the training data to avoid skewed outcomes [6][8] (see the fairness-check sketch below)
- Explainability is crucial: AI systems should provide reasons for their actions so users understand them and unintended consequences are caught early [9][10] (see the explainability sketch below)
- Accountability means a person or entity must be responsible for the AI's actions, especially in critical applications like self-driving cars [13][14]

AI Implementation Risks
- AI systems can inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes, as in Amazon's hiring AI example [7][8]
- Without explainability, AI systems can make decisions based on flawed logic, such as identifying wolves by the snow in the background rather than the animal itself [11][12]
- Without accountability, AI systems can cause significant financial losses, as illustrated by the friend's stock-trading AI example [16][17]

Building Trustworthy AI
- Building trustworthy AI is a team effort involving students, startups, and industry experts working together [20]
- Continuous testing and refinement are essential to ensure the AI system behaves as intended and avoids unintended consequences [18][19]
- The industry should stop treating AI as a "magical oracle" and instead build systems that are transparent and accountable [21]
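
The talk keeps the de-biasing point conceptual; as a hedged illustration only, the sketch below checks a hiring model's selection rate across a sensitive attribute, one simple signal of the kind of skew the Amazon example describes. The column names ("gender", "hired_pred") and the toy data are hypothetical, not from the talk.

```python
# Minimal sketch, assuming a table of model predictions with a
# sensitive-attribute column; names and data here are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Positive-prediction rate per group; a large gap suggests skewed outcomes."""
    return df.groupby(group_col)[pred_col].mean()

if __name__ == "__main__":
    preds = pd.DataFrame({
        "gender":     ["F", "F", "F", "M", "M", "M"],
        "hired_pred": [0,   1,   0,   1,   1,   1],
    })
    rates = selection_rates(preds, "gender", "hired_pred")
    print(rates)                                      # per-group selection rate
    print("parity gap:", rates.max() - rates.min())   # demographic-parity gap
```

A gap near zero does not prove fairness, but a large gap is exactly the kind of skewed outcome the talk says de-biasing should prevent.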
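Likewise, the "snow vs. wolves" failure is typically surfaced with model-explanation tooling. The sketch below uses permutation importance from scikit-learn as one generic way to ask which inputs a model actually relies on; the synthetic dataset and random-forest model are placeholders, not anything described in the talk.

```python
# Minimal sketch, assuming a trained classifier we want to interrogate;
# the data and model are stand-ins for whatever system is being audited.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# features the model truly relies on cause the largest drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: score drop = {score:.3f}")
```

If the biggest score drop comes from an irrelevant input (the "snow" rather than the "wolf"), the model's logic is flawed even when its accuracy looks fine.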
Building Trustworthy AI for the Real World | Sivakumar Mahalingam | TEDxMRIIRS
TEDx Talks · 2025-10-14 15:55