Trustworthy AI
Building Trustworthy AI for the Real World | Sivakumar Mahalingam | TEDxMRIIRS
TEDx Talks· 2025-10-14 15:55
AI Trustworthiness Framework
- The talk emphasizes a three-pillar framework for trustworthy AI systems: fairness, explainability, and accountability [5]
- Fairness in AI systems means operating without bias or preference, requiring data de-biasing to avoid skewed outcomes [6][8]
- Explainability is crucial, as AI systems should provide reasons for their actions to ensure user understanding and prevent unintended consequences [9][10]
- Accountability is necessary, meaning a person or entity must be responsible for the AI's actions, especially in critical applications like self-driving cars [13][14]

AI Implementation Risks
- AI systems can exhibit biases based on the data they are trained on, leading to unfair or discriminatory outcomes, as seen in Amazon's hiring AI example [7][8]
- Lack of explainability can result in AI systems making decisions based on flawed logic, such as mistaking snow for wolves [11][12]
- Without accountability, AI systems can cause significant financial losses, as illustrated by the friend's stock trading AI example [16][17]

Building Trustworthy AI
- Building trustworthy AI requires a team effort, involving students, startups, and industry experts working together [20]
- Continuous testing and refinement are essential to ensure the AI system behaves as intended and avoids unintended consequences [18][19]
- The industry should avoid treating AI as a "magical oracle" and instead focus on building systems that are transparent and accountable [21]
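The fairness pillar above can be made concrete with a simple metric. A minimal sketch, assuming hypothetical hiring data and a demographic-parity check (the metric choice, group labels, and numbers are illustrative, not from the talk):

```python
def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rate between any two groups.

    outcomes: list of 0/1 decisions (1 = favorable outcome)
    groups:   parallel list of group labels for each individual
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical screening decisions (1 = advanced to interview)
outcomes = [1, 1, 0, 1, 0, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)  # group A: 0.75, group B: 0.0
```

A gap near zero means both groups receive favorable outcomes at similar rates; a large gap, as in this toy data, is the kind of skew that data de-biasing aims to remove before training.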
Mitsubishi Electric and Inria Commence Joint Technology Development to Ensure AI Trustworthiness Using Formal Methods
Businesswire· 2025-09-18 06:00
Group 1
- Mitsubishi Electric Corporation and Inria have launched a joint research project titled "Formal Reasoning applied to AI for Methodological Engineering" (FRAIME) [1]
- The aim of the FRAIME project is to realize trustworthy AI systems [1]
- This project is part of Inria's DÉFI, which is a large-scale industry-academia collaboration [1]
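The press release does not detail FRAIME's specific techniques, but one common formal method for AI trustworthiness is interval bound propagation: proving that a network's output stays within safe limits for every input in a given range, rather than spot-checking individual inputs. A minimal sketch for a toy one-layer ReLU network (the network and the property are illustrative assumptions):

```python
def interval_linear(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b, exactly."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight maps the input's low end to the output's low end;
        # a negative weight swaps them.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Toy 2-input, 1-output network; verify "output < 5" for all x in [0, 1] x [0, 1]
W, b = [[1.0, 2.0]], [0.5]
lo, hi = interval_linear([0.0, 0.0], [1.0, 1.0], W, b)
lo, hi = interval_relu(lo, hi)
verified = hi[0] < 5.0  # upper bound 3.5 holds for the entire input box
```

Unlike testing, the bound covers infinitely many inputs at once, which is what distinguishes formal verification from the continuous-testing approach mentioned in the TEDx summary above.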
How to Build Trustworthy AI — Allie Howe
AI Engineer· 2025-06-16 20:29
Core Concept
- Trustworthy AI is defined as the combination of AI Security and AI Safety, crucial for AI systems [1]

Key Strategies
- Building trustworthy AI requires product and engineering teams to collaborate on AI that is aligned, explainable, and secure [1]
- MLSecOps, AI Red Teaming, and AI Runtime Security are three focus areas that contribute to achieving both AI Security and AI Safety [1]

Resources for Implementation
- Modelscan (https://github.com/protectai/modelscan) is a resource for MLSecOps [1]
- PyRIT (https://azure.github.io/PyRIT/) and Microsoft's AI Red Teaming Lessons eBook (https://ashy-coast-00aeb501e.6.azurestaticapps.net/MS_AIRT_Lessons_eBook.pdf) are resources for AI Red Teaming [1]
- Pillar Security (https://www.pillar.security/solutionsai-detection) and Noma Security (https://noma.security/) offer resources for AI Runtime Security [1]

Demonstrating Trust
- Vanta (https://www.vanta.com/collection/trust/what-is-a-trust-center) provides resources for showcasing Trustworthy AI to customers and prospects [1]
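Tools like Modelscan address a concrete MLSecOps risk: serialized model files (e.g. pickles) can execute arbitrary code when loaded. The following is a minimal sketch of that kind of check using only the standard library; it is an illustration of the idea, not Modelscan's actual implementation, and a real scanner covers many more formats and opcodes:

```python
import pickle
import pickletools

# Pickle opcodes that can import objects or call code during deserialization
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(data: bytes) -> list:
    """Return a report of opcodes that could trigger code execution on load."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            findings.append("%s at byte %d" % (opcode.name, pos))
    return findings

class Payload:
    # __reduce__ tells pickle to call print("pwned") on load: a benign
    # stand-in for the arbitrary-code-execution attack a scanner must catch.
    def __reduce__(self):
        return (print, ("pwned",))

safe_blob = pickle.dumps({"weights": [0.1, 0.2]})   # plain data, no callables
unsafe_blob = pickle.dumps(Payload())               # embeds a function call
```

Scanning `safe_blob` reports nothing, while `unsafe_blob` is flagged for its REDUCE (call) and global-lookup opcodes, which is why such files should be scanned before `pickle.load` ever runs on them.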