Workflow
Trustworthy AI
icon
Search documents
X @Anthropic
Anthropic· 2026-04-02 16:59
These functional emotions have real consequences. To build AI systems we can trust, we may need to think carefully about the psychology of the characters they enact, and ensure they remain stable in difficult situations.Read the full paper: https://t.co/1mjWW7RfZm ...
RMX Industries Announces Appointment of Dr. Sukumaran Nair to Advisory Board
Prnewswire· 2026-02-10 13:21
Core Insights - RMX Industries has appointed Dr. Sukumaran Nair to its Advisory Board, enhancing its focus on U.S. defense and security applications [1] - Dr. Nair's expertise in software-defined networks, virtualization, and trustworthy AI aligns with RMX's mission to improve operational AI and actionable AI through its VAST™ platform [1] - The appointment is expected to strengthen RMX's defense solutions and advance next-generation network and edge computing solutions for defense customers [1] Company Overview - RMX Industries, Inc. is a technology company specializing in advanced data compression and video optimization solutions, particularly for defense and security applications [1] - The company aims to transform how organizations capture, transmit, store, and deliver visual data across various environments, especially in constrained networks [1] - RMX's solutions are designed to operate seamlessly across different infrastructures, ensuring critical visual intelligence is accessible regardless of connectivity conditions [1] Dr. Sukumaran Nair's Background - Dr. Nair is the Vice Provost for Research and Chief Innovation Officer at Southern Methodist University, with a strong background in software-defined networks and cyber security [1] - He has a history of translating research into practical applications for both defense and commercial sectors, supported by various government and industry collaborations [1] - His accolades include the Dallas 500 award and the Distinguished University Citizen award, highlighting his contributions to the field [1]
Thomson Reuters Convenes Global AI Leaders to Advance Trust in the Age of Intelligent Systems
Prnewswire· 2026-01-13 14:00
Core Insights - The Trust in AI Alliance has been launched by Thomson Reuters to promote the development of trustworthy, agentic AI systems [1][2][3] - The alliance aims to facilitate collaboration among leading AI researchers and engineers to define principles for responsible AI [2][4] Group 1: Purpose and Mission - The Trust in AI Alliance focuses on ensuring safety, accountability, and transparency in autonomous AI systems, particularly in high-stakes environments [2][6] - The initiative is designed to move beyond discussion to actionable insights, sharing key themes from sessions to inform the broader industry [3][5] Group 2: Participants and Collaboration - Founding members include senior leaders from Anthropic, AWS, Google Cloud, and OpenAI, alongside experts from Thomson Reuters [4][6] - The alliance will explore reliability, interpretability, and verification as essential factors for building trust in advanced AI systems [4][6] Group 3: Thomson Reuters' Role - Thomson Reuters Labs leverages its extensive experience at the intersection of technology and human expertise to lead this dialogue [5] - The organization aims to shape frameworks and standards that will enhance confidence in AI applications across various sectors [6]
Building Truestworthy AI for the Real World | Sivakumar Mahalingam | TEDxMRIIRS
TEDx Talks· 2025-10-14 15:55
AI Trustworthiness Framework - The industry emphasizes the importance of a three-pillar framework for trustworthy AI systems: fairness, explainability, and accountability [5] - Fairness in AI systems means operating without bias or preference, requiring data de-biasing to avoid skewed outcomes [6][8] - Explainability is crucial, as AI systems should provide reasons for their actions to ensure user understanding and prevent unintended consequences [9][10] - Accountability is necessary, meaning a person or entity must be responsible for the AI's actions, especially in critical applications like self-driving cars [13][14] AI Implementation Risks - AI systems can exhibit biases based on the data they are trained on, leading to unfair or discriminatory outcomes, as seen in Amazon's hiring AI example [7][8] - Lack of explainability can result in AI systems making decisions based on flawed logic, such as mistaking snow for wolves [11][12] - Without accountability, AI systems can cause significant financial losses, as illustrated by the friend's stock trading AI example [16][17] Building Trustworthy AI - Building trustworthy AI requires a team effort, involving students, startups, and industry experts working together [20] - Continuous testing and refinement are essential to ensure the AI system behaves as intended and avoids unintended consequences [18][19] - The industry should avoid treating AI as a "magical oracle" and instead focus on building systems that are transparent and accountable [21]
Mitsubishi Electric and Inria Commence Joint Technology Development to Ensure AI Trustworthiness Using Formal Methods
Businesswire· 2025-09-18 06:00
Group 1 - Mitsubishi Electric Corporation and Inria have launched a joint research project titled "Formal Reasoning applied to AI for Methodological Engineering" (FRAIME) [1] - The aim of the FRAIME project is to realize trustworthy AI systems [1] - This project is part of Inria's DÉFI, which is a large-scale industry-academia collaboration [1]
How to Build Trustworthy AI — Allie Howe
AI Engineer· 2025-06-16 20:29
Core Concept - Trustworthy AI is defined as the combination of AI Security and AI Safety, crucial for AI systems [1] Key Strategies - Building trustworthy AI requires product and engineering teams to collaborate on AI that is aligned, explainable, and secure [1] - MLSecOps, AI Red Teaming, and AI Runtime Security are three focus areas that contribute to achieving both AI Security and AI Safety [1] Resources for Implementation - Modelscan (https://github.com/protectai/modelscan) is a resource for MLSecOps [1] - PyRIT (https://azure.github.io/PyRIT/) and Microsoft's AI Red Teaming Lessons eBook (https://ashy-coast-00aeb501e.6.azurestaticapps.net/MS_AIRT_Lessons_eBook.pdf) are resources for AI Red Teaming [1] - Pillar Security (https://www.pillar.security/solutionsai-detection) and Noma Security (https://noma.security/) offer resources for AI Runtime Security [1] Demonstrating Trust - Vanta (https://www.vanta.com/collection/trust/what-is-a-trust-center) provides resources for showcasing Trustworthy AI to customers and prospects [1]