AI Red Teaming Agent: Azure AI Foundry — Nagkumar Arkalgud & Keiji Kanazawa, Microsoft
In the age of autonomous AI agents, ensuring their safety and reliability is paramount. But how can we proactively uncover vulnerabilities before they impact real-world scenarios? Enter Azure AI Evaluation SDK’s Red Teaming Agent—a cutting-edge tool designed to rigorously challenge your AI agents, exposing hidden risks and unexpected behaviors. This session will guide you through the powerful capabilities of Azure’s Red Teaming Agent, demonstrating how it simulates adversarial scenarios, stress-tests agenti ...