LLM Observability

Datadog Joins the S&P 500 Index
Newsfile · 2025-07-09 13:00
Core Insights
- Datadog, Inc. has been included in the S&P 500 Index, effective prior to the opening of trading on July 9, 2025, marking a significant milestone for the company [1][2]
- The company reported $2.8 billion in revenue for the trailing twelve months ending March 31, 2025, reflecting year-over-year growth of 26% [2]
- Datadog continues to expand its product portfolio, having unveiled over 400 new products, capabilities, and features at its annual DASH conference in June [3]

Company Overview
- Datadog is a monitoring and security platform for cloud applications, providing a SaaS platform that integrates capabilities such as infrastructure monitoring, application performance monitoring, log management, and cloud security [4]
- The platform is designed to support organizations of all sizes across industries, facilitating digital transformation and cloud migration while enhancing collaboration among development, operations, security, and business teams [4]
Datadog Expands LLM Observability with New Capabilities to Monitor Agentic AI, Accelerate Development and Improve Model Performance
Newsfile · 2025-06-10 20:05
Core Insights
- Datadog has introduced new capabilities for monitoring agentic AI, including AI Agent Monitoring, LLM Experiments, and the AI Agents Console, aimed at giving organizations end-to-end visibility and governance over their AI investments [1][4][8]

Industry Context
- The rise of generative AI and autonomous agents is changing software development, but many organizations struggle to gain visibility into AI system behaviors and their business value [2][3]
- A study indicates that only 25% of AI initiatives are currently delivering their promised ROI, highlighting the need for better accountability in AI investments [4]

Company Developments
- Datadog's new observability features allow companies to monitor agentic systems, run structured experiments, and evaluate usage patterns, enabling quicker and safer deployment of LLM applications [3][4]
- The AI Agent Monitoring tool provides an interactive graph that maps each agent's decision path, enabling engineers to identify issues such as latency spikes and incorrect tool calls; a sketch of the kind of instrumentation that produces such a graph follows this list [4][6]
- LLM Experiments enables testing of prompt changes and model swaps against real production data, allowing users to quantify improvements in response accuracy and throughput [6][7]
- The AI Agents Console helps organizations maintain visibility into both in-house and third-party agent behaviors, measuring usage, impact, and compliance risks [7]
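To illustrate the kind of instrumentation these monitoring tools consume, below is a minimal sketch of an agent instrumented with Datadog's Python LLM Observability SDK (ddtrace). The enable() parameters, decorators, and annotate() call follow the SDK's public documentation, but exact names should be verified against the current release; the ml_app name, the order-lookup tool, and the agent logic are all hypothetical.

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import agent, tool

# Send traces directly to Datadog without a local Agent process;
# assumes DD_API_KEY (and optionally DD_SITE) are set in the environment.
LLMObs.enable(
    ml_app="support-agent",  # hypothetical application name
    agentless_enabled=True,
)

@tool
def lookup_order(order_id: str) -> str:
    # Each invocation is recorded as a "tool" span nested under the
    # agent's trace, so incorrect tool calls surface in the decision graph.
    return f"Order {order_id}: shipped"

@agent
def answer(question: str) -> str:
    # Top-level "agent" span; nested tool (and LLM) spans form the
    # execution graph that AI Agent Monitoring visualizes.
    if "order" in question.lower():
        reply = lookup_order("A-1001")  # hypothetical order ID
    else:
        reply = "Sorry, I can only help with order status."
    # Attach input/output so the span is searchable in the UI and can
    # feed evaluation datasets later.
    LLMObs.annotate(input_data=question, output_data=reply)
    return reply

if __name__ == "__main__":
    print(answer("Where is my order?"))

In a real agent the branching and tool calls would be driven by model output; the point of the sketch is that the decorator hierarchy is what produces the span graph the monitoring UI renders.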