AI voice agents
Voice AI startup Deepgram raises $130 million at $1.3 billion valuation
Yahoo Finance · 2026-01-13 13:31
Company Overview
- Deepgram, a voice AI technology startup, raised $130 million at a $1.3 billion valuation to expand internationally, roll out new models, and pursue acquisitions [1][2]
- The funding round was led by AVP, with participation from new investors Alumni Ventures, Princeville Capital, and Citi Ventures, alongside existing backers Tiger Global, Madrona, and In-Q-Tel [2]

Product and Market Expansion
- Deepgram provides AI models and infrastructure that let enterprises and developers build custom voice agents capable of real-time, contextual conversations [2] (a minimal API sketch follows this summary)
- The company plans to use the funds to expand into new markets in Europe and the Asia-Pacific region, broaden language support, and fund acquisitions and large compute purchases [3]
- Deepgram currently supports more than 50 languages; it has also acquired OfOne, a voice AI platform for drive-thrus, to strengthen its offerings in the restaurant industry [4]

Industry Trends
- Demand for voice AI has surged, with products adding voice capabilities wherever there are text fields or buttons [3]
- More than 1,300 organizations use Deepgram's voice AI, which powers conversational customer-service platforms for clients such as NASA and Amazon Web Services [5]
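For readers unfamiliar with what "voice AI models and infrastructure" look like from a developer's seat, here is a minimal sketch of transcribing a recorded call through Deepgram's documented pre-recorded audio REST endpoint. The endpoint, the Token auth scheme, and the model/language/smart_format parameters follow Deepgram's public docs; the model name, sample audio URL, and environment variable are illustrative assumptions, not details from the article.

```python
# Hypothetical sketch: transcribe a hosted audio file with Deepgram's
# pre-recorded audio REST API. Model name and audio URL are placeholders.
import os

import requests

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"


def transcribe(audio_url: str, api_key: str, language: str = "en") -> str:
    """Send a hosted audio file to Deepgram and return the top transcript."""
    response = requests.post(
        DEEPGRAM_URL,
        params={"model": "nova-2", "language": language, "smart_format": "true"},
        headers={"Authorization": f"Token {api_key}"},  # Deepgram's Token scheme
        json={"url": audio_url},  # point the API at a hosted recording
        timeout=30,
    )
    response.raise_for_status()
    body = response.json()
    # The best transcript sits under results -> channels -> alternatives.
    return body["results"]["channels"][0]["alternatives"][0]["transcript"]


if __name__ == "__main__":
    key = os.environ["DEEPGRAM_API_KEY"]  # assumes a key is set in the environment
    print(transcribe("https://example.com/sample-call.wav", key))
```

Real-time agents of the kind described in the article would use Deepgram's streaming interface rather than this batch endpoint, but the request/response shape above illustrates the basic developer workflow.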
Could AI transparency backfire for businesses?
Yahoo Finance · 2025-11-11 15:23
Core Viewpoint
- The ethical AI movement treats transparency as a way to build trust in AI technologies and the brands that use them, but recent findings suggest that disclosing AI use may actually erode user trust [1][6][8]

Group 1: AI Transparency and Trust
- The Financial Times (FT) has taken a cautious approach to AI disclosure, recognizing that disclaimers about AI usage can undermine trust in its premium brand [4][5]
- A study by Schilke & Reimann found that AI disclosure generally reduces user trust across a range of scenarios, pointing to a hidden cost of transparency [6][7]
- The erosion of trust is especially pronounced when AI usage is uncovered by others rather than self-disclosed, with a consistent drop observed in tasks such as content drafting and proofreading [7][8]

Group 2: Business Practices and AI Implementation
- Many businesses, including Zendesk, advocate transparency in AI interactions, particularly in customer service, where 25% of interactions are deemed high value and require human involvement [8][9]
- Zendesk's research classifies 47% of customer-service interactions as failed, highlighting both the need for improvement and the opportunity to build trust through effective AI solutions [12]
- The BBC likewise words its AI disclosures carefully, suggesting that user acceptance of AI-generated content may evolve over time [13]

Group 3: Standards and Governance
- The British Standards Institution (BSI) has introduced a common standard for AI management systems to ensure ethical and transparent AI applications, which is crucial for managing risks such as bias [15][16]
- BSI's research indicates that trust in AI can be strengthened through agreed standards that safeguard the safety and integrity of AI models, with a focus on the transparency of underlying training data [16][18]
- How trust in AI develops will depend on governance and regulation, particularly in specialized use cases such as medical devices and biometric identification [20]