Responsible AI
Datavault AI Announces It Will Distribute Dream Bowl Meme Coin Tokens on December 24, 2025 to All Eligible Holders of Record of the Company's Equity and to Holders of Scilex Holding Company Common Stock
Globenewswire· 2025-12-13 06:17
Core Viewpoint
- Datavault AI Inc. announces the distribution date of the Dream Bowl 2026 Meme Coin to eligible shareholders on December 24, 2025, as a gesture of appreciation for its partnership with Scilex Holding Company [1][4]

Group 1: Distribution Details
- The distribution of the Meme Coin will occur on December 24, 2025, for eligible shareholders of Datavault AI and Scilex [1]
- Detailed instructions regarding wallet setup, token acquisition, and the distribution process will be mailed to shareholders around December 12, 2025 [2]
- Eligible recipients must open a digital wallet with Datavault AI and sign an Opt-In Agreement to receive the Meme Coin [3]

Group 2: Token Characteristics
- The Meme Coin is a digital collectible intended for personal, non-commercial use related to the Dream Bowl XIV event on January 11, 2026 [5]
- The coin does not confer any rights such as equity, voting rights, dividends, or profit-sharing in Datavault AI or any other entity [5]
- The Meme Coin will be tradable on Datavault AI's proprietary platform, Information Data Exchange, with trading expected to start around January 11, 2026 [6]

Group 3: Company Overview
- Datavault AI is a leader in data monetization, credentialing, and digital interaction technologies, operating within the Web 3.0 environment [7]
- The company provides comprehensive solutions across industries including sports, entertainment, biotechnology, and fintech, leveraging patented technologies in audio and data science [7]
Former LivePerson CEO Launches KID, a Safe Creative AI Device Amid Alarming AI Toy Safety Findings
Newsfile· 2025-12-11 00:20
Core Insights
- KID Company has launched KID®, a creative AI device designed for children aged 4 to 12, emphasizing safety and creativity without ads or internet access [1][12]
- The device aims to provide a healthier alternative to traditional screens, addressing concerns about screen addiction and inappropriate content in AI toys [4][5]

Product Design and Features
- KID features a unique rounded sphere design, promoting tactile interaction, and allows children to create stories and artwork using voice and touch with the help of AI "Buddies" [2][12]
- The device operates in a fully protected environment, ensuring no exposure to unsafe content [4][12]

Market Context and Motivation
- The launch of KID comes in response to increasing scrutiny of AI toys, which have been found to expose children to inappropriate content [4]
- The founder, Robert LoCascio, noted the negative impact of adult-oriented devices on children, leading to the development of KID to foster imagination and family connection [5][6]

Educational Initiatives
- KID Company offers weekly AI safety and creativity classes at its store in Los Altos, California, providing hands-on experience with responsible technology use [11][12]

Availability and Promotion
- KID is available for purchase ahead of the 2025 holiday season, with a referral program that rewards families for sharing the device [9][12]
Udemy and Mila Partner to Empower the Global Workforce with Responsible AI Skills
Businesswire· 2025-12-09 14:25
Core Insights
- Udemy and Mila have announced a partnership aimed at accelerating responsible AI skill development for the global workforce [1][2]
- The collaboration will focus on creating scalable AI learning programs that emphasize ethical and responsible AI application [1][3]

Partnership Details
- Starting in January, new responsible AI learning programs will be launched, combining Udemy's global reach with Mila's expertise in responsible AI research [2]
- The programs will blend technical skills with strategic and ethical decision-making, catering to organizations at various stages of their AI journey [2][3]

Organizational Impact
- The partnership aims to equip organizations with practical skills and frameworks necessary for safe and strategic AI adoption [3]
- Professionals will gain access to courses on responsible AI, ethics, governance, and practical decision-making [6]

Educational Offerings
- The learning solutions will include applied training on integrating responsible practices into workflows and insights from Mila researchers on emerging trends [6]
- Flexible learning formats will be available, including on-demand content and expert-led sessions for enterprise teams [6]

About Udemy
- Udemy is an AI-powered skills acceleration platform serving 82 million learners and over 17,000 organizations globally [1][4]
- The platform provides personalized experiences to help organizations develop the capabilities needed for a rapidly evolving workplace [4]

About Mila
- Mila is the world's largest academic AI research center, specializing in deep learning and dedicated to advancing AI for the benefit of all [5]
- Founded by Yoshua Bengio, Mila is supported by the Canadian government and recognized for its influential research and leadership in responsible AI [5]
Microsoft (MSFT) - 2025 FY - Earnings Call Transcript
2025-12-05 17:32
Financial Data and Key Metrics Changes
- Microsoft reported record-breaking financial results for FY25, with revenue growing 15% to over $281 billion, operating income increasing 17%, and earnings per share rising 16% [35][41]
- The company returned a total of $37.7 billion in cash to shareholders, marking a 10% increase from the previous fiscal year [35]

Business Line Data and Key Metrics Changes
- Microsoft Cloud business revenue surpassed $168 billion, growing 23% year over year, with Azure revenue growing 34% to over $75 billion [36]
- Microsoft 365 Commercial Cloud revenue grew 15%, while Dynamics 365 revenue increased by 19% [36]
- The Microsoft 365 business exceeded $95 billion, up 14% year over year, with the consumer subscription base growing to 89 million [37]

Market Data and Key Metrics Changes
- LinkedIn revenue surpassed $17 billion, with membership growing to 1.2 billion professionals, marking four consecutive years of double-digit member growth [38]
- Gaming revenue exceeded $23 billion, with Game Pass revenue reaching nearly $5 billion for the first time [38]

Company Strategy and Development Direction
- Microsoft is focused on three core business priorities: security, quality, and AI innovation, with significant investments in AI infrastructure and solutions [40][41][42]
- The company is building a planet-scale cloud and AI factory, with over 400 data centers across 70 regions, including new data centers and AI models to meet customer demand [42][43]
- Microsoft aims to lead in AI by integrating AI capabilities across its platforms and services, including the introduction of Copilot features in various applications [44][46]

Management's Comments on Operating Environment and Future Outlook
- Management expressed confidence in the company's ability to lead in AI, emphasized the transformative potential of AI across various sectors, and anticipates continued demand for its cloud services and AI solutions in the upcoming fiscal year [39][41]
- The company aims to create high-value solutions for customers and communities, ensuring broad access to AI technology [47]

Other Important Information
- The board of directors nominated John David Rainey, Executive Vice President and CFO of Walmart, for election to replace Carlos A. Rodriguez [5]
- Shareholders approved all management proposals, including executive compensation and the ratification of Deloitte & Touche as the independent auditor [33]
- Shareholder proposals related to AI censorship risks and the effectiveness of Microsoft's human rights processes were not approved [33][34]

Q&A Session Summary
Question: What are the key risks associated with Microsoft's AI initiatives?
- Management highlighted the importance of transparency and responsible AI deployment, acknowledging the potential risks of bias and misuse in AI technologies [32][19]
Question: How is Microsoft addressing shareholder concerns regarding human rights and AI?
- The company stated its commitment to human rights due diligence and ongoing assessments to prevent misuse of its technologies [32][25]
Question: What steps is Microsoft taking to ensure its technology aligns with climate goals?
- Management acknowledged the need for transparency regarding the environmental impact of its technologies and committed to addressing these concerns in future disclosures [30][29]
Former LivePerson CEO Launches KID®, a Safe Creative AI Device Amid Alarming AI Toy Safety Findings
Businesswire· 2025-12-03 11:00
Core Insights
- KID Company has launched KID®, a creative AI device aimed at children aged 4 to 12, designed to provide a safer alternative to traditional screens that often promote attention-seeking behaviors [1]
- The device features a unique rounded sphere design, encouraging tactile interaction, and allows children to create stories, characters, and artwork without internet access [1]
- The launch responds to growing concerns about the safety of AI-powered toys, as recent reports highlight risks of inappropriate content exposure in connected toys [1]

Company Overview
- KID Company is based in Los Altos, California, and was founded by Robert LoCascio, who previously led LivePerson [1]
- The company's mission focuses on creating safer, healthier digital experiences for families, emphasizing a childhood-first approach to technology [1]
- KID is marketed as a closed, no-internet device that avoids ads, apps, and data collection, promoting creativity and real-world connections [1]

Product Details
- KID is priced at $299.99, with a monthly subscription fee of $19.95 and the first month free [1]
- A referral program provides additional free months for families who share the device [1]
- KID Company also conducts weekly AI safety and creativity classes at its store, fostering responsible technology use among children and parents [1]
Why AI is not your friend | Rita Arrigo | TEDxMelbourne
TEDx Talks· 2025-12-01 17:10
AI Development & Trends
- The AI field is evolving through three phases: efficiency and productivity enhancement, agentic AI capable of acting and collaborating, and physical intelligence with embodied AI and robots [19][20][21]
- Large world models are being developed to enable AI to understand and perceive the world in 3D, suggesting AI will soon be integrated into physical environments [21][22]
- AI is described as a "power tool" that can help regenerate and rejuvenate the world, offering new perspectives beyond exploitation [24]

Responsible AI & Ethics
- Responsible AI involves guardrails, governance, reliability, accuracy, and safety features to prevent misuse and ensure ethical application [13][14][15]
- Microsoft's experience with Seeing AI highlights the importance of responsible AI, particularly in facial recognition and emotion detection, to avoid miscalculations and protect vulnerable populations [10][11][12]
- It is crucial to maintain the distinction between AI as a tool and AI as a person, to avoid over-reliance and confusion [23][24]

Practical Applications & Engagement
- AI can be applied across sectors including climate (reducing floods, improving biodiversity), health (better diagnosis, drug discovery), work (safe work environments, democratized dignity of work), and culture (democratized creativity) [24][25][26]
- Individuals can engage with AI by debating with it, using tools such as NotebookLM and Leonardo AI, attending AI events, participating in hackathons, and exploring AI ethics [27][28][29]
- The National AI Centre and the National Communications Museum offer resources and opportunities to learn about and interact with AI [4][5][29]
The feds get 4,000 website complaints a day. Can a “responsible” AI chatbot untangle the mess?
BetaKit· 2025-11-20 19:44
Core Insights
- The Ottawa Responsible AI Summit focused on the development of a government AI chatbot aimed at improving user experience on Canadian government websites, addressing issues of security, equity, and accessibility [1][3][4]

Group 1: AI Chatbot Development
- The Canadian government's AI chatbot prototype, powered by OpenAI's GPT-4 model, allows users to ask questions in plain language and receive relevant information from government websites, while emphasizing the need for users to verify AI-generated answers [2]
- The chatbot is designed to help handle the roughly 4,000 daily complaints about the government website, aiming to alleviate pressure on service call centers and in-person offices [1][2]
- The tool will not require user accounts or collect personal information, allowing for anonymous inquiries, a deliberate design choice to enhance user privacy [4][5]

Group 2: Security and Equity Considerations
- Discussions at the summit highlighted the importance of data privacy and equitable deployment of AI tools, with a focus on ensuring that the benefits of AI reach diverse populations [3][4]
- The development team is cautious about data collection practices, opting not to gather extensive demographic data to avoid potential misuse [5][6]
- The chatbot aims to provide tailored responses for various demographic groups, ensuring that it does not perpetuate existing biases [9][10]

Group 3: Community Engagement and Representation
- The summit emphasized the need for diverse representation in defining "responsible" AI, with discussions on who should have a seat at the decision-making table [12][13]
- The Canadian Digital Service (CDS) plans to consult with different communities to better understand their interactions with government services, ensuring that the chatbot is tested through various community lenses [10][17]
- The CDS is adopting a "bubble-based" approach to consultation, starting with internal government communities before expanding to broader community engagement [17][18]

Group 4: Project Viability and Future Trials
- The chatbot has completed a trial with 2,700 users, achieving a 95% success rate, and another trial with 3,500 users is planned for next year [19]
- Concerns remain about the chatbot's ability to handle millions of queries and about the costs of the project, which may affect its formal launch [19][20]
- The project is still in beta testing, and its future is uncertain, with no guarantee of moving beyond this phase [19]
WARNER MUSIC GROUP AND STABILITY AI JOIN FORCES TO BUILD THE NEXT GENERATION OF RESPONSIBLE AI TOOLS FOR MUSIC CREATION
Prnewswire· 2025-11-19 16:00
Core Insights
- Warner Music Group (WMG) and Stability AI are collaborating to develop responsible AI tools for music creation, focusing on ethical practices and protecting creators' rights [1][2]
- The initiative aims to enhance the creative process for artists, songwriters, and producers by providing professional-grade tools that use ethically trained models [1][2]
- Stability AI is recognized as a leader in commercially safe generative audio, with its Stable Audio models specifically designed for high-quality music generation [2][4]

Company Overview
- Warner Music Group operates in over 70 countries and includes a diverse range of renowned labels and a music publishing arm with over one million copyrights [3]
- Stability AI positions itself as a creative partner for media generation and editing, having gained recognition for its contributions to generative AI, including the release of Stable Diffusion [4][5]
Regulating AI to unlock innovation | David de Falguera | TEDxEsade Salon
TEDx Talks· 2025-11-12 17:22
The Role of Regulation in AI Innovation
- Regulation is essential for building trust in AI technology, which is crucial for its adoption in sectors like healthcare, finance, and law [5][6]
- Regulation provides certainty for innovation teams, enabling them to make faster decisions and focus on building powerful tools without legal uncertainty [7][8]
- Regulation drives better design by encouraging teams to consider trustworthy AI and human rights from the beginning of the innovation process, raising the quality of AI products [9][10]

Overcoming Challenges in AI Regulation
- Building multi-disciplinary teams (tech, law, cyber, data) that collaborate from the start of the innovation process is essential [12]
- Accountability, demonstrating compliance, and considering ethics from the outset are crucial when innovating with AI [13][14]
- Compliance by design is key to building trustworthy AI, allowing for faster decisions and safe innovation [14][15]

The AI Act
- The AI Act classifies AI systems into unacceptable-risk, high-risk, and limited-risk categories, helping companies understand their obligations based on the level of risk [16]
- The AI Act contains mechanisms to evolve alongside AI, allowing regulators and policymakers to update requirements as needed [17][18]