Trainium3 chip
Amazon Is Expanding Its AI Chip Ambitions. Should Nvidia Investors Be Worried?
The Motley Fool· 2025-12-03 19:32
Amazon says customers can save 30% to 40% by using its AI chips instead of Nvidia's GPUs.
Amazon (AMZN 0.89%) is a leading artificial intelligence company that incorporates AI into its vast e-commerce and advertising platforms, and it is also the world's largest cloud computing company. It is now taking a key step in expanding its AI empire by rolling out a new AI chip that could significantly challenge the dominant position held by chipmaker Nvidia (NVDA 0.49%). Amazon's Trainium3 chip is the latest mov ...
Amazon unveils Trainium3 chip, doubling down on AI hardware push
Proactiveinvestors NA· 2025-12-02 21:39
Group 1
- Proactive provides fast, accessible, informative, and actionable business and finance news content to a global investment audience [2]
- The news team covers medium- and small-cap markets, as well as blue-chip companies, commodities, and broader investment stories [3]
- Proactive's content includes insights across sectors such as biotech, pharma, mining, natural resources, battery metals, oil and gas, crypto, and emerging technologies [3]

Group 2
- Proactive is committed to adopting technology to enhance workflows and content production [4]
- The company utilizes automation and software tools, including generative AI, while ensuring all content is edited and authored by humans [5]
Trainium3 UltraServers Now Available: Enabling Customers to Train and Deploy AI Models Faster at Lower Cost
Businesswire· 2025-12-02 18:30
Core Insights
- Amazon Web Services (AWS) has launched Trainium3 UltraServers, powered by the new Trainium3 chip, aimed at enhancing AI model training and deployment efficiency at lower cost [1][6]

Performance Enhancements
- Trainium3 UltraServers offer up to 4.4 times more compute performance, 4 times greater energy efficiency, and nearly 4 times more memory bandwidth than Trainium2 UltraServers [6]
- The servers can scale up to 144 Trainium3 chips, delivering up to 362 FP8 PFLOPs with 4 times lower latency, enabling faster training of larger models and inference serving at scale [6] (a rough per-chip estimate follows below)

Cost Efficiency
- Customers using Trainium are seeing training and inference cost reductions of up to 50% [6]
- Decart has achieved 4 times faster inference for real-time generative video at half the cost of GPUs, while Amazon Bedrock is already handling production workloads on Trainium3 [6]
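For a rough sense of scale, the quoted figures imply about 2.5 FP8 PFLOPs per chip (362 PFLOPs spread across 144 chips). A minimal back-of-the-envelope sketch, assuming the 362 PFLOPs figure refers to the peak aggregate FP8 throughput of a fully populated 144-chip UltraServer:

```python
# Back-of-the-envelope check of the quoted Trainium3 UltraServer figures.
# Assumption (not stated explicitly in the release): 362 FP8 PFLOPs is the
# peak aggregate throughput of a fully populated 144-chip UltraServer.

ULTRASERVER_PFLOPS_FP8 = 362   # peak FP8 compute per UltraServer, in PFLOPs
CHIPS_PER_ULTRASERVER = 144    # maximum Trainium3 chips per UltraServer

per_chip_pflops = ULTRASERVER_PFLOPS_FP8 / CHIPS_PER_ULTRASERVER
print(f"~{per_chip_pflops:.2f} FP8 PFLOPs per Trainium3 chip")  # ~2.51
```

This is only an illustrative division of the headline numbers; actual per-chip throughput depends on how AWS defines the aggregate figure.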