Vera Rubin superchip
Nvidia CEO Jensen Huang Says Rubin Architecture Is Now in Full Production. Here's Why That Matters.
Yahoo Finance· 2026-01-10 08:22
Key Points
- The unprecedented demand for artificial intelligence (AI) chips has resulted in an ongoing shortage.
- Nvidia's Rubin architecture has hit full production, six months ahead of schedule.
- By rolling out these cutting-edge chips early, Nvidia is betting demand will remain strong.
The adoption of artificial intelligence (AI) has been all the rage in recent years, and the foundation for future AI proliferation has been laid. There's been a massive data ...
5 ETFs to Buy for January
ZACKS· 2026-01-08 18:00
Key Takeaways
- Despite macro and geopolitical worries, January momentum remains strong across equities.
- Small caps, momentum, semis and defense are early 2026 leaders amid AI and security spending.
- Healthcare stands out as a defensive but winning play as investors balance risks and safety.
As the market has entered 2026, the S&P 500 is coming off a third straight year of returns well above its long-term annual average of roughly 10%. Investors entered 2026 with notable concerns. While GDP growth has accelerated a ...
How long will Jensen Huang be Nvidia's CEO?
Yahoo Finance· 2026-01-07 13:29
LAS VEGAS — Since 1993, Jensen Huang has held the position of CEO at Nvidia. He has led the company from a stock price of pennies per share between 1999 and 2016 to its current height of more than $187, and to its standing as the most valuable company in the world. That tenure far surpasses those of other tech leaders (and those in most other industries), including Apple's Tim Cook (14 years), Meta's Mark Zuckerberg (22) and Tesla's Elon Musk (who founded SpaceX in 2002). Huang will turn 63 this year, though. And the company has ...
Lightmatter Passage: A 3D Photonic Interposer for AI
2025-09-22 00:59
Summary of Lightmatter Passage Conference Call

Industry and Company Overview
- **Industry**: AI and Photonic Computing
- **Company**: Lightmatter, known for its Passage M1000 "superchip" platform utilizing photonic technology to enhance AI training capabilities [1][3][13]

Core Points and Arguments
1. **Exponential Growth of AI Models**: The scale of AI models has increased dramatically, with models now reaching hundreds of billions or even trillions of parameters, necessitating thousands of GPUs for training [3][4]
2. **Challenges in AI Training**: The industry faces significant challenges in scaling AI training, particularly due to the slowdown of Moore's Law and the limitations of traditional electrical interconnects, which create bottlenecks in data communication and synchronization [7][10][11]
3. **Lightmatter's Solution**: The Passage M1000 platform addresses the interconnect bottleneck with a 3D photonic stacking architecture, integrating up to 34 chiplets on a single photonic interposer with a total die area of 4,000 mm² [13][14]
4. **Unprecedented Bandwidth**: The Passage platform delivers a total bidirectional bandwidth of 114 Tbps across 1,024 high-speed SerDes lanes, giving each chiplet access to multi-terabit-per-second I/O bandwidth and effectively overcoming traditional I/O limitations [17][21] (a rough back-of-the-envelope check of these figures follows the summary)
5. **Comparison with Competitors**: Lightmatter's approach contrasts with that of other industry players such as NVIDIA and Cerebras, who focus on maximizing single-chip performance or building ultra-large chips; Lightmatter instead emphasizes optical interconnects to achieve high-bandwidth communication across chiplets [30][42][44][52]

Additional Important Insights
1. **Nature Paper Validation**: A study published in *Nature* demonstrated the feasibility of photonic processors for executing advanced AI models, achieving near-electronic precision, which complements Lightmatter's focus on interconnect solutions [22][23][82]
2. **Future of AI Acceleration**: The combination of Lightmatter's optical interconnects and advances in photonic computing suggests a paradigm shift toward hybrid electronic-photonic architectures, breaking through performance ceilings in AI acceleration [82][83]
3. **Scalability and Efficiency**: Lightmatter's Passage aims to simplify AI deployments and improve efficiency by collapsing datacenter-level communication into a single "superchip", potentially offering better cost efficiency and flexibility than traditional methods [42][52][78]

Conclusion
- The emergence of Lightmatter's Passage platform represents a significant advancement in addressing the challenges of modern AI training, providing a breakthrough pathway through innovative photonic interconnect technology [84]
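The bandwidth figures quoted above can be sanity-checked with simple arithmetic. The sketch below is a minimal back-of-the-envelope calculation, not Lightmatter's specification: it assumes the 114 Tbps total is shared evenly across the 34 chiplets and across the 1,024 SerDes lanes, which is an assumption about how the platform allocates I/O rather than a figure from the call.

```python
# Back-of-the-envelope check of the Passage M1000 figures quoted in the summary.
# Assumption (not stated in the call): total bandwidth is divided evenly across
# chiplets and across SerDes lanes.

TOTAL_BANDWIDTH_TBPS = 114   # total bidirectional bandwidth [17][21]
NUM_CHIPLETS = 34            # chiplets on one photonic interposer [13][14]
NUM_SERDES_LANES = 1024      # high-speed SerDes lanes [17][21]

per_chiplet_tbps = TOTAL_BANDWIDTH_TBPS / NUM_CHIPLETS
per_lane_gbps = TOTAL_BANDWIDTH_TBPS * 1000 / NUM_SERDES_LANES

print(f"~{per_chiplet_tbps:.2f} Tbps per chiplet")   # ~3.35 Tbps per chiplet
print(f"~{per_lane_gbps:.0f} Gbps per SerDes lane")  # ~111 Gbps per lane
```

Under these assumptions each chiplet sees roughly 3.4 Tbps of I/O, consistent with the "multi-terabit-per-second" claim in point 4, and each SerDes lane carries on the order of 110 Gbps.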