NVIDIA AI infrastructure
SimScale Partners with AI Engineering GmbH to Unlock Ultra-Fast, Meshless SPH Simulation in the Cloud, Powered by NVIDIA AI Infrastructure
Globenewswire· 2026-03-16 20:30
Core Insights
- SimScale has announced a strategic collaboration with AI Engineering GmbH to integrate the PAMICS® solver into its cloud engineering simulation platform, enhancing simulation speeds by 10-20 times for complex industrial applications [2][4].

Company Overview
- SimScale is recognized as the world's first AI-native cloud platform for engineering simulation, serving over 800,000 users globally and enabling rapid exploration of design decisions [10].
- AI Engineering GmbH specializes in advanced simulation solutions, particularly in Smoothed Particle Hydrodynamics (SPH) and AI-enhanced engineering tools [11].

Technological Advancements
- The integration of AI Engineering's PAMICS solver with SimScale's infrastructure aims to democratize access to high-fidelity, meshless Computational Fluid Dynamics (CFD) [3].
- The PAMICS solver uses a Lagrangian SPH approach, simulating fluid dynamics directly from raw CAD geometries and eliminating the need for meshing [6].
- The collaboration supports advanced visualization workflows compatible with NVIDIA Omniverse libraries, enabling photorealistic rendering and immersive review of simulation results [5].

Industry Applications
- The PAMICS solver is designed to handle complex fluid dynamics scenarios, including multiphase flows, fluid-structure interactions, and arbitrary motion, which are challenging for traditional methods [6][9].
- Key use cases include accurately predicting oil lubrication in gearboxes, modeling multiphase flows in industrial mixers, and simulating vehicle wading and contamination management [9].

Strategic Goals
- The partnership aims to empower engineers to explore thousands of engineering decisions rapidly, accelerating the development of predictive Digital Twins [7].
- By leveraging NVIDIA's accelerated computing infrastructure, the integration is expected to significantly enhance the performance of simulation workflows [4][7].
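The "meshless" claim rests on how SPH works: each fluid particle carries mass, and field quantities such as density are recovered by a kernel-weighted sum over neighbouring particles, so no volume mesh is ever built. The sketch below is a minimal, generic illustration of that density summation (using the standard cubic spline kernel); it is not PAMICS code, and the function names are illustrative only.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline SPH smoothing kernel with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)  # 3D normalization constant
    return np.where(
        q < 0.5,
        sigma * (6.0 * q**3 - 6.0 * q**2 + 1.0),
        np.where(q < 1.0, sigma * 2.0 * (1.0 - q)**3, 0.0),
    )

def sph_density(positions, masses, h):
    """Estimate density at each particle by kernel-weighted summation
    over all particles -- no mesh required, only particle positions."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * cubic_spline_kernel(r, h))
    return rho
```

Production SPH solvers replace the O(n²) all-pairs loop with spatial hashing or cell lists for neighbour search, which is what makes GPU acceleration so effective for this method.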
Dassault Systèmes and NVIDIA Partner to Build Industrial AI Platform Powering Virtual Twins
Businesswire· 2026-02-03 16:00
Core Insights
- Dassault Systèmes and NVIDIA have formed a long-term strategic partnership to create a shared industrial architecture for mission-critical artificial intelligence across various industries [1].
- The collaboration aims to integrate Dassault Systèmes' Virtual Twin technologies with NVIDIA's AI infrastructure, open models, and accelerated software libraries [1].
- The partnership will lead to the development of science-validated industry World Models and innovative working methods through skilled virtual companions [1].
Pathway to Deliver New Class of Adaptive and Continuously Learning AI Systems with AWS and NVIDIA Technologies
Businesswire· 2025-12-01 16:00
Core Insights
- Pathway has introduced BDH (Dragon Hatchling), a post-Transformer architecture that runs on NVIDIA AI infrastructure and AWS cloud technology, enabling adaptive and continuously learning AI systems [1][5].

Group 1: Technology and Innovation
- The integration of NVIDIA and AWS technologies marks a shift from static to adaptive intelligence, enabling complex applications that were previously unattainable for enterprises [2][3].
- BDH's architecture supports continuous learning, allowing models to evolve with business operations rather than remaining static, a limitation of traditional Transformer-based models [2][3].
- The BDH model is designed for enterprise use cases that require complex reasoning, low latency, and high observability, with AWS as the preferred cloud provider [4].

Group 2: Market Position and Performance
- Pathway's BDH architecture challenges conventional deep learning assumptions, suggesting that larger models can become more interpretable through neuron specialization [6].
- BDH demonstrates competitive performance on general-purpose hardware while offering faster inference on specialized AI processors, potentially reducing latency and operational costs for enterprises [7].
- The architecture will be showcased at AWS re:Invent 2025, underscoring Pathway's commitment to innovation and market presence [5][7].

Group 3: Company Background and Leadership
- Pathway is led by CEO Zuzanna Stamirowska, a complexity scientist, and includes a team of AI pioneers with notable backgrounds in the field [10].
- The company is backed by leading investors and advisors, including key figures in AI research, strengthening its credibility and growth potential [11].
- Pathway is headquartered in Palo Alto, California, and is trusted by organizations such as NATO and Formula 1 racing teams, highlighting its industry relevance [9].
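The announcement does not describe BDH's internals, but the static-vs-adaptive distinction it draws is concrete: a conventionally trained model's weights are frozen after deployment, whereas a continuously learning system updates them on every observation and can track drifting data. The sketch below illustrates that general idea with a deliberately simple online linear learner on synthetic data; it is a hypothetical illustration of online adaptation, not Pathway's architecture.

```python
import numpy as np

class OnlineLinearModel:
    """Toy continuously-learning model: weights are updated on every
    observation, so the model adapts as the data distribution drifts,
    in contrast to a fixed-weight model trained once and then frozen."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return self.w @ x

    def update(self, x, y):
        # One stochastic gradient step on this single sample's squared error.
        err = self.predict(x) - y
        self.w -= self.lr * err * x

rng = np.random.default_rng(0)
model = OnlineLinearModel(dim=3)

# Simulate a drifting target: the true relationship changes mid-stream.
true_w = np.array([1.0, -2.0, 0.5])
for t in range(2000):
    if t == 1000:
        true_w = np.array([-1.0, 0.5, 2.0])  # concept drift
    x = rng.normal(size=3)
    model.update(x, true_w @ x)
# After the drift, the online model has tracked the new relationship,
# something a frozen model trained on the first regime cannot do.
```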
Oracle Unveils Next-Generation Oracle Cloud Infrastructure Zettascale10 Cluster for AI
Prnewswire· 2025-10-14 12:24
Core Insights
- Oracle has announced the launch of OCI Zettascale10, described as the largest AI supercomputer in the cloud, capable of delivering up to 16 zettaFLOPS of peak performance [1][2][3].
- The supercomputer is built on Oracle's Acceleron RoCE networking architecture and NVIDIA AI infrastructure, and is designed to support multi-gigawatt AI workloads with improved efficiency and reliability [1][3][4].

Group 1: Technical Specifications
- OCI Zettascale10 connects hundreds of thousands of NVIDIA GPUs across multiple data centers, forming multi-gigawatt clusters optimized for low GPU-to-GPU latency [1][2].
- The architecture is hyper-optimized for density within a two-kilometer radius, enhancing performance for large-scale AI training workloads [2].
- Initial deployments will target up to 800,000 NVIDIA GPUs, delivering predictable performance and strong cost efficiency [3][5].

Group 2: Strategic Partnerships and Development
- The supercomputer is part of a collaboration with OpenAI at the Stargate site in Abilene, Texas, focused on advancing AI capabilities [1][3].
- Oracle and NVIDIA are combining their technologies to provide a robust compute fabric for state-of-the-art AI research and industrial applications [4].

Group 3: Networking Innovations
- Oracle's Acceleron RoCE networking architecture improves scale, reliability, and efficiency for AI workloads by allowing each GPU to connect to multiple switches simultaneously [4][11].
- Key features include a wide, shallow, resilient fabric that reduces cost and power consumption while increasing deployment speed for larger AI clusters [4][11].
- The design improves reliability by eliminating data sharing across network planes, maintaining stability for AI jobs [4][11].
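The two headline figures above can be sanity-checked against each other. Dividing the quoted 16 zettaFLOPS aggregate peak by the 800,000-GPU deployment target is a back-of-envelope calculation, not a figure from the announcement, and the per-GPU result is only consistent with vendor low-precision (e.g., sparse FP4-class) peak ratings, not FP64 throughput.

```python
# Back-of-envelope check relating the two quoted figures:
# 16 zettaFLOPS aggregate peak across a target of 800,000 GPUs.
ZETTA = 1e21
PETA = 1e15

total_flops = 16 * ZETTA
gpu_count = 800_000

per_gpu = total_flops / gpu_count
print(f"Implied peak per GPU: {per_gpu / PETA:.0f} petaFLOPS")
# Prints: Implied peak per GPU: 20 petaFLOPS
```

A ~20 petaFLOPS-per-GPU figure is plausible only at the lowest supported precisions, which is the convention these aggregate marketing numbers typically use.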