AMD Instinct MI300X

AMD's AI Chips Gain Ground in Data Centers: A Sign for More Upside?
ZACKS· 2025-07-16 18:01
Key Takeaways
- AMD's data center revenues jumped 57.2% Y/Y in Q1 2025, driven by rising AI workload demand.
- Meta is deploying AMD MI300X for Llama models and collaborating on future AI chip platforms.
- Despite gains, AMD faces strong competition from INTC and NVDA in the data center AI chip market.

Advanced Micro Devices (AMD) is strengthening its footprint in the artificial intelligence (AI) market through an expanding portfolio tailored for data center applications. The latest MI300 series accelerator family ...
Seekr Selects Oracle Cloud Infrastructure to Deliver Trusted AI to Enterprise and Government Customers Globally
Prnewswire· 2025-06-12 12:00
Core Insights
- Seekr has entered a multi-year agreement with Oracle Cloud Infrastructure (OCI) to enhance enterprise AI deployments and develop next-generation vision-language foundation models [1][2][3]
- The partnership aims to leverage OCI's AI infrastructure powered by AMD Instinct MI300X GPUs for efficient and secure AI model training and deployment [2][3]
- SeekrFlow™, Seekr's AI software platform, will utilize OCI's capabilities to optimize GPU usage and scale large language models (LLMs) globally [2][4]

Company Overview
- Seekr is a privately held AI company focused on providing trustworthy and transparent AI solutions for enterprise and government clients [6]
- The company offers an end-to-end Enterprise AI platform that includes data preparation, analysis capabilities, and tools for building domain-specific LLMs and Agentic AI solutions [6]

Partnership Details
- The collaboration between Seekr, OCI, and AMD aims to accelerate the availability of trusted AI solutions, particularly in the federal government sector [3][4]
- OCI's infrastructure is designed to handle demanding AI workloads, providing flexibility in pricing and easier migration of on-premises applications [4]

Technical Capabilities
- OCI's AI infrastructure enables efficient multi-node training and inference, allowing Seekr to train LLMs at a lower cost while optimizing performance [3][4]
- The partnership emphasizes the importance of massive GPU compute capacity for developing advanced AI models, particularly for analyzing extensive imagery and sensor data [3]
Surpassing DeepSeek? The Hidden Technology War the Giants Won't Talk About
36Ke· 2025-04-29 00:15
Undeniably, the release of the DeepSeek-R1 model has given China's AI development a significant edge and marks a milestone breakthrough in the field of artificial intelligence.

Still, technical innovation often shifts costs downstream to deployment. Roughly 65% of early adopters report that real-world rollouts require substantial development resources for adaptation and optimization, which partly erodes the model's theoretical efficiency advantage.

This disruptive reasoning model not only shows clear advantages in R&D efficiency; its performance metrics rival those of products from industry leaders such as OpenAI, and may even surpass them in China-specific application scenarios, while requiring nearly 30% fewer computing resources than comparable products.

The model's success both confirms the open-ended potential of algorithmic innovation and raises a key question about technical evolution: when future algorithmic breakthroughs hit compatibility bottlenecks with traditional computing architectures, what transformation challenges will the industry face?

Today's mainstream large models (such as GPT-4, Gemini Pro, and Llama3) are iterating two to three times per month, continually resetting performance benchmarks. DeepSeek-R1, through its original distributed training framework and dynamic quantization techniques, raised inference efficiency per unit of compute by 40%, and its development trajectory offers the industry a textbook case of algorithms and systems engineering evolving together.

Moreover, the team's multi-head latent attention mechanism (MLA), while delivering a breakthrough 50% reduction in memory footprint, also brings a significant increase in development complexity ...
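The excerpt breaks off here, but the MLA memory claim is worth unpacking. Below is a minimal, self-contained sketch of the general idea usually described for multi-head latent attention: cache one small latent vector per token and re-expand keys and values from it at attention time, instead of caching full K/V tensors. All names, dimensions, and projection matrices (W_down, W_uk, W_uv, d_latent, and so on) are illustrative assumptions for this sketch, not DeepSeek-R1's actual architecture or code.

```python
import numpy as np

# Illustrative sketch of latent KV compression (the idea behind MLA).
# Dimensions and weight names are hypothetical; causal masking is omitted.
d_model, n_heads, d_head, d_latent, seq = 512, 8, 64, 128, 16
rng = np.random.default_rng(0)

W_down = rng.standard_normal((d_model, d_latent)) * 0.02          # compress tokens to a latent
W_uk = rng.standard_normal((d_latent, n_heads * d_head)) * 0.02   # expand latent to keys
W_uv = rng.standard_normal((d_latent, n_heads * d_head)) * 0.02   # expand latent to values
W_q = rng.standard_normal((d_model, n_heads * d_head)) * 0.02     # queries computed as usual

x = rng.standard_normal((seq, d_model))

# Only the latent is cached: seq * d_latent floats instead of
# seq * 2 * n_heads * d_head floats for a conventional K/V cache.
latent_cache = x @ W_down                                  # (seq, d_latent)

q = (x @ W_q).reshape(seq, n_heads, d_head)
k = (latent_cache @ W_uk).reshape(seq, n_heads, d_head)    # re-expanded at attention time
v = (latent_cache @ W_uv).reshape(seq, n_heads, d_head)

# Standard scaled dot-product attention per head.
scores = np.einsum("qhd,khd->hqk", q, k) / np.sqrt(d_head)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = np.einsum("hqk,khd->qhd", weights, v).reshape(seq, n_heads * d_head)

print("cached floats (latent):", latent_cache.size)        # 2048
print("cached floats (plain K/V):", 2 * seq * n_heads * d_head)  # 16384
```

Even this toy version hints at the trade-off the article points to: the cache shrinks sharply (2,048 vs. 16,384 floats here), but every attention call now carries extra up-projections that a conventional K/V cache does not need, which is one source of the added engineering complexity.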