Qingcheng Jizhi Launches AI Ping, a One-Stop AI Evaluation and API Service Intelligent Routing Platform
Cai Jing Wang· 2026-02-02 06:22
Recently, at the "Ping The Future: Intelligent Leap, New Frontier of Routing" product launch event, Qingcheng Jizhi unveiled AI Ping, a one-stop AI evaluation and API service intelligent routing platform that strengthens infrastructure capabilities for the application stage of large models. Qingcheng Jizhi CEO Tang Xiongchao gave a full introduction to the company's positioning and product portfolio. He noted that the focus of AI Infra keeps evolving: from large model training and fine-tuning, to cost-effective inference deployment, and on to the application stage's higher demands for service stability and usage efficiency. The company, he said, has long pursued technical practice across three core scenarios (large model training, inference, and application), successively releasing the Bagua Furnace training system and the Chitu inference engine to support efficient model training and deployment across diverse computing environments.

AI Ping reportedly targets the usage stage of large model services, building a complete chain covering "evaluation - access - routing - optimization" around the core capabilities of model service evaluation, unified access, and intelligent routing. Oriented toward real business scenarios, the platform conducts long-term, continuous observation of key metrics (latency, stability, throughput, and cost-effectiveness) across the APIs of different vendors and models. AI Ping already covers more than 30 Chinese large model API service providers, comparing model service capabilities under unified standards and methodology to give enterprises a more rational basis for decisions amid complex model and service choices. At the launch event, Qingcheng Jizhi, together with more than 20 large model AP ...
Qingcheng Jizhi Launches the AI Ping Platform, Targeting Demand for Model API Services
Xin Lang Cai Jing· 2026-01-31 05:01
Core Insights
- The core task of AI infrastructure is shifting from the training and inference of large models to a new phase focused on "intelligent circulation," emphasizing the efficient and stable use of model capabilities in real business scenarios [1]
- Market demand for model API services is emerging as the industry transitions to a stage focused on operating models long-term, stably, and at scale [1]

Company Overview
- Qingcheng Jizhi was founded in December 2023 in Beijing, with a core founding team from Tsinghua University's Computer Science Department [1]
- The company previously launched the "Bagua Furnace" training system and the "Chitu" inference engine to support efficient training and deployment of models across a variety of computing environments [1]

Product Development
- The AI Ping platform, launched by Qingcheng Jizhi, offers one-stop AI evaluation and API service routing, aimed at strengthening infrastructure capabilities for the application phase of large models [1]
- AI Ping covers a complete chain from "evaluation - access - routing - optimization," focusing on real business scenarios and providing long-term observation of key metrics such as latency, stability, throughput, and cost-effectiveness across different model APIs [2]

Market Analysis
- A joint report with Huqing Puzhi, the "Large Model API Service Industry Analysis Report (2025)," indicates that DeepSeek and Qwen series models dominate open-source model API calls, with significant performance differences among service providers [3]
- The report notes that average daily consumption of large models in China's enterprise market reached 10.2 trillion tokens in the first half of last year, indicating a shift from seeking a single strongest model to finding optimal solutions for specific business scenarios [3]
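The kind of side-by-side evaluation described above (long-term observation of latency, stability, throughput, and cost across providers) can be sketched in a few lines. This is a hypothetical illustration, not AI Ping's actual methodology: the `probe` function simulates API calls with random timings, and the provider names, failure rates, and token counts are invented for the example.

```python
import random
import statistics

def probe(rng):
    """Simulate one API call: returns (latency_seconds, ok, tokens_generated)."""
    latency = max(rng.gauss(0.8, 0.2), 0.05)   # simulated response time
    ok = rng.random() > 0.02                   # ~2% simulated failure rate
    tokens = rng.randint(200, 400)             # simulated tokens per call
    return latency, ok, tokens

def evaluate(provider_name, seed, n_calls=200):
    """Probe one provider repeatedly and summarize the key metrics."""
    rng = random.Random(seed)
    samples = [probe(rng) for _ in range(n_calls)]
    latencies = sorted(s[0] for s in samples)
    ok_calls = [s for s in samples if s[1]]
    return {
        "provider": provider_name,
        "p50_latency_s": round(statistics.median(latencies), 3),
        "p99_latency_s": round(latencies[int(0.99 * len(latencies)) - 1], 3),
        "success_rate": round(len(ok_calls) / n_calls, 3),
        "throughput_tok_per_s": round(
            sum(s[2] for s in ok_calls) / sum(s[0] for s in ok_calls), 1
        ),
    }

report = [evaluate(name, seed) for name, seed in [("provider-a", 1), ("provider-b", 2)]]
for row in report:
    print(row)
```

A real evaluation harness would replace `probe` with authenticated HTTP calls and run continuously, but the summary shape (percentile latency, success rate as a stability proxy, tokens per second) matches the metrics the article names.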
Large Model Applications Enter a New Stage of Scaled Operation as Qingcheng's AI Ping Builds a New API Service Ecosystem
Huan Qiu Wang· 2026-01-30 07:33
Core Insights
- The article discusses the transition of large model applications from exploration to stable, scalable operation, emphasizing the importance of model API service performance, stability, and efficiency [1][5][10]

Industry Developments
- Haidian District is accelerating the construction of a modern industrial system centered on artificial intelligence, aiming to support enterprises in collaborative exploration around common industry needs [3]
- The focus of AI infrastructure is shifting from model training and inference to efficient, stable application in real business scenarios, with an emphasis on building intelligent routing capabilities [3][5]

Company Initiatives
- Qingcheng Jizhi has launched AI Ping, a one-stop AI evaluation and API service intelligent routing platform, to support infrastructure for large model applications [5][10]
- The platform aims to provide a complete link from evaluation to optimization, monitoring key performance indicators of different model APIs to support informed decision-making by enterprises [7][10]

Collaborative Efforts
- A joint initiative involving over 20 large model API service providers was launched to promote a sustainable model API service ecosystem, focusing on evaluation and industry communication [8][9]
- AI Ping already covers over 30 Chinese large model API service providers, enabling comparative analysis of service capabilities [7][9]

Performance Analysis
- The 2025 Large Model Service Performance Ranking will be published based on AI Ping evaluation data, providing a reference for the industry [8]
- A report analyzing the supply structure and usage characteristics of large model API services finds that the core competitive factor has shifted from price to delivery quality, with key metrics including response latency and stability [10]
Qingcheng Jizhi Releases the AI Ping Platform
Zhong Zheng Wang· 2026-01-30 06:46
Group 1
- The core viewpoint of the articles is the introduction of the AI Ping platform by Qingcheng Jizhi, which focuses on the evaluation, unified access, and intelligent routing of large model services, creating a complete link covering "evaluation - access - routing - optimization" [1][3]
- AI Ping currently covers over 30 domestic large model API service providers, providing comparative analysis of model service capabilities under unified standards and methodologies and helping enterprises make rational decisions amid complex model and service choices [1]
- The core tasks of AI infrastructure are shifting from training and inference of large models to a new stage focused on "intelligent circulation," emphasizing the efficient and stable use of model capabilities in real business scenarios [2]

Group 2
- The key to achieving intelligent circulation lies in building intelligent routing capabilities: model routing selects the most suitable model for each task, while service routing optimizes performance and cost across API service providers [2]
- Qingcheng Jizhi's CEO, Tang Xiongchao, emphasizes that the focus of AI infrastructure is evolving from training and inference toward the application phase's higher demands for service stability and efficiency, which motivated the development of AI Ping as a one-stop AI evaluation and API service intelligent routing platform [3]
- Developing model routing and service routing capabilities in tandem to form a complete AI task distribution network is crucial in determining the final efficiency and cost of AI systems [2]
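The two-stage routing idea described above can be sketched as follows. This is a minimal illustration of the concept, not Qingcheng Jizhi's implementation: stage one (model routing) maps a task type to a suitable model, and stage two (service routing) picks the provider with the best weighted latency/cost score for that model. All names and numbers are invented assumptions.

```python
# Stage 1 table: which model class handles which task type (assumed).
MODEL_FOR_TASK = {
    "code": "model-coder",
    "chat": "model-chat",
}

# Stage 2 table: observed metrics per (model, provider) pair (assumed).
PROVIDERS = {
    ("model-coder", "provider-a"): {"p50_latency_s": 0.9, "cost_per_mtok": 2.0},
    ("model-coder", "provider-b"): {"p50_latency_s": 1.4, "cost_per_mtok": 1.1},
    ("model-chat",  "provider-a"): {"p50_latency_s": 0.6, "cost_per_mtok": 0.8},
}

def route(task_type, latency_weight=0.5):
    """Return (model, provider) by model routing, then service routing."""
    model = MODEL_FOR_TASK[task_type]                  # stage 1: model routing
    candidates = {p: m for (mdl, p), m in PROVIDERS.items() if mdl == model}

    def score(metrics):                                # lower is better
        return (latency_weight * metrics["p50_latency_s"]
                + (1 - latency_weight) * metrics["cost_per_mtok"])

    provider = min(candidates, key=lambda p: score(candidates[p]))
    return model, provider

print(route("code"))                      # balanced weighting picks provider-b
print(route("code", latency_weight=0.9))  # latency-sensitive weighting picks provider-a
```

Varying `latency_weight` shows the service-routing trade-off the article describes: the same model can be served by different providers depending on whether the caller prioritizes responsiveness or cost.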
Qingcheng Jizhi's Shi Tianhui: The MaaS Profitability War Has Begun, and Infra Technology Is Now Key to Profit | GAIR 2025
Lei Feng Wang· 2025-12-26 09:57
Core Viewpoint
- The article discusses the current state of domestic computing power in China, emphasizing the need for improved software ecosystems and system-level optimization to enhance the utilization of domestic chips in AI applications [5][21]

Group 1: AI Infrastructure and Market Trends
- The GAIR conference highlighted the rapid evolution of computing power and its impact on AI technology and industry structure, focusing on the next decade of China's AI industry [2]
- The speaker, Shi Tianhui, pointed out that the bottleneck in utilizing domestic computing power lies in the software ecosystem and system-level optimization capabilities [5][21]
- The MaaS (Model as a Service) market is growing rapidly, with reported growth of over 400% in the first half of the year, indicating strong demand for AI services [33]

Group 2: Challenges and Solutions in AI Infrastructure
- Many domestic enterprises purchase chips from multiple vendors, leading to difficulties in software compatibility and maintenance [22][13]
- The company has developed a proprietary inference engine, "Chitu," which aims to simplify the use of domestic chips and improve their performance [21][22]
- A unified software solution is needed to address the "M×N" problem of optimizing multiple models across various chips, which requires significant resources and expertise [25][29]

Group 3: Innovations and Product Offerings
- The "Chitu" inference engine is designed to support both domestic and foreign chips, significantly lowering the barrier for customers to use AI applications effectively [22][27]
- The company has introduced "AI Ping," a one-stop platform for evaluating and accessing various MaaS offerings, which aims to reduce information asymmetry in the market [30][36]
- The platform provides comprehensive performance evaluations and a routing function that lets users access multiple suppliers through a single interface, improving cost efficiency and service reliability [39][41]
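The "M×N" argument above (M models each needing separate optimization work for each of N chips) is usually addressed with a shared abstraction layer: each chip vendor implements a small operator interface once, and each model targets the interface rather than any particular chip, turning M×N integration efforts into roughly M+N. The sketch below is a hypothetical illustration of that pattern, not Chitu's actual architecture; the class names and the tiny operator set are invented.

```python
from abc import ABC, abstractmethod

class ChipBackend(ABC):
    """One implementation of this interface per chip (the 'N' side)."""
    @abstractmethod
    def matmul(self, a, b): ...

class CpuBackend(ChipBackend):
    """Reference backend: plain-Python matrix multiply for row-major lists."""
    def matmul(self, a, b):
        rows, inner, cols = len(a), len(b), len(b[0])
        return [[sum(a[i][k] * b[k][j] for k in range(inner))
                 for j in range(cols)] for i in range(rows)]

def linear_layer(backend, x, w):
    """Model-side code (the 'M' side): written once against the interface,
    so it runs unchanged on any backend that implements the op set."""
    return backend.matmul(x, w)

y = linear_layer(CpuBackend(), [[1, 2]], [[3], [4]])
print(y)  # [[11]]
```

In a production engine the operator set is far larger (attention, quantized GEMM, and so on) and the backends dispatch to vendor kernels, but the structural point is the same: adding one new chip backend makes every existing model available on that chip.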
ChinaSC 2025: Industry, Academia, and Research Join Forces to Unlock a New Future for the Intelligent Computing Power Economy
Cai Jing Wang· 2025-11-10 08:34
Core Insights
- The ChinaSC 2025 conference focused on the theme "Intelligent Computing Power, Large Models, New Economy," discussing technological trends and policy directions in China's computing power development [1]
- The event featured the release of the "2025 China High-Performance Computing Performance TOP100 Ranking" and the "2025 China Computing Power Leading Enterprises Award" [2]
- The AIPerf500 international AI computing power ranking was updated, highlighting advances in AI training and inference performance [3][4]

Industry Developments
- The conference emphasized AI as a driving force for transformation across industries, with efficient AI computing power crucial for the development and implementation of large models [5][6]
- The Ankang Intelligent Computing Center is being established to become a key computing power hub for western China, with a target of building a 20,000P cluster [7]
- The integration of AI and HPC (High-Performance Computing) was discussed, with innovations in software and algorithms seen as essential to overcoming structural bottlenecks in traditional HPC applications [8]

Technological Innovations
- The AIPerf ranking introduced new metrics for evaluating AI computing systems, focusing on training capability and inference performance [3][4]
- Companies such as Beijing Super Cloud Computing Center and Alibaba Cloud were recognized for their high-performance AI computing systems [3]
- Liquid cooling technology was highlighted as a key innovation for enhancing computing power across various applications [9][10]

Strategic Collaborations
- A strategic cooperation agreement was signed between the Ankang High-tech Zone Management Committee and the China Intelligent Computing Industry Alliance to foster collaboration in infrastructure, ecosystem development, and technology transfer [11]
- The conference also recognized outstanding contributions in the field, awarding several individuals and companies for their achievements in computing power technology [12][13]

Future Outlook
- The China Intelligent Computing Industry Alliance plans to continue promoting the development of the computing power industry, focusing on practical applications and addressing technological challenges [14]
- The conference concluded with a strong emphasis on collaboration and innovation to drive the growth of the computing power economy in China [15]
Pingao's New Approach to Combining Software and Hardware Helps Drive Breakthrough Progress in AI
Quan Jing Wang· 2025-10-21 09:36
Core Insights
- The company, Pingao Co., Ltd. (688227.SH), has disclosed its technological advancements in the AI sector, focusing on software and algorithm optimization to reduce reliance on high-performance hardware, particularly in the context of overseas bans on high-end chips [1]
- The industry trend indicates that as AI software capabilities improve, hardware performance requirements fall, since modern AI algorithms show increased tolerance for hardware errors and noise [2]
- Pingao's approach combines domestic chips with software optimization, allowing less powerful domestic chips to achieve higher performance through innovative software solutions [3]

Industry Trends
- Industry consensus is shifting toward software optimization to enhance the efficiency of domestic chips, with various domestic enterprises and research institutions investing in this direction [2]
- Notable advancements include Tsinghua University's "Bagua Furnace" training system and "Chitu" inference engine, which optimize the efficiency of domestic computing power [2]

Company Solutions
- Pingao's "Pingyuan AI All-in-One Machine," developed in collaboration with Jiangyuan Technology, exemplifies the company's strategy, achieving a 30% increase in response speed for the DeepSeek-R1 model and a 2.5x improvement in energy efficiency compared with mainstream GPUs [3]
- The company has developed the BingoAIInfra intelligent computing power scheduling platform, which improves the utilization of domestic hardware through precise management of GPU resources [4]

Ecosystem Layout
- Pingao is building a comprehensive "hardware-software-ecosystem" system to ensure the sustainable development of its technology, including strategic investments in domestic chip companies and collaboration on optimizing inference algorithms [5]
- The company's Pingao Cloud operating system supports a wide range of domestic heterogeneous chip servers and applications, creating a self-controlled ecosystem that mitigates risks from overseas technology restrictions [5]

Conclusion
- Amid the rapid growth of the digital economy and AI industry, Pingao's approach not only achieves technological breakthroughs but also offers a viable path for mainstream AI applications to move from reliance on overseas high-end hardware to domestic chips, driving the autonomous development of domestic AI computing power [6]
Engineering Challenges Across the Full Chain of Inference, Training, and Data: Who Is Building the Foundational Capabilities of China's AI? | AICon Beijing
AI Qian Xian· 2025-06-16 07:37
Core Viewpoint
- The rapid evolution of large models has shifted the focus from the models themselves to systemic issues such as slow inference, unstable training, and data migration challenges, which are critical to the scalable implementation of the technology [1]

Group 1: Key Issues in Domestic AI
- Domestic AI faces challenges including computing power adaptation, system fault tolerance, and data compliance, all essential to its practical application [1]
- The AICon conference will address seven key topics on the infrastructure of domestic AI, including native adaptation of domestic chips for inference and cloud-native evolution of AI data foundations [1]

Group 2: Presentations Overview
- The "Chitu Inference Engine" talk by Qingcheng Jizhi covers efficiently deploying FP8-precision models on domestic chips, reducing reliance on NVIDIA's Hopper architecture [4]
- Huawei's "DeepSeek" session will discuss performance optimization strategies for running large models on domestic computing platforms [5][6]
- JD Retail's presentation will cover the technical challenges and optimization practices for high throughput and low latency in large language models used in retail applications [7]
- Alibaba's session will explore the design and future development of reinforcement learning systems, emphasizing the complexity of algorithms and system requirements [8]
- The "SGLang Inference Engine" talk will present an efficient open-source deployment solution that integrates advanced technologies to reduce inference costs [9]
- Ant Group will share insights on stability practices in large model training, focusing on distributed training fault tolerance and performance analysis tools [10]
- Zilliz will discuss the evolution of data infrastructure for AI, including vector data migration tools and cloud-native data platforms [11]