Artificial General Intelligence (AGI)
After falling out with Ilya, Altman urgently recruits a "doomsday chief" at roughly 4 million RMB a year — "hell mode" from day one
36Ke· 2025-12-29 09:02
Altman is putting up 4 million RMB to buy OpenAI a "safety insurance policy"! Recently, Altman posted that he is recruiting a Head of Preparedness for OpenAI. The role pays a base salary of $555,000 a year plus equity, roughly 4 million RMB and up. In the job post he specifically called out two things observed over the past year:
- the models' potential impact on mental health;
- the models' computer-security capabilities reaching a new level, to the point where they can already find "high-severity vulnerabilities".

Altman stressed that OpenAI now has a solid foundation for measuring capability growth, but the next challenge is preventing those capabilities from being abused and minimizing the harms, both in the product and in the real world, while still letting everyone enjoy the enormous benefits they bring. He called it a huge, almost unprecedented problem, a world that demands "finer understanding and more granular measurement".

In Silicon Valley, a "$555,000 base salary plus equity" is an extremely rare high-base executive package; the higher the base, the scarcer the role and the broader its responsibilities tend to be. Although OpenAI has not disclosed the size of the equity grant, total compensation for the position could reach the million-dollar level. Matching the high pay is exceptionally demanding work. Altman's framing of the role is "high pressure" and "straight into the deep end": "This will be a stressful job, and you ...
OpenAI executive: the bottleneck for artificial general intelligence is that humans can't type fast enough
Huan Qiu Wang Zi Xun· 2025-12-15 10:00
[Huanqiu.com Tech] December 15 — According to Business Insider, Alexander Embiricos, head of product for OpenAI Codex, said recently that the currently "underrated limiting factor" for artificial general intelligence (AGI) is "human typing speed". Embiricos argues that human typing speed constrains the pace of AGI, because people still have to rely on prompts to steer and review the AI's work. "If we can rebuild systems so that agents are useful by default, we can start to unlock the hockey-stick effect," he said. The hockey-stick effect describes a demand pattern in which sales are low early in a fixed period and then surge at its end, tracing a demand curve shaped like a hockey stick. Embiricos said there is no simple path to fully automated workflows and each use case needs its own approach, but he expects people to see clear progress soon. (Sihan) Source: Huanqiu.com ...
Xinhua Finance Evening Report: two ministries roll out measures to tighten oversight of pre-payments at elderly-care institutions, to be collected in principle through custodian banks
Xin Hua Cai Jing· 2025-11-19 10:36
Domestic News
- The Ministry of Finance successfully issued €4 billion in sovereign bonds in Luxembourg on November 18, with a total subscription amount of €100.1 billion, 25 times the issuance amount, indicating strong international investor interest [1]
- The State Administration for Market Regulation reported that there are over 220 national standards related to children in China, including 45 mandatory standards, covering various aspects of children's daily life and safety [1]
- The Ministry of Civil Affairs and the Financial Regulatory Bureau jointly issued guidelines for the supervision of pre-charges in elderly care institutions, requiring that funds be collected through designated bank accounts to enhance regulatory oversight [3]

Industry News
- The China Vaccine Industry Association issued an initiative to oppose "involutionary" competition, urging members to adhere to pricing laws and maintain market price stability by avoiding bids below cost [2]
- The State Administration for Market Regulation plans to strengthen the quality and safety supervision of toy products in accordance with mandatory national standards, aiming for significant improvements in safety levels by 2027 [2]
New model "sweeps the leaderboards": Yicai in conversation with the Google team on how AI's "new standard-bearer" was born
第一财经· 2025-11-19 06:20
Core Viewpoint
- Google has officially launched Gemini 3, a significant advancement in AI models that has achieved leading performance across major benchmarks, potentially reshaping the competitive landscape in the AI industry [3][5][21].

Group 1: Model Performance
- Gemini 3 Pro has outperformed competitors in various benchmarks, achieving 37.5% in the "Humanity's Last Exam" without tools, significantly ahead of GPT-5.1 at 26.5% [9].
- In the GPQA Diamond test, Gemini 3 Pro scored 91.9%, surpassing GPT-5.1's 88.1%, indicating its strong capabilities in scientific and mathematical problem-solving [10].
- The model has set new records in multimodal understanding, scoring 81% in MMMU-Pro and 87.6% in Video-MMMU, showcasing its advanced reasoning abilities [11].

Group 2: User Experience and Applications
- Users have reported exceptional experiences with Gemini 3, including generating complex game designs and web applications with minimal prompts, highlighting the model's practical utility [12][14].
- The model is designed to assist users in handling complex, multi-step tasks, such as organizing emails and purchasing tickets, which demonstrates its potential for everyday applications [15].

Group 3: Strategic Moves and Market Impact
- Google has integrated Gemini 3 into its search engine and launched a new AI programming product called Antigravity, indicating the model's readiness for commercial applications [17][19].
- The launch has led to increased market speculation about Google's competitive position in the AI programming space, particularly against companies like Anthropic [21].
- Following the launch, Loop Capital upgraded Google's parent company rating from "hold" to "buy," reflecting confidence in Gemini's impact on the company's market performance [22].

Group 4: Technological Advancements
- Google's success in rapidly advancing from a follower to a leader in AI is attributed to its differentiated full-stack technology approach, which includes hardware investments and advanced TPU networks [23].
- The company emphasizes that the speed of AI development is accelerating, with models like Gemini 3 enabling new applications and enhancing existing capabilities [24].
New model "sweeps the leaderboards": a conversation with the Google team on how AI's "new standard-bearer" was born
Xin Lang Ke Ji· 2025-11-19 05:49
Core Insights
- Google has officially launched Gemini 3, a significant advancement in AI, which is expected to redefine the competitive landscape in the AI industry, with predictions that it will be hard for competitors to surpass its performance in the next six months [1][4][20].

Performance Metrics
- Gemini 3 Pro has achieved top rankings across major benchmark tests, outperforming competitors like GPT-5.1 and Claude Sonnet 4.5 in various categories, including academic reasoning and scientific knowledge [5][7][8].
- In the "Humanity's Last Exam," Gemini 3 Pro scored 37.5%, leading GPT-5.1, which scored 26.5%, by 11 percentage points [7].
- The model scored 91.9% in the GPQA Diamond test, indicating high reliability in solving scientific and mathematical problems [8].
- It also excelled in multimodal understanding, achieving 81% in MMMU-Pro and 87.6% in Video-MMMU [9].

User Experience and Applications
- Users have reported exceptional experiences with Gemini 3, including generating complex web applications and 3D visualizations with minimal prompts, showcasing its advanced capabilities [11][17].
- The model is designed to handle multi-step complex tasks, which is a key strength, and it has been integrated into Google Search and a new AI programming product called Antigravity [19][20].

Market Impact and Future Outlook
- The launch of Gemini 3 has led to increased market confidence in Google, with analysts upgrading the company's stock rating and target price, reflecting a positive outlook on its AI capabilities [26].
- Google has seen significant user engagement, with over 650 million monthly active users and 13 million developers building applications based on Gemini [26].
- The company aims to leverage its extensive user base and product ecosystem to drive AI adoption, positioning itself as a leader in the AI market [26][27].

Competitive Landscape
- Google's advancements have raised concerns among competitors like Anthropic, which previously held a leading position in AI programming tools [25].
- The company believes that its differentiated full-stack technology approach, from hardware to model training, has been crucial in achieving its rapid progress [27][28].
- Gemini 3 is viewed as a step towards achieving general artificial intelligence (AGI), with Google potentially outpacing competitors like OpenAI and xAI [29].
OpenAI launches its latest AI model, GPT-5
Xin Hua She· 2025-08-08 02:04
Core Insights
- OpenAI has released its latest AI model, GPT-5, which is claimed to be the most powerful AI system to date, surpassing previous models in various benchmark tests [1]
- GPT-5 demonstrates industry-leading performance in programming, mathematics, writing, health, and visual perception, with significant advancements in reducing hallucinations, enhancing instruction execution, and curbing sycophantic ("flattery") tendencies [1]
- The model employs a unified system architecture that integrates an efficient foundational model, deep reasoning modules, and a real-time routing system that decides when to respond quickly and when to engage deep reasoning for expert-level answers (a toy routing sketch follows this summary) [1]
- OpenAI's CEO, Sam Altman, describes GPT-5 as "the best model in the world," marking an important step towards developing Artificial General Intelligence (AGI) [1]
- GPT-5 is available for free to users, with different subscription tiers offering varying levels of access and features, including a more powerful GPT-5 pro version for Pro subscribers [1]

Limitations and Comparisons
- Reports indicate that GPT-5's new features primarily represent improvements over existing functionalities of ChatGPT and other AI systems [2]
- There are still critical limitations in areas such as persistent memory, autonomy, and cross-task adaptability [2]
- Comparisons with other leading AI models suggest that GPT-5 may be on par with competitors, and its superiority remains to be fully evaluated [2]
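The "unified architecture" bullet above describes a fast path and a deep-reasoning path joined by a real-time router. The sketch below is a hedged illustration of that idea only: the heuristic, the model names, and the call_model helper are invented for this example and do not reflect OpenAI's actual implementation.

```python
# Illustrative router between a fast model and a deep-reasoning model.
# All names and the routing rule are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    model_used: str

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real inference call.
    return f"[{model}] response to: {prompt[:40]}"

def route(prompt: str, force_reasoning: bool = False) -> Answer:
    # Toy rule: long or explicitly hard prompts go to the deep-reasoning
    # path; everything else gets a fast answer.
    hard_markers = ("prove", "step by step", "debug", "derive")
    needs_reasoning = (
        force_reasoning
        or len(prompt.split()) > 200
        or any(m in prompt.lower() for m in hard_markers)
    )
    model = "deep-reasoner" if needs_reasoning else "fast-base"
    return Answer(call_model(model, prompt), model)

print(route("What's the capital of France?").model_used)              # fast-base
print(route("Prove the sum of two even numbers is even.").model_used)  # deep-reasoner
```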
XBIT update: U.S. stock prices up a cumulative 20%, Hong Kong stablecoin licenses to be issued next year
Sou Hu Cai Jing· 2025-07-31 01:39
Core Insights
- The U.S. stock market has shown resilience with a cumulative increase of 20% since April 2025, driven primarily by strong performance in the technology sector, particularly Nvidia, which has a market capitalization exceeding $3.89 trillion [1][2]
- The recent passage of the "One Big Beautiful Bill Act" by the U.S. House of Representatives is expected to provide clear policy benefits to cyclical sectors such as energy, industrials, finance, and consumer goods, laying a foundation for performance growth in these areas [4]
- The deepening integration of technology giants into the artificial intelligence sector, with significant investments from companies like Tesla and Microsoft, may lift valuation expectations for the smart-vehicle and AI technology sectors [5][6]

Market Dynamics
- The U.S. stock market's price fluctuations reflect underlying economic fundamentals and have significant spillover effects on global capital flows, making it a critical reference point for investors in the cryptocurrency market [2]
- The regulatory progress on stablecoins, particularly in Hong Kong, is anticipated to accelerate the normalization of the stablecoin market, which could indirectly impact traditional financial markets like U.S. stocks through liquidity transmission [2]
- The potential for the Federal Reserve to lower interest rates, as indicated by recent statements from the U.S. Treasury Secretary, is expected to lower market risk-free rates and drive capital towards equity markets [6][7]

Investment Strategies
- Investors are advised to maintain a multi-dimensional observation approach, tracking macroeconomic data and policy dynamics while also focusing on technological breakthroughs and commercial developments from major tech companies [6]
- A diversified investment strategy across sectors and asset classes is crucial to mitigate risks associated with market volatility, with platforms like XBIT providing tools for asset diversification [6][7]
- Continuous monitoring of policy dynamics related to the tech sector and stablecoins is essential for investors to identify cross-market investment opportunities in a complex financial environment [7]
OpenAI to deploy its one-millionth GPU, with an eye on 100 million?
半导体行业观察· 2025-07-22 00:56
Core Viewpoint
- OpenAI aims to deploy over 1 million GPUs by the end of this year, significantly increasing its computational power and solidifying its position as the largest AI computing consumer globally [2][4].

Group 1: GPU Deployment and Market Impact
- Sam Altman announced that OpenAI plans to bring over 1 million GPUs online, five times the capacity of xAI's Grok 4 model, which runs on approximately 200,000 Nvidia H100 GPUs [2].
- The estimated cost for 100 million GPUs is around $3 trillion, comparable to the GDP of the UK, highlighting the immense financial and infrastructural challenges involved (a back-of-envelope check follows this summary) [5].
- OpenAI's current data center in Texas is the largest single facility globally, consuming about 300 megawatts of power, with expectations to reach 1 gigawatt by mid-2026 [5][6].

Group 2: Strategic Partnerships and Infrastructure
- OpenAI is not solely reliant on Nvidia hardware; it has partnered with Oracle to build its own data centers and is exploring Google's TPU accelerators to diversify its computing stack [6].
- The rapid pace of development in AI infrastructure is evident: a company with 10,000 GPUs was considered a heavyweight just a year ago, while 1 million GPUs now looks like a stepping stone to even larger goals [6][7].

Group 3: Future Vision and Challenges
- Altman's vision extends beyond current resources, focusing on future possibilities and the need for breakthroughs in manufacturing, energy efficiency, and cost to make the 100 million GPU goal feasible [7].
- The ambitious target of 1 million GPUs by the end of the year is seen as a catalyst for establishing a new baseline in AI infrastructure, which is becoming increasingly diverse [7].
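The $3 trillion estimate above works out to roughly $30,000 per GPU, in the ballpark of an H100-class accelerator's list price. A quick back-of-envelope check, where the per-unit price is an assumed figure chosen for illustration:

```python
# Back-of-envelope check on the $3 trillion figure.
# The per-unit price is an assumption, not a sourced number.
gpus = 100_000_000
price_per_gpu = 30_000               # USD, assumed H100-class list price
total = gpus * price_per_gpu
print(f"${total / 1e12:.1f} trillion")  # $3.0 trillion
```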
The chip industry is being reshaped
半导体行业观察· 2025-07-11 00:58
Core Viewpoint
- The article discusses the rapid advancements in generative artificial intelligence (GenAI) and its implications for the semiconductor industry, highlighting the potential for general artificial intelligence (AGI) and superintelligent AI (ASI) to emerge by 2030, driven by unprecedented performance improvements in AI technologies [1][2].

Group 1: AI Development and Impact
- GenAI's performance is doubling every six months, far outpacing Moore's Law, leading to predictions that AGI will be achieved around 2030, followed by ASI (a quick comparison of the two growth rates follows this summary) [1].
- The rapid evolution of AI capabilities is evident, with GenAI outperforming humans in complex tasks that previously required deep expertise [2].
- The demand for advanced cloud SoCs for training and inference is expected to reach nearly $300 billion by 2030, with a compound annual growth rate of approximately 33% [4].

Group 2: Semiconductor Market Dynamics
- The surge in demand for GenAI is disrupting traditional assumptions about the semiconductor market, demonstrating that advancements can occur overnight [5].
- The adoption of GenAI has outpaced earlier technologies, with 39.4% of U.S. adults aged 18-64 reporting usage of generative AI within two years of ChatGPT's release, marking it as the fastest-growing technology in history [7].
- Geopolitical factors, particularly U.S.-China tech competition, have turned semiconductors into a strategic asset, with the U.S. implementing export restrictions to hinder China's access to AI processors [7].

Group 3: Chip Manufacturer Strategies
- Various strategies are being employed by chip manufacturers to maximize output, with a focus on performance metrics such as PFLOPS and VRAM [8][10].
- NVIDIA and AMD dominate the market with GPU-based architectures and high HBM memory bandwidth, while AWS, Google, and Microsoft utilize custom silicon optimized for their data centers [11][12].
- Innovative architectures are being pursued by companies like Cerebras and Groq, with Cerebras achieving a single-chip performance of 125 PFLOPS and Groq emphasizing low-latency data paths [12].
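For scale, here is a quick comparison of the two growth rates mentioned above: doubling every six months compounds far faster than a classical Moore's-Law cadence. The five-year horizon and the two-year Moore's-Law doubling period are assumptions chosen for illustration, not figures from the article.

```python
# Compare a 6-month doubling cadence with a ~2-year Moore's-Law cadence
# over an assumed five-year horizon (2025 -> 2030).
years = 5
genai_doublings = years / 0.5        # doubling every 6 months
moore_doublings = years / 2.0        # doubling every ~2 years
print(f"GenAI: ~{2 ** genai_doublings:.0f}x")        # ~1024x
print(f"Moore's Law: ~{2 ** moore_doublings:.1f}x")  # ~5.7x
```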
ICML 2025 | 1000x length generalization! Ant Group's new GCA attention mechanism delivers precise understanding of 16M-token contexts
机器之心· 2025-06-13 15:45
Core Viewpoint
- The article discusses the challenges of long text modeling in large language models (LLMs) and introduces a new attention mechanism called Grouped Cross Attention (GCA) that enhances the ability to process long contexts efficiently, potentially paving the way for advancements in artificial general intelligence (AGI) [1][2].

Long Text Processing Challenges and Existing Solutions
- Long text modeling remains challenging due to the quadratic complexity of the Transformer architecture and the limited extrapolation capabilities of full-attention mechanisms [1][6].
- Existing solutions, such as sliding window attention, sacrifice long-range information retrieval for continuous generation, while other methods have limited generalization capabilities [7][8].

GCA Mechanism
- GCA is a novel attention mechanism that learns to retrieve and select relevant past segments of text, significantly reducing memory overhead during long text processing [2][9].
- The mechanism operates in two stages: it first attends to each retrieved chunk separately, and then fuses the information from those chunks to predict the next token (an illustrative sketch follows this summary) [14][15].

Experimental Results
- Models incorporating GCA demonstrated superior performance on long text datasets, achieving over 1000x length generalization and 100% accuracy in a 16M-token context retrieval task [5][17].
- The GCA model's training cost scales linearly with sequence length, and its inference memory overhead approaches a constant, maintaining efficient processing speeds [20][21].

Conclusion
- The introduction of GCA represents a significant advancement in the field of long-context language modeling, with the potential to facilitate the development of intelligent agents with permanent memory capabilities [23].
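To make the two-stage description above concrete, here is a minimal PyTorch sketch of a grouped-cross-attention-style lookup: score past chunks against the current query, attend inside the top-k retrieved chunks, and fuse the results into one context vector. The chunk size, top-k value, mean-pooled chunk summaries, and the gca_step helper are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a GCA-style retrieval-and-fusion step over past chunks.
import torch
import torch.nn.functional as F

def gca_step(query, past_chunks, chunk_summaries, top_k=2):
    """
    query:           (d,)                      hidden state for the current position
    past_chunks:     (n_chunks, chunk_len, d)  cached token states per past chunk
    chunk_summaries: (n_chunks, d)             one summary vector per past chunk
    Returns a context vector fused from the top-k most relevant past chunks.
    """
    # Stage 1: score past chunks against the query and keep only the top-k,
    # so compute and memory stay bounded no matter how long the history is.
    scores = chunk_summaries @ query                       # (n_chunks,)
    top_scores, top_idx = scores.topk(min(top_k, len(scores)))
    weights = F.softmax(top_scores, dim=-1)                # relevance weights

    # Stage 2: attend inside each retrieved chunk separately, then fuse the
    # per-chunk results with the relevance weights into one context vector.
    contexts = []
    for idx in top_idx:
        tokens = past_chunks[idx]                          # (chunk_len, d)
        attn = F.softmax(tokens @ query / tokens.shape[-1] ** 0.5, dim=-1)
        contexts.append(attn @ tokens)                     # (d,)
    fused = (weights.unsqueeze(-1) * torch.stack(contexts)).sum(dim=0)
    return fused                                           # feeds next-token prediction

# Toy usage: 8 past chunks of 16 tokens with hidden size 32.
d, n_chunks, chunk_len = 32, 8, 16
past = torch.randn(n_chunks, chunk_len, d)
summaries = past.mean(dim=1)
ctx = gca_step(torch.randn(d), past, summaries)
print(ctx.shape)  # torch.Size([32])
```

Because only the top-k chunks are ever attended to at each step, the per-step cost stays roughly constant as the history grows, which is the property the linear training cost and near-constant inference memory claims above depend on.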