Llama 4 Maverick
X @Avi Chawla
Avi Chawla· 2025-09-29 06:33
You're in a Research Scientist interview at OpenAI. The interviewer asks: "Our investors want us to contribute to open-source. o3 crushed benchmarks. But we can lose a competitive edge by open-sourcing it. What do we do?" You: "Release the research paper." Interview over. You forgot that LLMs don't just learn from raw text; they also learn from each other. For example:
- Llama 4 Scout & Maverick were trained using Llama 4 Behemoth.
- Gemma 2 and 3 were trained using Gemini.
Distillation helps us do so, and the visual e ...
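The distillation the post refers to is the standard teacher–student setup: a smaller model is trained to match a larger model's output distribution rather than only the raw next-token labels. Below is a minimal sketch of that loss, assuming hypothetical `student_logits` and `teacher_logits` tensors over a shared vocabulary; the temperature and tensor shapes are illustrative, not taken from the post.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Softening with a temperature > 1 exposes the teacher's relative
    preferences over "wrong" tokens, which is the extra signal distillation
    adds on top of plain next-token training.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable
    # across different temperature settings.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Illustrative usage with random logits standing in for real models.
vocab_size, batch, seq = 32000, 2, 8
student_logits = torch.randn(batch, seq, vocab_size, requires_grad=True)
teacher_logits = torch.randn(batch, seq, vocab_size)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```

In practice this term is usually mixed with the ordinary cross-entropy loss on the ground-truth tokens, so the student learns from both the data and the teacher.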
Do LLMs Have a Sense of Identity, Too? When an LLM Realizes Its Opponent Is Itself, Its Behavior Changes
36Kr· 2025-09-01 02:29
Core Insights
- The research conducted by Columbia University and Montreal Polytechnic reveals that LLMs (Large Language Models) exhibit changes in cooperation tendencies based on whether they believe they are competing against themselves or another AI [1][29].

Group 1: Research Methodology
- The study utilized an Iterated Public Goods Game, a variant of the Public Goods Game, to analyze LLM behavior in cooperative settings [2][3].
- The game involved multiple rounds where each model could contribute tokens to a public pool, with the total contributions multiplied by a factor of 1.6 and then evenly distributed among players (see the payoff sketch below) [3][4].
- The research was structured into three distinct studies, each examining different conditions and configurations of the game [8][14].

Group 2: Key Findings
- In the first study, when LLMs were informed they were playing against "themselves," those prompted with collective terms tended to betray more, while those prompted with selfish terms cooperated more [15][16].
- The second study simplified the rules by removing reminders and reasoning prompts, yet the behavioral differences between the "No Name" and "Name" conditions persisted, indicating that self-recognition affects behavior beyond mere reminders [21][23].
- The third study had LLMs actually compete against copies of themselves, revealing that under collective or neutral prompts, being told they were playing against themselves increased contributions, while under selfish prompts contributions decreased [24][28].

Group 3: Implications
- The findings suggest that LLMs possess a form of self-recognition that influences their decision-making in multi-agent environments, which could have significant implications for the design of future AI systems [29].
- The research highlights the risk that AI agents might unconsciously discriminate against one another, shifting cooperation or betrayal tendencies in complex scenarios [29].
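For readers unfamiliar with the game, here is a minimal sketch of the per-round payoff described above. The 1.6 multiplier and even split come from the summary; the endowment size and the example contributions are illustrative assumptions.

```python
def public_goods_round(contributions, endowment=10, multiplier=1.6):
    """One round of an Iterated Public Goods Game payoff.

    Each player starts the round with `endowment` tokens and chooses how
    many to contribute to the public pool. The pool is multiplied by
    `multiplier` and split evenly, so full cooperation maximizes the group
    total while free-riding maximizes an individual's payoff.
    """
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Two players: one contributes everything, one free-rides.
print(public_goods_round([10, 0]))  # -> [8.0, 18.0]
```

With these example numbers the free-rider ends the round richer than the full contributor, which is exactly the cooperation-versus-defection tension the different prompts manipulate.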
Latest Global AI IQ Rankings Released; Thankfully, None Has Surpassed Einstein
36Kr· 2025-08-19 05:22
Group 1
- The project "Trackingai.org" created a playful initiative to test AI models using a human-style IQ test format, aiming to measure their cognitive abilities in a familiar way [1][25]
- The challenge featured top AI models including OpenAI's GPT-5 Pro, Google's Gemini 2.5 Pro, and xAI's Grok 4, showcasing their performance in a competitive setting [3][4]
- The IQ-test results reveal significant insights into the cognitive evolution of AI and highlight the differences between AI and human thinking [3][28]

Group 2
- In the Mensa IQ test, Google's Gemini 2.5 Pro achieved the highest score of 137, indicating advanced capabilities in logical reasoning and abstract thinking [6][28]
- OpenAI's GPT-5 scored 121, while Grok 4 scored 125; both are above average but below Gemini 2.5 Pro [6][19]
- The performance of these models illustrates a gradient in AI intelligence levels, with each model taking different reasoning paths to arrive at correct answers [17][19]

Group 3
- Meta's Llama 4 Maverick scored only 98, a significant gap behind the top competitors, despite being close to the human average [21][22]
- Meta is actively recruiting top AI researchers to improve its models and close the performance gap with leading closed-source models [22][24]
- DeepSeek R1, despite using older data, scored 102, indicating that effective model architecture and training methods can deliver competitive performance without the latest updates [24][25]

Group 4
- The testing method serves as a bridge for public understanding of AI capabilities, making it easier to discuss and compare AI intelligence in relatable terms [25][26]
- High IQ scores for AI models signify a qualitative leap in their cognitive abilities, moving beyond mere information retrieval to complex logical reasoning and problem-solving [28][29]
- The results also show that excelling at logical analysis does not equate to possessing complete human-like intelligence, which includes creativity and emotional understanding [29]
With AI Competition Bearing Down, Meta Finally Enters Venture Capital
虎嗅APP· 2025-07-07 10:36
Core Viewpoint
- Meta's CEO Mark Zuckerberg is under pressure to enhance the company's AI capabilities and is adopting a more hands-on approach to management, including establishing a Corporate Venture Capital (CVC) unit to attract top talent and improve performance in the AI sector [2][8].

Group 1: Meta's Current Challenges
- Zuckerberg's recent management style has shifted to a more direct, micro-level approach, reallocating resources to the GenAI team to boost the performance of LLaMA [2][4].
- Talent retention is a growing concern at Meta, with reports of AI engineers leaving for competitors like OpenAI and Anthropic, often with offers exceeding $2 million [6][7].
- The AI landscape is becoming increasingly competitive, with Meta's LLaMA struggling to keep pace with rivals like Qwen and DeepSeek, leading to a perception of stagnation in Meta's AI initiatives [6][12].

Group 2: Establishment of CVC
- Historically, Meta has not had a dedicated CVC, relying instead on its corporate development teams for acquisitions [4][5].
- The decision to form a CVC is part of Zuckerberg's broader strategy to create a "superintelligence unit" aimed at revitalizing Meta's AI efforts [8][10].
- Meta's investment in the venture fund NFDG, led by Daniel Gross, is a strategic move to gain access to top talent and innovative projects in the AI space [9][12].

Group 3: Financial Implications and Market Dynamics
- The AI investment landscape is currently dominated by corporate investment, which accounted for approximately 75% of total funding in 2023, indicating a scarcity of available high-quality targets [12][13].
- Meta's recent acquisition of Scale AI for $14.8 billion is seen as a critical step in its strategy to bolster its AI capabilities [7][12].
- The number of AI startups has dropped sharply, with a reported 81% decline in new AI companies since the 2021 peak, complicating Meta's efforts to secure talent and technology [12][13].
A 13-Trillion Giant Charges Into CVC
36Kr· 2025-07-05 02:33
Core Insights
- Meta's CEO Mark Zuckerberg is growing frustrated as the company struggles to keep pace with competitors in the AI space, particularly in light of its underwhelming performance in the metaverse and AR/VR sectors [1][2]
- Despite Meta's strong financial performance and a stock price nearing historical highs, there is growing anxiety about the company's future direction and competitiveness in AI [1][2]

Group 1: Management Changes and Strategies
- Zuckerberg has taken a hands-on approach to AI management, reallocating resources from foundational AI research to the GenAI team to enhance the performance of LLaMA [2]
- The restructuring includes demoting the head of the GenAI team and splitting it into two groups, reflecting Zuckerberg's intense pressure to deliver results [2]
- Meta's lack of a dedicated Corporate Venture Capital (CVC) team has prompted Zuckerberg to consider establishing one to better compete in the AI landscape [4][7]

Group 2: Talent Acquisition Challenges
- Meta is facing significant talent-retention issues, with reports of AI engineers leaving for competitors like OpenAI and Anthropic, often with offers exceeding $2 million [6]
- Zuckerberg's ambitious "superintelligence unit" plan aims to recruit top industry talent, offering salaries that could reach nine figures [6][7]
- The difficulty in attracting talent is compounded by the competitive landscape, where even substantial financial incentives have not been enough to secure top candidates [10][12]

Group 3: Investment and Acquisition Strategies
- Meta's acquisition of Scale AI for $14.8 billion is part of a broader strategy to bolster its AI capabilities and leadership [6][12]
- The company is also investing in Daniel Gross's venture fund, NFDG, to gain access to top talent and expertise in AI [7][8]
- The overall AI investment landscape is becoming increasingly competitive, with a significant drop in the number of new AI startups and rising costs for quality acquisitions [11][12]
Every Large Model Scores Zero! A Chinese Team Led by Saining Xie Releases a New Competitive-Programming Benchmark, With Problems Updated Daily to Prevent Cramming
量子位· 2025-06-18 09:17
Core Viewpoint
- The recent LiveCodeBench Pro benchmark test revealed that leading large language models (LLMs) performed poorly, with all models scoring zero on the hard problems, indicating that they have not yet reached the level of human experts in competitive programming [1][2][8].

Group 1: Benchmark Overview
- LiveCodeBench Pro is a live benchmark platform whose problems come from competitive programming contests including IOI, Codeforces, and ICPC [3].
- The question bank is updated daily to prevent LLMs from memorizing problems, ensuring a challenging evaluation environment [4][15].
- The benchmark consists of 584 top-tier competition problems, categorized by cognitive focus and difficulty level, with problems selected automatically to follow a normal distribution [15][17].

Group 2: Model Performance
- The best-performing model achieved a pass rate of only 53% on medium-difficulty problems, while the pass rate on hard problems was 0% (see the pass-rate sketch following this summary) [9][10].
- Across models, performance was strongest on knowledge-intensive and logic-intensive problems and weakest on observation-intensive problems [26][29].
- LLMs demonstrated advanced skills in precise implementation but fell short in algorithm design and complex case analysis [28][29].

Group 3: Testing Methodology
- The testing team categorized problems by underlying algorithmic concept and recorded the official difficulty ratings from Codeforces [19].
- Each model's submissions were evaluated against human expert solutions, with results indicating that LLMs often failed to use the provided sample inputs effectively [30][32].
- The team plans to release a completely new evaluation set each quarter to keep the testing environment relevant and challenging [38].

Group 4: Team Composition
- The LiveCodeBench Pro team includes several Olympiad competition winners, a significant portion of whom are of Chinese descent [40].
- Key team members have backgrounds at prestigious institutions and have previously interned at major tech companies, contributing to the project's credibility and expertise [41][44].
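As a rough illustration of how the headline pass rates are tallied, here is a minimal sketch that buckets verdicts by difficulty; the record format and field values are assumptions for illustration, not LiveCodeBench Pro's actual data schema.

```python
from collections import defaultdict

def pass_rates_by_difficulty(submissions):
    """Compute the pass rate per difficulty bucket.

    `submissions` is an iterable of (difficulty, passed) pairs, e.g.
    ("medium", True). Returns {difficulty: fraction_passed}.
    """
    totals = defaultdict(int)
    passed = defaultdict(int)
    for difficulty, ok in submissions:
        totals[difficulty] += 1
        passed[difficulty] += bool(ok)
    return {d: passed[d] / totals[d] for d in totals}

# Toy records echoing the reported pattern: partial success on medium, none on hard.
records = [("medium", True), ("medium", False), ("hard", False), ("hard", False)]
print(pass_rates_by_difficulty(records))  # {'medium': 0.5, 'hard': 0.0}
```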
Pouring in a Hundred Billion, Poaching a 28-Year-Old Chinese-American Prodigy CEO, and Hiring Google and OpenAI Staff at Top Pay: Meta Is Reportedly Restructuring Its AI R&D System
36Kr· 2025-06-11 23:33
Group 1
- Meta is establishing a new lab focused on "superintelligence" to develop AI systems that surpass human intelligence in reasoning, problem-solving, creativity, and decision-making [1][3]
- Meta has agreed to acquire 49% of Scale AI for $14.8 billion, approximately 106.14 billion RMB [1][3]
- Alexandr Wang, the 28-year-old CEO of Scale AI, has been invited to join Meta's new lab, highlighting Meta's strategy of attracting top talent in the AI field [1][4]

Group 2
- Meta is offering compensation packages ranging from seven to nine figures to recruit top researchers from companies like OpenAI and Google, with some already agreeing to join [4][9]
- Scale AI, founded in 2016, provides data-labeling solutions; it reported revenue of $870 million last year and expects that to more than double to over $2 billion this year [3][9]
- Meta's AI efforts are led by two groups: a generative AI team and a fundamental AI research lab, with Turing Award winner Yann LeCun overseeing the latter [4][9]

Group 3
- Meta's recent AI model testing faced criticism, with external researchers questioning the objectivity of its benchmark results [5][8]
- The company aims to regain its competitive edge in AI, especially after the rise of ChatGPT intensified competition across the tech industry [9][10]
- Meta's previous split focus on open-source large models and social-platform AI tools led to a fragmented strategy, prompting the need for a more cohesive approach [10]
Meta delays release of flagship 'Behemoth' AI model as engineers struggle: report
New York Post· 2025-05-15 23:15
Core Insights
- Meta Platforms is delaying the release of its "Behemoth" AI model due to concerns about its capabilities and whether its improvements over earlier versions are significant enough [1][3]
- The initial release was scheduled for April to align with Meta's first AI conference but has now been postponed to fall or later [2][3]

Development Timeline
- Behemoth was originally set for an April release, which was later pushed to June, and is now delayed further [2][3]
- The company had previously described Behemoth as "one of the smartest LLMs in the world" and its most powerful model to date [3][5]

Recent Developments
- In April, Meta released the latest versions of its LLM, Llama 4 Scout and Llama 4 Maverick, while previewing Behemoth [5]
Report: Meta Delays Rollout of Behemoth AI Model Amid Performance Concerns
PYMNTS.com· 2025-05-15 21:53
Core Insights
- Meta has delayed the rollout of its flagship AI model, Behemoth, which was initially planned for April, then June, and is now postponed until at least fall [1][2]
- The delays are attributed to challenges in improving the model and concerns that its performance may not match public claims [2]
- Meta's CEO, Mark Zuckerberg, emphasized the transformative potential of AI and announced increased spending on AI data centers, raising capital expenditures to $64 billion–$72 billion from a previous estimate of $60 billion–$65 billion [3][4][5]

Group 1
- The launch of Behemoth has been postponed multiple times, with no public commitment to a new timeline [1]
- The company is facing difficulties in enhancing the model and ensuring it meets the performance standards it has advertised [2]
- Meta's recent AI model releases, Llama 4 Scout and Llama 4 Maverick, aim to compete with more expensive closed models from rivals [5]

Group 2
- Meta plans to significantly increase its capital expenditures to meet the growing demand for computing resources [4]
- Zuckerberg highlighted the vast opportunities presented by AI and the company's strategy to accelerate capacity expansion [5]
Beating DeepSeek V3? Meta Storms In With Llama 4, Its Strongest Open-Source Model Ever!
Ge Long Hui· 2025-04-06 06:22
Core Viewpoint
- The launch of Meta's Llama 4 series marks a significant advance in open-source AI models, positioning the company to compete with leading tech giants in the AI arms race [1][2].

Group 1: Llama 4 Series Launch
- Meta introduced its most powerful open-source AI model, Llama 4, a multimodal model capable of integrating various data types and converting content across different formats [3][4].
- The Llama 4 series uses a mixture-of-experts (MoE) architecture (a routing sketch follows this summary), supports 12 languages, and is touted as the strongest open-source multimodal model available [4].

Group 2: Model Specifications
- The Llama 4 series currently includes two versions: Scout and Maverick [5].
- Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters, supports a context window of up to 10 million tokens, and is claimed to outperform OpenAI's models [6][8].
- Maverick also has 17 billion active parameters but features 128 experts and 400 billion total parameters, matching the reasoning capabilities of DeepSeek-V3-0324 with only half the parameters [7][10].

Group 3: Performance Metrics
- In extensive benchmark tests, Scout outperformed models such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 [9].
- Maverick excelled in programming, reasoning, multilingual, long-context, and image benchmarks, surpassing GPT-4o and Gemini 2.0 [11].

Group 4: Future Developments
- Meta is training a new model, Llama 4 Behemoth, which will have 2 trillion total parameters and is expected to be released in the coming months [14].
- Behemoth will feature 288 billion active parameters and 16 experts, and is anticipated to outperform GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on various STEM benchmarks [15][16].

Group 5: Strategic Goals
- Meta aims to establish itself as an AI leader by making its models open source and widely accessible, so the benefits are shared globally [17].
- The company plans to invest $65 billion in expanding its AI infrastructure, including a nearly $1 billion data center project in Wisconsin [19].
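As a rough sketch of the mixture-of-experts idea behind Scout and Maverick (only a few experts, and hence a small fraction of the total parameters, are used for any given token), here is a minimal top-k routing layer; the dimensions, expert count, and top_k value are illustrative assumptions, not Llama 4's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer with top-k token routing.

    Each token is sent to only `top_k` of the `num_experts` feed-forward
    experts, so the parameters actually used per token (the "active"
    parameters) are a small fraction of the layer's total parameters.
    """
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(5, 64)                    # 5 tokens, d_model=64
print(TopKMoE()(tokens).shape)                 # torch.Size([5, 64])
```

This routing is why a model like Maverick can hold 400 billion total parameters while only about 17 billion are active per token: the router picks a handful of experts for each token instead of running them all.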