Tencent Research Institute AI Express 20260121
Tencent Research Institute · 2026-01-20 16:03
Group 1
- Musk has fulfilled his promise by open-sourcing the new recommendation algorithm for the X platform, which is 100% AI-driven and removes manual features and rules [1]
- The algorithm uses the Thunder and Phoenix engines to construct the feed, predicting 15 types of user behavior and combining them with weighted scoring; a comment that the author replies to carries 75 times the weight of a like [1]
- Negative feedback such as blocking and reporting sharply reduces visibility, while time spent and genuine interaction are the core metrics, so even small accounts can gain exposure and a large follower base matters less [1]

Group 2
- Zhipu AI has open-sourced the lightweight model GLM-4.7-Flash, with 30 billion total parameters but only 3 billion activated, aimed at local programming and intelligent-assistant use, with free API access [2]
- The model is the first to adopt the MLA architecture from DeepSeek, supports a 200K context window, and scores 59.2 on the SWE-bench code-repair benchmark [2]
- In local deployment tests it runs at 43 tokens per second on Apple's M5 chip and is compatible with HuggingFace, vLLM, and Huawei's Ascend NPUs [2]

Group 3
- MiniMax has unveiled Agent 2.0, positioned as an "AI-native workspace": a desktop application that bridges local and cloud, operating on local files and launching web-automation tasks [3]
- The Expert Agents feature encapsulates private knowledge and industry SOPs into vertical-domain expert avatars, raising domain-expertise scores from 70 to as high as 100 [3]
- Users can customize Expert Agents for a closed loop from research to delivery; desktop versions are available for both Windows and Mac [3]

Group 4
- StepFun (Jieyue Xingchen) has open-sourced the multimodal small model Step3-VL-10B, which with only 10 billion parameters matches and in places surpasses models such as GLM-4.6V (106 billion) and Qwen3-VL (235 billion) across evaluations [4]
- The model combines strong visual perception, deep logical reasoning, and interaction with edge agents, achieving top-tier results in the AIME math competition [4]
- It was trained with 1.2 trillion tokens of full-parameter joint pre-training and over 1,400 reinforcement-learning iterations, plus an innovative PaCoRe parallel coordinated reasoning mechanism; both Base and Thinking versions are open-sourced [4]

Group 5
- Moonshot AI ("Dark Side of the Moon") is raising a new funding round at a $4.8 billion valuation, up $500 million from the $4.3 billion valuation announced just 20 days earlier, with the round expected to close soon [5]
- The company holds over 10 billion yuan in cash and is in no hurry to go public, planning to time its IPO as a lever to accelerate AGI development [5]

Group 6
- Superparameter Technology has launched the game agent COTA, driven entirely by a large model, achieving professional-level performance in FPS games with a visible reasoning chain [6]
- It uses a dual-system hierarchical architecture that mimics human fast and slow thinking: a Commander makes strategic decisions while an Operator executes at millisecond speed, cutting response time to 100 ms [6]
- The product validates the feasibility of large models in high-frequency competitive gaming and offers reference ideas for embodied intelligence and other real-world problems [6]

Group 7
- Microsoft CEO Satya Nadella said at the Davos Forum that mastering model orchestration is essential to building a competitive edge in the AI era [7]
- AI proliferation requires improving "token efficiency per dollar per watt" on the supply side, while the demand side requires companies to drive transformation across concepts, capabilities, and data [7]
- True "enterprise sovereignty" means converting unique experience and knowledge into proprietary AI models so that core value does not flow to model providers [7]

Group 8
- a16z's analysis indicates that while ChatGPT dominates with 800-900 million weekly active users, Gemini is growing at 155%, pointing to a "winner-takes-most" market in AI assistants [8]
- OpenAI's shopping, task, and learning experiences pushed through the ChatGPT interface have not truly broken through, limited by a chatbox interface that cannot deliver a best-in-class product experience [8]
- Successful AI products such as Replit, Suno, and Character AI share a distinct, focused interface, suggesting that startup opportunities lie in deep optimization for specific workflows [8]

Group 9
- Anthropic's research team has found that model personalities can be quantified, with a dominant dimension, the "assistance axis," measuring how strongly a model operates in "intelligent assistant" mode [9]
- Intervening along the assistance axis can control role-playing willingness, significantly reducing harmful response rates and defending against persona jailbreak attacks [9]
- The proposed "activation ceiling" technique lowers the success rate of persona jailbreaks by nearly 60% without significantly impairing model performance, opening new pathways for human control over AI [9]
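Group 9's "activation ceiling" can be illustrated with a toy sketch: cap the component of a hidden-state vector along the "assistance axis" direction, leaving everything orthogonal to it untouched. The function name, vector sizes, and values below are assumptions for illustration only, not Anthropic's actual implementation.

```python
def activation_ceiling(h, axis, ceiling):
    """Cap the component of hidden-state vector `h` along unit vector `axis`.

    Toy sketch of the idea only -- names and shapes are assumptions,
    not Anthropic's implementation.
    """
    proj = sum(hi * ai for hi, ai in zip(h, axis))  # component along the axis
    if proj <= ceiling:
        return list(h)                              # under the cap: unchanged
    excess = proj - ceiling
    return [hi - excess * ai for hi, ai in zip(h, axis)]  # subtract only the excess

# Toy usage: a 4-d hidden state whose "assistance axis" component exceeds the cap
axis = [1.0, 0.0, 0.0, 0.0]
h = [5.0, 2.0, -1.0, 0.5]
h_capped = activation_ceiling(h, axis, ceiling=3.0)
# the axis component is clamped to 3.0; orthogonal components are untouched
```

The key design point of such a ceiling (as opposed to always projecting the axis out) is that activations below the threshold pass through unchanged, which is consistent with the reported lack of performance degradation.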
Musk delivers on his promise: X recommendation algorithm open-sourced, 100% AI-driven, zero manual rules
36Kr · 2026-01-20 12:09
Core Insights
- The new AI-driven recommendation algorithm for the X platform has been officially open-sourced, marking a significant shift from manual rules to an AI-driven system [1][36]
- The algorithm uses a dual-engine approach: Thunder for follower content and Phoenix for global discovery [36]

Algorithm Changes
- The algorithm is now entirely AI-driven, removing all manually designed features and most human-written rules [2][36]
- Previous manual tuning and recommendation rules have been eliminated, allowing the Grok-based Transformer model to learn directly from user interaction history [3][36]

Information Flow Sources
- The "For You" feed draws on two main sources: Thunder, which covers content from followed accounts, and Phoenix, which surfaces content users may like from accounts they do not follow [7][36]
- Thunder ensures real-time access to new posts from followed accounts, while Phoenix uses machine learning to find relevant posts from a global pool [7][36]

Scoring Mechanism
- The algorithm predicts 15 different user actions per post, with the final score computed as a weighted sum of these predictions [9][11]
- Negative feedback, such as blocking or muting an author, sharply reduces a post's visibility [12][14]

Key Algorithm Mechanisms
- Posting many times in quick succession is penalized, as the algorithm aims to promote content diversity [15][16]
- Each post is scored independently, so a high-performing post does not suppress the visibility of others [17][18]
- Engagement metrics such as time spent on a post are crucial in determining exposure [19][20]
- Posts a user has already seen are not recommended again, keeping each refresh fresh [23][24]

Content Filtering
- A two-stage filtering process removes duplicates, irrelevant content, and posts the user cannot access [27][28]

Design Principles
- The algorithm is built on five core design principles, including zero manual feature engineering and candidate isolation to ensure posts are scored independently [30][31][32][33][34]

Implications for Content Creators
- To maximize exposure, creators should focus on engaging content that invites interaction and avoid practices like spamming or posting external links [35]
- Open-sourcing the algorithm is a milestone in social-media transparency, in line with the company's commitment to openness since the acquisition of Twitter [36][37]
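The weighted-sum scoring described above can be sketched in a few lines of Python. All behavior names and weight values here are illustrative assumptions, except the 75:1 author-reply-to-like ratio reported elsewhere in this document; the real model predicts ~15 engagement probabilities per post.

```python
# Hypothetical sketch of the weighted-sum ranking described above.
# Weights are made up, except that an author reply is weighted 75x a like
# and negative feedback (block/report) carries large negative weight.
WEIGHTS = {
    "like": 1.0,
    "reply": 13.5,
    "author_reply": 75.0,   # 75x the weight of a like, per the article
    "repost": 10.0,
    "dwell": 5.0,
    "block": -100.0,        # negative feedback sharply reduces visibility
    "report": -300.0,
}

def score(predicted_probs):
    """Final score = weighted sum of predicted engagement probabilities."""
    return sum(WEIGHTS.get(action, 0.0) * p
               for action, p in predicted_probs.items())

# Toy candidates: each maps predicted actions to probabilities
post_a = {"like": 0.20, "author_reply": 0.01, "block": 0.001}
post_b = {"like": 0.30, "report": 0.002}
ranked = sorted([("A", post_a), ("B", post_b)],
                key=lambda kv: score(kv[1]), reverse=True)
```

Note how the structure itself explains the article's observations: a rare but heavily weighted signal (an author reply) can outrank raw like volume, and a small predicted probability of a report is enough to sink a post.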
Just now! Musk open-sources the X recommendation algorithm, laying it completely bare
程序员的那些事· 2026-01-20 11:37
Core Viewpoint
- The company has announced the open-sourcing of its recommendation algorithm for X, aiming to enhance transparency and invite community collaboration on its optimization [1]

Group 1: Algorithm Overview
- The recommendation algorithm consists of components such as Home Mixer, Thunder, and Phoenix, sorting information from "followed accounts" and "content mined from the web" using a Grok-based Transformer model [3]
- A notable feature is the elimination of all manually designed features and heuristic rules, relying solely on the model to learn correlations from user interaction history [5]

Group 2: Algorithm Functionality
- The workflow captures user behavior data, sources candidate content from two channels, and applies data augmentation and multi-round filtering to predict engagement probabilities (likes, shares, etc.), ultimately computing a final score for presentation [5]
- The implementation is written in Rust and Python and licensed under Apache-2.0; as of now it has received 1.1k stars and 195 forks on GitHub [5]
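The workflow in Group 2 can be sketched as a minimal pipeline: merge candidates from the two sources, deduplicate and filter, then rank. The component names (Thunder, Phoenix, Home Mixer) come from the article; all data, fields, and logic below are placeholder assumptions, not the actual Rust/Python implementation.

```python
# Minimal sketch of the described flow: two candidate sources feed a mixer
# that dedupes, drops already-seen posts, and ranks the rest.
def thunder_candidates(user):
    """In-network posts from followed accounts (stand-in data)."""
    return [{"id": 1, "seen": False}, {"id": 2, "seen": True}]

def phoenix_candidates(user):
    """Out-of-network posts surfaced by ML (stand-in data; id 1 duplicates Thunder)."""
    return [{"id": 3, "seen": False}, {"id": 1, "seen": False}]

def home_mixer(user, rank):
    """Merge both sources, filter in multiple rounds, then sort by score."""
    kept_ids = set()
    candidates = []
    for post in thunder_candidates(user) + phoenix_candidates(user):
        if post["id"] in kept_ids or post["seen"]:  # dedupe + drop already-seen
            continue
        kept_ids.add(post["id"])
        candidates.append(post)
    return sorted(candidates, key=rank, reverse=True)

# Toy ranking function standing in for the Transformer scorer
feed = home_mixer("alice", rank=lambda p: -p["id"])
```

In the real system the `rank` callable would be the Grok-based Transformer's weighted engagement score, and filtering would span several rounds (quality, access rights, duplicates) rather than the single pass shown here.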
Musk just open-sourced the Grok-based X recommendation algorithm! Experts: ROI is too low, other platforms may not follow
AI Frontline (AI前线) · 2026-01-20 09:36
Core Viewpoint
- Elon Musk has open-sourced the X recommendation algorithm, which combines in-network content from followed accounts with out-of-network content discovered through machine learning, ranked by a Grok-based Transformer model [3][12][18]

Summary by Sections

Algorithm Overview
- The open-sourced algorithm powers the "For You" feed on X, integrating content from followed accounts and broader network sources, ranked by a Grok-based Transformer model [3][5]
- Candidate posts are fetched from two main sources: in-network content (accounts the user follows) and out-of-network content (discovered through machine learning) [9][10]

Algorithm Functionality
- The system filters out low-quality, duplicate, or inappropriate content so that only valuable candidates are processed [7]
- A Grok-based Transformer model scores each candidate based on user interactions (likes, replies, shares, clicks), predicting the probability of various user actions [7][8]

Historical Context
- This is not the first time Musk has open-sourced the X recommendation algorithm; a previous release on March 31, 2023 garnered over 10,000 stars on GitHub [12][14]
- Musk aims to increase transparency to address criticism of bias in the platform's content distribution [18][19]

User Reactions
- Users on X have summarized key insights about the algorithm, emphasizing the importance of engagement metrics such as replies and watch time for content visibility [22][23]

Importance of Recommendation Systems
- Recommendation systems are central to the business models of major tech companies, with significant percentages of user engagement driven by these algorithms (e.g., 35% for Amazon, 80% for Netflix) [25][27]
- The complexity of traditional recommendation systems often leads to high maintenance costs and difficulties in cross-task collaboration [28]

Future Implications
- Large language models (LLMs) present new opportunities for recommendation systems, potentially simplifying engineering and enhancing cross-task learning [29][30]
- Open-sourcing the X algorithm may not prompt immediate changes at other platforms, which may lack the resources to implement similar systems [39]
With cracks appearing in U.S. relations, Europe wants to follow China's lead and build its own DeepSeek
Phoenix New Media (Feng Huang Wang) · 2026-01-20 08:21
Core Insights
- European AI companies are seeking to innovate and reduce reliance on American technology amid rising geopolitical tensions with the U.S. [4]
- The success of Chinese AI startup DeepSeek has inspired European researchers to explore alternative paths to competitive AI products [5]
- European governments are committing hundreds of millions of dollars to reduce dependence on foreign AI suppliers [5]

Group 1: Current Landscape
- U.S. companies dominate the AI industry across segments including processor design, data-center capacity, and application development [4]
- The perception that innovation happens only in the U.S. is considered dangerous, as it may discourage European efforts to compete [5]
- European AI labs may hold an advantage in open research and development, enabling collaborative improvement of models [5]

Group 2: Urgency for Autonomy
- The shifting geopolitical landscape has heightened the urgency for Europe to achieve self-sufficiency in AI technology [6]
- Tensions between European leaders and the Trump administration have raised concerns about the future of NATO and reliance on U.S. technology [6][7]
- European dependence on U.S. AI services is viewed as a potential liability in trade negotiations [7]

Group 3: Strategies for Development
- European countries are attempting to localize AI development through funding initiatives, regulatory adjustments, and partnerships with academic institutions [8]
- There is a focus on creating competitive large language models tailored to European languages [8]
- The ongoing success of U.S. platforms like ChatGPT makes catching up a challenge for European AI companies [9]

Group 4: Policy and Market Dynamics
- It remains ambiguous how far Europe intends to push "digital sovereignty" and whether that requires complete self-sufficiency or merely local alternatives [10]
- Some European suppliers advocate prioritizing local AI products, while others warn against excluding U.S. companies [10]
- Europe still lacks consensus on the policy measures needed to achieve self-sufficiency in AI [10]

Group 5: Future Aspirations
- Despite limited budgets, European AI labs believe they can close the performance gap with U.S. leaders, as DeepSeek demonstrated [11]
- Projects such as SOOFI aim to develop competitive language models of around 100 billion parameters [11]
- Future AI progress may not depend solely on the largest GPU clusters, signaling a shift in the competitive landscape [11]
Elon Musk's One Hour Worth $30 Million? Crypto Billionaire Justin Sun Says He's 'Willing To Pay'
Benzinga· 2026-01-19 03:54
Core Viewpoint
- Justin Sun, founder of Tron, expressed his willingness to pay $30 million for a one-hour private conversation with Elon Musk, highlighting his admiration for Musk's influence and achievements [1][2]

Group 1: Financial Implications
- The $30 million Sun is willing to pay represents only 0.35% of his estimated net worth of $8.5 billion, indicating that for billionaires such amounts are relatively minor [3]
- For Elon Musk, the same amount constitutes a mere 0.008% of his estimated wealth of $342 billion, further emphasizing the financial insignificance of the sum at this wealth level [3]

Group 2: Personal Admiration and Influence
- Sun has publicly praised Musk's vision and business decisions, including Musk's acquisition of Twitter, now known as X [4]
- Sun has called Musk a "role model" and aims to emulate his "innovation and determination" within the cryptocurrency industry [4]
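The net-worth percentages quoted above are simple ratios and easy to check (figures from the article; note the Musk ratio works out to about 0.009%, which the article rounds down to 0.008%):

```python
# Verify the quoted net-worth ratios using the article's figures.
offer = 30e6                      # $30 million offer
sun_share = offer / 8.5e9 * 100   # share of Sun's estimated $8.5B net worth
musk_share = offer / 342e9 * 100  # share of Musk's estimated $342B net worth
# sun_share is ~0.35%; musk_share is ~0.009% (rounded to 0.008% in the article)
```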
Meta-Owned Threads Overtakes X in Daily Mobile Usage
PYMNTS.com· 2026-01-19 01:59
Core Insights
- Meta's Threads has surpassed Elon Musk's X in mobile daily active users, indicating a significant shift in user engagement on mobile platforms [2][3]
- Threads reached 141.5 million daily active users on mobile as of January 7, versus 125 million for X, showcasing Threads' growth trajectory [3]
- Despite Threads' mobile success, X maintains a larger web-based user base with approximately 150 million daily visits [3]

Group 1: Threads vs. X
- Threads' mobile daily active users have risen steadily, attributed to long-term trends rather than recent controversies surrounding X [2][4]
- Threads' growth contrasts with its limited traction among web users, where X continues to dominate [3]

Group 2: Meta's AI Initiatives
- Meta is launching Meta Compute, an AI initiative to expand its data-center and AI infrastructure, with plans for tens of gigawatts of computing capacity this decade [6][8]
- The initiative is part of Meta's strategy to compete with AI leaders such as Google, Microsoft, and OpenAI, following a lukewarm response to its previous AI model, Llama 4 [8]
- Meta Compute is led by experienced company veterans and a newly appointed president, indicating a strategic focus on capacity planning and partnerships [7]
X @xAI
xAI· 2026-01-18 19:07
2nd Place: Memeable is a browser extension that generates personalized memes of people from X threads. Create personalized ready-to-post memeable content with one click. @thomasthecosmic @gubmee https://t.co/bvOU1gT6ux ...
How do Indian top students become American CEOs? It's not their English, it's this ruthlessly effective closed-loop network
Sohu Finance (Sou Hu Cai Jing) · 2026-01-18 08:20
Group 1
- The article highlights the significant presence of Indian-origin CEOs at major American companies, with 10% of Fortune 500 CEOs being of Indian descent and over 60% of executives at the top 300 global companies having Indian backgrounds [1][3]
- The success of Indian-origin individuals in the corporate world is attributed to their elite educational backgrounds, particularly institutions like Hyderabad Public School, which emphasizes leadership and offers a unique social network [3][5]
- Alumni networks play a crucial role, with notable figures like Microsoft CEO Satya Nadella giving back to their alma mater and fostering connections within elite circles [5][7]

Group 2
- Alumni of the Indian Institutes of Technology (IIT) and Indian Institutes of Management (IIM) have built strong networks in Silicon Valley, facilitating job referrals and resource exchange among graduates [7][9]
- A cultural practice of "paying it forward" within the Indian-origin community sees experienced professionals mentoring newcomers, helping them navigate corporate environments and develop essential skills [9][11]
- Recent trends point to a shifting corporate landscape, with at least 190 Indian-origin executives laid off in 2023, suggesting that networks and communication skills alone may not suffice in a changing economic environment [12]
Who will be next to implement an Australia-style under-16s social media ban?
CNBC· 2026-01-18 07:21
Core Viewpoint
- The Australian Senate has passed a law banning children under 16 from having social media accounts, prompting global interest and potential similar legislation in other countries, particularly the U.K. [1][3]

Group 1: Legislative Actions
- Australia's Online Safety Amendment Act, effective December 10, mandates age verification on major social media platforms including TikTok, Facebook, and Instagram, with non-compliance fines of up to 49.5 million Australian dollars (approximately $32 million) [2]
- Countries such as the U.K., France, Denmark, Spain, Germany, Italy, and Greece are considering similar bans on social media for under-16s [4][12]

Group 2: Reactions and Implications
- Mixed reactions have emerged from teenagers, tech companies, and experts regarding the Australian ban, with some advocating similar measures globally [3][5]
- The U.K. Prime Minister has expressed support for protecting children from social media, indicating that all options are being considered for further protections [10]

Group 3: Industry Response
- Reddit has filed a lawsuit against the Australian law, claiming it restricts political discussion, while Meta has urged reconsideration of the ban [7]
- The tech industry may resist such legislative changes, as seen in the responses from major platforms [6]