Tencent Research Institute AI Express 20260119
Tencent Research Institute · 2026-01-18 16:01

Group 1
- xAI's Colossus 2 is the world's first supercomputer cluster to reach 1 GW of power, with plans to upgrade to 1.5 GW in April and a final capacity of 2 GW [1]
- The cluster will house 555,000 GPUs, surpassing Meta and Microsoft, and is dedicated to training Grok 5 with 60 trillion parameters [1]
- Surging power demand from data centers could cause rolling blackouts for 67 million residents in the US PJM grid area, prompting xAI to deploy 168 Tesla Megapack energy storage systems [1]

Group 2
- OpenAI has launched an $8/month ChatGPT Go subscription offering GPT-5.2 Instant, with message and image-creation limits ten times those of the free tier [2]
- The company plans to test advertisements in the US on both the free and Go tiers; ads will be clearly labeled and will not affect response content [2]
- OpenAI says user data will not be sold to advertisers, and users can opt out of personalized ads and delete the related data [2]

Group 3
- OpenAI has quietly launched a ChatGPT Translate tool supporting over 50 languages and letting users adjust the tone of translations [3]
- Google responded with the open-source TranslateGemma model, which supports 55 languages and, at 12 billion parameters, surpasses the previous 27-billion-parameter baseline [3]
- TranslateGemma retains multimodal capabilities for translating text within images, and a 4-billion-parameter version can run on mobile devices [3]

Group 4
- Black Forest Labs has open-sourced the FLUX.2 Klein model, which unifies text-to-image generation and editing and achieves end-to-end inference in under 0.5 seconds on modern hardware [4]
- The 4-billion-parameter model needs only 13 GB of VRAM, so it runs on consumer-grade GPUs, while the 9-billion-parameter version matches the performance of models with five times as many parameters [4]
- FP8 and NVFP4 quantized versions deliver inference speedups of up to 1.6x and 2.7x respectively on RTX GPUs, with VRAM usage reduced by 40% to 55% [4]

Group 5
- Meituan has released the LongCat-Flash-Thinking-2601 model with 560 billion parameters, introducing a rethinking mode that supports parallel lines of thought [7]
- The model shows significant gains on tool-use and search benchmarks, along with a new method for evaluating generalization under automated environment scaling [7]
- It employs environment scaling and multi-environment reinforcement learning, improving adaptability in out-of-distribution scenarios [7]

Group 6
- The court has unsealed over 100 documents in the lawsuit between Musk and OpenAI, revealing that Altman indirectly holds shares in OpenAI through the YC fund [8]
- A 2017 diary entry by Brockman admits to wanting to turn OpenAI into a for-profit company and remove Musk, calling it the only chance to get rid of him [8]
- OpenAI disputes claims that Musk sought a 50%-60% equity stake and the CEO position; the judge ruled the evidence too contested to decide before the jury trial set for April 27 [8]

Group 7
- Neuralink's first subject revealed that the brain chip can be upgraded without surgery through three channels: Telepathy app updates, OTA firmware updates, and hardware iterations [9]
- After 85% of the electrodes detached, the team used software algorithms to boost the performance of the remaining 15%, achieving better results than when all electrodes were intact [9]
- Future plans include a dual-chip configuration forming a "digital bridge" between the brain and spinal cord, potentially allowing paralyzed individuals to walk again [9]

Group 8
- Sequoia Capital partners have published a blog post asserting that AGI has arrived, defining it as the ability to clarify tasks [10]
- The post cites an AI agent autonomously completing a recruitment task in 31 minutes, forming hypotheses and validating them along the way [10]
- The capabilities of long-horizon agents are doubling roughly every seven months, with predictions that by 2028 they could complete a full day of a human expert's work [10]

Group 9
- OpenAI's post-training lead stated that a model's intelligence is determined by how well it understands user queries [11]
- GPT-5.1 has turned all chat models into reasoning models that autonomously decide how long to think based on question difficulty [11]
- Improvements cover context memory, automatic model switching, and user-defined expression styles, with future models expected to be even more customizable [11]

Group 10
- Anthropic's new Economic Index report finds that AI's speedup grows with task complexity, reaching 9x on high-school-level tasks and 12x on college-level tasks [12]
- Human-AI collaboration extends the upper limit of AI task duration from 2 hours to 19 hours, nearly a tenfold increase, underscoring the importance of human feedback [12]
- The report warns of a "de-skilling" risk as AI systematically removes the high-intelligence components of work; tasks now require an average of 14.4 years of education [12]