Trump To Seal US-UK Tech Pact On AI, Chips And Quantum Computing During London Visit: Report - NVIDIA (NASDAQ:NVDA)
Benzinga· 2025-09-14 02:37
Group 1 - The United Kingdom and the United States are set to sign a major technology agreement focusing on collaboration in artificial intelligence, semiconductors, telecommunications, and quantum computing [1] - Major tech executives, including Nvidia CEO Jensen Huang and OpenAI's Sam Altman, will accompany President Trump during his UK visit, highlighting the importance of tech leaders in U.S.-UK collaboration [1] - BlackRock plans to invest $700 million in British data centers, indicating a deepening involvement of U.S. investment firms in the UK market [2] Group 2 - U.S. companies like Anthropic and OpenAI are establishing offices in London, while UK-based firms such as DeepMind are investing in partnerships with U.S. companies [3] - Both the UK and the U.S. have published AI Action Plans this year to enhance cooperation between their tech industries, aiming to create more opportunities for businesses and consumers [4]
He Was Also Involved in Founding OpenAI/DeepMind, and Wrote Harry Potter Fan Fiction
量子位· 2025-09-13 08:06
Core Viewpoint - Eliezer Yudkowsky argues that there is a 99.5% chance that artificial intelligence could lead to human extinction, emphasizing the urgent need to halt the development of superintelligent AI to safeguard humanity's future [1][2][8]. Group 1: Yudkowsky's Background and Influence - Yudkowsky is a prominent figure in Silicon Valley, known for co-founding OpenAI and Google DeepMind, and has a polarizing reputation [5][10]. - He dropped out of school in the eighth grade and self-educated in computer science, becoming deeply interested in the concept of the "singularity," where AI surpasses human intelligence [12][13]. - His extreme views on AI risks have garnered attention from major tech leaders, including Musk and Altman, who have cited his ideas publicly [19][20]. Group 2: AI Safety Concerns - Yudkowsky identifies three main reasons why creating friendly AI is challenging: intelligence does not equate to benevolence, powerful goal-oriented AI may adopt harmful methods, and rapid advancements in AI capabilities could lead to uncontrollable superintelligence [14][15][16]. - He has established the MIRI research institute to study advanced AI risks and has been one of the earliest voices warning about AI dangers in Silicon Valley [18][19]. Group 3: Predictions and Warnings - Yudkowsky believes that many tech companies, including OpenAI, are not fully aware of the internal workings of their AI models, which could lead to a loss of human control over these systems [30][31]. - He asserts that the current stage of AI development warrants immediate alarm, suggesting that all companies pursuing superintelligent AI should be shut down, including OpenAI and Anthropic [32]. - Over time, he has shifted from predicting when superintelligent AI will emerge to emphasizing the inevitability of its consequences, likening it to predicting when an ice cube will melt in hot water [33][34][35].
MIT Technology Review's 2025 "35 Innovators Under 35" List Is Out!
机器人圈· 2025-09-12 10:05
Core Viewpoint - The article highlights the achievements of 35 innovators under the age of 35 in various fields such as climate and energy, artificial intelligence, biotechnology, computing, and materials science, showcasing their groundbreaking contributions and potential impact on their respective industries [6][11][60]. Climate and Energy - Innovators in this sector are developing advanced technologies for decarbonization, with applications across shipping, fashion, and other industries. They are also exploring new methods for sustainable energy and innovative uses for carbon capture [11]. - Iwnetim Abate is working on producing ammonia using underground heat and pressure, aiming to reduce carbon emissions associated with traditional ammonia production, which contributes 1% to 2% of global CO2 emissions [13]. - Sarah Lamaison's company, Dioxycle, is developing a method to produce chemicals using electricity instead of fossil fuels, significantly reducing greenhouse gas emissions [16][17]. - Gaël Gobaille-Shaw's Mission Zero focuses on direct air capture technology to extract CO2 from the atmosphere, while his second company, Supercritical, aims to produce hydrogen efficiently [19][20]. Artificial Intelligence - Aditya Grover has developed ClimaX, an AI model that predicts weather and climate events, utilizing extensive datasets for improved accuracy [22][23]. - Neel Nanda is researching the interpretability of AI models to ensure their safe and beneficial development, focusing on understanding the decision-making processes of these models [34][35]. - Mark Chen has led advancements in AI models for image processing and code generation, contributing to the development of OpenAI's DALL·E and Codex [38][39]. - Akari Asai is working on retrieval-augmented generation technology to reduce AI hallucinations by allowing models to reference stored data before generating responses [51][52]. 
Biotechnology - Christian Kramme's company, Gameto, is developing artificial ovarian technology to assist IVF patients, aiming to reduce hormonal injections and stress during the process [62][63]. - Kevin Eisenfrats founded Contraline to create a long-lasting male contraceptive gel, with ongoing clinical trials to validate its effectiveness [64][65]. Computing and Materials Science - Pierre Forin's company, Calcarea, is developing a system to capture and store CO2 emissions from ships, with plans for commercial deployment by 2027 or 2028 [28][29]. - Neeka Mashouf's Rubi Laboratories is innovating a method to produce textiles by extracting CO2 directly from the atmosphere, aiming for sustainable fashion solutions [25][26].
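The retrieval-augmented generation approach mentioned above (letting a model consult stored documents before answering) can be sketched in a few lines. This is a minimal illustration only: the toy corpus, the bag-of-words scoring, and the prompt format are invented for the example; a real system would use a trained embedding model and a large language model for the generation step.

```python
from collections import Counter
import math

# Toy document store; a real RAG system would index many passages.
corpus = [
    "ClimaX is an AI model for weather and climate prediction.",
    "Dioxycle converts CO2 into chemicals using electricity.",
    "Contraline is developing a long-lasting male contraceptive gel.",
]

def bag_of_words(text):
    """Crude stand-in for an embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, corpus):
    """Return the single most similar stored document."""
    vq = bag_of_words(query)
    return max(corpus, key=lambda doc: cosine(vq, bag_of_words(doc)))

query = "Which model predicts weather?"
context = retrieve(query, corpus)
# The generator is then conditioned on the retrieved passage.
prompt = f"Answer using this context:\n{context}\nQuestion: {query}"
print(prompt)
```

The design point is simply that the retrieval step grounds the generation step in stored text, which is how such systems aim to reduce hallucinated answers.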
Hinton Has Suddenly Turned Optimistic About AGI! "Ilya Must Have Shown Him Something…"
量子位· 2025-09-04 04:41
Core Viewpoint - Hinton has shifted from a pessimistic view of AI to a more optimistic perspective, suggesting a symbiotic relationship between AI and humans, akin to that of a mother and child [3][7][9]. Group 1: AI Development and Risks - Hinton categorizes AI risks into short-term and long-term, emphasizing that the primary concern is not the immediate misuse of AI but the potential for AI to surpass human intelligence and take control [13][14][15]. - He believes that within the next 5 to 20 years, AI could become significantly smarter than humans, creating challenges in controlling a more intelligent entity [16][18]. - Hinton's previous analogy of AI as a "tiger cub" that could eventually harm humans has transformed into a vision of AI as a nurturing "mother" figure [20][23]. Group 2: AI Safety and Company Critique - Hinton critiques current AI companies for not prioritizing safety adequately, stating that OpenAI has shifted focus from safety to enhancing AI intelligence [28][30]. - He expresses concern over the motivations of figures like Musk and Altman, suggesting that their pursuit of wealth and recognition overshadows their responsibility to ensure AI safety [30][31]. - Hinton highlights that collaboration among AI developers is essential for ensuring the safe development of AI technologies [24][26]. Group 3: AI in Healthcare - Hinton is optimistic about AI's potential in healthcare, particularly in medical imaging, drug development, personalized medicine, and improving healthcare system efficiency [32][34][39]. - He notes that AI can analyze retinal scans to predict heart disease risk, a capability beyond human doctors [34]. - Hinton believes AI will play a crucial role in the future of drug development, particularly in creating targeted therapies with fewer side effects compared to traditional treatments [35]. 
Group 4: Societal Implications - Hinton acknowledges that while AI can enhance productivity, it may also lead to job displacement and exacerbate wealth inequality [38][41]. - He emphasizes that the challenges posed by AI are more societal issues rather than purely technological ones [41].
X @TechCrunch
TechCrunch· 2025-09-03 22:14
Company Overview - A competitor to OpenAI, the two-year-old company was founded by former DeepMind and Meta researchers [1] - The company is dedicated to developing open-source language models [1] - It built Le Chat, an AI chatbot for European users [1]
Hinton's Latest Warning: Killer Robots May Bring More Wars; His Greatest Fear Is an AI Takeover of Humanity
36Kr· 2025-09-03 10:54
Group 1 - Geoffrey Hinton warns that the rise of lethal autonomous weapons, such as killer robots and drones, is making it easier to initiate wars [1][6][7] - Hinton emphasizes that the emergence of autonomous weapons lowers the humanitarian costs of war, making it more likely for wealthy nations to invade poorer ones [7][8] - The cost of war is decreasing due to the replacement of human soldiers with robots, which could encourage governments to engage in conflicts more readily [7][8][9] Group 2 - Hinton expresses concern about the long-term risk of AI taking over, rather than immediate malicious use by bad actors [9][10] - He suggests that the only way to prevent AI from taking over is to ensure that superintelligent AI does not desire to do so, which requires international cooperation [10][11] - Hinton highlights the potential for AI to replace jobs across various sectors, including low-wage and even some high-empathy roles like nursing and medicine [11][12][13] Group 3 - Hinton discusses the implications of AI in the medical field, noting its ability to predict health issues and assist in drug design [16][17][18][20] - He believes that AI could lead to significant advancements in healthcare within the next few years [20][21] - Hinton critiques AI companies for not prioritizing safety in their development efforts, indicating a need for more focus on secure AI practices [22][23][24] Group 4 - Hinton introduces the concept of "AI mother," suggesting that AI could be designed with a nurturing instinct to ensure human success [28][30] - This idea challenges the traditional view of humans as the apex of intelligence, proposing a relationship where humans are akin to children in relation to AI [30][31] - Hinton's recent optimism about AI's future stems from this new perspective on coexistence with AI [27][28]
How Subway Commuting Shaped Our Collective Life | Book Recommendations
Di Yi Cai Jing· 2025-09-03 07:27
Group 1: Fox Spirit Worship - The book reveals the essence of "Fox Spirit Worship," a traditional Chinese folk belief, through various vivid stories that illustrate the dual nature of fox spirits, which can bring both misfortune and fortune [3][4]. - The narratives highlight societal issues, such as the desire of lower-class scholars to marry into wealthy families, women's resistance against arranged marriages, and the common people's silent protests against oppressive officials [3][4]. Group 2: Cultural Significance - Fox spirit beliefs are not limited to China; similar stories and worship practices can be found in Japan, Korea, and other Northeast Asian cultures, indicating a broader cultural significance [5]. - In Japan, the Inari worship, which venerates foxes, is associated with agriculture and wealth, showcasing the multifaceted roles of fox spirits across different cultures [5]. Group 3: Tokyo Commuting System - The Tokyo commuting system is characterized by extreme overcrowding, with train capacities often reaching 175% to 230%, leading to safety concerns such as injuries and fainting due to lack of oxygen [14][15]. - The management of this complex system relies on precise scheduling, with train intervals as short as two minutes and stop times limited to 30 seconds, emphasizing the operational challenges faced by transit authorities [14][15]. Group 4: Human-Machine Interaction - The Tokyo rail network exemplifies the interaction between humans and machines, prompting reflections on the limitations and potential of current technologies in shaping collective life [16]. - The system's operation requires constant adjustments by drivers and operators, highlighting the dynamic nature of managing a heavily utilized transportation infrastructure [16].
DeepMind's Viral Paper: Vector Embedding Models Have a Mathematical Upper Limit. Is the Scaling-Laws Slowdown Confirmed?
机器之心· 2025-09-02 03:44
Core Viewpoint - The recent paper on the limitations of vector embeddings has gained significant attention, highlighting the theoretical constraints of embedding models in information retrieval tasks [1][2]. Group 1: Understanding Vector Embeddings - Vector embeddings transform complex entities like text, images, or sounds into multi-dimensional coordinates, allowing for efficient data comparison and retrieval [2][4]. - Historically, embeddings have been primarily used for retrieval tasks, but their application has expanded to reasoning, instruction following, and programming due to advancements in large model technologies [4][5]. Group 2: Theoretical Limitations - Previous research has indicated that vector embeddings inherently lose information when compressing complex concepts into fixed-length vectors, leading to theoretical limitations [4][6]. - DeepMind's recent study suggests that there is a mathematical lower bound on the capabilities of vector embeddings, indicating that certain combinations of relevant documents cannot be retrieved simultaneously beyond a critical document count [6][7]. Group 3: Practical Implications - The limitations of embedding models are particularly evident in retrieval-augmented generation (RAG) systems, where the inability to recall all necessary information can lead to incomplete or incorrect outputs from large models [9][10]. - The researchers established a dataset named LIMIT to empirically demonstrate these theoretical constraints, showing that even state-of-the-art models struggle with simple tasks when the number of documents exceeds a certain threshold [10][12]. Group 4: Experimental Findings - The study revealed that for any given embedding dimension, there exists a critical point where the number of documents surpasses the model's capacity to accurately capture all combinations, leading to performance degradation [10][26]. 
- In experiments, even advanced embedding models failed to achieve satisfactory recall rates, with some models struggling to reach 20% recall at 100 documents in the full LIMIT dataset [34][39]. Group 5: Dataset and Methodology - The LIMIT dataset was constructed using 50,000 documents and 1,000 queries, focusing on the difficulty of representing all top-k combinations [30][34]. - The researchers tested various state-of-the-art embedding models, revealing significant performance drops under different query relevance patterns, particularly in dense settings [39][40].
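The evaluation pipeline behind numbers like these (dense scoring followed by recall@k) can be sketched with a toy setup. This is an illustration of the measurement, not a reproduction of the LIMIT benchmark: the random embeddings, corpus size, and relevance assignments below are all invented, and random (untrained) embeddings naturally score near chance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_docs, n_queries, k = 8, 200, 50, 10

# Unit-norm random vectors stand in for a trained embedding model.
docs = rng.normal(size=(n_docs, d))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
queries = rng.normal(size=(n_queries, d))
queries /= np.linalg.norm(queries, axis=1, keepdims=True)

# Each query is assigned k "relevant" documents at random.
relevant = [set(rng.choice(n_docs, size=k, replace=False))
            for _ in range(n_queries)]

# Dense retrieval: score every (query, doc) pair by dot product,
# then keep each query's top-k documents.
scores = queries @ docs.T
topk = np.argsort(-scores, axis=1)[:, :k]

# recall@k: fraction of each query's relevant set found in its top-k.
recalls = [len(relevant[i] & set(topk[i])) / k for i in range(n_queries)]
print(f"mean recall@{k}: {np.mean(recalls):.3f}")
```

The paper's claim is about this metric: once the number of documents (and hence the number of distinct top-k combinations a query may demand) exceeds what a d-dimensional embedding space can realize, no amount of training can push recall@k to 100%.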
Saining Xie Recalls His OpenAI Interview Seven Years Ago: Whiteboard Coding and a Five-Hour Meeting; It Was Dark Out by the Time It Ended
机器之心· 2025-08-29 09:53
Core Insights - The article discusses the unique interview experiences of AI researchers at major tech companies, highlighting the differences in interview styles and the focus areas of these companies [1][9][20]. Group 1: Interview Experiences - Lucas Beyer, a researcher with extensive experience at top AI firms, initiated a poll about memorable interview experiences at companies like Google, Meta, and OpenAI [2][20]. - Saining Xie shared that his interviews at various AI companies were unforgettable, particularly noting the rigorous two-hour marathon interview at DeepMind, which involved solving over 100 math and machine learning problems [5][6]. - The interview process at Meta was described as more academic, focusing on discussions with prominent researchers rather than just coding [6][7]. Group 2: Company-Specific Insights - The interview style at Google Research was likened to an academic job interview, with a significant emphasis on research discussions rather than solely on coding challenges [7]. - OpenAI's interview process involved a lengthy session focused on a reinforcement learning problem, showcasing the company's commitment to deep research engagement [8][9]. - The article notes that the interview questions reflect the research priorities of these companies, such as Meta's focus on computer vision and OpenAI's emphasis on reinforcement learning [9][20]. Group 3: Notable Interviewers and Candidates - Notable figures like John Schulman and Noam Shazeer were mentioned as interviewers, indicating the high caliber of talent involved in the hiring processes at these firms [7][9]. - Candidates shared memorable moments from their interviews, such as solving complex problems on napkins or engaging in deep discussions about research topics [19][20].
AI News: Claude for Chrome, Nano Banana, Meta Poaching Gone Wrong, Apple Using Gemini, and more!
Matthew Berman· 2025-08-28 01:12
AI Model Releases and Advancements - Anthropic released Claude for Chrome as a research preview, allowing Claude to control the Chrome browser [1] - Nvidia released Nemotron Nano 9B V2, a 9 billion parameter reasoning model, achieving a score of 43 on the Artificial Analysis intelligence index [1] - Google released Nano Banana, a Gemini 2.5 Flash Image model, demonstrating superior performance in image editing [1] - Nous Research released Hermes 4, an open-source hybrid reasoning model in 70 billion and 405 billion parameter versions, emphasizing creativity and uncensored interaction [2] - Microsoft released VibeVoice, an open-source text-to-speech model, with performance on par with advanced voice mode [20][21] Talent Movement and Company Strategy - Meta Superintelligence Labs experienced departures of key staff, including researchers and engineers, following Meta's push to compete with OpenAI and Google [1] - Bert Maher, who spent 12 years at Meta and helped develop PyTorch, joined Anthropic [1] - Apple is in talks to use Google's Gemini AI to power a revamped Siri [3][4] AI Infrastructure and Economic Impact - AI infrastructure spending is propping up the economy, with global spending projected to reach $375 billion in 2025 and $500 billion the following year [16][17] - Nvidia is publishing papers on making LLM inference 50+ times faster through post-neural architecture search [9] Agentic Coding and Flight Search - Grok Code, a small version of Grok, is available in coding platforms like Windsurf and Cursor at $0.20 per million input tokens and $1.50 per million output tokens [2] - Kiwi.com released a flight search MCP server, allowing agents to search for flights with detailed parameters [6][7] AI in Weather Prediction - Google's AI model accurately forecasted the strongest Atlantic storm this year, potentially becoming the gold standard for predicting severe weather [13]
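At per-million-token prices like those quoted for Grok Code, per-request cost is simple arithmetic. A minimal sketch, assuming the $0.20 input and $1.50 output rates above; the example token counts are made up:

```python
def request_cost(input_tokens, output_tokens,
                 input_price_per_m=0.20, output_price_per_m=1.50):
    """Dollar cost of one request given per-million-token prices."""
    return (input_tokens / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# Example: a 4,000-token prompt producing a 1,000-token completion.
cost = request_cost(4_000, 1_000)
print(f"${cost:.4f}")
```

At these rates a request of that size costs well under a cent, which is the economics driving agentic coding tools toward small, cheap models.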