Deep Learning
Robin Li in Time Interview: AGI May Not Exist, and Chinese Models Are Not Far Behind
Feng Huang Wang· 2026-01-27 04:39
Phoenix Tech News, January 27 (Beijing time) — Baidu CEO Robin Li gave an exclusive interview to Time magazine in the US, discussing Baidu's journey in the AI field. Li said he does not even believe that so-called artificial general intelligence (AGI) exists: no single model can be good at everything — not OpenAI's, and not Google's either. In terms of model development, China is not that far behind.

On a wall in the entrance hall of Baidu's sprawling Beijing headquarters hangs a small wooden plaque inlaid with the golden number "1417". The number comes from a hotel room across from Peking University, where Li founded the now $50 billion company in 2000.

Q: When you founded Baidu in 2000, did you anticipate that AI would play such an important role today?

Li: No. When I founded Baidu, I realized the internet was going to become a big deal in China, and that search technology was very important to the development of the Chinese internet. But at the time I did not connect AI with search engines. Around 2010, we realized that machine learning, a branch of AI, was starting to play a role in ranking search results. We began investing in AI around then, in order to study how many people would click on a given link. Then in 2012, we realized deep learning would become very important: it recognized images far more accurately than the previous generation of technology. Baidu's substantial, large-scale investment in AI began around 2012.

Q: You once mentioned that last year, in integrating AI into every area of society and the economy ...
Zhejiang Entrepreneur Builds Brains for Robots, Landing 600 Million Yuan in Orders in Two Years
Sou Hu Cai Jing· 2026-01-26 22:28
"Hello, please don't block the way. I am cleaning the main road of the compound — please step aside." Today, in residential compounds managed by Greentown Service, Country Garden Services, and Onewo, cleaning robots are replacing human property staff, taking over work that is highly repetitive, tedious, and done in harsh conditions. Cleaning robots that once appeared only in science fiction films are becoming reality.

● A Youlu (有鹿) robot "on duty" in a residential compound

Among the many cleaning-robot brands, Youlu Robotics stands out. Founded just two years ago, it has already reached mass production and accumulated 660 million yuan in orders. Why is Youlu's path different? It starts with an obsession of founder Chen Junbo.

Youlu Robotics was founded in February 2023. On the surface it is a startup, but founder Chen Junbo's background is anything but. He was a core figure in Alibaba DAMO Academy's autonomous-driving business, where he led the development of the logistics delivery vehicles Xiao G and Xiaomanlv ("Little Donkey"), and he is deeply passionate about large AI models.

● Chen Junbo

Chen Junbo was born in Jinhua, Zhejiang in 1981. With excellent college-entrance-exam scores, he was admitted to Zhejiang University. At the time, computer science and biology were both very popular majors, but Chen preferred fields that gave immediate feedback. "Fast, direct feedback makes it easier to keep optimizing and adjusting," he said, so he chose computer science. At university he devoted himself to research and went on to earn a PhD. In 2009, as he was finishing his doctorate, Chen received an invitation from a classmate working at Alibaba ...
WuTong Holdings: 5G Messaging Currently Accounts for a Small Share of Revenue
Core Viewpoint - The company WuTong Holdings has announced that its subsidiary Guodu Internet's 5G messaging platform, 5G 101, has integrated with multimodal large models such as DeepSeek and Baidu's Wenxin Yiyan, aiming to enhance the application of natural language processing and deep learning in the 5G messaging sector [1] Group 1 - The 5G 101 platform is working on combining the latest advancements in natural language processing and deep learning to enrich mobile information services [1] - The company is focused on creating a "full-link 5G messaging intelligent solution" for its clients, although the commercial value of this initiative is yet to be validated and realized [1] - Currently, the revenue contribution from the company's 5G messaging business is relatively small [1]
Goodix Technology (汇顶科技): Solid Technical Foundation in AI Computing, With Deep Learning Integrated Into Fingerprint Recognition and Other Products
Group 1 - The company has a solid technical foundation in AI computing and has integrated deep learning technology into various products such as fingerprint recognition, audio, and touch control, resulting in strong edge AI processing capabilities [1] - The company will continue to monitor trends in technology development and market demand [1] - The company will share information with investors promptly if new products are released in the future [1]
Beilun Petroleum Granted Patent for Deep-Learning-Based Prediction of Drilling Fracture Types
Sou Hu Cai Jing· 2026-01-23 06:53
Source: market news. Disclaimer: markets carry risk; invest with caution. This article was generated by AI from third-party data, is for reference only, and does not constitute personal investment advice.

According to the China National Intellectual Property Administration, Sichuan Beilun Petroleum Engineering Technology Co., Ltd. has been granted a patent titled "A deep-learning-based method, system, and medium for predicting drilling fracture types", authorization announcement number CN120997673B, filed in August 2025.

Tianyancha records show that Sichuan Beilun Petroleum Engineering Technology Co., Ltd., founded in 2017 and based in Chengdu, is primarily engaged in technology promotion and application services, with registered capital of 30 million yuan. Per Tianyancha big-data analysis, the company holds 2 patent records and 2 administrative licenses. ...
US Media: Foam Hides the Code Linking Three Disciplines
Xin Lang Cai Jing· 2026-01-22 05:49
Early theories held that bubbles in a foam roll along certain trajectories and then come to rest somewhere. This framework helped explain why a foam "looks" stable once formed, like a boulder lying still at the bottom of a valley. But closer analysis of experimental data revealed a problem: foam's actual behavior does not match these theoretical predictions. Engineers at the University of Pennsylvania used computer simulations to track the trajectories of bubbles in foam and found that the bubbles do not settle down at all — they keep "wandering" across the energy landscape (a model describing how a system's energy varies with its state — editor's note). In theory the bubbles should come to rest in a "valley" like a rock rolling downhill, but in reality they keep "strolling" along the slopes. "We started noticing these discrepancies 20 years ago, but we never found a satisfying answer," said John C. Crocker, professor of chemical and biomolecular engineering at the University of Pennsylvania and co-author of the paper.

This behavior puzzled scientists for a long time, until gradient descent from the AI field (commonly used in AI to iteratively approach a minimum-error model — editor's note) provided the inspiration. Mathematically, the way foam moves closely resembles the "deep learning" process used to train AI. Modern AI systems "learn deeply" by continuously adjusting their numerical parameters during training; rather than single-mindedly chasing the minimum error, the system wanders through a broad, relatively flat region of the energy landscape, exploring various "solutions". The bubbles in a foam, reorganizing across a vast energy landscape, ...
US Media: Foam Hides the Code Linking AI, Physics, and Biology
Huan Qiu Shi Bao· 2026-01-21 22:37
A January 17 article in the US outlet SciTechDaily, originally titled "Engineers Discover a Common Principle Linking Artificial Intelligence, Physics, and Biology": A bottle of fresh cream in the kitchen or the shaving foam in the bathroom turns out to hold the answer to a "foam mystery" that scientists have debated for 20 years. Scientists had long believed that foam's microscopic structure, though seemingly disordered, remains static. But recently, a study published in the Proceedings of the National Academy of Sciences

This behavior puzzled scientists for a long time, until gradient descent from the AI field (commonly used in AI to iteratively approach a minimum-error model — editor's note) provided the inspiration. Mathematically, the way foam moves closely resembles the "deep learning" process used to train AI. Modern AI systems "learn deeply" by continuously adjusting their numerical parameters during training; rather than single-mindedly chasing the minimum error, the system wanders through a broad, relatively flat region of the energy landscape, exploring various "solutions". The bubbles in a foam, reorganizing across a vast energy landscape, follow the same underlying logic as an AI model wandering among solutions.

The discovery also points physicists toward new ways of designing smart materials that adapt to their environment: if materials were designed on the foam's "self-adaptive" principle, future curtains could adjust their own translucency, and clothing could automatically regulate its insulation as temperatures change. The finding may also give biologists new insight into the mysteries of life (for example, the internal scaffolding of living cells); processes such as protein folding and immune-cell movement may follow the same energy ...
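The gradient-descent process invoked above as an analogy for the foam's motion can be sketched in a few lines. This is a generic toy illustration — the quadratic "energy landscape", step size, and starting point are all invented for the example, not taken from the PNAS study:

```python
# Minimal gradient-descent sketch: repeatedly step "downhill" along the
# negative gradient, the same iterative update used to train deep networks.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Move x against the gradient of the energy to approach a minimum."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy energy landscape: f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # converges toward the minimum at x = 3
```

In a flat region of the landscape the gradient is near zero, so the update barely moves the state — which is why both foams and training networks can drift among many near-equivalent "solutions" instead of locking into a single valley.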
Thirteen Years of Groundwork, Overtaken in a Moment: The Real Story of Google's AI Rise
36Kr· 2026-01-19 11:25
Core Insights - The article narrates the journey of Google in the AI sector, highlighting its comeback from setbacks to achieving significant milestones with the launch of products like Nano Banana and Gemini App, showcasing the importance of talent, time, and long-term vision in technology development [1][49][52]. Group 1: Key Events and Milestones - In August 2025, Google's image generator Nano Banana topped the LMArena charts, leading to a surge in global user engagement, generating billions of images [3][49]. - By September 2025, the Gemini App became the most downloaded app on the Apple App Store, with monthly active users increasing from 450 million in July to 650 million by October [49]. - In November 2025, Google released the Gemini 3 model, surpassing ChatGPT in multiple benchmarks, resulting in a significant increase in stock price [3][49]. Group 2: Historical Context and Strategic Moves - The origins of Google's AI success can be traced back to a secret auction in December 2012 at Lake Tahoe, where Google acquired DNNresearch for $44 million, marking a pivotal moment in its AI strategy [6][10][22]. - The acquisition of DeepMind in January 2014 for approximately $600 million further solidified Google's position in AI, bringing in top talent and innovative technology [24]. - The introduction of the Transformer model in June 2017 revolutionized AI, laying the groundwork for subsequent advancements in large language models [30][32]. Group 3: Challenges and Responses - Google's cautious approach to AI, particularly in the chatbot domain, led to missed opportunities, exemplified by the delayed release of Bard, which resulted in a significant drop in stock value after a failed launch in February 2023 [35][38]. - The return of co-founder Sergey Brin to active involvement in AI development was a crucial turning point, leading to strategic talent acquisitions and the eventual merger of Google Brain and DeepMind in April 2023 [39][42]. 
Group 4: Technological Advancements - The development of the TPU (Tensor Processing Unit) began in 2013, which later became a key competitive advantage for Google, enabling efficient AI model operations [28][48]. - By the end of 2025, Google had developed the Ironwood chip, achieving a performance of 4,614 TFLOPs per chip, significantly enhancing its computational capabilities [47][48]. Group 5: Themes and Conclusions - The overarching themes of talent and time are emphasized throughout Google's journey, illustrating that strategic investments in human capital and patience in technology development can lead to eventual success [53][55]. - The article concludes that despite challenges, Google's ability to adapt and innovate demonstrates that even major tech companies can recover and thrive in competitive landscapes [57][58].
"I Hate Giving Speeches": Jensen Huang Says Nvidia Has 61 "CEOs" and Never Fires Employees for Mistakes — CEOs Are the Most Vulnerable Group
36Kr· 2026-01-19 10:43
Core Insights - Jensen Huang, CEO of Nvidia, emphasizes that the company's success is not based on GPU production volume but rather on its unique corporate culture and innovation capabilities [1][24] - Huang predicts that AI investments will fundamentally change computing, leading to computers that can learn autonomously under human guidance, resulting in a transformation of job roles rather than a reduction in employment [2][41] - Nvidia's management structure is designed to foster a safe environment where mistakes are tolerated, allowing for innovation and growth without fear of termination [1][25] Group 1: Company Philosophy and Culture - Nvidia has cultivated a culture where no one is fired for making mistakes, fostering a safe environment for innovation [1][25] - The company has a unique management structure with nearly 61 individuals acting as "CEOs," each deeply committed to the company's mission [1][18] - Huang believes that the essence of Nvidia's success lies in its corporate character and the ability to unite the team in adversity [24] Group 2: Vision for the Future - Huang asserts that in five years, AI will enable computers to handle problems a billion times larger than current capabilities, fundamentally altering the nature of work [38][39] - The future will see an increase in productivity and efficiency across industries, with AI solving previously insurmountable challenges [40][41] - Huang anticipates that while job roles will evolve, the overall number of jobs will not decrease, and AI will provide new opportunities for those currently unemployed [41][44] Group 3: Historical Context and Personal Insights - Nvidia's journey has spanned 33 years, with a consistent focus on reshaping the computing industry since its inception [5][16] - Huang reflects on the importance of learning from past decisions and maintaining a flexible approach to leadership and strategy [14][15] - The company has a history of making bold decisions, such as the early adoption of CUDA technology, which laid the groundwork for its current success [6][8]
Breaking: Geoffrey Hinton Becomes the Second Scientist to Surpass One Million Citations
机器之心· 2026-01-16 01:55
Core Viewpoint - Geoffrey Hinton has officially become the second computer scientist in history to surpass 1 million citations on Google Scholar, marking a significant milestone in his academic career and contributions to artificial intelligence [1][3]. Group 1: Academic Achievements - Hinton's citation count currently stands at 1,000,083, with an h-index of 192, indicating his substantial impact in the field of computer science and artificial intelligence [2]. - He is renowned for his work on backpropagation, which addressed the training challenges of multilayer neural networks, laying the groundwork for the deep learning revolution [10]. - Hinton, along with Yoshua Bengio and Yann LeCun, received the Turing Award in 2018, recognizing their pivotal contributions to the field of deep learning [13]. Group 2: Key Contributions - Hinton's notable innovations include the Boltzmann Machine, Restricted Boltzmann Machine, Deep Belief Network, Dropout technique, t-SNE for data visualization, Capsule Networks, and Knowledge Distillation, among others [14]. - His collaboration on AlexNet, which won the ImageNet competition in 2012, is considered a landmark moment that demonstrated the power of deep learning [16]. - The paper "Deep Learning," co-authored by Hinton, has garnered over 100,000 citations, summarizing the evolution and principles of deep learning [16]. Group 3: Personal Background and Career - Born into an academic family, Hinton's early life was marked by high expectations, which shaped his relentless pursuit of knowledge [5][8]. - He moved to Canada in the 1980s, where he established a long-term academic career at the University of Toronto, contributing significantly to the development of AI in Canada [9]. - Hinton's later years have seen him express concerns about the potential risks of AI, emphasizing the need for caution in its development [20]. 
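Of the contributions listed above, dropout is simple enough to sketch. This is a minimal illustration of the idea (the drop probability, seed, and activation values are invented for the example; real frameworks apply the mask only during training, exactly as the `training` flag does here):

```python
import random

def dropout(values, p=0.5, training=True, seed=None):
    """Zero each activation with probability p during training, scaling
    survivors by 1/(1-p) so the expected sum is unchanged ("inverted
    dropout"). At inference time, return the activations untouched."""
    if not training:
        return list(values)
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else v / (1 - p) for v in values]

activations = [0.2, 1.5, 0.7, 0.9]
print(dropout(activations, p=0.5, seed=0))  # → [0.4, 3.0, 0.0, 0.0]
```

By randomly silencing units, dropout prevents co-adaptation between neurons, which is why it became a standard regularizer after Hinton's group introduced it.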
Group 4: Legacy and Impact - Hinton's citation milestone reflects not only his individual achievements but also the collaborative efforts of his students, Alex Krizhevsky and Ilya Sutskever, who have also made significant contributions to AI [29]. - The historical context of Hinton's work illustrates the broader narrative of humanity's quest to understand intelligence, highlighting the transformative impact of his research on modern AI [31].