Internet celebrity "Oxford Girl" Zhu Wenqi returns to Shenzhen as a startup CEO
Nan Fang Du Shi Bao · 2025-12-08 23:12
Zhu Wenqi's master's graduation photo. Photo courtesy of the interviewee.

Now 31, Zhu Wenqi is about to complete her doctorate in the mathematics department at the University of Oxford, and she has set herself a new task: returning to China to start a business. In November, she chose to base her company in Shenzhen's Nanshan district, becoming the CEO of a startup.

Self-study: "My mother helped me understand math and fall in love with it"

Nandu: You mentioned that you dropped out of primary school at age 10. How did you organize your self-study back then?

Zhu Wenqi: I grew up in Shenzhen and attended a public primary school when I reached school age. My grades went up and down a lot; at my worst I scored 36 in English and 45 in math.

My mother graduated from the Special Class for the Gifted Young at the University of Science and Technology of China and worked as a teacher for a few years after graduating. During the period when she taught me full-time at home, I mostly studied English and math. My daily schedule was also different from my peers', a bit like the "secluded training" in wuxia novels: I usually started studying at 10 a.m., went out for lunch with my mother at noon, then came back and studied until 5 p.m.

Many people ask whether studying at home for two years and then going straight to high school meant I covered three years of material in two, as if I had taken three times the coursework. Not really. In her teaching, my mother looked back at the material from a different angle; she was good at turning math back into something very natural, using everyday examples to tie several concepts together.

For example, she would start from "the square root of 4 is ±2", then use "earning 4 yuan is +4, paying 4 yuan is -4" to help me understand positive and negative numbers. Next she would ask, "Then can -4 have a square root?" and ...
Peking University alumnus and Chinese scholar Chi Jin takes on a new role: tenured associate professor at Princeton University
机器之心 · 2025-10-04 05:30
Core Insights
- Chi Jin, a Chinese scholar, has been promoted to tenured associate professor at Princeton University, effective January 16, 2026, a significant milestone in his academic career and a recognition of his foundational contributions to machine learning theory [1][4].

Group 1: Academic Contributions
- Jin joined Princeton's Department of Electrical and Computer Engineering in 2019 and has rapidly gained influence in the AI field over his six years there [3].
- His work addresses fundamental challenges in deep learning, in particular why simple optimization methods such as stochastic gradient descent (SGD) work so well in non-convex settings [8][12].
- His research has laid theoretical foundations for two core problems: training large, complex models efficiently, and ensuring those models are reliable and beneficial when interacting with humans [11].

Group 2: Non-Convex Optimization
- A central difficulty in deep learning is non-convex optimization: loss functions have many local minima and saddle points, which complicates the optimization process [12].
- Across several papers, Jin showed that even simple gradient methods can escape saddle points efficiently when a small amount of noise is added, allowing the iterates to keep moving toward better solutions [12][17] (see the first sketch below).
- These results provide a theoretical basis for the practical success of deep learning and ease concerns about the robustness of optimization in large-scale model training [18].

Group 3: Reinforcement Learning
- Jin's research has also significantly advanced reinforcement learning (RL), particularly by establishing sample efficiency, which is crucial for applications where interaction is costly [19].
- He has proved rigorous regret bounds for foundational RL algorithms, showing that model-free methods such as Q-learning can remain sample-efficient even in complex settings [22] (see the second sketch below).
- This theoretical groundwork not only answers academic questions but also guides the design of more robust RL algorithms for deployment in high-stakes applications [23].

Group 4: Academic Background
- Jin holds a bachelor's degree in physics from Peking University and a Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley, where he was advised by the renowned professor Michael I. Jordan [25].
- This background gave him the strong mathematical and analytical foundation essential for his theoretical research in AI and machine learning [25].

Group 5: Recognition and Impact
- Jin, along with other scholars, received a 2024 Sloan Research Fellowship, highlighting his contributions to the field [6].
- His papers have accumulated 13,588 citations on Google Scholar, indicating the impact of his research in the academic community [27].
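To make the saddle-point result more concrete, here is a minimal Python sketch of the idea behind perturbed gradient descent, in the spirit of Jin et al.'s "How to Escape Saddle Points Efficiently": when the gradient becomes very small, inject a small random perturbation so the iterate can slide off a strict saddle along a negative-curvature direction. The function name, constants, and toy objective below are illustrative choices, not taken from the paper.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.05, noise_radius=1e-2,
                               grad_tol=1e-3, max_iters=5_000, seed=0):
    """Gradient descent with occasional random perturbations.

    Whenever the gradient is nearly zero (a candidate saddle point), a small
    random step is taken; near a strict saddle, the noise plus further
    gradient steps carry the iterate away along a descent direction. This is
    only a sketch: the published algorithm perturbs sparingly and uses a
    function-decrease test to certify approximate second-order stationarity.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < grad_tol:
            # Candidate saddle: take a small step in a uniformly random direction.
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            x = x + noise_radius * rng.uniform() * direction
        else:
            x = x - eta * g
    return x

# Toy objective f(x, y) = x^2 + y^4/4 - y^2/2: a strict saddle at the origin,
# minima at (0, 1) and (0, -1). Plain gradient descent started at the origin
# stays stuck there; the perturbed version drifts to one of the minima.
grad_f = lambda v: np.array([2.0 * v[0], v[1] ** 3 - v[1]])
print(perturbed_gradient_descent(grad_f, x0=[0.0, 0.0]))
```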
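For the sample-efficiency result on Q-learning, the second sketch follows the shape of the optimistic, bonus-based update analyzed in Jin et al.'s "Is Q-Learning Provably Efficient?": an optimistically initialized tabular Q-function updated with learning rate (H+1)/(H+t) and an exploration bonus on the order of sqrt(H^3 * iota / t). The constants, the toy random MDP, and the env_reset/env_step interface are assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

def q_learning_ucb(env_reset, env_step, S, A, H, K, c=1.0, p=0.05):
    """Tabular episodic Q-learning with a UCB-style exploration bonus (sketch).

    S, A: number of states and actions; H: horizon; K: number of episodes.
    The bonus keeps Q optimistic, which drives exploration of rarely visited
    (h, s, a) triples; this is an illustrative sketch, not the paper's code.
    """
    Q = np.full((H, S, A), float(H))      # optimistic initialization
    N = np.zeros((H, S, A), dtype=int)    # visit counts
    iota = np.log(S * A * H * K / p)      # log factor used in the bonus
    for _ in range(K):
        s = env_reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))                 # greedy w.r.t. optimistic Q
            s_next, r = env_step(h, s, a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)                   # decaying learning rate
            bonus = c * np.sqrt(H ** 3 * iota / t)      # exploration bonus
            v_next = 0.0 if h == H - 1 else min(H, Q[h + 1, s_next].max())
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + v_next + bonus)
            s = s_next
    return Q

# Toy random MDP, purely for a smoke test of the update rule.
S, A, H, K = 5, 3, 4, 2000
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # transition kernel P[h, s, a] over next states
R = rng.uniform(size=(H, S, A))                 # mean rewards in [0, 1]

env_reset = lambda: 0
env_step = lambda h, s, a: (int(rng.choice(S, p=P[h, s, a])), R[h, s, a])

Q = q_learning_ucb(env_reset, env_step, S, A, H, K)
print(Q[0, 0])  # learned (optimistic) Q-values for the initial state
```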