Convex Optimization
Weijie Su Wins the 2026 COPSS Presidents' Award, the "Nobel Prize of Statistics", as the First Chinese Recipient in 14 Years
机器之心· 2026-02-07 04:09
机器之心 Editorial Team — After a gap of 14 years, the COPSS Presidents' Award, often called the "Nobel Prize of Statistics", has once again gone to a Chinese recipient. The 2026 award was given to Weijie Su, a Peking University alumnus and currently an associate professor at the University of Pennsylvania. The award committee's citation reads: "For establishing rigorous statistical foundations for several applications of large language models; for breakthrough advances in privacy-preserving data analysis, successfully applied to the 2020 U.S. Census; for designing a peer-review mechanism for top AI conferences, officially deployed at ICML 2026; for foundational research in convex optimization; and for broad and far-reaching contributions to the mathematical theory of deep learning and to high-dimensional statistical inference." As one of the highest honors in international statistics and data science, the COPSS Presidents' Award holds a status in statistics comparable to that of the Fields Medal in mathematics, and is given each year to a single statistician under the age of 40. It is jointly selected by five leading statistical societies (the Institute of Mathematical Statistics IMS, the American Statistical Association ASA, the Statistical Society of Canada SSC, and the Eastern and Western North American Regions of the International Biometric Society, ENAR and WNAR), and recognizes scholars who have made outstanding contributions to statistical theory, methods, or applications. Historically, COPSS winners have almost all gone on to become defining figures of the field. Statistics is a discipline in which Chinese scholars excel, and several have previously won the COPSS Award, including the recently returned ...
Truly PhD-Level: GPT-5 Gives the First Explicit Convergence Rates for the Fourth Moment Theorem, With Only Light Guidance From Math Professors
36Ke· 2025-09-10 09:32
Core Insights
- GPT-5 successfully extended the qualitative fourth moment theorem to a quantitative form with explicit convergence rates, marking a significant advance in mathematical research [1][6][8].

Group 1: Research Achievements
- OpenAI's GPT-5 Pro improved a known step-size bound in convex optimization from 1/L to 1.5/L within minutes [6].
- The study, led by three mathematics professors, tested GPT-5's ability to generalize the qualitative fourth moment theorem to include explicit convergence rates, covering both the Gaussian and Poisson cases [8][14].

Group 2: Interaction with Researchers
- In the initial interaction, GPT-5 reached a correct overall conclusion but made reasoning errors that could have invalidated the proof; these were corrected through further questioning by the researchers [10][12].
- GPT-5 was able to format the results into a research paper, including an introduction, main theorem statements, detailed proofs, and references, demonstrating its capability in academic writing [12].

Group 3: Further Exploration
- The researchers sought to extend the findings to the Poisson case, prompting GPT-5 to recognize the structural differences between the Gaussian and Poisson settings [14][15].
- After initial missteps, GPT-5 was guided to account for non-negativity in the Poisson case, leading to a more accurate reformulation of the theorem [16][17].

Group 4: Publication Challenges
- The authors initially intended to list GPT-5 as a co-author but were informed by arXiv that an AI cannot be credited as an author, so the paper was submitted without GPT-5's name [18].
Truly PhD-Level! GPT-5 Gives the First Explicit Convergence Rates for the Fourth Moment Theorem, With Only Light Guidance From Math Professors
量子位· 2025-09-10 08:01
Core Insights
- GPT-5 successfully extended the qualitative fourth moment theorem to a quantitative form with explicit convergence rates, marking a significant advance in mathematical research [1][2][10].

Group 1: Research Achievements
- The original theorem established that convergence occurs but did not specify its speed; GPT-5's contribution clarifies this aspect [2].
- OpenAI co-founder Greg Brockman expressed satisfaction with the progress made using GPT-5 in mathematical research [4].
- GPT-5 Pro improved a known step-size bound in convex optimization from 1/L to 1.5/L within minutes, showcasing its capabilities [8].

Group 2: Research Methodology
- Three mathematics professors ran a controlled experiment in the Malliavin–Stein framework to test GPT-5's ability to generalize the fourth moment theorem [9][10].
- The initial prompts were based on a paper establishing a qualitative fourth moment theorem for pairs of Wiener–Itô integrals of differing parity [11].
- GPT-5 gave a generally correct conclusion but made reasoning errors that could have jeopardized the proof's validity [13][14].

Group 3: Iterative Improvement
- Upon identifying the errors, the researchers prompted GPT-5 to check its formulas and provide detailed derivations, leading to further corrections [15].
- GPT-5 formatted the results into a research-paper structure, including an introduction, main theorem statements, and a complete proof [17].
- The AI suggested that the method could extend to non-Gaussian frameworks, indicating potential for broader applications [20].

Group 4: Further Exploration
- The researchers aimed to extend the findings to the Poisson case, recognizing the structural differences between the Gaussian and Poisson settings [21][24].
- GPT-5 initially overlooked a critical non-negativity fact in the Poisson case but corrected itself after specific guidance from the researchers [26][28].

Group 5: Publication Challenges
- The authors initially intended to list GPT-5 as a co-author but were informed by arXiv that an AI cannot be credited as an author [29].
- The paper was ultimately submitted without GPT-5 listed as an author, reflecting ongoing discussion of AI's role in academic contributions [30].
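For context (this bound is not quoted in the articles above, and the exact constant should be checked against the literature): the classical quantitative fourth moment theorem of Nourdin and Peccati already gives an explicit rate on a single fixed Wiener chaos. A standard form of the bound, for F in the q-th Wiener chaos with E[F²] = 1 and N a standard normal, reads:

```latex
% Classical quantitative fourth moment theorem (Nourdin–Peccati):
% F in the q-th Wiener chaos, \mathbb{E}[F^2] = 1, N \sim \mathcal{N}(0,1)
d_{\mathrm{TV}}(F, N) \;\le\; 2\sqrt{\frac{q-1}{3q}}\,\sqrt{\mathbb{E}[F^4] - 3}
```

So convergence of the fourth moment to 3 (the fourth moment of the standard normal) controls the rate of convergence to normality; the work described in these articles concerns rates of this type in more general settings, such as mixed-parity Wiener–Itô integrals and the Poisson case.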
GPT-5 Pro Does Independent Math Research! After Reading a Paper, It Derived a Tighter Bound; OpenAI President: This Is a "Sign of Life"
量子位· 2025-08-21 04:23
Core Viewpoint
- The article discusses OpenAI's GPT-5 Pro independently exploring and proving mathematical results, specifically in convex optimization, highlighting its potential as a significant breakthrough in AI research [1][9][42].

Group 1: GPT-5 Pro's Achievements
- GPT-5 Pro provided a tighter threshold, with a corresponding proof, for a step-size boundary problem in convex optimization than the original paper [2][26].
- Using more refined inequality techniques, the model improved the bound from 1/L to 1.5/L in just 17.5 minutes, while human verification of the proof took 25 minutes [27][28].
- OpenAI's president called the achievement a "sign of life," pointing to the model's advanced capabilities [9].

Group 2: Convex Optimization Insights
- The original paper, "Are Optimization Curves Convex?", asks whether the optimization curve generated by gradient descent on smooth convex functions is itself convex [10][11].
- The paper concludes that convexity of the optimization curve depends on the choice of step size, with specific ranges guaranteeing convexity [14][17].
- Key findings: for step sizes in (0, 1/L], the optimization curve is guaranteed to be convex, while for step sizes in (1.75/L, 2/L) it may fail to be convex even though gradient descent still converges [17][26].

Group 3: Comparison of Approaches
- GPT-5 Pro's proof approach differed from the one in the updated version of the original paper, demonstrating its ability to independently discover and prove mathematical results [41][42].
- The original authors later updated their paper to establish 1.75/L as the exact threshold, closing the previously unresolved interval [41][42].
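The notion of a "convex optimization curve" can be made concrete with a minimal sketch (the test function, step size, and helper names here are illustrative choices, not taken from the paper): run gradient descent on an L-smooth convex function and check that the sequence of function values f(x₀), f(x₁), … has non-negative second differences, which is the discrete analogue of convexity.

```python
import numpy as np

def gd_curve(f, grad, x0, eta, steps):
    """Run gradient descent and return the optimization curve f(x_0), ..., f(x_steps)."""
    x = x0
    vals = [f(x)]
    for _ in range(steps):
        x = x - eta * grad(x)
        vals.append(f(x))
    return np.array(vals)

def is_convex_sequence(vals, tol=1e-12):
    """A sequence v is convex iff v[k+1] - 2*v[k] + v[k-1] >= 0 for all interior k."""
    second_diff = vals[2:] - 2 * vals[1:-1] + vals[:-2]
    return bool(np.all(second_diff >= -tol))

# Illustrative 1-D example: f(x) = 0.5 * L * x^2 is L-smooth and convex.
L = 1.0
f = lambda x: 0.5 * L * x ** 2
grad = lambda x: L * x

# Step size 0.8/L lies in (0, 1/L], the regime the paper guarantees is convex.
vals = gd_curve(f, grad, x0=5.0, eta=0.8 / L, steps=50)
print(is_convex_sequence(vals))  # → True
```

Note that a simple quadratic cannot exhibit the non-convex behavior the paper finds in the (1.75/L, 2/L) regime (its curve is a geometric sequence, which is always convex); the paper's counterexamples rely on more carefully constructed smooth convex functions.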