程序员的那些事
"A colleague died suddenly from overwork and was forgotten within a month"! A 40-something veteran big-tech programmer volunteers to be laid off: an Ivy League graduate earning a million a year, squeezed completely dry
程序员的那些事· 2026-01-14 04:11
Reposted from CSDN (ID: CSDNnews). Three weeks ago, a former engineer at a tech giant going by the ID Asian Dad Energy (ADE for short) posted his first video on YouTube. Instead of showing off luxury cars or a filtered freelance lifestyle, he sat in front of the camera and said something extremely calm yet chilling: "Two weeks ago, I was laid off by the big company I had worked at for years. It was the first layoff of my 25-year career." This Ivy League graduate, holder of multiple patents, and 25-year veteran of the tech industry, now in his forties, volunteered to put himself at the front of the layoff list in exchange for keeping several younger members of his team. "Right now I feel like a train that ran on a fixed track for 25 years and suddenly derailed into unfamiliar wilderness." Although he is technically an "unemployed man" standing at a crossroads in life, ADE remains optimistic, using the video to record a retrospective on his 25 years in tech and to voice the anxieties, sacrifices, and realities that no one talks about. From a rookie who "made a fool of himself" to a technical backbone of the team: ADE got into technology early. He built PCs and wrote code in high school, then took on internships and freelance work in college, writing VB, scripts, and all sorts of small tools. But these "small-time" efforts, once put to the real test ...
Plot twist! Overseas, the "delivery riders trapped by the algorithm" exposé is confirmed to be an AI hoax; in China, the "delivery platforms steal time" claim is debunked
程序员的那些事· 2026-01-14 04:11
Core Viewpoint
- A recent Reddit post by a self-identified delivery platform backend developer claimed that algorithms exploit delivery workers, gaining significant attention with over 84,000 upvotes and millions of reads online [1]
Group 1
- The post sparked discussions on platforms like HN and X, leading to a public response from DoorDash co-founder Tony Xu [2]
- Initially, the online community believed the allegations due to the context of gig economy exploitation [2]
Group 2
- A journalist from Platformer, Casey Newton, contacted the whistleblower and identified inconsistencies, such as a misspelling of "information" that raised suspicions [3]
- The whistleblower's provided Uber Eats employee ID was confirmed to be AI-generated, with Uber stating that such IDs do not exist [4]
- The 18-page "confidential technical document" claimed to support the allegations but was found to be illogical and focused more on regulatory matters than technical architecture, leading to the conclusion that the entire claim was a fabricated AI hoax, corroborated by another outlet, The Verge [6]
Group 3
- A recent update revealed that a viral claim about delivery platforms having "42 seconds in a minute" was debunked as a rumor, with the individual responsible facing administrative penalties for spreading false information [9]
Funny pic: A pretty colleague quit, and a programmer kept staring blankly at her empty desk. I thought I understood why, so I told him...
程序员的那些事· 2026-01-13 23:52
Folks, the day before yesterday we discussed the top 10 signs a programmer is paying out of pocket to go to work, one of which was bringing your own equipment. Learn the trick in this issue's funny pic and you can save yourself some money. Did you pick it up? Past funny pics (click the images below to jump to them) ...
Father of Redis: Programmers don't have to hand-roll every line of code; embracing AI is a better way to stay true to your original passion
程序员的那些事· 2026-01-13 23:52
Core Viewpoint
- The article emphasizes that AI has fundamentally transformed the programming industry, enabling large language models (LLMs) to independently complete medium to large projects, significantly reducing the time required for coding tasks. The author encourages programmers to embrace AI rather than resist it, despite concerns about job displacement and technological centralization [4][6][8]
Group 1: AI Reshaping Programming
- AI is changing programming at an unprecedented speed, with LLMs capable of completing tasks that previously took weeks in just a few hours [4][6]
- The effectiveness of AI in programming is highly dependent on the clarity of the prompts provided by users, particularly for independent programming tasks [4][6]
Group 2: Practical Applications of AI
- The author shares personal experiences where AI tools enabled the completion of four significant tasks in just a few hours, tasks that would typically require weeks of work [6]
- Specific examples include modifying libraries, fixing complex bugs, and creating a C language model, showcasing the efficiency of AI in coding [6]
Group 3: Opportunities and Concerns
- The article discusses the dual nature of AI's impact: while it democratizes programming by allowing smaller teams to compete with larger companies, there are concerns about the potential for technological monopolization [7][8]
- The author expresses hope for the continued democratization of AI technology, despite worries about the concentration of power among a few companies [7]
Group 4: Advice for Programmers
- Programmers are encouraged to embrace AI tools and integrate them into their workflows, while also being mindful of the potential for job displacement in the industry [8][9]
- The article suggests that programmers should actively explore and test new AI tools to enhance their skills and adapt to the changing landscape of the industry [8][9]
Breaking! "死了么" suddenly announces a name change. Netizens: it's instantly less fun, now just wait for it to flop
程序员的那些事· 2026-01-13 15:44
Group 1
- The article discusses the recent name change of an app, which has sparked mixed reactions among users, with some expressing confusion and disappointment over the decision [1]
- Users believe that the app's previous name contributed significantly to its popularity, and changing it may negatively impact its user base [1]
- A minority of comments in the discussion focus on the new name "Demumu," indicating some level of engagement with the rebranding [1]
Ctrip HR mistakenly sends company-wide termination letters? Hundreds of employees rush to update their resumes overnight
程序员的那些事· 2026-01-13 09:15
Group 1
- The core incident involved a mistaken mass termination notification sent to hundreds of employees at Ctrip, causing panic among staff who received messages indicating they had been laid off [1][3]
- The erroneous messages were primarily sent from the HR department of the HBU hotel business unit, leading to widespread concern and discussions among employees on internal chat platforms [3]
- Ctrip clarified that the incident was due to an HR operational error during a transition from the internal communication software Trappal to Feishu, where a pre-set template for termination notifications was mistakenly activated [3]
It's blowing up! Users had sexually explicit chats with an AI, and the developers were convicted at first instance!
程序员的那些事· 2026-01-13 09:15
Because large numbers of users were having sexually explicit chats with AI agents on the app, its main developers and operators were held criminally liable. In September 2025, the Xuhui District People's Court in Shanghai ruled at first instance that the two defendants had committed the crime of producing obscene materials for profit, sentencing them to four years and eighteen months in prison respectively. This is the first case in China in which an AI service provider has been convicted on obscenity charges. The app involved, Alien Chat, is an AI companion chat application positioned to offer intimate companionship and emotional support to young users. According to the judgment, the AC app had 116,000 registered mobile users, of whom 24,000 were paying users, and had collected more than 3.63 million yuan in membership fees by the time the case was filed. After registering as members, users could create their own virtual characters or use characters created and shared by others, and chat with a large language model through the app. In the comments section of one of AC's official social media accounts, many users described the AI product as "smart" and "lightly restricted," while others complained that users having explicit chats with the characters had "corrupted" the AI model and degraded their own chat experience. The case has sparked heated debate online. The first-instance court held that AC had produced "a large amount of content that concretely depicts sexual acts or explicitly promotes pornography," which constitutes obscene material. The two defendants appealed the verdict, and the second-instance hearing will open on January 14 at the Shanghai No. 1 Intermediate People's Court. (Reposted from The Beijing News) What do you all think? ...
Goodbye, dumb Siri! Apple spends a billion dollars to "marry" Siri to Gemini, and Musk isn't happy about it
程序员的那些事· 2026-01-13 03:48
Core Insights
- Apple and Google have announced a multi-year partnership where Apple will pay Google approximately $1 billion annually for a customized version of the Gemini 3 Pro model, which has 1.2 trillion parameters, eight times that of Apple's current cloud model, enhancing Siri's capabilities significantly [1]
- The new Siri, powered by Gemini, is expected to launch in Spring 2026 with iOS 26.4, providing Apple with a 2-3 year breathing space while its own large model is still in development [2]
- This collaboration allows Google to integrate its AI into over 2 billion Apple devices, further solidifying its presence in both major mobile operating systems globally, contributing to a market valuation exceeding $4 trillion [2]
Summary by Sections
- Partnership Details: Apple will utilize Google's Gemini model to enhance Siri, with a significant financial commitment of $1 billion per year for a customized version [1]
- Strategic Implications: The partnership provides Apple with a temporary solution while it develops its own AI capabilities, and it allows Google to expand its AI reach across major platforms [2]
- Market Reactions: Elon Musk criticized the partnership as a concentration of power, while OpenAI, a former collaborator, has become a background player in this scenario [2]
Just now: Liang Wenfeng is listed as an author on an open-sourced "memory" module, revealing more details of DeepSeek V4
程序员的那些事· 2026-01-13 00:56
Core Insights
- DeepSeek has introduced a new research paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models," in collaboration with Peking University, focusing on enhancing large language models (LLMs) through conditional memory and a new module called Engram [1][3][4]
Group 1: Research Background and Problem Statement
- Current large language models primarily utilize Mixture of Experts (MoE) for sparsity, but existing Transformer architectures lack native knowledge retrieval mechanisms, leading to inefficient simulation of retrieval behavior [3][9]
- DeepSeek proposes conditional memory as a complementary approach to MoE, introducing the Engram module to address the limitations of current models [4][9]
Group 2: Engram Module and Its Functionality
- The Engram module modernizes classic n-gram embeddings, enabling knowledge retrieval with O(1) time complexity [9]
- Engram separates static knowledge storage from dynamic computation processes, enhancing the model's ability to perform complex reasoning by offloading the reconstruction burden from the model's shallow layers [11][13]
Group 3: Performance Improvements
- Engram has been scaled to 27 billion parameters, showing significant performance improvements over pure MoE baseline models under equivalent parameter and FLOPs conditions [11]
- Notably, Engram enhances knowledge retrieval capabilities, with improvements in metrics such as MMLU (+3.4), CMMLU (+4.0), and general reasoning tasks like BBH (+5.0) and ARC-Challenge (+3.7) [11][38]
Group 4: System Efficiency and Scalability
- Engram's deterministic addressing supports prefetching from host memory at runtime with minimal performance overhead, allowing for efficient memory management [12][19]
- The architecture allows parameter storage to be decoupled from computational resources, facilitating linear scalability with the number of accelerators [21][22]
Group 5: Experimental Results
- Four models were trained: Dense-4B, MoE-27B, Engram-27B, and Engram-40B, all using the same training data and processes [35][36]
- Sparse architectures (MoE-27B, Engram-27B/40B) significantly outperformed the dense model (Dense-4B) across various benchmarks, demonstrating superior scaling properties [38]
Group 6: Long Context Training
- The Engram architecture shows significant advantages on long-context tasks by preserving valuable attention capacity for global context processing [41]
- Controlled experiments indicate that Engram outperforms MoE models in complex retrieval tasks, confirming its architectural advantage [46]
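The summary above describes Engram as a conditional-memory module that retrieves static knowledge through hashed n-gram lookup with O(1) cost and deterministic addressing. Below is a minimal, illustrative PyTorch sketch of that general idea only, not DeepSeek's actual implementation: the class name EngramSketch, the multiplicative-hash addressing, the sigmoid gate, and all hyperparameters (n, table_size, d_model) are assumptions made purely for illustration.

```python
# Hypothetical sketch of a conditional-memory lookup layer in the spirit of the
# Engram idea summarized above. NOT the paper's implementation.
import torch
import torch.nn as nn


class EngramSketch(nn.Module):
    """Hashed n-gram lookup into a large static table, mixed in via a gate."""

    def __init__(self, d_model: int, n: int = 2, table_size: int = 1 << 20):
        super().__init__()
        self.n = n
        self.table_size = table_size
        # Large static table holding "memorized" per-n-gram knowledge vectors.
        self.table = nn.Embedding(table_size, d_model)
        # Learned gate deciding how much retrieved memory to mix back in.
        self.gate = nn.Linear(d_model, 1)
        # Fixed odd multipliers for a simple multiplicative hash over n-grams.
        self.register_buffer(
            "mix", torch.randint(1, 2**31 - 1, (n,), dtype=torch.long) | 1
        )

    def ngram_ids(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq). Hash the n tokens ending at each position
        # into a table index. The indices depend only on the input IDs, so the
        # corresponding rows could be prefetched before the forward pass.
        # (torch.roll wraps around at the sequence start; a real implementation
        # would pad instead. Kept deliberately simple here.)
        h = torch.zeros_like(token_ids)
        for i in range(self.n):
            h = h + torch.roll(token_ids, shifts=i, dims=1) * self.mix[i]
        return h % self.table_size

    def forward(self, hidden: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) activations from the Transformer trunk.
        mem = self.table(self.ngram_ids(token_ids))   # O(1) lookup per position
        g = torch.sigmoid(self.gate(hidden))          # (batch, seq, 1) mixing gate
        return hidden + g * mem                       # residual memory injection


# Usage sketch:
# layer = EngramSketch(d_model=512)
# out = layer(hidden_states, input_ids)   # same shape as hidden_states
```

Because the table indices are a deterministic function of the input token IDs alone, they can be computed ahead of the forward pass, which is what makes host-memory prefetching and the decoupling of parameter storage from compute plausible in this style of design.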