AI Humanities Training

Does AI Sound Too Much Like AI? A New Kind of Training Is Tackling the Problem
36Kr · 2025-06-04 12:52
Core Insights
- The article discusses the evolving role of AI trainers, particularly in the context of enhancing AI's ability to understand and express human emotions and values, moving beyond mere factual accuracy to a more nuanced interaction with users [1][10][12]

Group 1: AI Training and Human Interaction
- AI models are currently focused on improving their intelligence by mastering standard answers, but many real-world questions lack definitive answers, necessitating a deeper understanding of human preferences and emotions [2][5]
- The emergence of AI trainers, particularly those with humanities backgrounds, signifies a shift towards training AI to better perceive and respond to complex human emotions and ethical dilemmas [6][10]
- The role of AI trainers is evolving from basic data labeling to creating ethical guidelines and human-like responses, indicating a growing recognition of the importance of human values in AI development [8][10][13]

Group 2: Challenges in AI Responses
- AI struggles with sensitive topics, such as health issues, where responses can feel mechanical and lack empathy, highlighting the need for more human-like interaction [5][17]
- Ethical dilemmas, such as the classic trolley problem, illustrate the complexity of programming AI to navigate moral boundaries, as there are no universally correct answers [4][16]
- The challenge of using appropriate pronouns in AI responses reflects broader issues of inclusivity and sensitivity in AI communication, which are still under discussion [3][17]

Group 3: The Future of AI Training
- The demand for AI trainers with strong humanities backgrounds is increasing, as companies seek to bridge the gap between machine logic and human emotional understanding [10][11]
- The concept of "post-training" is gaining traction, where AI is continuously improved through the integration of high-quality data and alignment with human values [9][10]
- The emergence of specialized roles, such as "human-AI interaction trainers," indicates a trend towards creating more engaging and responsible AI systems [10][11]
Where Does a Large Model's "Human Touch" Come From?
虎嗅APP· 2025-05-27 11:37
This article comes from the WeChat public account AI故事计划 (AI Story Project); author: Li Yixuan, editor: Wen Lihong. Original title: "I, a Humanities Graduate, Teach AI to Answer Questions That Have No Standard Answers." Header image: Visual China.

Yushan spent 10 years studying philosophy at Fudan University. This May, he passed his dissertation defense and was preparing the paperwork for the conferral of his doctorate. While weighing his options after graduation, he happened to see a recruiting notice on Xiaohongshu's official site for a position called "AI humanities trainer." Yushan submitted his resume on the spot, and a thought surfaced: the AI industry had finally reached the stage where it needed humanities researchers.

Humanities training for AI falls under model "post-training." Placing a distinct humanities emphasis within post-training is not yet standard industry practice, but two companies are worth noting. One is Anthropic, a leading global large-model company, which hired a philosophy PhD to oversee human-value alignment and fine-tuning in post-training. Domestically, word spread early this year that DeepSeek had recruited students from Peking University's Chinese department to serve as "data know-it-alls" (数据百晓生) doing post-training on its models; this is widely credited as the source of DeepSeek's distinctive literary flair.

Only after joining did Yushan learn that Xiaohongshu's team had itself been formed only recently. His colleagues were few, but all were master's and doctoral students in the humanities from well-known universities.

The team's first task was to design the AI's values and personality.

It sounds abstract. The first problem Yushan encountered was: how should the model respond to "I have pancreatic cancer"? If you send that sentence to the mainstream AI products on the market ...