AI False Advertising
Adding a "Special Protection for Elderly Consumers" Clause to the Relevant Regulations
Xin Lang Cai Jing · 2025-12-26 19:01
建"情感防火墙"。一方面,子女应努力弥补老年人的情感缺失;另一方面,社区工作人员和民警应重点 关注独居老人,建立"高风险老人"档案,强化社区网格化治理,将反诈宣传融入社区医疗和养老服务 中。 行业协会与社会组织要加强行业自治。建立保健品及直播带货的行业"黑名单"制度,对有不良记 录的主播和商家实行行业禁入。 李晓鹏:涉老骗局不仅侵害老年人财产权益,更威胁社会诚信与家庭 稳定。唯有通过"立法—监管—平台—教育"的全链条治理,方能织密老年人权益保护网,切实守护"养 老钱"。 建议在消费者权益保护法中增设"老年消费者特殊保护"条款,明确"利用情感诱导、虚构权威 等方式针对老年人实施虚假宣传"的认定标准及加重处罚规则;在《生成式人工智能服务管理暂行办 法》中细化"涉健康医疗类AI内容"的审核义务,要求提供者对生成的"专家解读""政策说明"标注"AI生 成"并附信息来源验证链接。 转自:法治日报 □ 本报记者 张守坤 □ 本报见习记者 王宇翔 免费鸡蛋的传单递到手中,直播间里"贴心儿女"嘘寒问 暖,"专家"口中的"神药"号称能根治顽疾——当这些场景在老年人的生活中频繁上演,一场针对"银发 群体"的精准围猎已悄然展开。各 ...
Why Does Predictive AI Fail So Badly?
Tencent Research Institute · 2025-11-07 08:30
In 2015, administrators at Mount St. Mary's University, a private college in Maryland, wanted to raise the school's freshman retention rate, that is, the share of incoming students who go on to complete their studies. To that end, the school launched a survey intended to identify students likely to struggle with the transition to college. At first glance this seems a laudable goal: once the students who need help have been identified, the school can offer them extra support to settle into university life. The president, however, proposed something very different: dismissing the students who were performing poorly. His reasoning was that if those students dropped out in the first few weeks of the semester rather than later, they would not be counted in the "enrolled student" statistics, and the school's retention rate would rise.

Algorithms like EAB Navigate are everywhere, built into automated processes that make important decisions about you, often without your knowledge. When you go to the hospital, an algorithm may decide whether you stay overnight for observation or go home the same day; when you apply for child benefits or other public assistance, an algorithm may judge whether your claim is valid, or even whether it looks fraudulent; when you send out a résumé, an algorithm may decide whether HR ever considers your application or screens it out; even when you go to the beach, it may be an algorithm that determines whether the water is safe to swim in.

At a faculty meeting, the president put it bluntly: "My short-term goal is to get 20 to 25 students to leave before September 25, so that I ...
A Pharmacy Used AI to Generate a Fake "Miao Remedy Inheritor" Persona to Deceive Consumers, and Was Fined
Nan Fang Du Shi Bao · 2025-10-17 14:16
According to the findings of an investigation, Miaogu Jintie (Xiamen) Pharmacy Co., Ltd. had published unreviewed internet advertisements for medical devices such as the "Miaogu Jintie Far-Infrared Therapeutic Patch," used AI in those ads to generate fake personas such as "inheritor of the thousand-year-old Miao remedy Miaogu Jintie" and "56th-generation inheritor of Miaogu Jintie," and included false claims such as "specially for the middle-aged and elderly," deceiving and misleading consumers. In July of this year, the Jimei District Market Supervision Bureau of Xiamen, Fujian Province imposed an administrative fine of 1.2 million yuan on Miaogu Jintie (Xiamen) Pharmacy Co., Ltd. and related parties.

On October 16, the State Administration for Market Regulation published a batch of typical cases of illegal internet advertising. Nandu N Video learned from the release that one company had used AI to generate a fake "inheritor" persona in its advertising, while another had used AI to imitate the likeness of a well-known TV host to promote its products; both were fined.

Beijing Xinqinghao Technology Co., Ltd. ran advertisements for "Deep-Sea Polyene Fish Oil Gel Candy," an ordinary food product, in livestreams and short videos, using AI to imitate a well-known host's likeness to pitch the product and claiming therapeutic effects such as "relieving dizziness, headaches, and numbness in the hands and feet." In June of this year, the Haidian District Market Supervision Bureau of Beijing imposed an administrative fine of 200,000 yuan on the company. ...
Fake AI Images Deceive Consumers: Do Food Delivery Merchants Still Want to Stay in This Business?
Nan Fang Du Shi Bao · 2025-08-12 16:55
Core Viewpoint - The spread of AI-generated images in food delivery services has caused significant consumer dissatisfaction and eroded trust, as many merchants use misleading visuals to attract customers [1][2][3].
Group 1: AI-Generated Images and Consumer Impact
- Many merchants on food delivery platforms use AI-generated images that do not accurately represent their actual products, creating a gap between consumer expectations and reality [1][2].
- Some merchants use AI-generated images to pass themselves off as having dine-in premises, creating a false sense of assurance about their hygiene and dining conditions [2][3].
Group 2: Regulatory and Platform Responsibilities
- The use of AI-generated images raises false-advertising concerns, but because the harm to any individual consumer is small, regulatory intervention may remain limited [3].
- Platforms have a responsibility to vet the qualifications and hygiene conditions of the merchants they host, but conflicts of interest may hinder strict oversight [3][4].
- Recent investigations found that some food delivery merchants have been operating on forged food business licenses, pointing to a broader compliance problem in the industry [4].
Group 3: Industry Implications
- The emergence of a cottage industry producing AI-generated promotional materials for food delivery merchants highlights the need for clearer rules and accountability [4].
- The ongoing problems in the sector, including misleading advertising and licensing violations, demand urgent attention from both platforms and regulators [4].
When Rumors Ride the Tailwind of "AI"
Tencent Research Institute · 2025-06-12 08:22
Group 1
- The article emphasizes the potential of the labeling system for AI-generated content to address the challenge of misinformation, positioning it as crucial front-end support for content governance [1][4]
- It notes that more than 20% of the 50 high-risk AI-related public opinion cases in 2024 involved AI-generated rumors, a significant share of the current content landscape [1][3]
- It identifies three main challenges posed by harmful AI-generated content: a lower barrier to producing it, the mass production of false information, and the greater realism of such content [3][4]
Group 2
- The introduction of a dual labeling mechanism, consisting of explicit and implicit identifiers, aims to strengthen governance of AI-generated content by covering every stakeholder in the content creation and dissemination chain [5][6]
- Explicit identifiers can reduce the persuasive power of AI-generated content, as studies show audiences rate labeled content as less accurate [6][8]
- The labeling system has limitations, including the ease of evasion, forgery, and misjudgment, which can undermine its effectiveness [8][9]
Group 3
- The article suggests integrating the labeling system into the existing content governance framework to maximize its effectiveness, with the focus on preventing confusion and deception [11][12]
- It emphasizes targeting high-risk areas such as rumors and false advertising rather than trying to cover all AI-generated content indiscriminately [13][14]
- The responsibilities of content generation and dissemination platforms should be clearly defined, taking into account the difficulty they face in accurately detecting AI-generated content [14]