《关于加强科学技术伦理治理的指导意见(征求意见稿)》 (Guiding Opinions on Strengthening Science and Technology Ethics Governance, Draft for Comment)
Guiding Opinions on Strengthening Science and Technology Ethics Governance (Draft for Comment): translation. The following Chinese policy document, drafted in 2021, outlines a basic framework for evaluating ethical issues related to scientific research. The policy calls for ethics reviews of research that could endanger human life, social stability, personal privacy, or the environment, and, to a certain extent, animal welfare. A CSET translation of the final version of this document, which closely resembles this draft version, is in preparation.
Title: Guiding Opinions on Strengthening Science and Technology Ethics Governance (Draft for Comment)
Source: MOST website, July 28, 2021.
Chinese text: The Chinese source text is available online at: https://www.most.gov.cn/tztg/202107/W020210728538739828898.docx
An archived version of the Chinese source text is available online at: https://perma.cc/P3KS-24US
US$1 is approximately equal to 7.2 Chinese yuan renminbi (RMB) as of March 13, 2025.
Translation date: March 13, 2025
Translator: Etcetera Language Group ...
China Renewable Energy: Polysilicon, Wafer, Solar Cell and Solar Glass Prices Edged Up in January but Still at Losses
Mild rise of polysilicon prices amid supply cut – The average market price of rod-type polysilicon rose 2-3% from Rmb36.5-40.6/kg to Rmb37.2-41.7/kg in January, while that of granular silicon also edged up 3% from Rmb38/kg to Rmb39/kg in the month, per price data from the China Silicon Industry Association. According to the Association, PRC monthly polysilicon output dropped 43.4% yoy and 6.6% mom to 970k MT in January. Most polysilicon suppliers are actively fulfilling the commitments of industry self-regu ...
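As a quick arithmetic check, the short Python sketch below recomputes the month-on-month percentage changes implied by the prices quoted above. The helper function and printout are illustrative assumptions, not anything from the report itself.

```python
# Minimal sketch: verify the January price moves quoted in the excerpt above.
# Prices are in RMB per kg, taken directly from the quoted market ranges.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Rod-type polysilicon: low and high ends of the quoted market range.
rod_low = pct_change(36.5, 37.2)   # ~1.9%
rod_high = pct_change(40.6, 41.7)  # ~2.7%

# Granular silicon: single quoted average price.
granular = pct_change(38.0, 39.0)  # ~2.6%

print(f"Rod-type polysilicon: {rod_low:.1f}% to {rod_high:.1f}%")
print(f"Granular silicon: {granular:.1f}%")
```

The low and high ends of the rod-type range work out to roughly 1.9% and 2.7%, and granular silicon to roughly 2.6%, consistent with the reported 2-3% and ~3% rises.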
FAQ: Debt Ceiling – Abolish vs. Increase
... the x-date have the potential to create significant risks across many markets. These ... Eliminating the debt ceiling would not authorize new spending, nor would it cost ... Indeed, these risks were emphasized by Treasury Secretary Yellen in a letter to the US ... Although the two policy debates can be ...
National Standard of the People's Republic of China: Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence Services (Draft for Feedback)
Core Viewpoints
- The draft national standard aims to enhance the security of generative AI services by addressing cybersecurity issues, with a primary focus on preventing AI systems from generating content deemed offensive by the Communist Party, such as pornography, bullying, hate speech, defamation, copyright infringement, and criticism of the Party's monopoly on power [1][12]
- The standard provides comprehensive security requirements for generative AI services, covering training data security, model security, and security measures, and is applicable to service providers conducting security assessments and relevant regulatory authorities [38][39]
Scope and Overview
- The document outlines the basic security requirements for generative AI services, including training data security, model security, and security measures, and provides security assessment requirements [38]
- It aims to help service providers establish a cybersecurity baseline for generative AI services and improve service security levels by addressing key issues such as cybersecurity, data security, and personal information protection throughout the service lifecycle [46]
Training Data Security Requirements
- Data source security: Service providers must conduct security assessments of data sources before collection and verify data after collection, rejecting sources with over 5% illegal or unhealthy information (a minimal illustrative sketch of this check follows this summary) [48][49]
- Data content security: Training data must be filtered for illegal and unhealthy information before use, and intellectual property rights must be managed to avoid infringement risks [62][63]
- Data annotation security: Annotators must undergo internal security training, and annotation rules must be detailed to ensure data accuracy and safety [68][71]
Model Security Requirements
- Model training: The safety of generated content should be a primary evaluation metric during training, and regular security audits of development frameworks and code are required [75][76]
- Model output: Technical measures should be implemented to improve the accuracy and reliability of generated content, and models should refuse to answer questions that induce illegal or unhealthy information [78][79]
- Model monitoring: Continuous monitoring of model inputs is necessary to prevent malicious attacks, and a standardized monitoring and evaluation system should be established [81]
Security Measures Requirements
- Service applicability: The necessity, applicability, and safety of generative AI services in various fields must be fully demonstrated, with additional security measures for critical scenarios such as medical and financial services [87]
- Service transparency: Information about service applicability, scenarios, and purposes should be disclosed prominently, and user input collection for training purposes should be optional and easy to disable [88][91]
- Public and user complaints: Service providers must provide channels for public and user complaints and establish rules and timelines for handling them [93]
Appendices
- Appendix A lists major security risks related to training data and generated content, including violations of socialist core values, discriminatory content, commercial violations, and infringement of legal rights [99][100][102][104]
- Appendix B provides key points for security evaluation, including the construction of keyword libraries, test question banks for generated content, and classification models for filtering and evaluating security risks [108][109][114]
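The 5% data-source rejection threshold and the Appendix B keyword-library screening are the two most mechanical requirements in this summary. The short Python sketch below illustrates how such checks might look in practice; the function names, sample data, and keyword list are hypothetical illustrations and are not taken from the standard, which does not prescribe an implementation.

```python
# Hypothetical sketch of two checks described in the summary above:
# (1) rejecting a training data source whose sampled share of illegal or
#     "unhealthy" content exceeds 5%, and
# (2) screening generated text against a keyword library before release.

ILLEGAL_CONTENT_THRESHOLD = 0.05  # the "over 5%" rejection threshold from the summary


def should_reject_source(sampled_items: list[str], is_illegal) -> bool:
    """Reject a data source if the share of flagged items in a sample exceeds 5%."""
    if not sampled_items:
        return False
    flagged = sum(1 for item in sampled_items if is_illegal(item))
    return flagged / len(sampled_items) > ILLEGAL_CONTENT_THRESHOLD


def violates_keyword_library(generated_text: str, keyword_library: set[str]) -> bool:
    """Flag generated content that contains any entry from the keyword library."""
    text = generated_text.lower()
    return any(keyword.lower() in text for keyword in keyword_library)


if __name__ == "__main__":
    # Illustrative stand-ins for a real content classifier and keyword library.
    sample = ["ok text"] * 97 + ["flagged text"] * 6
    print(should_reject_source(sample, lambda s: s.startswith("flagged")))  # True (6/103 ~ 5.8%)
    print(violates_keyword_library("harmless answer", {"banned-term"}))      # False
```

Appendix B pairs keyword libraries with test question banks and classification models, so a simple check like this would be only one layer of the evaluation described above.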
Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning (English)
Issue Brief
Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning
Authors: Tim G. J. Rudner, Helen Toner ...