AI Copyright Infringement
Seedance 2.0 Hailed as a Breakthrough, but Disney Accuses It of Infringement
程序员的那些事· 2026-02-16 00:56
Core Viewpoint
- Disney has accused ByteDance of copyright infringement over its newly launched AI video model, Seedance 2.0, which allegedly uses Disney's copyrighted content without authorization [2][3].

Group 1: Legal Accusations
- Disney's legal notice claims that Seedance 2.0 utilized unauthorized Disney copyrighted materials during its training process [3].
- The model reportedly includes a pre-set library of pirated materials featuring core IPs such as Marvel and Star Wars, allowing users to generate videos of characters like Spider-Man and Darth Vader [3].
- Disney condemned this action as treating protected commercial IP as free public-domain material, describing it as deliberate and unacceptable "virtual vandalism" [3].

Group 2: Industry Reactions
- The American Film Association and the Hollywood Actors Guild have also condemned the model, asserting that it disregards copyright rules and threatens the film-creation ecosystem [3].
- Disney has demanded that ByteDance immediately cease the infringement and refrain from future violations [3].
- In response, ByteDance has urgently restricted the generation of copyrighted characters and real-life likenesses in Seedance 2.0, although it has not yet issued a formal response [3].

Group 3: Context of the Dispute
- This incident marks Disney's quickest and most direct legal action against an AI company over copyright, although it is not the first time Disney has taken such legal steps [3].
Zheng Youde: The Copyright Crisis Triggered by AI Memory and How to Resolve It
36Ke· 2026-02-04 00:41
Core Insights
- The research from Stanford and Yale serves as both a warning and a roadmap for the AI industry, emphasizing the need for responsible, transparent, and sustainable development in the face of copyright challenges posed by generative AI (GenAI) [1][2].

Group 1: Technical Truths Revealed
- A significant study revealed that major large language models (LLMs) can reproduce copyrighted texts with over 95% accuracy, indicating a deep memory of training data [3][4].
- The study confirmed that all tested LLMs could extract long passages of copyrighted material, with Claude 3.7 showing a 95.8% extraction rate for specific works [5][6].
- The research highlighted the vulnerability of existing protective measures, as models like Gemini 2.5 Pro and Grok 3 could reproduce over 70% of copyrighted content without any circumvention [7][8].

Group 2: Industry Risk Orientation
- The AI industry faces systemic financial risks, with significant debt accumulation among major players, potentially reaching $1.5 trillion in the coming years [9][10].
- The reliance on fragile legal foundations for "fair use" raises concerns about the sustainability of the AI industry's financial ecosystem, especially if courts determine that AI operations constitute illegal copying [9][10].

Group 3: Judicial Conflicts
- There is a stark contrast in judicial interpretations between the UK and Germany regarding whether model learning constitutes copyright infringement: UK courts have denied that models store copies, while German courts have ruled otherwise [10][11].
- The German court's ruling established that memory in AI models equates to illegal storage, directly challenging the UK perspective [12][13].

Group 4: Defense Strategies
- AI developers are likely to rely on the "fair use" doctrine in the U.S. legal framework, arguing that their training practices are transformative [13][14].
- In the EU, the legal framework does not support open-ended fair use but provides statutory exemptions for text and data mining (TDM), which may not cover the extensive memory capabilities of LLMs [15][16].

Group 5: Regulatory Safety Evaluations
- The inherent memory characteristics of LLMs could lead to significant legal consequences, necessitating that AI developers take proactive measures to prevent access to copyrighted content [30][31].
- Current protective technologies are easily circumvented, raising questions about their effectiveness and the potential for models to act as illegal retrieval tools [30][31].

Group 6: Judicial Remedies and Consequences
- If AI models are determined to contain copies of copyrighted works, companies may face severe penalties, including the destruction of infringing copies and the requirement to retrain models using authorized materials [34][35].
- The legal debate centers on whether models merely contain instructions to create copies or substantively include the copyrighted works themselves, with significant implications for the AI industry's financial stability [32][34].

Group 7: Crisis Mitigation Strategies
- The AI industry must develop a comprehensive internal compliance system to address copyright risks, including stringent data sourcing and filtering mechanisms [40][41].
- Implementing a statutory licensing system and compensation mechanisms could help resolve the challenges posed by the massive data requirements of GenAI [42][43].
Writers Sue Six AI Giants Including OpenAI; Willful Infringement Could Cost up to $150,000 per Work
Xin Lang Cai Jing· 2025-12-23 02:07
Core Viewpoint
- A group of writers led by Pulitzer Prize winner John Carreyrou has filed a class-action lawsuit against six AI companies, including OpenAI, Google, Meta, Anthropic, xAI, and Perplexity AI, accusing them of "willful infringement" by training models on pirated books [1][6].

Group 1: Allegations and Legal Context
- The lawsuit claims that the six companies downloaded millions of pirated books from illegal shadow libraries like LibGen and Z-Library, using these works for training large language models and product optimization, creating an illegal closed loop of "pirated acquisition - model training - commercial monetization" [1][6].
- The plaintiffs argue that the intellectual contributions of writers support an AI ecosystem valued at tens of billions of dollars, yet they have received no compensation [1][6].
- If the jury finds the infringement to be willful, each infringing work could result in damages of up to $150,000 [2][7].

Group 2: Previous Legal Issues and Industry Impact
- OpenAI has faced at least 14 copyright lawsuits, making it a frequent target in the industry [2][7].
- The New York Times previously sued Microsoft and OpenAI for using millions of its articles to train AI models, claiming billions in damages and demanding the destruction of any AI models using its copyrighted material [2][7].
- Other companies like Google and Meta have also received cease-and-desist notices for unauthorized use of copyrighted works in AI development [9].
- Anthropic was notably ordered to pay $1.5 billion in a settlement for using pirated books to train its Claude model, with a court ruling stating that "pirated data is not subject to fair use" [9].
- The Northern District of California has accepted 25 AI copyright cases, representing over half of similar cases nationwide, and the outcomes may set important precedents for the legality of AI training data [9].
MiniMax Runs Afoul of the World's Most Formidable Legal Department
Guan Cha Zhe Wang· 2025-09-24 08:45
Core Viewpoint
- Disney, Universal Pictures, and Warner Bros. Discovery have jointly filed a lawsuit against Chinese AI company MiniMax and its international operations entity in Singapore, Nanonoble Pte Ltd, accusing them of large-scale intellectual property infringement through their product "Hailuo AI" [1][3].

Group 1: Allegations of Infringement
- The lawsuit includes 58 pieces of evidence claiming that MiniMax's "Hailuo AI" has unlawfully copied and reproduced copyrighted works during its training and generation processes, violating the U.S. Copyright Act [1][3].
- MiniMax is accused of unauthorized downloading of copyrighted works from the internet for model training, embedding core elements of these works into its AI model [9].
- The AI model can generate high-quality images and videos from simple text prompts, including copyrighted characters, leading to claims of direct infringement [11][12].

Group 2: Legal Context and Implications
- Legal experts indicate that determining whether AI model training constitutes copying or merely inspiration is complex, and settlements between copyright holders and AI companies are more common than litigation [2][18].
- The lawsuit seeks to recover profits from MiniMax's infringement and requests a permanent injunction to prevent further use of the plaintiffs' works for AI training and content generation [13].
- The case reflects a broader trend in the industry, where AI companies face increasing scrutiny and potential legal challenges regarding copyright as the sector rapidly expands [2][22].

Group 3: Industry Impact
- MiniMax, valued at approximately $4 billion and currently in Series C funding, has been accused of undermining the legitimate licensing market through its alleged infringement activities [13].
- The lawsuit is part of a larger pattern, as similar legal actions have been taken against other AI companies, indicating growing concern among content creators about the use of their intellectual property [13][21].
- The outcome of this case could set a precedent for how AI companies navigate copyright law and for potential future collaborations or settlements with content owners [21][22].
Disney and Other Hollywood Giants Sue MiniMax for Infringement Involving More Than 50 IPs
21 Shi Ji Jing Ji Bao Dao· 2025-09-17 06:14
Core Viewpoint
- The copyright battle between Hollywood and AI has escalated, with major studios suing the domestic company MiniMax for copyright infringement related to its AI product "Hai Luo AI" [2]

Group 1: Legal Action and Allegations
- Disney, Universal Pictures, and Warner Bros. have jointly filed a lawsuit against MiniMax, claiming that "Hai Luo AI" unlawfully reproduces and displays copyrighted works without authorization [2]
- The lawsuit accuses MiniMax not only of direct infringement but also of aiding infringement, thus holding it jointly liable [2]
- The plaintiffs include major Hollywood entities such as Marvel, Disney, 20th Century Fox, DC Comics, and DreamWorks, while the defendants include MiniMax's parent company Shanghai Xiyu Technology and its international operations company Nanonoble Pte Ltd [2]

Group 2: MiniMax Overview and Product Details
- MiniMax, founded in 2021 by former SenseTime vice president Yan Junjie, is one of the "six little dragons" of domestic AI startups, focusing on international expansion [4]
- The company claims its self-developed multimodal models and AI applications cover over 200 countries and regions, with 157 million individual users [4]
- "Hai Luo AI" specializes in generating images and videos from text prompts, and it gained significant traction in the U.S. AI application market, ranking among the top ten downloads in the first half of 2024 [4]

Group 3: Allegations of Inaction and Evidence
- The plaintiffs argue that MiniMax had the capability to prevent copyright infringement but chose not to, despite having systems in place to filter out violent and explicit content [6]
- A letter from the plaintiffs' lawyers listed around 50 infringing characters, including Iron Man and Spider-Man, but MiniMax did not respond or take down the content [6]
- MiniMax has been using copyrighted characters in promotional videos on social media platforms, with specific posts cited as evidence in the lawsuit [7]

Group 4: Financial Implications and Demands
- MiniMax is currently in its C-round of investment, having previously received funding from major investors like Alibaba, Tencent, and Sequoia China, with an estimated valuation of approximately $4 billion [7]
- The lawsuit seeks compensation for actual damages or statutory damages, which could total up to $7.5 million based on the 50 works mentioned [8]
- The plaintiffs also request a court injunction to prevent MiniMax from continuing to infringe on copyrighted works and to require appropriate copyright protection mechanisms in "Hai Luo AI" [8]
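The $7.5 million figure above is simple per-work arithmetic: U.S. copyright law awards statutory damages per infringed work, up to $150,000 per work when infringement is willful (17 U.S.C. § 504(c)), and the complaint lists roughly 50 works. A minimal sketch of that calculation (the helper function is illustrative, not from any filing):

```python
def statutory_damages_range(works: int, willful: bool) -> tuple[int, int]:
    """Return (minimum, maximum) total U.S. statutory damages in dollars.

    Per 17 U.S.C. § 504(c): $750 to $30,000 per infringed work ordinarily,
    with the ceiling raised to $150,000 per work for willful infringement.
    """
    per_work_min = 750
    per_work_max = 150_000 if willful else 30_000
    return works * per_work_min, works * per_work_max

# ~50 works with a willfulness finding caps exposure at 50 * $150,000,
# which reproduces the $7.5 million figure cited in the lawsuit coverage.
low, high = statutory_damages_range(50, willful=True)
print(low, high)  # 37500 7500000
```

The same arithmetic underlies the writers' class action above: at $150,000 per work, exposure scales linearly with the number of works a jury finds willfully infringed.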
Nikkei and Asahi Join Yomiuri: Perplexity AI Now Sued by Japan's Three Major Media Outlets
Sou Hu Cai Jing· 2025-08-26 08:23
Core Viewpoint
- Perplexity AI is facing legal action from major Japanese news outlets, including Nikkei and Asahi, for allegedly violating copyright law by bypassing content-protection measures and providing inaccurate AI-generated summaries of their articles [1][3].

Group 1: Legal Actions
- Nikkei and Asahi have jointly filed a lawsuit against Perplexity AI, following a similar action initiated by Yomiuri Shimbun earlier this month [1].
- The lawsuit claims that Perplexity AI collected articles from the servers of Nikkei and Asahi without permission, creating and disseminating summaries that violate Japanese copyright regulations [3].

Group 2: Allegations and Demands
- The two news organizations allege that the AI-generated summaries provided by Perplexity are inaccurate and fail to faithfully represent the original content, thereby damaging their reputation and infringing on their commercial interests [3].
- Nikkei and Asahi are seeking a court order from the Tokyo District Court to stop Perplexity AI from using their content and to delete the summaries, along with combined economic compensation of 2.2 billion yen (approximately 107 million RMB) [3].