Tencent Research Institute AI Express 20250704
Tencent Research Institute · 2025-07-03 15:31
Group 1
- Google, Nvidia, and seven other institutions have launched Mirage, the world's first AI-native UGC game engine, which generates game content in real time from natural-language commands [1]
- Mirage runs smoothly at 16 FPS and sustains 5-10 minutes of continuous gameplay, with graphics quality comparable to GTA and Forza [1]
- The core technology is a "world model" built from Transformer and diffusion models, trained on extensive gameplay data to enable dynamic interaction and real-time control [1]

Group 2
- Zhiyuan Research Institute has released OmniGen2, a unified image generation model that supports text-to-image, image editing, and subject-driven image generation [2]
- The model introduces an innovative image-generation reflection mechanism, significantly enhancing context understanding, instruction following, and image quality [2]
- OmniGen2 offers an open research preview, with model weights, training code, and training data fully open-sourced, and has earned over 2,000 stars on GitHub within a week [2]

Group 3
- Google has announced free access to the Gemini AI tool suite for educators worldwide, deeply integrated into Google Classroom and ChromeOS [3]
- Gemini in Classroom includes over 30 AI tools that can automatically generate lesson plans, classroom activities, and quiz questions, saving teachers preparation time [3]
- New AI tools such as NotebookLM and Gems, together with data-analysis features, aim to deliver personalized learning experiences and data-driven teaching [3]

Group 4
- Xingliu Agent is a multifunctional AI creation platform that handles tasks such as batch emoji generation, brand VI design, video generation, and 3D modeling through natural-language commands [4][5]
- Key features include high-quality bulk content generation, Kontext intelligent image editing, and full-media workflow support, establishing a new design paradigm of "vibe designing" [5]
- The platform offers free trial credits and supports diverse creative outputs, shifting the designer's role from "mastering tools" to "understanding needs and expressing creativity" [5]

Group 5
- Tencent Yuanbao has introduced a feature that supports AI-based image and video content search, intelligently matching content without restrictions on which model is used [6]
- Results can intelligently reference related video tutorials, combining text and video explanations, with one-click access to the videos [6]
- Users can continue asking follow-up questions after the initial answer, enhancing the interactive experience [6]

Group 6
- Saining Xie's team has released the BlenderFusion framework, enabling precise control of 3D scenes without relying on text prompts [7]
- The core technique is a three-step pipeline: separating objects and scenes with the SAM model, editing in Blender, and generating high-quality composite images with a diffusion model (a toy sketch of this pipeline follows the list below) [7]
- The system employs a dual-stream diffusion compositor and uses techniques such as source masking and simulated object jitter to improve generalization and realism [7]

Group 7
- xAI is set to release the new Grok 4 series, including the flagship Grok 4 and the specialized programming model Grok 4 Code, with a launch expected after U.S. Independence Day [8]
- Grok 4 features a context window of 130,000 tokens and supports function calls, structured outputs, and reasoning, but currently lacks visual and image-generation capabilities [8]
- Elon Musk wants Grok 4 to rewrite the human knowledge base, filling in missing information and correcting errors, while Grok 4 Code will serve as a professional programming assistant [8]

Group 8
- The U.S. Department of Commerce has lifted the temporary bans on the three major EDA companies, Siemens, Synopsys, and Cadence, restoring Chinese customers' full access to their software and technology [11]
- The earlier, abrupt export restriction had triggered a sharp drop in stock prices, with Synopsys projecting a 28% year-on-year decline in revenue from the China region [11]
- The domestic EDA industry still faces challenges in maturity and market share, as chip-design companies prefer more mature foreign products to ensure successful tape-out [11]

Group 9
- The World Economic Forum's "Future of Jobs Report 2025" indicates that AI and machine learning specialists will be the fastest-growing occupation, with job numbers expected to grow 86% [12]
- AI is set to reshape the global labor market: data analytics, cybersecurity, and technical literacy are the three fastest-growing skills, while traditional roles such as data entry clerks and administrative assistants face declining demand [12]
- Roughly 39% of employees' skills are expected to change significantly between 2025 and 2030, yet only 50% of employees have received systematic training, and 63% of employers view skill gaps as the biggest obstacle to business transformation [12]
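The three-step pipeline described in Group 6 (segment with SAM, edit in Blender, re-render with a diffusion model) can be pictured as a simple orchestration loop. The sketch below is illustrative only: the function names, data classes, and interfaces are hypothetical stand-ins for the idea, not the BlenderFusion project's actual API.

```python
# Illustrative orchestration of a segment -> edit -> re-synthesize pipeline.
# All names and interfaces here are hypothetical stand-ins, not BlenderFusion's API.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class SceneLayer:
    name: str
    mask: Optional[Any] = None      # per-object segmentation mask (e.g., from SAM)
    transform: str = "identity"     # 3D edit applied in the Blender-style stage


def segment_objects(image_path: str) -> list[SceneLayer]:
    """Stage 1: split the input image into object layers (SAM is stubbed out here)."""
    return [SceneLayer(name="car"), SceneLayer(name="tree")]


def edit_in_3d(layers: list[SceneLayer], edits: dict[str, str]) -> list[SceneLayer]:
    """Stage 2: apply explicit 3D edits (move / rotate / relight) to each layer."""
    for layer in layers:
        layer.transform = edits.get(layer.name, layer.transform)
    return layers


def resynthesize(background: str, layers: list[SceneLayer]) -> dict:
    """Stage 3: a generative model would re-render the edited layers into a
    photoreal composite; here we only report what would be rendered."""
    return {"background": background,
            "layers": [(layer.name, layer.transform) for layer in layers]}


if __name__ == "__main__":
    layers = segment_objects("street_scene.png")
    layers = edit_in_3d(layers, {"car": "rotate_30_deg"})
    print(resynthesize("street_scene.png", layers))
```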
Zhiyuan's new open-source OmniGen2: one click unlocks a Doraemon "Anywhere Door" for AI image generation
机器之心· 2025-07-03 04:14
Published by 机器之心 | 机器之心 Editorial Team. In September 2024, Zhiyuan Research Institute (BAAI) released the unified image generation model OmniGen. Within a single architecture, the model supports multiple image generation tasks, including Text-to-Image Generation, Image Editing, and Subject-driven Image Generation. Users only need multimodal natural-language instructions to flexibly achieve these functions, without relying on additional context prompts, plugins, or preprocessing modules. Thanks to its highly general functionality and highly concise architecture, OmniGen received broad community acclaim upon release. Subsequently, with the arrival of closed-source multimodal models such as Gemini 2.0 Flash and GPT-4o, building unified image generation models has become one of the most closely watched research and application directions. Against this background, OmniGen has received a major technical upgrade with the official release of OmniGen2. While keeping the concise architecture, the new generation significantly strengthens context understanding, instruction following, and image generation quality. OmniGen2 also fully inherits its base multimodal large model's abilities in context understanding and generation, supports both image and text generation, and further connects the multimodal technology ecosystem. Meanwhile, the model weights, training code, and ...
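As a rough illustration of what "one model, multiple tasks driven by multimodal instructions" means from the caller's side, here is a minimal sketch. The `UnifiedImageModel` class and its `generate` signature are assumptions made for this example only; they are not the actual OmniGen2 API, which is defined by the project's released code and weights.

```python
# Minimal sketch of a unified image-generation interface driven by multimodal
# instructions. UnifiedImageModel is a hypothetical stand-in, NOT the OmniGen2 API.
from typing import Optional


class UnifiedImageModel:
    def generate(self, instruction: str, images: Optional[list[str]] = None) -> dict:
        """One entry point covers all three tasks: the task is implied by the
        instruction text plus whether reference images are supplied."""
        if not images:
            task = "text-to-image"
        elif "edit" in instruction.lower() or "replace" in instruction.lower():
            task = "image-editing"
        else:
            task = "subject-driven-generation"
        return {"task": task, "instruction": instruction, "inputs": images or []}


model = UnifiedImageModel()
# Text-to-image: no reference image supplied.
print(model.generate("A watercolor lighthouse at dusk"))
# Image editing: the same call, now conditioned on an input image.
print(model.generate("Replace the sky with a starry night", images=["photo.png"]))
# Subject-driven generation: reuse a subject from reference photos in a new scene.
print(model.generate("Put this dog on a surfboard", images=["dog1.png", "dog2.png"]))
```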
New domestic SOTA model precisely gets "draw the animal with (3+6) lives" | Open source
量子位· 2025-06-20 03:28
Jin Lei | compiled from Aofeisi | 量子位 | WeChat official account QbitAI. When it comes to generating images, an AI that can reason is a good AI. For example, given a prompt like "an animal with (3+6) lives", we humans immediately know it means a cat, but the AI's thought process looks like this: [Figure: a cat was generated, but the reasoning process was wrong]. The model still handles the numbers in "(3+6)" separately and never really gets the underlying intent that "an animal with nine lives = a cat". Likewise, models such as ChatGPT remain fixated on displaying the numbers inside the image. The root cause is that current mainstream text-based image generation methods usually rely on a fixed text encoder that can only handle "pure text" input and struggles to naturally take in information from other modalities such as images or audio. At the same time, such systems perform poorly on "complex world knowledge" and "multi-step logical reasoning". Recently, however, Tsinghua University, Tencent ARC Lab, the Chinese University of Hong Kong, and the University of Hong Kong jointly proposed a new large model, MindOmni, which significantly strengthens AI's "reasoning generation" ability. It not only understands complex instructions but can also unfold a coherent and credible Chain-of-Thought (CoT) over image-and-text content, producing image or text outputs that are logical and semantically consistent: [Figure: comparison of reasoning image generation results] [Figure: reasoning over multimodal user inputs] ...
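The failure described above ("(3+6) lives" being parsed literally instead of resolved to "nine lives, therefore a cat") comes down to reasoning before rendering. The toy sketch below only illustrates that idea: resolve the arithmetic and the world-knowledge lookup into a concrete prompt first, then hand that prompt to the generator. It is not how MindOmni is implemented, and the knowledge table is a made-up placeholder.

```python
# Toy illustration of "reason first, render second": resolve embedded arithmetic
# and world knowledge before handing a concrete prompt to an image generator.
# This is NOT the MindOmni implementation; the knowledge table is a placeholder.
import re


def resolve_arithmetic(prompt: str) -> str:
    """Evaluate simple parenthesized expressions like (3+6) inside the prompt."""
    def _eval(match: re.Match) -> str:
        a, op, b = re.split(r"\s*([+\-*])\s*", match.group(1))
        result = {"+": int(a) + int(b), "-": int(a) - int(b), "*": int(a) * int(b)}[op]
        return str(result)
    return re.sub(r"\((\d+\s*[+\-*]\s*\d+)\)", _eval, prompt)


def resolve_knowledge(prompt: str) -> str:
    """Map resolved facts onto everyday world knowledge (tiny placeholder table)."""
    knowledge = {"the animal with 9 lives": "a cat"}
    for fact, referent in knowledge.items():
        prompt = prompt.replace(fact, referent)
    return prompt


def chain_of_thought(prompt: str) -> str:
    return resolve_knowledge(resolve_arithmetic(prompt))


print(chain_of_thought("draw the animal with (3+6) lives"))
# -> "draw a cat": a concrete prompt an image model can render directly
```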
Reasons to Add PAHC Stock to Your Portfolio Right Now
ZACKS· 2025-06-18 14:26
Key Takeaways
- Phibro's Animal Health unit grew 42%, with a standout 68% rise in MFAs and other product sales in fiscal Q3.
- Phibro saw 40% of fiscal Q3 revenues from international markets like Latin America, India and Southeast Asia.
- PAHC's Mineral Nutrition and Performance Products units posted double-digit sales growth recovery.

Phibro Animal Health Corporation's (PAHC) focus on advancing its Animal Health business is poised to drive growth in the upcoming quarters. The company's global growth prospects lo ...
Evaluating image-editing models' reasoning ability from the perspective of knowledge types: all models perform poorly on "procedural reasoning"
量子位· 2025-06-13 05:07
Submitted by the KRIS-Bench team | 量子位 | WeChat official account QbitAI. When humans learn new knowledge, they follow a cognitive path from "memorizing facts" to "understanding concepts" to "mastering skills". Has AI built the same kind of knowledge structure of "first memorize the words, then understand the principles, finally practice the application"? Run an evaluation and find out. A research team from Southeast University, together with the Max Planck Institute for Informatics, Shanghai Jiao Tong University, StepFun (阶跃星辰), UC Berkeley, and UC Merced, has jointly proposed KRIS-Bench (Knowledge-based Reasoning in Image-editing Systems Benchmark). For the first time, it evaluates the reasoning ability of image-editing models systematically and in fine detail from the perspective of knowledge types. Drawing on Bloom's taxonomy and the tiered-instruction ideas of educational psychology, KRIS-Bench has AI take on progressively deeper and more complex editing challenges across three levels: Factual Knowledge, Conceptual Knowledge, and Procedural Knowledge.
Three knowledge categories based on cognitive hierarchy
Within these categories, KRIS-Bench further breaks things down into 7 major reasoning dimensions and 22 typical editing tasks, ranging from "object counting change" to "chemical reaction prediction ...
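To make the layered structure concrete, the sketch below shows one plausible way to represent the benchmark's hierarchy in code: knowledge categories refined into reasoning dimensions and concrete editing tasks. Only the two task names mentioned in the article are filled in, and their assignment to particular categories and dimension names is an assumption for illustration, not the benchmark's own definition.

```python
# Plausible in-code representation of KRIS-Bench's layered taxonomy:
# knowledge category -> reasoning dimensions -> typical editing tasks.
# Only the two tasks named in the article appear; assigning them to specific
# categories/dimensions here is an illustrative assumption, not the benchmark's.
from dataclasses import dataclass, field


@dataclass
class KnowledgeCategory:
    name: str
    reasoning_dimensions: dict[str, list[str]] = field(default_factory=dict)


taxonomy = [
    KnowledgeCategory("Factual Knowledge",
                      {"counting": ["object counting change"]}),
    KnowledgeCategory("Conceptual Knowledge",
                      {"scientific principles": ["chemical reaction prediction"]}),
    KnowledgeCategory("Procedural Knowledge", {}),  # the article's title notes all models score poorly here
]

for category in taxonomy:
    n_tasks = sum(len(tasks) for tasks in category.reasoning_dimensions.values())
    print(f"{category.name}: {n_tasks} example task(s) listed")
```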
Deep learning and reinforcement learning giants gather at the 2025 Beijing Zhiyuan Conference; Zhiyuan releases the "Wujie" series of large models
机器人圈· 2025-06-07 04:02
On June 6, 2025, the seventh Beijing Zhiyuan Conference opened at the Zhongguancun Exhibition Center. Hosted by the Zhiyuan Research Institute, the Beijing Zhiyuan Conference is an "academic event for AI insiders", characterized by "global vision, clash of ideas, and frontier leadership", bringing together researchers from home and abroad to share research results, explore frontier knowledge, and exchange practical experience. The 2025 conference invited Turing Award laureate and deep learning pioneer Yoshua Bengio; Turing Award laureate and father of reinforcement learning Richard S. Sutton; Turing Award laureates Joseph Sifakis and Andrew Chi-Chih Yao (姚期智); representatives of leading international institutions and technical teams including Google, DeepMind, Meta, Mila, Physical Intelligence, MIT, Stanford, UC Berkeley, and the Linux Foundation; major internet companies such as Huawei, Baidu, ByteDance, Tencent, and Alibaba; and more than 30 founders and CEOs of AI companies such as Zhipu (智谱), Unitree Robotics (宇树科技), Shengshu Technology (生数科技), and ModelBest (面壁). The conference also gathered more than 100 young scientists from around the world and more than 200 top AI scholars and industry experts, around topics including multimodality, deep reasoning, next-generation AI paths, agents, embodied intelligence, AI4S, the AI industry, AI safety, and AI open-source ...
Zhiyuan releases the "Wujie" series of large models, including Emu3, the world's first natively multimodal world model
Feng Huang Wang· 2025-06-06 14:32
Phoenix Tech News (凤凰网科技), June 6. At the 2025 Beijing Zhiyuan Conference, following its "Wudao" series of large models, the Zhiyuan Research Institute launched the "Wujie" series. The "Wujie" series includes the natively multimodal world model Emu3; the multimodal general-purpose foundation model for brain science, Brainμ (见微); the cross-embodiment "large-and-small brain" collaboration framework RoboOS 2.0 together with the embodied brain RoboBrain 2.0; and the all-atom microscopic life model OpenComplex2. Emu3, as a natively multimodal unified architecture, gives large models the ability to understand and reason about the world; Brainμ, built on the Emu3 architecture, introduces brain signals as a new modality and unifies multiple neuroscience tasks in a single model. Multimodal and brain-science models may in the future serve as foundation models for embodied human-computer interaction scenarios. RoboOS 2.0 and RoboBrain 2.0 substantially improve on the performance of their first-generation versions and add multi-robot collaborative planning and spatial reasoning driven by physical common sense. As a cross-task, cross-modality, cross-subject general foundation model for neuroscience, Brainμ can handle multiple encoding and decoding tasks in parallel, is compatible with data from multiple animal species (including mice, marmosets, and macaques) as well as humans, and supports scientific data annotation, interactive interpretation of scientific conclusions, reconstruction of brain sensory signals, and generation of simulated stimulation signals. In tasks such as automated sleep staging, sensory signal reconstruction, and diagnosis of various brain diseases, as a single model its performance significantly exceeds existing specialized ...