Multimodal Models
Total ETF Assets Climb to 5.74 Trillion Yuan as New Launches This Year Top 300
Zheng Quan Ri Bao· 2025-11-09 16:16
Group 1
- Total ETF shares outstanding reached 3.16 trillion as of November 9, 2025, an increase of 508.56 billion shares, or 19.17%, from the end of last year, while total assets stood at 5.74 trillion yuan, up 2,003.92 billion yuan, or 53.7% [1][2] (see the worked check after this summary)
- Over 300 new ETF products were launched this year, bringing the total number of ETFs to 1,354 [1]
- Among existing ETFs, 69 products grew by more than 10 billion yuan, with several technology-related products performing exceptionally well; the Fortune Hong Kong Internet ETF, for example, added 62.65 billion yuan [1]

Group 2
- The rapid growth in ETF assets this year is attributed to the increased attractiveness of technology assets and the significant contribution from newly launched products [2]
- New ETFs launched this year include 277 equity funds with over 150 billion yuan in issuance and 32 bond funds with over 90 billion yuan in issuance, indicating a strong investor preference for equity assets [2]
- The technology sector is expected to remain a crucial part of China's economic development, providing long-term growth momentum for sub-sectors such as large models and software applications and benefiting from policy support in areas such as cybersecurity and quantum computing [2]
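A quick consistency check on the headline share figures, as worked arithmetic: the year-start base below is back-solved from the reported increase and growth rate rather than stated in the article.

```latex
% Back-solving the year-start base from the reported increase (508.56 bn shares)
% and growth rate (19.17%), then adding the increase back on:
\[
  \frac{508.56}{0.1917} \approx 2{,}653 \ \text{bn shares}, \qquad
  2{,}653 + 508.56 \approx 3{,}162 \ \text{bn} \approx 3.16 \ \text{trillion shares}.
\]
```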
BAAI's Wang Zhongyuan: The Key to World Models Is Truly Predicting the Next State
Jing Ji Guan Cha Wang· 2025-11-01 10:51
Core Insights
- The term "world model" has gained significant attention in the AI field, representing a shift from mere recognition and generation toward understanding and predicting the dynamics of the world [2]
- Companies are seeking new growth points as the returns on large models diminish, with DeepMind, OpenAI, and others exploring interactive 3D worlds and robotics [2]
- The release of the Emu3.5 multimodal world model by the Zhiyuan Research Institute (BAAI) marks a potential breakthrough, underscoring the importance of multimodal and world models for future growth [2][3]

Group 1
- The Emu3.5 model is trained on over 10 trillion tokens of multimodal data, including 790 years' worth of video, and has 34 billion parameters [3]
- The Discrete Diffusion Adaptation (DiDA) inference method speeds up image generation by nearly 20 times while maintaining high-quality output [3]
- Emu3.5 achieves breakthroughs along three dimensions: understanding higher-level human intentions, simulating dynamic worlds, and providing a cognitive basis for AI-human interaction [3]

Group 2
- The core of a world model is not merely video generation but understanding causal and physical laws, which is essential for tasks such as predicting the outcome of robotic actions [3][4] (illustrated in the sketch after this summary)
- Emu3.5 supports embodied intelligence and can generate multimodal training data, showcasing an innovative architecture from a Chinese research team [4]
- The evolution from Emu3 to Emu3.5 strengthens the model's physical intuition and cross-scenario planning, pointing toward AI that understands the world and acts within it [4]
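To make the "predict the next state" framing concrete, here is a minimal, hedged sketch of a world-model rollout interface in Python. The class and method names (TokenizedWorldModel, predict_next_state) and the toy backbone are illustrative assumptions, not Emu3.5's actual API; the point is only that the model maps (current state, action) to a predicted next state rather than free-running video frames.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class State:
    """A multimodal snapshot of the environment: discrete image tokens plus a text summary."""
    image_tokens: List[int]
    description: str

class TokenizedWorldModel:
    """Illustrative world-model wrapper: given (state, action), predict the *next* state.
    The backbone is any callable that continues a prompt; it is injected here so the
    sketch stays self-contained and runnable."""

    def __init__(self, backbone: Callable[[str], str]):
        self.backbone = backbone

    def predict_next_state(self, state: State, action: str) -> State:
        # Serialize (state, action) into one interleaved prompt, let the backbone
        # continue it, and read the continuation back as the predicted next state.
        prompt = f"<state>{state.description}</state><action>{action}</action><next>"
        continuation = self.backbone(prompt)
        return State(image_tokens=state.image_tokens, description=continuation)

# Toy backbone standing in for a large multimodal transformer.
def toy_backbone(prompt: str) -> str:
    return "the red cube is now on the floor beside the table"

world_model = TokenizedWorldModel(toy_backbone)
current = State(image_tokens=[101, 7, 42], description="a red cube sits on the table")
predicted = world_model.predict_next_state(current, "push the red cube off the table")
print(predicted.description)  # the outcome is predicted before the robot ever moves
```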
"100 Domestic Sora2s Are Already on the Way"
投中网· 2025-11-01 07:03
Core Insights
- The article discusses the competitive landscape of AI video startups in China, particularly in light of recent large funding rounds and the launch of OpenAI's Sora2 model, which has raised concerns among entrepreneurs about the viability of their businesses [3][4][5]

Funding Developments
- LiblibAI announced a $130 million Series B funding round on October 23, the largest single financing in China's AI application sector so far in 2025, led by Sequoia China and CMC Capital [3]
- A week earlier, Aishi Technology completed a 100 million RMB Series B+ round, with its products PixVerse and PaiWo AI surpassing 100 million users and reaching annual recurring revenue (ARR) of over $40 million [3][9]
- The rapid funding activity reflects a response to the competitive pressure introduced by Sora2, which has reinvigorated interest in AI video applications [5]

Sora2's Impact
- OpenAI's Sora2 model, released on September 30, represents a significant advance in video generation, achieving near-perfect synchronization of voice, sound effects, and lip movements [4][7]
- Sora2's launch has been likened to a "GPT moment" for video, creating a surge of interest and activity in the AI video sector [4][6]
- The Sora app associated with Sora2 lets users create videos easily and remix others' work, positioning it as a potential disruptor in content creation [7][8]

Market Dynamics
- The emergence of Sora2 has prompted a wave of new AI video startups in China, with many entrepreneurs now actively pursuing opportunities in the space [8][10]
- Companies such as Sand.ai have introduced new models like GAGA-1, which focus on audio-visual synchronization, indicating a shift toward consumer-oriented applications [10][11]
- The competitive landscape mixes established players and new entrants, with ByteDance identified as a significant competitor for Chinese AI video startups [10][12]

Future Outlook
- The narrative around AI video models is evolving, with a growing belief that model capabilities will increasingly overshadow traditional product offerings [13][14]
- Entrepreneurs are encouraged to focus on user experience and innovative applications rather than competing directly with large companies on foundation models [17][18]
- The potential for AI video to evolve into a community-driven platform is highlighted, with the possibility of redefining content consumption and creator engagement [16][17]
World Models Get an Open-Source Foundation, Emu3.5, Which Takes Multimodal SOTA and Outperforms Nano Banana
36Kr· 2025-10-30 11:56
Core Insights
- The article highlights the launch of Emu3.5, the latest open-source multimodal world model developed by the Beijing Academy of Artificial Intelligence (BAAI), which excels at tasks involving images, text, and video and shows high precision in operations such as erasing handwriting [1][6][9]

Group 1: Model Capabilities
- Emu3.5 demonstrates advanced capabilities in generating coherent, logically consistent content, particularly in simulating dynamic physical worlds, allowing users to experience virtual environments from a first-person perspective [6][12]
- The model can perform complex image editing and generate visual narratives while maintaining consistency and style throughout, which is crucial for long-term creative tasks [15][17]
- Emu3.5's long-sequence understanding and spatial consistency enable it to execute tasks such as organizing a desktop through step-by-step instructions [12][22]

Group 2: Technical Innovations
- The model is built on a 34-billion-parameter, standard decoder-only Transformer architecture that unifies diverse tasks into a single next-state prediction task [17][25] (see the sketch after this summary)
- Emu3.5 was pre-trained on over 10 trillion tokens of multimodal data, primarily from internet videos, allowing it to learn temporal continuity and causal relationships effectively [18][25]
- The Discrete Diffusion Adaptation (DiDA) technique speeds up image generation by nearly 20 times without compromising performance [26]

Group 3: Open Source Initiative
- Open-sourcing Emu3.5 gives global developers and researchers access to a model that understands physics and logic, facilitating the creation of more realistic videos and intelligent agents across industries [27][29]
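As a rough illustration of the "unify everything into next-state prediction" idea, the sketch below trains a tiny causal Transformer on one interleaved sequence of text and image tokens with a single next-token cross-entropy loss. The vocabulary split and tensor sizes are invented for the example and bear no relation to Emu3.5's real tokenizer or scale.

```python
import torch
import torch.nn as nn

# Invented sizes for illustration only: a shared vocabulary where ids < 1000 are
# text tokens and ids >= 1000 are discrete image (VQ) tokens.
VOCAB_SIZE, D_MODEL, N_HEAD, N_LAYER, SEQ_LEN = 2048, 128, 4, 2, 32

class TinyDecoder(nn.Module):
    """Minimal decoder-only-style stack: a TransformerEncoder with a causal mask
    (no cross-attention), one embedding table, and one output head shared by both
    modalities, so text and image tokens are trained with a single objective."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, N_LAYER)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.blocks(self.embed(tokens), mask=causal)
        return self.head(hidden)

model = TinyDecoder()
# One interleaved "state" sequence: [text tokens ... image tokens ... text tokens ...]
tokens = torch.randint(0, VOCAB_SIZE, (1, SEQ_LEN))
logits = model(tokens[:, :-1])                      # predict every next token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), tokens[:, 1:].reshape(-1)
)
loss.backward()  # the same loss drives both text and image prediction
print(float(loss))
```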
Seedream 4.0 vs. Nano Banana and GPT-4o? EdiVal-Agent Settles Image-Editing Evaluation
机器之心· 2025-10-24 06:26
Core Insights
- The article introduces EdiVal-Agent, an automated, fine-grained evaluation framework for multi-turn image editing, which is becoming crucial for assessing multimodal models' understanding, generation, and reasoning capabilities [2][7]

Evaluation Methods
- Current mainstream evaluation methods fall into two categories:
  1. Reference-based evaluations rely on paired reference images, which have limited coverage and may inherit biases from older models [6]
  2. VLM-based evaluations use vision-language models to score outputs against prompts, but they struggle with spatial understanding, detail sensitivity, and aesthetic judgment, leading to unreliable quality assessments [6]

EdiVal-Agent Overview
- EdiVal-Agent is an object-centric automated evaluation agent that can recognize each object in an image, understand editing semantics, and dynamically track changes across multi-turn edits [8][17]

Workflow of EdiVal-Agent
1. **Object Recognition**: EdiVal-Agent first identifies all visible objects in an image and generates structured descriptions, creating an object pool for subsequent instruction generation and evaluation [17]
2. **Instruction Generation**: It automatically generates multi-turn editing instructions covering nine editing types and six semantic categories, dynamically maintaining the object pool as edits accumulate [18][19]
3. **Automated Evaluation**: EdiVal-Agent scores model outputs along three dimensions: instruction following, content consistency, and visual quality, with the final composite score (EdiVal-O) computed as the geometric mean of the first two metrics [20][22] (see the sketch after this summary)

Performance Metrics
- EdiVal-IF measures how accurately models follow instructions, while EdiVal-CC assesses the consistency of unedited content; EdiVal-VQ, which evaluates visual quality, is excluded from the final score because of its subjective nature [25][28]

Human Agreement Study
- EdiVal-Agent's judgments agree with human ratings 81.3% of the time on average, significantly outperforming traditional methods [31][32]

Model Comparison
- EdiVal-Agent compared 13 representative models, finding that Seedream 4.0 excels at instruction following, while Nano Banana balances speed and quality effectively; GPT-Image-1 ranks third, favoring aesthetics at the expense of consistency [36][37]
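The composite score is simple enough to state explicitly. Below is a minimal sketch of how EdiVal-O could be computed as the geometric mean of the instruction-following and content-consistency scores; the 0-to-1 score range and per-turn averaging are assumptions made for illustration, not details confirmed by the article.

```python
from math import sqrt
from statistics import mean

def edival_o(if_scores, cc_scores):
    """Composite EdiVal-O score as the geometric mean of instruction following (IF)
    and content consistency (CC); per-turn scores in [0, 1] are an assumed convention."""
    edival_if = mean(if_scores)   # how well each editing instruction was followed
    edival_cc = mean(cc_scores)   # how well unedited content was preserved
    return sqrt(edival_if * edival_cc)

# Example: a model that follows instructions well but drifts on untouched regions.
print(edival_o(if_scores=[0.9, 0.85, 0.8], cc_scores=[0.7, 0.6, 0.65]))  # ≈ 0.74
```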
Valuation Tops $4 Billion in Under Three Months; Fal.ai CEO: The More Models, the More We're Worth
36Kr· 2025-10-24 00:55
Core Insights
- Fal.ai, an AI infrastructure company, has closed a new $250 million funding round that lifts its valuation above $4 billion, just three months after a previous round at a $1.5 billion valuation [1][6][57]
- The company focuses on making AI models accessible and usable for developers rather than competing on model capability [3][11][55]
- Fal.ai has evolved from data-infrastructure tooling into a platform that lets developers easily integrate and run a wide range of AI models [10][14][40]

Group 1
- Fal.ai currently hosts over 600 models and serves more than 2 million developers, with major clients including Adobe, Canva, and Shopify [6][46]
- The company has optimized model invocation speed, cutting image generation times that once took several seconds down to a fraction of that and achieving industry-leading performance [20][22]
- Fal.ai's approach is to expose a unified API across models, so developers can integrate them without deep ML expertise [28][30][33] (see the illustrative sketch after this summary)

Group 2
- The company runs with a team of fewer than 50 people, yet has passed $100 million in annual revenue [46][55]
- Fal.ai's business model prioritizes a seamless user experience, favoring speed and reliability over always offering the most advanced models [31][34]
- Platform growth is driven by users converting naturally into paying customers, with minimal reliance on traditional sales processes [53][54]

Group 3
- The AI landscape is shifting toward a multitude of specialized models, making platforms like Fal.ai increasingly valuable as consolidated access points [39][41][43]
- The company argues that future competition will hinge not on who has the strongest model but on who becomes the primary platform for all models [58]
- Fal.ai's success illustrates the importance of infrastructure in AI, as the ability to use models becomes more critical than the models themselves [57][58]
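As a purely illustrative sketch of the "one endpoint, many models" pattern described above: the endpoint URL, request fields, and model identifiers below are hypothetical placeholders, not fal.ai's actual API, and are only meant to show why swapping models reduces to changing a string.

```python
import requests

def generate_image(model_id: str, prompt: str, api_key: str) -> bytes:
    """Call a hosted model through a single, model-agnostic endpoint.
    Everything model-specific is reduced to an identifier plus a payload,
    which is the property that lets one platform front hundreds of models."""
    response = requests.post(
        "https://api.example-inference-platform.com/v1/run",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model_id, "input": {"prompt": prompt}},
        timeout=60,
    )
    response.raise_for_status()
    return response.content

# Swapping models is just a string change; the calling code stays identical.
# image_a = generate_image("some-vendor/text-to-image-v1", "a watercolor fox", api_key="...")
# image_b = generate_image("another-vendor/text-to-image-v2", "a watercolor fox", api_key="...")
```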
The Most Comprehensive Robot Manipulation Survey Yet, Covering as Many as 1,200 Papers! Jointly Released by Eight Institutions
自动驾驶之心· 2025-10-14 23:33
Core Insights
- The article discusses the rapid advances in artificial intelligence, particularly in embodied intelligence, which connects cognition and action, and emphasizes the central role of robot manipulation on the path to artificial general intelligence (AGI) [5][9]

Summary by Sections

Overview of Robot Manipulation
- The paper "Towards a Unified Understanding of Robot Manipulation: A Comprehensive Survey" provides a comprehensive overview of the field, tracing its evolution from rule-based control to intelligent control systems that integrate reinforcement learning and large models [6][10]

Key Challenges in Embodied Intelligence
- Robot manipulation is identified as a core challenge in embodied intelligence because it requires the seamless integration of perception, planning, and control, which is essential for real-world interaction in diverse, unstructured environments [9][10]

Unified Framework
- A unified understanding framework is proposed that expands the traditional high-level planning and low-level control paradigm to include language, code, motion, affordance, and 3D representation, strengthening the semantic decision-making role of high-level planning [11][21] (a minimal sketch of this split appears after this summary)

Classification of Learning Control
- A new classification of low-level learned control is introduced, dividing it into input modeling, latent learning, and policy learning, offering a systematic perspective for low-level control research [24][22]

Bottlenecks in Robot Manipulation
- Two major bottlenecks are identified: data collection and utilization, and system-level generalization; the survey summarizes existing research progress and candidate solutions for both [27][28]

Future Directions
- Four key future directions are highlighted: building a true "robot brain" for general cognition and control, breaking data bottlenecks through scalable data generation and utilization, enhancing multimodal perception for complex object interaction, and ensuring the safety of human-robot coexistence [35][33]
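Here is a minimal sketch, in Python, of the high-level planning / low-level control split discussed above; the interfaces (Planner, Policy) and the toy subtask strings are invented for illustration and do not come from the survey itself.

```python
from typing import List, Sequence

class Planner:
    """High-level planner: turns a natural-language goal into a sequence of
    semantic subtasks (which, in the surveyed literature, may equally take the
    form of code, motion primitives, affordances, or 3D targets)."""
    def plan(self, goal: str) -> List[str]:
        # A fixed toy decomposition; a real system would query an LLM/VLM here.
        return [f"locate object for: {goal}",
                f"grasp object for: {goal}",
                f"place object for: {goal}"]

class Policy:
    """Low-level learned controller: maps raw observations and the current
    subtask to a motor command (represented here as a descriptive string)."""
    def act(self, observation: Sequence[float], subtask: str) -> str:
        return f"joint command for '{subtask}' given {len(observation)}-dim observation"

def run_episode(goal: str, observation: Sequence[float]) -> None:
    planner, policy = Planner(), Policy()
    for subtask in planner.plan(goal):               # semantic decisions
        command = policy.act(observation, subtask)   # continuous control
        print(command)

run_episode("put the mug on the shelf", observation=[0.0] * 12)
```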
The Most Comprehensive Robot Manipulation Survey Yet, Covering as Many as 1,200 Papers! Jointly Released by Eight Institutions Including XJTU, HKUST, and Peking University
具身智能之心· 2025-10-14 03:50
Core Insights
- The article discusses the rapid advances in artificial intelligence, particularly in embodied intelligence, which connects cognition and action, and emphasizes the central role of robot manipulation on the path to artificial general intelligence (AGI) [3][4]

Summary by Sections

Overview of Embodied Intelligence
- Embodied intelligence is highlighted as a crucial frontier that enables agents to perceive, reason, and act in real environments, moving from mere language understanding to actionable intelligence [3]

Paradigm Shift in Robot Manipulation
- Research in robot manipulation is undergoing a paradigm shift, integrating reinforcement learning, imitation learning, and large models into intelligent control systems [4][6]

Comprehensive Survey of Robot Manipulation
- The survey "Towards a Unified Understanding of Robot Manipulation" systematically organizes over 1,000 references, covering hardware, control foundations, task and data systems, and cross-modal generalization research [4][6][7]

Unified Framework for Understanding Robot Manipulation
- The article proposes a unified framework that extends the traditional high-level planning and low-level control classification, incorporating language, code, motion, affordance, and 3D representations [9][20]

Key Bottlenecks in Robot Manipulation
- Two major bottlenecks are identified: data collection and utilization, and system-level generalization, with a detailed analysis of existing solutions [27][28]

Future Directions
- Four key future directions are proposed: building a true "robot brain" for general cognition and control, breaking data bottlenecks through scalable data generation and utilization, enhancing multimodal perception for complex interaction, and ensuring the safety of human-robot coexistence [34]
Holiday Performance of the Hang Seng Big Tech Names
小熊跑的快· 2025-10-09 05:06
Core Insights
- The article reviews recent performance and developments in the tech sector, focusing on AMD and its integration with OpenAI, as well as advances in AI models such as Sora 2 [1][3][4]

Group 1: AMD and AI Integration
- AMD has been brought into a closed-loop AI ecosystem, which is viewed as a positive development despite uncertainty about TSMC's production capacity for 3nm and 2nm chips [1][3]
- Traditional cloud companies may stay out of this closed loop because of their conservative management styles and focus on stable returns [3]

Group 2: Sora 2 Model
- The Sora 2 model, a major step beyond the original Sora previewed in February 2024, is compared to a GPT-3.5-level advance in video generation, capable of complex tasks such as simulating Olympic gymnastics movements [3]
- OpenAI's Sora 2 is noted for improved controllability and the ability to follow intricate instructions across multiple scenes while maintaining continuity in the generated content [3]

Group 3: Market Performance
- The Sora app topped download charts during the National Day holiday, indicating strong market interest [4]
- The Hang Seng Tech Index ETF (513180.SH) has gained 43% year to date, with a notable 34.7% rise since early October [9][13]
- The Hang Seng Tech Index trades at a much lower valuation of 24.9 times earnings, versus 204 times for the STAR Market, suggesting room to catch up [13]
Being-VL's Visual BPE Approach: Truly Unifying "Seeing" and "Speaking"
机器之心· 2025-10-09 02:24
Core Insights
- The article discusses the limitations of traditional multimodal models, particularly how CLIP-style encoders prematurely align visual representations with the text space, which can lead to hallucinations on detailed, non-language-dependent queries [2][6]
- A new method, Being-VL, is proposed that takes a post-alignment approach: images are first turned into discrete representations and only then aligned with text, preserving visual structure and reducing the risk of information loss [2][3]

Being-VL Implementation
- Being-VL consists of three main steps: quantizing images into discrete VQ tokens with VQ-GAN, training a visual BPE that weighs both co-occurrence frequency and spatial consistency, and finally unifying visual and text tokens into a single sequence for modeling [3][10] (a minimal sketch of the BPE merge scoring appears at the end of this summary)
- The visual BPE tokenizer balances frequency and spatial consistency to build a token set that is semantically and structurally meaningful, and that is constructed independently of text [8][9]

Training Strategy
The training process is divided into three stages:
1. **Embedding Alignment**: only the new visual token embeddings are trained, with all other parameters frozen to preserve existing language capabilities [12]
2. **Selective Fine-tuning**: a portion of the LLM layers is unfrozen to enable cross-modal interaction at lower representation levels [12]
3. **Full Fine-tuning**: all layers are unfrozen for end-to-end training on complex reasoning and instruction data [12][10]

Experimental Results
- Experiments indicate that discretizing images, applying visual BPE, and modeling them jointly with text improves reliability on detail-sensitive queries and reduces hallucinations compared with traditional methods [14][16]
- The study highlights the importance of a gradual training schedule, showing that progressive unfreezing combined with curriculum learning significantly outperforms single-stage training [14][10]

Visual BPE Token Activation
- Visualizations of embedding weights show that visual BPE yields a more balanced weight distribution between text and visual tokens, indicating a smaller modality gap and improved cross-modal attention [16][19]

Token Size and Training Efficiency
- The research explores how BPE vocabulary size affects training efficiency, finding an optimal balance in resource-limited settings, while overly large vocabularies bring diminishing returns due to token sparsity [19][20]

Development and Summary
- The evolution from Being-VL-0 to Being-VL-0.5 reflects refinements to the unified modeling framework, incorporating priority-guided encoding and a structured, staged training approach [20][24]
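To make the "frequency plus spatial consistency" criterion concrete, here is a minimal, hedged sketch of one visual-BPE merge step over 2D grids of VQ token ids. The scoring weights and the consistency proxy (how concentrated a pair's occurrences are at a single column offset) are invented for illustration; the paper's actual formulation may differ.

```python
from collections import Counter
from typing import Dict, List, Tuple

Grid = List[List[int]]  # 2D grid of VQ token ids for one image

def score_pairs(grids: List[Grid], alpha: float = 0.5) -> Dict[Tuple[int, int], float]:
    """Score candidate merges of horizontally adjacent token pairs.
    Score = frequency weighted by spatial consistency, where consistency is the
    fraction of a pair's occurrences that fall at its single most common column
    offset, an invented proxy for 'this pair recurs in a stable layout'."""
    pair_count: Counter = Counter()
    pair_positions: Dict[Tuple[int, int], Counter] = {}
    for grid in grids:
        for row in grid:
            for c in range(len(row) - 1):
                pair = (row[c], row[c + 1])
                pair_count[pair] += 1
                pair_positions.setdefault(pair, Counter())[c] += 1
    scores = {}
    for pair, freq in pair_count.items():
        consistency = pair_positions[pair].most_common(1)[0][1] / freq
        scores[pair] = freq * (alpha + (1 - alpha) * consistency)
    return scores

# Two toy 3x4 token grids; the pair (7, 8) recurs in the same position, so it scores
# highest and would be merged into a new visual token, much as text BPE merges byte pairs.
grids = [
    [[7, 8, 2, 3], [7, 8, 2, 5], [1, 4, 4, 5]],
    [[7, 8, 2, 3], [6, 6, 2, 5], [7, 8, 4, 5]],
]
scores = score_pairs(grids)
best_pair = max(scores, key=scores.get)
print(best_pair)  # -> (7, 8)
```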