AI Visual Memory
A post-95 Chinese star team out of Meta partners with Qualcomm just one year after founding, giving phones multimodal memory
Founder Park · 2025-11-07 00:15
Core Insights
- Memories.ai, founded by Shawn Shen, has launched LVMM 2.0, a Large Visual Memory Model, and announced a partnership with Qualcomm for native operation on Qualcomm processors by 2026 [2][9]
- The company focuses on developing AI's visual memory capabilities, having completed an $8 million seed round in July 2025, led by Susa Ventures with participation from notable investors such as Samsung Next and Fusion Fund [2]

Company Background
- Founder Shawn Shen holds a PhD in Engineering from Trinity College, Cambridge, and previously worked as a core research scientist at Meta Reality Labs, focusing on human-computer interaction and augmented reality [3]
- Co-founder Ben Zhou also worked at Meta Reality Labs, on AI assistants for Meta's Ray-Ban glasses [3]
- Eddy Wu has been appointed Chief AI Officer, bringing five years of experience at Meta, where he worked on GenAI research [3]

Product Development
- LVMM 2.0 was released three months after the first generation, maintaining performance while cutting parameter count by 90%, making it better suited to edge devices [6]
- The model converts raw video into structured memory on-device, addressing video searchability by encoding and compressing frames into an index that supports millisecond retrieval [7][8]

Technical Advantages
- Running on Qualcomm processors significantly reduces latency, lowers cloud costs, and keeps data local for improved security [8]
- The model integrates video, audio, and images to provide contextual results, ensuring a consistent experience across devices such as smartphones and cameras [8]

Applications
- Practical applications of LVMM 2.0 include enhancing AI capabilities in smart glasses, security systems, and robots, enabling real-time understanding and response [11]
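The encode-compress-index pipeline described above can be sketched in miniature. The snippet below is a toy illustration, not Memories.ai's actual method: frames are projected to small normalized vectors (standing in for a learned encoder), stored in an in-memory index, and retrieved by nearest-neighbor search. All names (`encode_frame`, `FrameIndex`) and the brute-force scoring are invented for this sketch; production systems would use a trained encoder and an approximate-nearest-neighbor index.

```python
import numpy as np

EMBED_DIM = 8  # tiny dimension for illustration; real encoders use far more

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in encoder: project a flattened frame to a compact vector and
    L2-normalize it so cosine similarity reduces to a dot product."""
    rng = np.random.default_rng(0)  # fixed projection so encoding is repeatable
    proj = rng.standard_normal((frame.size, EMBED_DIM))
    vec = frame.flatten() @ proj
    return vec / np.linalg.norm(vec)

class FrameIndex:
    """Toy in-memory index: stores compressed embeddings with timestamps and
    answers queries by brute-force dot product (real systems use ANN search)."""
    def __init__(self):
        self.vectors = []
        self.timestamps = []

    def add(self, frame: np.ndarray, timestamp: int) -> None:
        self.vectors.append(encode_frame(frame))
        self.timestamps.append(timestamp)

    def query(self, frame: np.ndarray):
        sims = np.stack(self.vectors) @ encode_frame(frame)
        best = int(np.argmax(sims))
        return self.timestamps[best], float(sims[best])

# Index three synthetic "frames", then look one of them back up.
rng = np.random.default_rng(42)
index = FrameIndex()
frames = [rng.random((4, 4)) for _ in range(3)]
for t, f in enumerate(frames):
    index.add(f, timestamp=t)

ts, score = index.query(frames[1])
print(ts)  # the second frame (timestamp 1) matches itself best
```

The normalization step is what makes retrieval cheap: once every embedding is unit-length, ranking by cosine similarity is a single matrix-vector product, which is why such indexes can answer in milliseconds.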
Chinese founding team out of Meta raises an $8 million seed round to build a visual AI memory brain
机器之心 · 2025-07-25 02:03
Core Viewpoint
- The article discusses Meta's recent talent acquisition from Google and highlights the emergence of a new AI research lab, Memories.ai, founded by former Meta scientists, which has made significant advances in AI memory systems [2][4][6]

Group 1: Talent Acquisition and Company Formation
- Meta has recently hired three top researchers from Google to bolster its AI capabilities [2]
- Memories.ai, an AI research lab founded by former Meta Reality Labs scientists, has completed an $8 million seed funding round led by Susa Ventures [6]

Group 2: Innovations in AI Memory
- Memories.ai has developed a Large Visual Memory Model (LVMM) aimed at addressing the "memory loss" problem in AI systems, allowing for a deeper understanding of visual data [7][13]
- The LVMM enables AI systems to retain contextual information, recognize temporal patterns, and perform intelligent comparative analysis, significantly enhancing video processing capabilities [14][15][16]

Group 3: Applications and Market Potential
- The LVMM technology shows broad potential across sectors including security, media, marketing, and consumer electronics, with capabilities such as efficient video data retrieval and deep emotional analysis of social media content [22]
- The platform transforms raw video into a searchable database, enabling rapid retrieval and analysis that can significantly improve operational efficiency across industries [17][24]

Group 4: User Interaction and Future Prospects
- Memories.ai has made its core technology accessible through an API and launched an interactive web application that lets users upload videos for quick, precise content analysis [24]
- Demo Agents such as Video Creator and Video Marketer showcase practical applications of the LVMM in video creation and marketing strategies [26][27]