PitchBook Introduces Valuation Model for VC-Backed Companies
Yahoo Finance· 2026-02-11 18:46
Core Insights
- PitchBook has launched PitchBook Valuation Estimates, a daily framework aimed at providing a consistent, independent, and data-informed valuation signal for over 15,000 VC-backed companies [1][2]

Group 1: Valuation Model
- The new model combines machine learning with PitchBook's private-market data, public-market signals, and capital-structure insights to determine valuations [1]
- Unlike traditional backward-looking valuations based on financial reports and comparable transactions, this model offers daily valuations by incorporating both public and private market data [2]
- The model updates last-known valuations using public and private comparables and includes company-specific indicators such as employee growth and company age [2]

Group 2: Integration and Product Development
- The valuation model is integrated into the overall PitchBook platform, enhancing its offerings in the private market space [3]
- This launch is part of a broader strategy by Morningstar and PitchBook to expand their coverage of private markets, following the introduction of various indices tracking companies transitioning from private to public markets [3][4]
- Recent product launches include the Morningstar PitchBook GenAI 20 index, which tracks the 20 largest GenAI companies, and several Evergreen Fund Indexes [4]
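The update mechanism described above (rolling a last-known valuation forward using public comparables plus company-specific signals) can be illustrated with a toy sketch. To be clear, everything here is a hypothetical assumption for illustration: the function name, the blend of signals, and the weights are invented and are not PitchBook's actual methodology.

```python
# Toy illustration of a comparables-based valuation update.
# NOT PitchBook's model: the signals and weights below are invented.

def estimate_valuation(last_valuation: float,
                       comp_index_at_round: float,
                       comp_index_today: float,
                       headcount_growth: float,
                       sensitivity: float = 0.15) -> float:
    """Roll a last-known valuation forward with a public-comparables
    index move and a small, capped company-specific adjustment."""
    # Market component: how a basket of comparable public companies
    # has moved since the company's last priced round.
    market_multiplier = comp_index_today / comp_index_at_round
    # Company component: a small tilt for headcount growth, clamped
    # so a single signal cannot dominate the estimate.
    company_tilt = 1.0 + max(-0.5, min(0.5, sensitivity * headcount_growth))
    return last_valuation * market_multiplier * company_tilt

# A company last valued at $100M; comps are up 20%, headcount up 30%.
est = estimate_valuation(100e6, 1000.0, 1200.0, 0.30)
```

The point of the sketch is only that a daily estimate can move without any new funding round: either the comparables index or a company-level signal changing is enough to shift the output.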
Nebius announces agreement to acquire Tavily to add agentic search to its AI cloud platform
Businesswire· 2026-02-10 15:40
Core Insights
- Nebius has announced an agreement to acquire Tavily, enhancing its AI cloud platform with agentic search capabilities, which is crucial for the rapidly growing agentic AI market [1]
- The acquisition aims to create a unified platform for enterprises to build and operate autonomous AI agents, integrating real-time search infrastructure into Nebius's existing offerings [1]
- The agentic AI market is projected to grow significantly, from approximately $7 billion in 2025 to between $140 billion and $200 billion by the early 2030s, indicating a compound annual growth rate exceeding 40% [1]

Company Overview
- Nebius is positioned as an AI cloud company focused on providing a full-stack platform for developers and enterprises to manage their AI initiatives, from data and model training to production deployment [1]
- The company is listed on NASDAQ (NASDAQ: NBIS) and is headquartered in Amsterdam, serving a diverse range of clients including startups and Fortune 500 companies [1]

Acquisition Details
- The acquisition of Tavily will allow Nebius to enhance its software stack, providing developers with the necessary tools to create enterprise-grade agentic systems without relying on multiple vendors [1]
- Tavily's technology will complement Nebius's existing offerings, particularly the Nebius Token Factory, which provides high-performance inference for AI agents [1]
- The transaction is expected to close in the coming weeks, although the transaction value has not been disclosed [1]

Market Potential
- The agentic AI market is anticipated to see exponential growth as enterprises increasingly deploy autonomous AI systems, with Tavily's agentic search representing a critical capability in this landscape [1]
- Tavily has achieved over 3 million monthly SDK downloads and serves a developer community of more than one million users, indicating strong product-market fit [1]
- Major clients of Tavily include Fortune 500 companies such as IBM, showcasing its relevance across various industries including financial services and logistics [1]
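The market projection above implies the stated growth rate. As a quick arithmetic check of the article's figures (assuming an eight-year horizon, 2025 to 2033, as one reading of "the early 2030s"):

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1

def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1.0 / years) - 1.0

# $7B in 2025 growing to the $140B-$200B range by 2033:
low = cagr(7.0, 140.0, 8)    # low end of the range, roughly 45% per year
high = cagr(7.0, 200.0, 8)   # high end, roughly 52% per year
```

Both ends of the range come out above 40% per year, consistent with the article's "exceeding 40%" claim; a shorter horizon would imply an even higher rate.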
Nvidia Sued: Has Training Large Models on Pirated Data Become an Unspoken Industry Rule?
Xin Lang Cai Jing· 2026-02-08 09:51
Core Viewpoint
- Nvidia is facing a class-action lawsuit over copyright infringement related to the use of data from "shadow libraries" to train its AI models, specifically the NeMo Megatron framework, which allegedly incorporates copyrighted works used without permission [3][18]

Group 1: Lawsuit Details
- The lawsuit was filed by five authors who claim Nvidia used a dataset from illegal "shadow libraries" to develop its next-generation language model [3][18]
- Nvidia submitted a motion on January 31, 2026, arguing that the plaintiffs failed to provide sufficient evidence of infringement and asserting that its actions fall under "fair use" [4][18]
- A hearing is scheduled for April 2, 2026, to review Nvidia's motion [4]

Group 2: Competitive Pressure
- Internal records indicate that Nvidia faced competitive pressure from OpenAI, prompting it to acquire millions of pirated books from shadow libraries to showcase its technology at the 2023 developer conference [19][20]
- The lawsuit alleges that Nvidia provided tools and scripts to clients to facilitate the downloading of pirated datasets [19]

Group 3: Data Sources
- Nvidia's NeMo Megatron models were reportedly trained on The Pile dataset, which includes a subset called Books3 sourced from the shadow library Bibliotik, containing approximately 190,000 books [21][22]
- Nvidia is accused of collaborating directly with the largest shadow library, Anna's Archive, to access millions of pirated books, totaling around 500TB of data [22][24]

Group 4: Industry Context
- The rise of AI has led to increased litigation over training-data copyright, with other companies such as OpenAI, Anthropic, and Meta facing similar lawsuits [20][28]
- The competitive landscape has intensified, with Nvidia's need for high-quality training data driving it to engage with shadow libraries, which offer easy access to vast amounts of data [21][27]

Group 5: Legal Precedents
- Previous cases have produced significant settlements, such as Anthropic agreeing to pay at least $1.5 billion to settle a copyright infringement lawsuit, potentially a record for copyright damages [20][28]
- Courts have begun ruling on the use of copyrighted works for AI training, with some cases finding that such use can qualify as fair use under certain conditions [29][30]
We read every submission from Canada’s AI task force: here’s what they said
BetaKit· 2026-02-06 18:17
Core Insights
- Canada is at a crossroads in its AI development, needing to address commercialization and compute capacity while leveraging its research strengths [2][3]

Group 1: Current State of AI in Canada
- Canada is recognized as a leader in AI research but is lagging in commercialization and lacks the necessary domestic compute capacity and capital [2]
- Public sentiment towards AI is negative, which poses a risk to future investments and the overall AI strategy [2]

Group 2: Recommendations for AI Strategy
- The government should identify AI champions, lead in purchasing Canadian-made AI solutions, and enhance existing programs while building compute capacity [3]
- A comprehensive audit of AI deployment across government is necessary to identify high-risk use cases and their impacts on equality [6]
- Establish a national AI Readiness Fund to modernize data infrastructure, as AI cannot thrive on outdated systems [6]

Group 3: Talent and Workforce Development
- Focus on AI skills development beyond just engineering, including soft skills like communication and problem-solving [16]
- Fast-track visas for international students in AI fields and create pathways to permanent residency for AI PhD graduates [11]

Group 4: Infrastructure and Investment
- Propose the establishment of national sovereign AI compute facilities and a Canadian Compute and Infrastructure Initiative to support the growth of the compute ecosystem [11][13]
- Launch a $2 billion pre-seed and seed-focused fund-of-funds and a $5 billion sovereign wealth fund targeted at growth-equity companies [16]

Group 5: Regulatory and Governance Framework
- Amend existing laws to cover AI platforms and create a Digital Safety Commission to oversee AI-related issues [13][18]
- Develop a data governance model that allows safe use of private data for AI applications [13]

Group 6: Indigenous and Community Engagement
- Dedicate resources to Indigenous governments for establishing AI infrastructure and data governance frameworks [18]
- Establish a federal-provincial funding stream to support AI literacy and workforce training for Indigenous peoples [18]
AI's Research Frontier: Memory, World Models, & Planning — With Joelle Pineau
Alex Kantrowitz· 2026-01-30 11:18
Joelle Pineau is the chief AI officer at Cohere. Pineau joins Big Technology Podcast to discuss where the cutting edge of AI research is headed — and what it will take to move from impressive demos to reliable agents. Tune in to hear why memory, world models, and more efficient reasoning are emerging as the next big frontiers, plus what current approaches are missing. We also cover the “capability overhang” in enterprise AI, why consumer assistants still aren’t lighting the world on fire, what AI sovereignt ...
Former Google Researcher: The Era of Compute Worship Should End
Ji Qi Zhi Xin· 2026-01-10 07:00
Core Viewpoint
- The article discusses the potential end of the scaling era in AI, emphasizing that merely increasing computational power may not yield proportional improvements in model performance, and highlights the rise of smaller models outperforming larger ones [1][5][7]

Group 1: Trends in AI Development
- The belief that scaling computational resources leads to better model performance is being challenged, as evidence shows that larger models do not always outperform smaller ones [8][14]
- The past decade has seen a dramatic increase in model parameters, from 23 million in Inception to 235 billion in Qwen3-235B, but the relationship between parameter count and generalization ability remains unclear [14]
- There is a growing trend of smaller models surpassing larger models in performance, indicating a shift in the relationship between model size and effectiveness [8][10]

Group 2: Efficiency and Learning
- Increasing model size is becoming a costly method for learning rare features, as deep neural networks are inefficient at learning from low-frequency data [15]
- High-quality data can reduce the dependency on computational resources, suggesting that improving training datasets can compensate for smaller model sizes [16]
- Recent algorithmic advances have delivered significant performance improvements without extensive computational resources, indicating a shift in focus from sheer size to optimization techniques [17][18]

Group 3: Limitations of Scaling Laws
- Scaling laws, which attempt to predict model performance from computational power, have shown limitations, particularly when applied to real-world tasks [20][21]
- The reliability of scaling laws varies across domains, with some areas showing stable relationships while others remain unpredictable [21][22]
- Over-reliance on scaling laws may lead companies to underestimate the value of alternative innovative approaches in AI development [22]

Group 4: Future Directions
- The future of AI innovation may not depend solely on scaling but rather on fundamentally reshaping optimization strategies and exploring new architectures [24]
- There is a noticeable shift towards enhancing performance during the inference phase rather than just during training, indicating a new approach to AI development [25]
- The focus is moving from creating stronger models to developing systems that interact more effectively with the world, highlighting the importance of user experience and system design [27][28]
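Scaling laws of the kind the article critiques are typically power laws fit in log-log space, e.g. loss L(C) ≈ a · C^(−α) as a function of compute C. A minimal sketch of how such a fit works, using synthetic data (the exponent α = 0.05 and constant a = 10 are arbitrary values chosen for illustration, not from any published scaling study):

```python
import math

# Synthetic (compute, loss) points following L(C) = a * C**(-alpha).
alpha_true, a_true = 0.05, 10.0
compute = [1e18, 1e19, 1e20, 1e21, 1e22]
loss = [a_true * c ** (-alpha_true) for c in compute]

# A power law is a straight line in log-log space:
#   log L = log a - alpha * log C
# so ordinary least squares on the logs recovers the parameters.
xs = [math.log(c) for c in compute]
ys = [math.log(v) for v in loss]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
alpha_fit = -slope                            # recovered exponent
a_fit = math.exp(mean_y - slope * mean_x)     # recovered constant
```

The fragility the article points to follows directly from this form: the fit extrapolates a straight line in log-log space, so any regime where real-world task performance bends away from a clean power law (as the article argues happens outside narrow benchmarks) breaks the prediction.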
We Look Forward to AI's Development, but Must Be Wary of It Becoming a Machine of Exploitation | New Year's Day Book Excerpt
Di Yi Cai Jing· 2026-01-02 06:37
Core Insights
- The article discusses the rapid rise of AI technology and its implications for labor and the economy, highlighting the hidden labor behind AI systems and the exploitation of workers in the industry [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]

AI Technology and Market Growth
- AI is defined as a machine-based system that processes data to generate decisions and predictions, with applications ranging from simple tasks to complex military systems [2]
- The global AI market surpassed $200 billion in 2023, growing at an annual rate of approximately 20%, and is projected to reach nearly $2 trillion by 2030 [3]
- The core technology driving AI, particularly chatbots, is large language models (LLMs), which are trained on vast datasets, with models like ChatGPT-4 having around 1.76 trillion parameters [3]

Labor and Exploitation in AI
- The article emphasizes the connection between AI usage and the labor of workers globally, who are often underpaid and overworked in the AI training process [4][5][6][7][8][9][20]
- AI systems require significant human labor for tasks such as data labeling and algorithm adjustments, which are often overlooked in discussions about AI's capabilities [7][8]
- The exploitation of workers is a central theme, with AI systems designed to extract more value from laborers while reducing the skill level required for tasks, leading to increased work intensity [9][20]

Shifts in Industry Dynamics
- The transition from the platform era to the AI era is marked by the emergence of new players in the tech industry, including both traditional giants and new AI startups [10][11][12]
- Major tech companies are forming strategic partnerships with AI startups, investing billions to maintain competitive advantages in the AI space [11][14]
- The infrastructure required for AI, including data centers and specialized hardware, is becoming increasingly important, leading to a concentration of power and resources among a few companies [12][13]

Geopolitical and Environmental Considerations
- The development of AI is influenced by geopolitical factors, including tensions between the US and China, and the need for sustainable practices in technology [16][17]
- The article highlights the environmental impact of AI infrastructure and the importance of considering sustainability in the context of AI development [16]

Future of AI and Labor
- The article calls for a deeper understanding of the AI industry's labor dynamics and for advocacy to improve conditions for workers [20]
- It suggests that while AI has the potential for exploitation, there is also an opportunity for change if the mechanisms of the industry are understood and addressed [20]
Hedge Fund and Insider Trading News: Bill Ackman, Warren Buffett, Michael Burry, Boaz Weinstein, Jim Cramer, Vicor Corp (VICR), Dolphin Entertainment Inc (DLPN), and More
Insider Monkey· 2025-12-31 20:30
Core Insights
- Generative AI is viewed as a transformative technology by Amazon's CEO Andy Jassy, indicating its potential to significantly enhance customer experiences across the company [1]
- Elon Musk predicts that humanoid robots could create a market worth $250 trillion by 2040, representing a major shift in the global economy driven by AI innovation [2]
- Major firms like PwC and McKinsey acknowledge the multi-trillion-dollar potential of AI, suggesting a broad consensus on its economic impact [3]

Company and Industry Analysis
- A breakthrough in AI technology is believed to be redefining work, learning, and creativity, leading to increased interest from hedge funds and top investors [4]
- There is speculation about an under-owned company that may play a crucial role in the AI revolution, with its technology posing a threat to competitors [4]
- Prominent figures in technology and investment, including Bill Gates and Warren Buffett, recognize AI as a significant advancement with the potential for substantial social benefits [8]

Market Trends
- The AI ecosystem is expected to reshape business, government, and consumer operations globally, indicating a shift in market dynamics [2]
- The investment landscape is becoming increasingly competitive, with major tech companies like Tesla, Nvidia, Alphabet, and Microsoft being closely watched, while a smaller company is suggested to have greater potential [6]
Nvidia (NVDA.US) Spends $20 Billion for Core Assets of AI Chip Startup Groq, Its Largest Deal in Company History
Zhi Tong Cai Jing· 2025-12-24 22:37
Core Insights
- Nvidia has agreed to acquire assets from AI chip startup Groq for $20 billion in cash, marking Nvidia's largest acquisition to date [1]
- Groq has raised over $500 million since its founding in 2016, with a recent funding round of $750 million valuing the company at approximately $6.9 billion [1][2]
- Groq will continue to operate independently, with its CFO Simon Edwards becoming CEO, while key executives will join Nvidia to enhance its technology capabilities [1][2]

Financial Details
- Nvidia's cash and short-term investments reached $60.6 billion as of the end of October, up significantly from $13.3 billion at the beginning of 2023, providing ample resources for large investments [2]
- The acquisition is significantly larger than Nvidia's previous record acquisition of Mellanox for about $7 billion in 2019 [2]

Strategic Implications
- Nvidia plans to integrate Groq's low-latency processors into its AI infrastructure, expanding its capabilities in AI inference and real-time workloads [2]
- The acquisition aligns with Nvidia's ongoing strategy of heavy investment in the AI ecosystem, including investments in AI and energy infrastructure companies and partnerships with major players like OpenAI and Intel [3]

Market Context
- Demand for AI inference acceleration chips is surging, with Groq targeting $500 million in revenue this year [3]
- Other AI chip startups, such as Cerebras Systems, are also gaining attention, indicating a competitive landscape in the AI chip market [4]
Former Meta Chief AI Scientist Launches New Venture; New AI Company's Valuation Targets €3 Billion
Hua Er Jie Jian Wen· 2025-12-19 14:27
Group 1
- Former Meta Chief AI Scientist Yann LeCun is seeking €500 million in funding for his newly established AI company, which would value the company at approximately €3 billion before its official launch [1]
- The new company, named Advanced Machine Intelligence Labs (AMI Labs), will focus on developing next-generation superintelligent AI systems, particularly "world models" that can simulate and understand the physical world [2]
- AMI Labs' technology foundation is based on research led by LeCun during his time at Meta, aiming to create a new AI architecture capable of learning from text, video, and spatial data, with abilities for continuous memory, complex reasoning, and planning [2]

Group 2
- Alexandre LeBrun, co-founder of French health tech startup Nabla, has been appointed CEO of AMI Labs, while Nabla will maintain a strategic research partnership with AMI Labs [3]
- Meta is undergoing a significant strategic shift in its AI approach, with CEO Mark Zuckerberg aiming to compete directly with OpenAI and Google by moving away from the long-term exploratory work initiated by LeCun [4]
- Meta recently laid off approximately 600 employees from its AI research team to reduce costs and accelerate productization, reflecting ongoing leadership changes within the company [4]