Hugging Face
Researchers lose their "happy home"? Papers with Code announces its shutdown, and netizens aren't buying Hugging Face's new section
36Kr· 2025-08-13 07:29
Papers with Code has formally ceased operations. Longtime users around the world have spoken out, praising the site's value to machine learning research while also voicing a concrete need: beyond mapping papers to their open-source code, features such as SOTA results and leaderboards matter just as much. When Hugging Face co-founder and CTO Julien Chaumond announced the launch of "Trending Papers" on his X account, the news of Papers with Code's shutdown was sealed, leaving many developers and researchers heartbroken. The end of an era: from garbled dataset titles and descriptions to the site returning "Bad Gateway 502", Papers with Code had been suffering outages and access problems since early July, yet the operators never responded publicly; users who asked the operating team for updates via GitHub Issues received no reply either. As the researchers' "happy home", this platform, which gathered papers, code, benchmarks, leaderboards, and other resources, had a large following worldwide. As more developers and researchers noticed they could no longer reach the site, speculation such as "Papers with Code has shut down" and "the site was attacked" spread across X and Re ...
JFrog (FROG) Q2 Revenue Jumps 23%
The Motley Fool· 2025-08-07 21:24
Core Insights
- JFrog reported Q2 FY2025 earnings with GAAP revenue of $127.2 million, exceeding analyst expectations of $122.8 million, and non-GAAP EPS of $0.18, surpassing the expected $0.16 [1][2]
- The company experienced significant growth in its cloud segment, with cloud revenue reaching $57.1 million, a 45% increase year-over-year, now accounting for 45% of total revenue [1][5]
- Customer expansion was notable, with the number of customers generating over $1 million in annual recurring revenue (ARR) increasing to 61, a 45% rise from the previous year [1][6]

Financial Performance
- Non-GAAP operating income improved to $19.4 million, up from $13.6 million year-over-year, with a non-GAAP operating margin of 15.2%, an increase of 2 percentage points [2][9]
- Free cash flow (non-GAAP) more than doubled to $35.5 million, reflecting a 122.3% increase from the prior year [2][9]
- Remaining performance obligations (RPO) stood at $476.7 million, indicating strong momentum in onboarding large customers [10]

Business Overview and Strategic Focus
- JFrog's platform aids organizations in managing, automating, and securing software packages throughout the development lifecycle, focusing on binary management, vulnerability scanning, and compliance [3]
- The company emphasizes integrating security into software development processes and expanding support for emerging technologies like machine learning [4]
- Strategic partnerships with major players in cloud and AI sectors are crucial for sustaining growth and enhancing the company's value proposition [4]

Product Innovation
- New MLOps modules were launched, allowing organizations to manage and secure AI and machine learning model artifacts [7][12]
- Enhanced security functions for both standard software components and machine learning models were introduced, addressing the growing need for security in AI applications [7]
- Collaborations with NVIDIA, Hugging Face, and GitHub are driving new enterprise deals and platform adoption [8]

Outlook and Guidance
- For Q3 FY2025, JFrog expects revenue between $127.0 million and $129.0 million, with non-GAAP EPS projected in the range of $0.15 to $0.17 [13]
- The full-year revenue outlook for FY2025 has been raised to between $507.0 million and $510.0 million, with non-GAAP operating income projected between $75.0 million and $78.0 million [13][14]
- Management maintains a conservative forecasting approach, not factoring in potential upside from large enterprise deals or continued high cloud usage [14]
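The reported growth figures can be cross-checked with quick arithmetic; the inputs below come from the article, while the prior-year values are back-calculated implications, not reported numbers.

```python
# Back-of-the-envelope check of JFrog's reported Q2 FY2025 figures.
# Inputs are from the article; prior-year values are derived, not reported.
total_rev = 127.2   # $M, Q2 GAAP revenue
cloud_rev = 57.1    # $M, cloud revenue, +45% YoY
fcf = 35.5          # $M, free cash flow, +122.3% YoY

cloud_share = cloud_rev / total_rev   # consistent with "45% of total revenue"
prior_cloud = cloud_rev / 1.45        # implied prior-year cloud revenue
prior_fcf = fcf / 2.223               # implied prior-year free cash flow

print(round(cloud_share, 3), round(prior_cloud, 1), round(prior_fcf, 1))
# → 0.449 39.4 16.0
```

The implied prior-year free cash flow of about $16.0 million confirms the "more than doubled" characterization, and the cloud share works out to roughly 45% of revenue as stated.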
China leads in open-source AI as the U.S. industry scrambles to launch a new joint project to catch up
Guan Cha Zhe Wang· 2025-08-06 12:03
Core Insights
- The U.S. technology sector is increasingly anxious about China's advancements in artificial intelligence (AI), particularly in the open-source AI domain, where Chinese companies dominate the top models [1][2]
- The newly launched "American Truly Open Models" (ATOM) initiative aims to enhance the competitiveness of U.S. open-source AI by establishing a domestic lab focused on developing accessible and modifiable software [4][5]
- Despite the ambitious goals of the ATOM initiative, challenges such as high costs and lack of coordination remain significant hurdles [5][7]

Group 1: Current State of AI
- In the top 15 AI models, only 5 are open-source, all developed by Chinese companies, highlighting the lag of U.S. developers in this area [2]
- The recent release of four leading open-source AI models by Chinese labs in July contrasts with the absence of significant new releases from U.S. developers during the same period [2]

Group 2: ATOM Initiative
- The ATOM initiative, launched on August 4, has garnered support from over ten industry leaders, including notable figures from technology and academia [4]
- The initiative requires substantial computational resources, specifically up to 10,000 advanced GPU chips, with an estimated funding need of at least $100 million [7]

Group 3: Challenges and Opportunities
- The slow progress in U.S. open-source AI underscores the necessity of the ATOM initiative, as highlighted by the lack of significant new products since Meta's Llama 4 model [5]
- The high costs associated with developing top-tier AI systems pose a significant challenge, with calls for support from tech companies, executives, government agencies, and philanthropic organizations [7]
- The ATOM initiative is seen as a potential catalyst for scientific research and could assist resource-limited global AI startups [8]
After setting prices for 30 unicorns, the person who understands AI product pricing best says 95% of AI startups get pricing wrong
36Kr· 2025-07-31 12:20
Core Insights
- The article emphasizes the critical importance of pricing strategies for AI products, highlighting that traditional SaaS pricing models may not be suitable for AI applications due to their unique value propositions and capabilities [2][3][4].

Group 1: AI Pricing Challenges
- AI products create significant value from day one, yet many founders still adopt low subscription pricing, failing to capture the true value [3][4].
- Early user pricing anchors can lead to long-term challenges, making it difficult to raise prices later even if the product delivers substantial value [4][12].
- The "AI Pricing Four Quadrants" model categorizes pricing strategies based on attribution ability and autonomy, suggesting different models for different types of AI products [4][10].

Group 2: Common Pricing Traps
- Many AI startups fall into the trap of setting low prices, which can lock them into a low-value perception and hinder future growth [11][12].
- Using free trials for proof of concept (POC) without establishing a clear value proposition can waste resources and fail to convert leads into paying customers [16][23].
- Treating AI as a traditional SaaS product overlooks its potential to replace human roles, necessitating a shift in pricing strategies to reflect the value delivered [17][19].

Group 3: Effective Pricing Strategies
- Establishing a commercial attribution model from day one is crucial for demonstrating ROI and justifying pricing [21][22].
- Charging for POCs can filter out non-serious inquiries and set the stage for meaningful commercial discussions [23][24].
- Implementing tiered pricing strategies allows customers to choose options that reflect their perceived value, enhancing the overall pricing framework [27][28].

Group 4: New Pricing Paradigms
- The article introduces a dual-engine strategy for AI companies, focusing on both market share and wallet share to ensure sustainable growth [34][36].
- AI products must demonstrate clear attribution of value and possess automation capabilities to justify higher pricing [37][39].
- The ultimate goal is to integrate AI deeply into customer processes, allowing for expanded usage and higher willingness to pay [41][42].
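The four-quadrant idea (classifying a product by value attribution and autonomy) can be sketched as a tiny lookup. The two axes come from the article; the pricing model assigned to each quadrant below is an illustrative assumption, since this summary does not spell out the article's exact recommendation per quadrant.

```python
def pricing_quadrant(attribution_clear: bool, autonomous: bool) -> str:
    """Map an AI product onto a four-quadrant pricing sketch.

    Axes (from the article): can the product's value be clearly
    attributed, and does it act autonomously? The model named for
    each quadrant is our illustrative guess, not the article's text.
    """
    if attribution_clear and autonomous:
        return "outcome-based pricing"        # charge per result delivered
    if attribution_clear:
        return "usage-based pricing"          # charge per unit of measurable work
    if autonomous:
        return "per-agent pricing"            # charge like a seat/role replacement
    return "subscription (traditional SaaS) pricing"

print(pricing_quadrant(True, True))   # → outcome-based pricing
print(pricing_quadrant(False, False)) # → subscription (traditional SaaS) pricing
```

The point of the quadrant exercise is the diagnosis, not the labels: a product with clear attribution and high autonomy can defend far more aggressive pricing than a low-attribution assistant priced like conventional SaaS.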
X @TechCrunch
TechCrunch· 2025-07-22 20:40
How do you get people to trust robots? Make them cute. Hugging Face’s co-founder and chief scientist @Thom_Wolf explains why the company is betting on friendly AI hardware. Catch the full conversation on the latest episode of @EquityPod: https://t.co/PV5mO844tB https://t.co/gpzWz7cPyX ...
AI Voices | Kimi K2: the world's first fully open-source agentic model
红杉汇· 2025-07-18 12:24
Core Viewpoint
- Moonshot AI has officially released the Kimi K2 model, which is designed for Agentic workflows, showcasing advanced capabilities in understanding complex instructions and autonomously executing multi-step tasks [2][3][26]

Group 1: Model Architecture and Capabilities
- Kimi K2 is built on a sparse MoE (Mixture-of-Experts) architecture, featuring a total of 1 trillion parameters and 32 billion active parameters, with 384 experts [4][5]
- The model can dynamically activate relevant experts based on task requirements, allowing for efficient parameter utilization [4][5]
- Kimi K2 has a maximum context length of 128K, enhancing its ability to handle long documents and complex retrieval tasks [8]

Group 2: Training and Optimization
- The model underwent pre-training on 15.5 trillion tokens using the MuonClip optimizer, which effectively addressed gradient instability and convergence issues [7][10]
- Kimi K2 incorporates a self-judging mechanism to improve performance on non-verifiable tasks, continuously optimizing its capabilities [7]

Group 3: Performance Metrics
- Kimi K2 achieved state-of-the-art (SOTA) results in various benchmark tests, including SWE Bench Verified, Tau2, and AceBench, demonstrating superior performance in coding, agent tasks, and mathematical reasoning [8][25]
- In programming tasks, Kimi K2 scored 53.7% accuracy in LiveCodeBench, surpassing GPT-4.1 [19]
- The model's tool-calling ability reached an accuracy of 65.8% in SWE-bench Verified tests, indicating its proficiency in parsing complex instructions [21]

Group 4: Industry Impact and Recognition
- Kimi K2 has generated significant discussion within the global AI community, with notable endorsements from industry leaders, including NVIDIA's founder Jensen Huang [9][12]
- The model's open-source nature has led to rapid adoption by major platforms such as OpenRouter and Microsoft's Visual Studio Code [12]
- Kimi K2 has been recognized as one of the best open-source models globally, with academic and industry consensus on its capabilities [14][16]

Group 5: Future Implications
- The release of Kimi K2 is expected to enhance the developer ecosystem and expand its applications in various fields, transitioning AI from a mere conversational tool to a productivity engine [26]
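The sparse-MoE routing described above (run only a few experts per input, so active parameters stay far below the total) can be illustrated with a minimal sketch. The dimensions, expert count, and random weights below are toy stand-ins, not Kimi K2's real configuration or code.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Sparse MoE layer: route input x to the top-k experts by gate score.

    Only the selected experts execute, which is why a model like Kimi K2
    can have ~1T total parameters while activating only ~32B per token.
    Everything here is a tiny random stand-in for illustration.
    """
    logits = x @ gate_w                    # one gate score per expert
    top = np.argsort(logits)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 8, 16
gate_w = rng.normal(size=(d, num_experts))
# each "expert" is a tiny linear layer with its own weights
expert_ws = [rng.normal(size=(d, d)) for _ in range(num_experts)]
experts = [lambda x, W=W: x @ W for W in expert_ws]

out = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(out.shape)  # → (8,)
```

The output has the same shape as a dense layer's would; the saving is that only 2 of the 16 expert weight matrices were touched for this input.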
When AI outputs "bias", can humans trust its "worldview"?
Ke Ji Ri Bao· 2025-07-17 01:25
Core Viewpoint
- The article discusses the inherent biases present in AI systems, particularly large language models (LLMs), and questions the trustworthiness of their outputs in reflecting a neutral worldview [1][2].

Group 1: AI and Cultural Bias
- AI models are found to propagate stereotypes across cultures, reflecting biases such as gender discrimination and cultural prejudices [2][3].
- The SHADES project, led by Hugging Face, identified over 300 global stereotypes and tested various language models, revealing that these models reproduce biases not only in English but also in languages like Arabic, Spanish, and Hindi [2][3].
- Visual biases are evident in image generation models, which often depict stereotypical images based on cultural contexts, reinforcing narrow perceptions of different cultures [2][3].

Group 2: Discrimination Against Low-Resource Languages
- AI systems exhibit "invisible discrimination" against low-resource languages, performing poorly compared to high-resource languages [4][5].
- Research indicates that the majority of training data is centered around English and Western cultures, leading to a lack of understanding of non-mainstream languages and cultures [4][5].
- The "curse of multilinguality" phenomenon highlights the challenges AI faces in accurately representing low-resource languages, resulting in biased outputs [4].

Group 3: Addressing AI Bias
- Global research institutions and companies are proposing systematic approaches to tackle cultural biases in AI, including investments in low-resource languages and the creation of local language corpora [6].
- The SHADES dataset has become a crucial tool for identifying and correcting cultural biases in AI models, helping to optimize training data and algorithms [6].
- Regulatory frameworks, such as the EU's AI Act, emphasize the need for compliance assessments of high-risk AI systems to ensure non-discrimination and transparency [6].

Group 4: The Nature of AI
- AI is described as a "mirror" that reflects the biases and values inputted by humans, suggesting that its worldview is not autonomously generated but rather shaped by human perspectives [7].
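A SHADES-style multilingual stereotype probe can be sketched as follows. The sentence pairs and the scoring function here are placeholders of our own invention: a real evaluation would query an actual LLM for sentence likelihoods and use the SHADES dataset's curated stereotype pairs, neither of which is reproduced here.

```python
# Hypothetical sketch of a stereotype probe in the spirit of SHADES.
# Pairs and scorer are illustrative stand-ins, not the real dataset/method.
STEREOTYPE_PAIRS = [
    # (language, stereotyped sentence, counter-stereotyped sentence)
    ("en", "Nurses are women.", "Nurses are men."),
    ("es", "Las enfermeras son mujeres.", "Las enfermeras son hombres."),
]

def score(sentence: str) -> float:
    """Placeholder scorer: a real probe would return the model's
    log-likelihood for `sentence`; this dummy just uses length."""
    return -float(len(sentence))

def bias_rate(pairs, score_fn):
    """Fraction of pairs where the model strictly prefers the stereotyped form."""
    hits = sum(score_fn(s) > score_fn(c) for _, s, c in pairs)
    return hits / len(pairs)

print(bias_rate(STEREOTYPE_PAIRS, score))
```

Running the same probe per language is what exposes the pattern the article describes: a model can look balanced in English while still reproducing the stereotype in Arabic, Spanish, or Hindi.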