Model Context Protocol (MCP)

Research Solutions (RSSS) - 2025 Q4 - Earnings Call Transcript
2025-09-18 22:00
Financial Data and Key Metrics Changes
- Total revenue for Q4 2025 was $12.4 million, up from $12.1 million in Q4 2024, marking a strong quarter for the business [8]
- Annual recurring revenue (ARR) reached $21 million, growing 20% year-over-year [4][9]
- Gross margin for Q4 was 51%, a 450 basis point improvement from Q4 2024, marking the first time blended gross margin exceeded 50% [11]
- Net income for Q4 was $2.4 million or $0.07 per diluted share, compared to a net loss of $2.8 million or $0.09 per diluted share in the prior year [13]
- Adjusted EBITDA for Q4 was $1.6 million, a new quarterly record with a 13% margin [13]

Business Line Data and Key Metrics Changes
- Platform subscription revenue increased 21% year-over-year to approximately $5.2 million, driven by growth in both B2C and B2B segments [9]
- Transaction revenue for Q4 was approximately $7.3 million, down from $7.9 million in the prior year quarter, reflecting a decline in paid transaction order volumes [10]
- The platform business recorded a gross margin of 88.5%, compared to 85.3% in the prior year quarter [11]

Market Data and Key Metrics Changes
- The total active customer count for Q4 was 1,338, down from 1,398 in the same period a year ago [11]
- B2B ARR at quarter end was $14.2 million, while normalized ARR associated with B2C subscribers was approximately $6.7 million [10]

Company Strategy and Development Direction
- The company aims to reach a $30 million platform ARR target by the end of FY 2027, focusing on product development and unique value delivery [4]
- The strategy includes transitioning from a transaction-based model to a vertical SaaS model, leveraging AI to enhance research workflows [6][30]
- The company is exploring acquisitions to enhance its product offerings and has a strong acquisition pipeline [4][24]

Management's Comments on Operating Environment and Future Outlook
- Management expressed optimism about B2B ARR growth momentum, despite competitive pressures in the B2C space [19]
- Transaction revenue growth is expected to remain challenging in the first half of FY 2026, with potential for stabilization or low growth in the latter half [20]
- The company plans to continue investing in sales and marketing, technology, and product development while reducing general and administrative expenses [21]

Other Important Information
- The final earnout for the Scite acquisition was determined to be $15.4 million, with payments structured to be 62% in cash [16]
- Cash flow from operations for FY 2025 was over $7 million, nearly double the previous year's result [18]
- The company ended FY 2025 with a cash balance of $12.2 million, with no outstanding borrowings [18]

Q&A Session Summary
Question: What drove the sequential uptick in ASP?
- The increase in ASP was attributed to larger deals and improved sales execution under the new Chief Revenue Officer [35]
Question: How is the Resolute software adapting to the new API strategy?
- Resolute's strong API capabilities align well with the headless strategy, allowing integration into customer workflows [36]
Question: What is the competitive landscape for the headless strategy?
- The company is uniquely positioned as it collaborates with various publishers, unlike competitors who may hesitate to share content [39]
Question: Can you discuss the trends in the COGS line on the platform side?
- COGS has stabilized with limited headcount growth and cost management strategies, contributing to improved gross margins [41]
Question: How do you expect margins to expand in 2026?
- The company anticipates EBITDA margins to remain above 10%, with potential for growth while continuing to invest in sales and marketing [57]
MCP: A Universal Connector for Building Smarter, Modular AI Agents
AI前线· 2025-09-14 05:33
AI agents powered by large language models (LLMs) have the potential to fundamentally change how we interact with information and to automate complex tasks. To be truly useful, however, they must draw effectively on external context and data sources, use specialized tools, and generate and execute code. Although AI agents can use tools, integrating these external components and getting agents to work with them has long been a major hurdle, typically requiring custom, framework-specific solutions. The result has been a fragmented ecosystem, duplicated effort, and systems that are hard to maintain and scale.

The Model Context Protocol (MCP) emerged to address this. Introduced by Anthropic in late 2024, it is rapidly becoming the "USB-C of AI": an open, universal standard designed to connect AI agents seamlessly with the tools and data they need. This article examines what MCP is, how it improves agent development, and how it is being adopted across leading open-source frameworks. We also discuss the key capabilities MCP unlocks and its real-world applications. For practitioners, engineers, and researchers, understanding MCP is increasingly essential to building the next generation of powerful, context-aware, modular AI systems.

Understanding the Model Context Protocol
Author | San ...
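Concretely, MCP messages are JSON-RPC 2.0, and a client invokes a server-side tool with the `tools/call` method. The sketch below builds such a request with only the standard library; the tool name and arguments are hypothetical, chosen purely for illustration:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
msg = make_tool_call(1, "search_documents", {"query": "MCP adoption"})
print(msg)
```

Because every MCP server speaks this same wire format, an agent framework that can emit and parse these messages can talk to any compliant tool server, which is exactly the "universal connector" property described above.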
The First MCP-Based RAG Framework: UltraRAG 2.0 Delivers High-Performance RAG in a Few Dozen Lines of Code, Rejecting Lengthy Engineering
AI前线· 2025-08-29 08:25
Core Viewpoint
- The article discusses the launch of UltraRAG 2.0, a new framework designed to simplify the development of complex retrieval-augmented generation (RAG) systems, allowing researchers to implement multi-stage reasoning systems with minimal code and effort [2][3][12]

Group 1: UltraRAG 2.0 Features
- UltraRAG 2.0 is built on the Model Context Protocol (MCP) architecture, enabling researchers to declare complex logic using YAML files and significantly reducing the amount of code needed for implementation [2][12]
- The framework encapsulates core RAG components into standardized, independent MCP servers, allowing for flexible function calls and easy expansion [3][24]
- Compared to traditional frameworks, UltraRAG 2.0 lowers the technical barrier and learning costs, letting researchers focus on experimental design and algorithm innovation rather than lengthy engineering implementations [3][12]

Group 2: Code Efficiency
- In the official implementation of IRCoT, the pipeline section requires nearly 900 lines of handwritten logic, while UltraRAG 2.0 achieves the same functionality with approximately 50 lines of code, half of which is YAML pseudocode for orchestration [6][8]
- The article highlights the stark contrast in code structure between FlashRAG and UltraRAG, with UltraRAG requiring significantly less control logic due to its simplified YAML configuration [8][9]

Group 3: Performance and Application
- UltraRAG 2.0 supports high-performance, scalable experimental platforms, allowing researchers to quickly build complex reasoning systems similar to DeepResearch, with capabilities for dynamic retrieval, conditional reasoning, and multi-turn interactions [12][22]
- The system demonstrates a performance improvement of about 12% on complex multi-hop questions compared to vanilla RAG, showcasing its potential for rapid construction of intricate reasoning systems [14][22]

Group 4: MCP Architecture
- The MCP architecture standardizes the way context is provided to large language models (LLMs), facilitating seamless reuse of server components across different systems [23][24]
- UltraRAG 2.0's design allows independent MCP servers to be integrated without invasive modifications to the global code, enhancing flexibility and stability in research environments [24][26]
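The core idea described above, declaring a pipeline as data and dispatching each step to an interchangeable component, can be sketched in a few lines. This is a toy illustration of that pattern, not UltraRAG's actual API; the step names, registry, and state dictionary are all hypothetical:

```python
# What a declarative orchestration file would express: an ordered list of
# steps, each backed by a swappable component (in MCP terms, a server).
PIPELINE = ["retrieve", "rerank", "generate"]

def retrieve(state: dict) -> dict:
    """Toy retriever: attach pseudo-documents for the query."""
    state["docs"] = [f"doc for: {state['query']}"]
    return state

def rerank(state: dict) -> dict:
    """Toy reranker: reorder the retrieved documents."""
    state["docs"] = sorted(state["docs"])
    return state

def generate(state: dict) -> dict:
    """Toy generator: produce an answer from the reranked documents."""
    state["answer"] = f"Answer based on {len(state['docs'])} doc(s)"
    return state

# Maps declared step names to implementations; in a real MCP setup each
# entry could instead be a call to an independent MCP server.
REGISTRY = {"retrieve": retrieve, "rerank": rerank, "generate": generate}

def run(pipeline: list, query: str) -> dict:
    state = {"query": query}
    for step in pipeline:
        state = REGISTRY[step](state)
    return state

result = run(PIPELINE, "multi-hop question")
print(result["answer"])
```

Because control flow lives in the `PIPELINE` declaration rather than in hand-written glue code, swapping a component or reordering stages is a one-line change, which is the engineering saving the article attributes to YAML orchestration.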
Duke University and Zoom Introduce LiveMCP-101: GPT-5 Performs Best but Stays Below 60%, While a Logarithmic Token-Efficiency Pattern in Closed-Source Models Draws Attention
机器之心· 2025-08-28 10:40
Core Insights
- The article discusses the introduction of LiveMCP-101, the first evaluation benchmark specifically designed for MCP-enabled agents in real dynamic environments, consisting of 101 meticulously crafted tasks across domains such as travel planning, sports entertainment, and software engineering [2][5][27]
- The study reveals that even the most advanced models have a success rate of less than 60% on this benchmark, highlighting significant challenges faced by current LLM agents in practical deployment [2][5][27]

Research Background and Motivation
- External tool interaction has become central to AI agents, allowing them to engage dynamically with the real world [5]
- Existing benchmarks are limited: they focus on single-step tool calls and synthetic environments, failing to capture the complexity and dynamism of real-world scenarios [5]
- Real user queries often involve detailed context and specific constraints, necessitating precise reasoning across multiple tool calls [5]

Evaluation Framework
- The benchmark includes 101 high-quality tasks, covering 41 MCP servers and 260 tools, categorized into Easy, Medium, and Hard difficulty levels [6]
- A Reference Agent mechanism ensures stable and reproducible results by strictly following predefined execution plans [9]
- A dual scoring mechanism uses LLM-as-judge to assess both the results and the execution trajectories of the tested agents [11]

Key Findings
- Among 18 evaluated models, GPT-5 leads with a 58.42% overall success rate, while performance declines significantly as task difficulty increases [14]
- The study identifies a strong correlation between execution quality and task success rates, emphasizing the importance of "process correctness" [17]
- Systematic failure modes fall into three main types, with planning and orchestration errors being the most prevalent [20]

Comparison with Existing Work
- LiveMCP-101 offers a more realistic assessment by incorporating a larger tool pool and interference tools, exposing robustness issues under long contexts and selection noise [23]
- The benchmark's detailed execution plans and scoring methods provide clearer differentiation among model capabilities [24]
- The framework allows precise identification of errors in planning, parameters, or post-processing, guiding engineering optimizations [25]
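A dual scoring scheme of the kind described, where both the final result and the execution trajectory must pass, can be sketched as follows. The aggregation rule and threshold here are hypothetical stand-ins, not the benchmark's published formula:

```python
def overall_success(result_correct: bool, trajectory_score: float,
                    threshold: float = 0.8) -> bool:
    """Hypothetical rule: a task succeeds only if the final answer is
    correct AND the judged execution trajectory clears a quality bar."""
    return result_correct and trajectory_score >= threshold

def success_rate(evals: list) -> float:
    """Fraction of (result_correct, trajectory_score) pairs that pass."""
    return sum(overall_success(r, t) for r, t in evals) / len(evals)

# Illustrative judgments: a right answer with a sloppy trajectory fails,
# as does a clean trajectory that reaches the wrong answer.
evals = [(True, 0.9), (True, 0.5), (False, 0.95), (True, 0.85)]
print(f"{success_rate(evals):.2%}")
```

Coupling outcome and process in this way is what lets a benchmark reward "process correctness" rather than answers reached by accident, which matches the correlation the study reports between execution quality and success.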
Microsoft Highlights Gieni AI as Vertical AI Reference at Build 2025
GlobeNewswire News Room· 2025-08-06 00:51
Core Insights
- Orderfox Schweiz AG's Gieni AI platform was showcased at Microsoft Build 2025 as a reference case for vertical AI integration, demonstrating Microsoft's new support for the Model Context Protocol (MCP) [1][5]
- Gieni AI is one of the first vertical AI agents to offer an MCP Connector on the Microsoft Marketplace for Copilot Studio, providing market, competition, and risk intelligence directly within Microsoft 365 tools [2][6]
- The integration of Gieni AI with Microsoft Copilot aims to enhance business decision-making by delivering real-time insights within existing workflows [3][9]

Company Overview
- Orderfox Schweiz AG, based in Zurich, specializes in developing AI-based platforms for the industrial and B2B sectors, including Gieni AI and Partfox [10]
- Gieni AI processes data from over 380 million web pages and 5 million company profiles, utilizing proprietary semantic search and classification systems [8]
- The platform is designed to help companies make smarter decisions, accelerate go-to-market strategies, and maintain a competitive edge by transforming data into actionable intelligence [9]
Real world MCPs in GitHub Copilot Agent Mode — Jon Peck, Microsoft
AI Engineer· 2025-07-19 07:00
AI Development Capabilities
- The industry is bringing AI development capabilities into the editor through Copilot, starting with code completion and moving toward chat interactions for complex prompts and multi-file changes [1]
- Agent mode enables complete task execution with deep interaction, such as building apps or refactoring large codebases [2]
- Agent mode can interpret readme files, including project structure, environment variable configurations, database schemas, API endpoints, and workflow graphs (even as images), to implement tasks [3][4][5]

Model Context Protocol (MCP)
- MCP is an open protocol (an API for AI) that lets LLMs connect to external data sources for general or account-specific information [9]
- VS Code can be configured to use specific MCP servers, allowing Copilot to select the appropriate server for a task and connect to it, whether local or remote [11][12]
- Developers must grant permission for Copilot to connect to MCP servers, ensuring data access remains under their control [20]
- GitHub runs its own MCP server, enabling actions like committing changes to a new branch and creating pull requests directly from the IDE [26][31]

Workflow and Best Practices
- Copilot Instructions, a specially named file, can pre-inject standards and practices into every prompt, such as code-style guidelines and security checks [28][29][30]
- Keeping a change log of everything the agent has done provides a clear record of each step taken [30]
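The VS Code configuration the talk refers to lives in a workspace file such as `.vscode/mcp.json`. As a hedged sketch (the exact schema can vary across VS Code versions), an entry like the following registers a local stdio MCP server; the filesystem server package is the commonly published `@modelcontextprotocol/server-filesystem`:

```json
{
  "servers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```

Once registered, Copilot can pick this server when a task needs file access, and, as noted above, it still asks the developer for permission before connecting.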
What does Enterprise Ready MCP mean? — Tobin South, WorkOS
AI Engineer· 2025-06-27 09:31
MCP and AI Agent Development
- MCP is presented as a way of interfacing between AI and external resources, enabling functionalities like database access and complex computations [3]
- The industry is currently focused on building internal demos and connecting them to APIs, but needs to move toward robust authentication and authorization [9][10]
- Existing tooling must be adapted for MCP because of its dynamic client registration, which can flood developer dashboards [12]

Enterprise Readiness and Security
- Scaling MCP servers requires addressing free-credit abuse, bot blocking, and robust access controls [12]
- Selling MCP solutions to enterprises necessitates SSO, lifecycle management, provisioning, fine-grained access controls, audit logs, and data loss prevention [12]
- Regulations like GDPR impose specific logging requirements for AI workloads that are not yet widely supported [12]

Challenges and Future Development
- Passing scope and access control between different AI workloads remains a significant challenge [13]
- The MCP spec is actively evolving, with features like elicitation (the AI asking humans for input) still unstable [13]
- Cloud vendors are solving cloud hosting, but authorization and access control remain the hardest parts of enterprise deployment [13]
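The fine-grained access control the talk calls for can be framed as a per-tool scope check: each tool declares the scopes it requires, and a call is allowed only if the caller's token covers all of them. The tool names and scope strings below are hypothetical, and real deployments would validate the token itself first:

```python
# Hypothetical mapping from MCP tool names to the OAuth-style scopes
# a caller's token must carry to invoke them.
REQUIRED_SCOPES = {
    "read_records": {"records:read"},
    "delete_records": {"records:read", "records:write"},
}

def authorize(tool: str, token_scopes: set) -> bool:
    """Allow a tool call only if the token covers every required scope."""
    required = REQUIRED_SCOPES.get(tool)
    if required is None:
        return False  # deny unknown tools by default
    return required <= token_scopes  # subset test: all required scopes present

print(authorize("read_records", {"records:read"}))    # read-only token, read tool
print(authorize("delete_records", {"records:read"}))  # read-only token, write tool
```

Deny-by-default for unregistered tools and a strict superset check are deliberately conservative choices; the hard open problem the talk highlights, propagating these scopes between chained AI workloads, sits on top of a primitive like this.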
Baidu Launches ERNIE 4.5 Turbo, ERNIE X1 Turbo and New Suite of AI Tools to Empower Developers and Supercharge AI Innovation
Prnewswire· 2025-04-25 17:03
Core Insights
- Baidu introduced new AI models ERNIE 4.5 Turbo and ERNIE X1 Turbo at its annual developer conference, focusing on empowering developers and enhancing application capabilities [1][2][3]
- The company emphasizes the importance of practical applications over advanced models and chips, predicting a shift toward multimodal models in the AI market [2][7]

Model Innovations
- ERNIE 4.5 Turbo and ERNIE X1 Turbo feature enhanced multimodal capabilities, strong reasoning, and low costs, and are available for free on ERNIE Bot [3][10]
- ERNIE X1 Turbo is priced at RMB 1 per million tokens for input and RMB 4 for output, making it 25% of the price of DeepSeek R1 [5]
- ERNIE 4.5 Turbo offers input at RMB 0.8 per million tokens and output at RMB 3.2, significantly lower than competitors [6]

Application Development
- Baidu launched Xinxiang, a multi-agent collaboration app capable of handling 200 task types, with plans to expand to over 100,000 [14]
- The company introduced highly convincing AI digital humans and a digital-human livestream platform, enhancing user interaction and content generation [9][10]

Ecosystem and Initiatives
- Baidu announced the AI Open Initiative to support developers with traffic, monetization opportunities, and access to AI services [18]
- The Model Context Protocol (MCP) was introduced to facilitate seamless connections between external services and large models [19]
- Baidu plans to invest up to RMB 70 million in the third ERNIE Cup Innovation Challenge and aims to cultivate an additional 10 million AI talents over the next five years [20]

Market Positioning
- The company positions itself as a leader in AI with a strong internet foundation, aiming to simplify technology for users [22]
- Baidu's annual tech event serves as a platform for technology launches and knowledge exchange, under the theme "Models Lead, APPs Rule" [21]
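The per-million-token prices quoted above make workload costs easy to estimate. The helper below applies those figures to a hypothetical workload (the token counts are made up for illustration; only the per-million rates come from the article):

```python
def cost_rmb(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Cost in RMB given per-million-token input and output prices."""
    return (input_tokens / 1_000_000) * in_price \
         + (output_tokens / 1_000_000) * out_price

# Hypothetical workload: 2M input tokens, 0.5M output tokens.
x1 = cost_rmb(2_000_000, 500_000, 1.0, 4.0)    # ERNIE X1 Turbo rates
t45 = cost_rmb(2_000_000, 500_000, 0.8, 3.2)   # ERNIE 4.5 Turbo rates
print(x1, t45)
```

Note that output tokens dominate the bill at these rates (4x the input price for X1 Turbo), so reasoning-heavy workloads with long generations cost disproportionately more than retrieval-style workloads with short answers.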