Large Language Models (LLMs)
Analyst Trims Oracle (ORCL) Stake, Says Cloud Margins ‘Significantly Less’ Than Peers
Yahoo Finance· 2025-10-30 21:04
We recently published 10 Stocks Moving on Buzzing News as Analyst Issues Strong Warning About AI Valuations. Oracle Corp (NYSE:ORCL) is one of the stocks moving on buzzing news. Malcolm Ethridge, managing partner at Capital Area Planning Group, said in a recent program on CNBC that he's trimming his position in Oracle Corp (NYSE:ORCL). The analyst said the company's cloud margins are not strong compared with those of AWS or Google Cloud. He also shared his concerns about Oracle Corp's (NYSE:ORCL) dependence on ...
Analyst Explains What ‘Caught’ His Attention About Oracle (ORCL)- ‘Late-90s Kind of Vibes’
Yahoo Finance· 2025-10-23 13:57
Group 1
- Oracle Corp (NYSE:ORCL) is gaining attention due to its ambitious guidance for cloud revenue, projecting an increase from $10 billion last year to $17-18 billion this year, and aiming for $144 billion by 2030, representing a 14-fold increase [1]
- The competitive landscape in the cloud business includes major players like Amazon, Microsoft, and Google, raising questions about revenue generation and efficiency over the next decade [1]
- A significant catalyst for Oracle's recent market activity is a 5-year contract with OpenAI valued at $300 billion, which implies an annual contract value of $60 billion starting in 2027 (see the quick arithmetic check after this summary) [2]

Group 2
- In 2026, five hyperscaler companies, including Oracle, are expected to collectively spend $405 billion on capital expenditures (CAPEX), primarily focused on AI infrastructure [3]
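As a quick sanity check on the implied figures, here is a toy calculation using only the round numbers quoted in the summary (the variable names are illustrative, not from the source):

```python
# Back-of-the-envelope check of the figures cited in the summary above.
contract_total_usd_bn = 300      # reported 5-year OpenAI contract value ($B)
contract_years = 5
annual_contract_value = contract_total_usd_bn / contract_years
print(annual_contract_value)     # 60.0 -> the ~$60B/year ACV cited

cloud_rev_last_year_bn = 10      # last year's cloud revenue ($B)
cloud_rev_2030_target_bn = 144   # 2030 guidance ($B)
print(cloud_rev_2030_target_bn / cloud_rev_last_year_bn)  # 14.4 -> the ~14-fold increase
```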
AI vs Human: AI Inflection - the reality check
2025-10-19 15:58
16 October 2025 | India Strategy | AI vs Human: AI Inflection - the reality check
Venugopal Garre, +65 6326 7643, venugopal.garre@bernsteinsg.com
Nikhil Arela, +91 226 842 1482, nikhil.arela@bernsteinsg.com
Our AI vs Human conference may be done, but the conversation is far from over. In this report, we discuss the most pervasive myths around AI, which we presented earlier this week as part of our annual Bernstein University series. We're all still figuring out AI: tech firms, corporates, and consumers alike. Yet ...
This Tiny Model is Insane... (7m Parameters)
Matthew Berman· 2025-10-10 16:05
Model Performance & Innovation
- A 7 million parameter model, TRM (Tiny Recursive Model), is outperforming larger frontier models on reasoning benchmarks [1][2]
- TRM achieves 45% test accuracy on ARC AGI 1 and 8% on ARC AGI 2, surpassing models with vastly more parameters (TRM has less than 0.01% of their parameter count) [2]
- The core innovation is recursive reasoning with a tiny network, moving away from simply predicting the next token [6][23]
- Deep supervision doubles accuracy compared to single-step supervision (from 19% to 39%), while recursive hierarchical reasoning provides incremental improvements [16]
- TRM significantly improves performance on tasks like Sudoku (55% to 87%) and Maze (75% to 85%) [18]

Technical Approach & Implications
- TRM uses a single tiny network with two layers, leveraging recursion as "virtual depth" to improve reasoning [23][27][28]
- The model keeps two memories, its current guess and its reasoning trace, and updates both with each recursion [25]; a minimal sketch of this loop follows the summary
- The approach simplifies hierarchical reasoning, moving away from complex mathematical theorems and biological arguments [22][23]
- Recursion may represent a new scaling law, potentially enabling powerful models to run on devices like computers and phones [34]

Comparison with Existing Models
- Traditional LLMs struggle with hard reasoning problems due to auto-regressive generation and reliance on techniques like chain of thought and pass@k [3][5][6]
- HRM (Hierarchical Reasoning Model), a previous approach, uses two networks operating at different hierarchies, but its benefits are not well understood [9][20][21]
- TRM outperforms HRM by simplifying the approach and focusing on recursion, achieving greater improvements with less depth [30]
- While models like Grok 4 Thinking perform better on some benchmarks, they require vastly more parameters (over a trillion) than TRM's 7 million [32]
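The recursion described above can be illustrated with a minimal sketch. This is a toy reconstruction under stated assumptions, not the released TRM code: the hidden size, the step counts, and the MLP standing in for the paper's small two-layer block are illustrative, and `TinyRecursiveNet`, `recursive_reason`, `n_inner`, and `n_outer` are hypothetical names.

```python
# Toy sketch of a TRM-style recursion loop: one tiny network, two memories
# (current guess y and reasoning trace z), reused many times for "virtual depth".
import torch
import torch.nn as nn

class TinyRecursiveNet(nn.Module):
    """A single small network reused for every recursion step (sizes are illustrative)."""
    def __init__(self, dim=128):
        super().__init__()
        # A 2-layer MLP stands in for the paper's small block (assumption).
        self.net = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 2 * dim),
        )
        self.dim = dim

    def forward(self, x, y, z):
        # One step: read the question x, the current guess y, and the
        # reasoning trace z, and propose updates to both memories.
        h = self.net(torch.cat([x, y, z], dim=-1))
        dy, dz = h.split(self.dim, dim=-1)
        return y + dy, z + dz


def recursive_reason(model, x, n_inner=6, n_outer=3):
    """Refine the trace z for n_inner steps, then commit an improved guess y;
    repeat the cycle n_outer times. Step counts are illustrative."""
    y = torch.zeros_like(x)   # current guess
    z = torch.zeros_like(x)   # reasoning trace
    for _ in range(n_outer):
        for _ in range(n_inner):
            _, z = model(x, y, z)     # refine the trace only
        y, z = model(x, y, z)         # update the guess as well
    return y


if __name__ == "__main__":
    model = TinyRecursiveNet(dim=128)
    x = torch.randn(4, 128)           # a batch of encoded puzzle inputs
    print(recursive_reason(model, x).shape)  # torch.Size([4, 128])
```

During training, deep supervision would attach a loss to the guess `y` after each outer cycle rather than only at the end, which is the mechanism the summary credits with doubling accuracy.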
Balancing Innovation and Rigor
World Bank· 2025-05-15 23:10
Investment Rating
- The report does not explicitly provide an investment rating for the industry.

Core Insights
- The integration of large language models (LLMs) in evaluation practices can significantly enhance the efficiency and validity of text data analysis, although challenges in ensuring the completeness and relevance of information extraction remain [2][17][19]

Key Considerations for Experimentation
- Identifying relevant use cases is crucial, as LLMs should be applied where they can add significant value compared to traditional methods [9][23]
- Detailed workflows for use cases help teams understand how to effectively apply LLMs, allowing for the reuse of successful components [10][28]
- Agreement on resource allocation and expected outcomes is essential for successful experimentation, including clarity on human resources, technology, and definitions of success [11][33]
- A robust sampling strategy is necessary to facilitate effective prompt development and model evaluation [12][67]
- Appropriate metrics must be selected to measure LLM performance, with standard machine learning metrics for discriminative tasks and human assessment criteria for generative tasks [13][36]

Experiments and Results
- The report details a series of experiments conducted to evaluate LLM performance in text classification, summarization, synthesis, and information extraction, with satisfactory results achieved in various tasks [19][49]
- For text classification, the model achieved a recall score of 0.75 and a precision score of 0.60, indicating effective performance [53]
- In generative tasks, the model demonstrated high relevance (4.87), coherence (4.97), and faithfulness (0.90) in text summarization, while also performing well in information extraction [58]

Emerging Good Practices
- Iterative prompt development and validation are critical for achieving satisfactory results, emphasizing the importance of refining prompts based on model responses [14][60]
- Including representative examples in prompts enhances the model's ability to generate relevant responses [81]
- A request for justification in prompts can aid in understanding the model's reasoning and improve manual verification of responses [80]; a minimal prompt-and-metrics sketch follows the summary

Conclusion
- The report emphasizes the potential of LLMs to transform evaluation practices through thoughtful integration, continuous learning, and adaptation, while also highlighting the importance of maintaining analytical rigor [18][21]
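The classification experiment and the prompting practices above combine a few-shot prompt (a representative example plus a request for justification) with standard discriminative metrics. Below is a minimal sketch of that pattern under explicit assumptions: the label set, the prompt wording, and the `call_llm` stub are illustrative inventions, not the report's actual setup; only the use of precision and recall for classification is taken from the summary.

```python
# Hedged sketch: few-shot prompt with a justification request, scored with
# precision/recall against a labeled sample. All names below are illustrative.
from sklearn.metrics import precision_score, recall_score

CLASSIFICATION_PROMPT = """\
You are reviewing project evaluation excerpts.

Task: decide whether the excerpt below discusses "gender-related outcomes".
Answer with a JSON object: {{"label": "yes" or "no", "justification": "<one sentence>"}}.

Example (a representative case included to anchor the model):
Excerpt: "The project increased women's participation in local water committees."
Answer: {{"label": "yes", "justification": "It reports an outcome specific to women's participation."}}

Excerpt: "{excerpt}"
Answer:"""

def call_llm(prompt: str) -> dict:
    """Stub for whichever hosted or local model a team experiments with."""
    raise NotImplementedError

def evaluate_classifier(samples):
    """samples: list of (excerpt, gold_label) pairs drawn via the sampling strategy."""
    gold, pred = [], []
    for excerpt, label in samples:
        response = call_llm(CLASSIFICATION_PROMPT.format(excerpt=excerpt))
        gold.append(label)
        pred.append(response["label"])
        # The justification field is kept for manual verification rather than
        # automatic scoring, in line with the practice noted above.
    return {
        "precision": precision_score(gold, pred, pos_label="yes"),
        "recall": recall_score(gold, pred, pos_label="yes"),
    }
```

For generative tasks such as summarization, the scoring in the report relies on human-assessed criteria (relevance, coherence, faithfulness) rather than these automatic metrics, so a comparable harness would collect rubric scores instead.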