Large Language Model (LLM)
ClearBridge Emerging Markets Strategy Q3 2025 Commentary (MCEIX)
Seeking Alpha· 2025-11-05 18:00
Market and Performance Overview
- Emerging markets rose 10.6% in Q3 2025, outperforming developed markets, with China leading at 20.4% on AI opportunities and favorable valuations [2]
- Taiwan and Korea also performed strongly, rising 14.3% and 12.7% respectively on AI demand; Taiwan is a key semiconductor manufacturer and Korea a memory product supplier [2]

Sector Performance
- The materials sector was the top performer, up 24%, largely due to rising gold prices boosting mining shares [4]
- Technology-related sectors, including communication services, consumer discretionary, and IT, outperformed the overall market, benefiting from AI and Internet services [4]
- Cyclical sectors generally underperformed, with energy and financials showing the greatest weakness [4]

Company Contributions
- In China, Tencent and CATL were significant contributors: Tencent benefited from strong operating results and positive market sentiment, while CATL capitalized on its leadership in battery supply amid rising EV demand [6]
- Taiwan's Delta Electronics and South Korea's Samsung Electronics saw share price gains from their critical roles in AI development, with Delta's data center market share and Samsung's memory supply benefiting from high AI demand [7]

Portfolio Positioning
- The ClearBridge Emerging Markets Strategy outperformed its benchmark, with strong stock selection in China, Taiwan, and South Korea offsetting negative impacts from China and India [5]
- New purchases included Sieyuan Electric, expected to grow through grid investment and market share gains, and HD Hyundai Electric, positioned to benefit from global power equipment demand [12][13]

Outlook
- The long-term investment outlook for emerging markets remains robust, with technology adoption, urbanization, and services sector growth expected to drive returns [18]
- Emerging markets are expected to perform well over the next 12 months, particularly in technology, with India expected to recover and China continuing to play a key role in the asset class [22]
Former Meta exec: See 'prominent features' of what looks like AI bubble
YouTube· 2025-10-16 12:05
Core Viewpoint
- The market is experiencing high valuations and rapid deal-making, raising concerns about a potential correction, especially if major tech companies cannot demonstrate sustainable business models for their investments in AI infrastructure [1][2].

Group 1: Market Valuation and Correction Risks
- Current market valuations appear inflated, suggesting a possible bubble in the AI sector [2][3].
- The significant investments by hyperscalers in data centers may not yield sustainable returns, which could lead to market corrections [1][3].
- The industry is characterized by hype cycles, with Silicon Valley often overstating the potential of AI technologies [6][8].

Group 2: AI Technology and Its Limitations
- Large Language Models (LLMs) may not lead to groundbreaking scientific advancements, as some industry experts express skepticism about their capabilities [3][4].
- The probabilistic nature of LLMs means they are limited by the data input, which can result in clunky outputs and heavy data requirements [7][8].
- While LLMs are not a dying paradigm, they may not be the all-encompassing solution that the industry claims [8].

Group 3: Future of AI and Innovation
- Despite concerns, AI technology is expected to persist and drive significant innovation, as evidenced by the capabilities of current AI systems [5][6].
- The infrastructure being developed for AI could be repurposed for various applications, similar to telecom infrastructure after the dotcom boom [1][2].
By reading ten thousand books, can a large model "see" and understand the visual world? Meta reveals the origins of LLM visual priors
机器之心· 2025-10-11 04:18
Core Insights
- The research reveals that visual priors in large language models (LLMs) are not a singular capability but can be divided into two distinct types: reasoning priors and perception priors [4][6][21]
- Reasoning priors are abstract, cross-modal abilities acquired through reasoning-focused pre-training data, while perception priors relate to the recognition of specific visual concepts [4][6]

Reasoning Priors
- Reasoning priors are developed through pre-training on structured texts such as code, mathematics, and academic papers, enabling LLMs to solve complex visual problems [4][11]
- The study indicates that increasing the proportion of reasoning-intensive text in pre-training data significantly enhances the model's visual reasoning capabilities until it reaches around 75% [11][13]

Perception Priors
- Perception priors emerge from diverse general corpora and are sensitive to visual instruction fine-tuning and the choice of visual encoders [6][13]
- Unlike reasoning priors, perception priors depend more on post-training visual fine-tuning data and the characteristics of the visual encoder [13][15]

Experimental Findings
- The research involved over 100 controlled experiments and utilized 500,000 GPU hours to systematically uncover the sources of LLM visual priors [2][8]
- The experiments demonstrated that a small amount of visual description is sufficient, while a large amount of reasoning data is crucial for enhancing visual capabilities [7][11]

Data Pre-training Recipe
- The research team developed an optimal data mixing scheme that balances language capabilities and visual potential, leading to superior performance in both language and visual benchmarks [17][18]
- The balanced model trained with this recipe outperformed models optimized solely for language tasks across all visual benchmark tests [19]

Implications and Future Directions
- This study shifts the cultivation of multimodal model capabilities from downstream fine-tuning to the language pre-training stage, supporting the Platonic Representation Hypothesis [21]
- It suggests that model designers can consider future multimodal applications from the outset by embedding visual seeds during the pre-training phase [21]
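The data-mixing recipe described above amounts to weighted sampling across corpus buckets during pre-training. The following is a minimal Python sketch of such a scheme; the bucket names and proportions are illustrative assumptions, not the recipe released by the Meta team.

```python
import random

# Hypothetical corpus buckets and mixing weights (illustrative; must sum to 1.0).
MIXTURE = {
    "code_math_papers": 0.70,   # reasoning-intensive text; the study reports gains up to ~75%
    "general_web":      0.25,   # diverse general corpora that feed perception priors
    "visual_captions":  0.05,   # a small amount of explicit visual description suffices
}

def sample_bucket(rng: random.Random) -> str:
    """Draw one corpus bucket according to the mixing weights."""
    buckets, weights = zip(*MIXTURE.items())
    return rng.choices(buckets, weights=weights, k=1)[0]

def build_batch(rng: random.Random, batch_size: int = 8) -> list[str]:
    """Assemble one pre-training batch as a list of bucket labels.

    In a real pipeline each label would be replaced by a tokenized document
    streamed from the corresponding corpus shard.
    """
    return [sample_bucket(rng) for _ in range(batch_size)]

if __name__ == "__main__":
    rng = random.Random(0)
    print(build_batch(rng, batch_size=10))
```

In practice, each drawn label would index a tokenized shard of the corresponding corpus, and the weights would be tuned against language and visual benchmarks, which is the balancing act the recipe above describes.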
A fast track to AGI? The embodied intelligence revolution driven by large models | Jinqiu Select
锦秋集· 2025-09-01 15:29
Core Insights
- Embodied intelligence is seen as a key pathway to achieving Artificial General Intelligence (AGI), enabling agents to develop a closed-loop system of "perception-decision-action" in real-world scenarios [1][2]
- The article provides a comprehensive overview of the latest advancements in embodied intelligence powered by large models, focusing on how these models enhance autonomous decision-making and embodied learning [1][2]

Group 1: Components and Operation of Embodied AI Systems
- An embodied AI system consists of two main parts: physical entities (like humanoid robots and smart vehicles) and agents that perform cognitive functions [4]
- These systems interpret human intentions from language instructions, explore environments, perceive multimodal elements, and execute actions, mimicking human learning and problem-solving paradigms [4]
- Agents utilize imitation learning from human demonstrations and reinforcement learning to optimize strategies based on feedback from their actions [4][6]

Group 2: Decision-Making and Learning in Embodied Intelligence
- The core of embodied intelligence is enabling agents to make autonomous decisions and learn new knowledge in dynamic environments [6]
- Autonomous decision-making can be achieved through hierarchical paradigms that separate perception, planning, and execution, or through end-to-end paradigms that integrate these functions [6]
- World models play a crucial role by simulating real-world reasoning spaces, allowing agents to experiment and accumulate experience [6]

Group 3: Overview of Large Models
- Large models, including large language models (LLMs), large vision models (LVMs), and vision-language-action (VLA) models, have made significant breakthroughs in architecture, data scale, and task complexity [7]
- These models exhibit strong capabilities in perception, reasoning, and interaction, enhancing the overall performance of embodied intelligence systems [7]

Group 4: Hierarchical Autonomous Decision-Making
- Hierarchical decision-making structures involve perception, high-level planning, low-level execution, and feedback mechanisms [30]
- Traditional methods face challenges in dynamic environments, but large models provide new paradigms for handling complex tasks by combining reasoning capabilities with physical execution [30]

Group 5: End-to-End Autonomous Decision-Making
- End-to-end decision-making has gained attention for directly mapping multimodal inputs to actions, often implemented through VLA models [55][56]
- VLA models integrate perception, language understanding, planning, action execution, and feedback optimization into a unified framework, representing a breakthrough in embodied AI [58]

Group 6: Enhancements and Challenges of VLA Models
- VLA models face limitations such as sensitivity to visual and language input disturbances, reliance on 2D perception, and high computational costs [64]
- Researchers propose enhancements in perception capabilities, trajectory action optimization, and training cost reduction to improve VLA performance in complex tasks [69][70][71]
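To make the hierarchical "perception, planning, execution, feedback" loop above concrete, here is a minimal Python sketch of such a pipeline; every class and function name is a hypothetical placeholder rather than the API of any specific robotics or VLA framework.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    scene_summary: str   # stand-in for fused camera / multimodal perception
    instruction: str     # natural-language task from the user

def perceive(instruction: str) -> Observation:
    """Perception layer: fuse sensor input with the language instruction (stubbed)."""
    return Observation(scene_summary="a cup on the table", instruction=instruction)

def plan(obs: Observation) -> list[str]:
    """High-level planner: an LLM would decompose the instruction into subgoals."""
    return ["locate the cup", "grasp the cup", "place the cup on the shelf"]

def execute(subgoal: str) -> bool:
    """Low-level executor: a motor primitive or learned policy; stubbed as success."""
    print(f"executing: {subgoal}")
    return True

def run_episode(instruction: str) -> None:
    """Closed loop: perceive, plan, execute each subgoal, handle failure feedback."""
    obs = perceive(instruction)
    for subgoal in plan(obs):
        if not execute(subgoal):
            # Feedback mechanism: a failed subgoal would trigger replanning.
            print(f"failed: {subgoal}; replanning")
            break

if __name__ == "__main__":
    run_episode("put the cup on the shelf")
```

An end-to-end VLA model would instead collapse the planner and executor into a single network mapping the observation and instruction directly to actions, which is the trade-off described in Group 5.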
Orangekloud Signs MOU for Development of Specialized LLM for Software Engineering and Application Development
GlobeNewswire· 2025-06-30 12:30
Core Insights
- Orangekloud Technology Inc. has signed a memorandum of understanding with Evvo Labs to develop a large language model tailored for software engineering and application development [1][4]
- The integration of the LLM into Orangekloud's eMOBIQ platform will enhance features such as intelligent suggestions, code generation, testing automation, and system integration support [2]
- The project aims to improve ERP implementation and software development cycles through automated documentation, code audits, and AI-guided system configuration [2][3]

Company Overview
- Orangekloud Technology Inc. is a Singapore-based technology company that offers the eMOBIQ No-Code platform, designed for mobile application development, particularly for SMEs and corporations [5]
- The eMOBIQ platform includes a suite of applications that digitalize and streamline operations in various sectors, including Food Services, Manufacturing, Precision Engineering, and Construction [5]

Partner Overview
- Evvo Labs Pte. Ltd. is an award-winning ITMS technology company in Singapore, specializing in digital transformation and technology development [6]
- The company has received recognition for its achievements in cybersecurity and digital media, including winning the Singapore Government Bulk Tender Awards since 2010 [6]