Quantitative Model
3 Growth ETFs Down This Month and One of Them Is a Buy
247Wallst· 2026-03-19 16:03
Core Viewpoint
- The Fidelity Enhanced Large Cap Growth ETF (FELG) is down 7.77% year-to-date, similar to the Vanguard Growth ETF (VUG) at 7.76%, but FELG employs a quantitative model that allows it to shift away from overvalued tech stocks, while VUG, as a passive index fund, lacks this mechanism [1][6][10]

Group 1: Performance Analysis
- FELG and VUG have both experienced significant declines this year, with VUG dropping from approximately $488 to around $450 per share, reflecting a 7.76% loss [5][6]
- The Fidelity Nasdaq Composite Index ETF (ONEQ) has fared slightly better, down 4.38% year-to-date, due to its broader exposure across over 700 Nasdaq-listed securities [5][6]

Group 2: Interest Rate Impact
- The trajectory of interest rates, particularly the 10-year Treasury yield, is a primary driver of growth stock performance, with the yield rising from 3.97% in late February to 4.20% as of March 17, 2026 [2][8]
- Growth stocks are sensitive to interest rate changes because their valuations rest heavily on future earnings, which are discounted more steeply when rates are high (see the sketch after this summary) [7]

Group 3: Structural Differences
- FELG is not a passive index fund; it utilizes a quantitative process to favor companies with improving fundamentals, contrasting with passive funds like VUG and ONEQ that do not adjust exposure during market corrections [10][12]
- FELG's top holdings include significant positions in mega-cap tech stocks, but it also includes healthcare stocks like Eli Lilly, which may provide a different recovery profile if tech continues to lag [11]

Group 4: Future Outlook
- If the 10-year Treasury yield stabilizes or decreases, and the Federal Reserve signals further rate cuts, FELG's quantitative model is designed to rotate toward fundamentally improving companies, potentially leading to a different recovery trajectory compared to passive peers [14]
- The VIX index peaked at 29.49 on March 6 and has since decreased to 22.37, indicating that fear is subsiding; such volatility compression may precede recoveries in growth-oriented funds [13]
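The rate sensitivity described in Group 2 comes down to discounting arithmetic. The minimal Python sketch below shows why even a modest rise in the 10-year yield compresses the present value of a growth stock's distant earnings. Only the two yield levels (3.97% and 4.20%) come from the article; the earnings path, growth rate, and 4-point equity risk premium are hypothetical.

```python
# Illustrative only: how a higher discount rate compresses the present value
# of a growth stock's future earnings. The earnings path and the 4-point
# equity risk premium are hypothetical; the two yields come from the article.

def present_value(cash_flows, discount_rate):
    """Discount a stream of future annual cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Hypothetical growth company: $1 of earnings growing 25% a year for a decade.
earnings = [1.0 * 1.25 ** year for year in range(10)]

for ten_year_yield in (0.0397, 0.0420):   # late-February vs. March 17 yields
    rate = ten_year_yield + 0.04          # assumed equity risk premium
    pv = present_value(earnings, rate)
    print(f"10Y at {ten_year_yield:.2%} -> present value {pv:.2f}")
```

Because most of the value sits in the later, larger cash flows, even a 23-basis-point move in the discount rate shaves a measurable amount off the total, which is the mechanism behind growth stocks' rate sensitivity.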
Over 100 Model Adaptations Completed; Quantized Models Show Significant Advantages
Zhi Tong Cai Jing· 2025-11-25 07:04
Core Insights
- Paradigm Intelligence recently announced that its "ModelHub XC" has completed adaptation certification of 108 mainstream AI models on Moore Threads GPUs, covering task types such as text generation, visual understanding, and multimodal Q&A, with plans to expand to a thousand models in the next six months, injecting continued momentum into the domestic computing power ecosystem [1][3]
- Moore Threads, a domestic GPU company listing on the Sci-Tech Innovation Board, demonstrated significant advantages with quantized models during this adaptation: hardware-level support for low-precision data types and optimized instruction sets lets its GPUs reduce model memory usage and accelerate inference (a generic sketch of the technique follows below) [1]
- Moore Threads' official listing on the Sci-Tech Innovation Board is scheduled for November 24, at an issue price of 114.28 yuan per share, the highest A-share IPO issue price since the start of 2025 [1]
- Running models efficiently and stably on domestic chips remains a key industry challenge; Paradigm Intelligence addresses it with its self-developed EngineX engine technology, which improves model compatibility and operational efficiency on domestic chips and significantly lowers deployment barriers for developers [1][5]

Summary by Sections
ModelHub XC Overview
- ModelHub XC is an AI model and tool platform aimed at the domestic computing power ecosystem, providing a comprehensive solution covering the entire process from model training and inference to deployment, while also serving community and service functions [5]

EngineX Engine
- The EngineX engine serves as the underlying support system for ModelHub XC, enabling "engine-driven, multi-model plug-and-play" capabilities and effectively addressing bottlenecks in model compatibility and scale support on domestic chips [3][5]
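The memory and speed advantage the article attributes to quantized models comes from storing and computing weights at lower numeric precision. Below is a minimal, generic NumPy sketch of symmetric int8 weight quantization; it illustrates the class of technique only, and is not Moore Threads' hardware path or ModelHub XC's actual implementation.

```python
import numpy as np

# Generic illustration of symmetric int8 weight quantization -- the class of
# technique behind "quantized model" memory/speed gains. Not the actual
# Moore Threads or ModelHub XC implementation.

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 using a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)

print(f"fp32: {w.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")  # 4x smaller
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

The int8 tensor takes a quarter of the memory of its float32 original, and the reconstruction error stays small because the scale is fitted to the tensor's range; hardware that executes low-precision arithmetic natively, as the article says Moore Threads GPUs do, gains inference speed on top of the memory saving.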