Quantization Techniques
Four Big Bull-Market Traps That 90% of Retail Investors Have Fallen Into!
Sou Hu Cai Jing· 2025-10-08 04:21
Something interesting happened in financial circles recently: S&P Credit Ratings (China) Co., Ltd. was issued a warning letter by the Beijing Securities Regulatory Bureau. The matter is neither big nor small, but it is worth savoring. Think about it: a mighty international rating giant stumbled in the Chinese market, and the stated reasons were "failure to follow the principle of rating consistency" and "failure to disclose information as required". Isn't that exactly the mistake we retail investors make in the stock market every day?

It reminded me of having tea with an old friend a couple of days ago. Looking glum, he said, "Brother, in this rally I once again made index points but no money." I asked how he had been trading; he said he just bought whatever was rising well. I laughed: isn't that the textbook "bull-market trap"?

From more than a decade of observation, I have found that the better the market, the more easily retail investors fall into these traps:

The first trap is "hold and wait for the rise". Many people think that once a bull market arrives they can lie back and wait for profits; the usual result is a round trip on the roller coaster.

The second trap is "trade only the hot themes". Of every ten people who chase hot stocks, nine end up holding the bag.

The third is even better: "the strong stay strong". They assume whatever has risen well will keep rising, and end up buying at the very top.

The last one is the "oversold rebound". Seeing a stock has fallen a lot, they go bottom-fishing, and catch it halfway down the mountain.

Tell me honestly, haven't you stepped into every one of these? I made the same mistakes when I was young. Later I figured it out: bull-market money is not waited for, it is earned by doing. The key is three disciplines: ignore hot versus cold, ignore up versus down, ignore high versus low.

As for calling tops and bottoms, that is practically the riddle of the century. Most people just go by feel ...
Public Fund Institutions Make a Big Push into Enhanced Index Funds
Zhong Guo Zheng Quan Bao· 2025-09-11 22:24
Core Insights
- The popularity of enhanced index funds has surged among public fund institutions, with over 100 new funds launched this year, surpassing the combined total launched in 2023 and 2024 [1][2]
- Enhanced index funds have delivered significant excess returns, with 511 of 512 funds posting positive returns over the past year and some funds returning more than 100% [4]

Fund Issuance and Performance
- A total of 106 enhanced index funds have been launched this year, with combined issuance of 61.097 billion units, exceeding the 2023 and 2024 totals of 42 and 59 funds, respectively [2]
- The largest launch this year is the GF Growth Enterprise Board Index Enhanced Fund at 2.393 billion units, followed by the Pengyang CSI A500 Index Enhanced Fund and the Bodao CSI All Share Index Enhanced Fund at 1.940 billion and 1.911 billion units, respectively [2]

Reasons for Popularity
- Enhanced index funds combine the advantages of index investing with the potential for excess returns, appealing to investors seeking higher returns [3]
- Advances in quantitative technology allow funds to use models to capture excess returns while still tracking their indices, further attracting institutional interest [3]

Excess Returns
- Over the past year, 12 enhanced index funds returned more than 100%, led by the Chuangjin Hexin BSE 50 Constituent Index Enhanced A at 147.23% [4]
- More than 60% of enhanced index funds generated excess returns over the past year, with the highest excess return exceeding 31 percentage points above the benchmark (see the sketch after this summary) [4]

Market Outlook
- The current policy environment supports a positive trend in the capital market; an expected Federal Reserve rate cut and increased liquidity are likely to draw new capital into the market [5]
- Fund managers suggest a cautious approach in the short term, with potential shifts in asset allocation toward stable assets such as bank stocks, while continuing to favor quality tech stocks backed by industry trends [5][6]
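Excess return here is ordinary arithmetic: the fund's period return minus the benchmark's return over the same window, expressed in percentage points. A minimal sketch of that calculation; the NAV and index levels below are made-up placeholders, chosen only to reproduce figures of the same magnitude as those cited:

```python
# Excess return in percentage points: fund return minus benchmark return
# over the same window. All figures below are hypothetical placeholders.

def total_return_pct(start: float, end: float) -> float:
    """Total return in percent from a starting and ending level."""
    return (end / start - 1.0) * 100.0

fund_ret = total_return_pct(1.000, 2.4723)    # hypothetical NAVs  -> 147.23%
bench_ret = total_return_pct(1000.0, 2160.0)  # hypothetical index -> 116.00%

excess_pp = fund_ret - bench_ret
print(f"fund {fund_ret:.2f}% vs benchmark {bench_ret:.2f}%: "
      f"excess {excess_pp:+.2f} pp")
```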
Another 28.8-Billion-Yuan Unicorn Charges Out of the AI Startup Scene...
Tai Mei Ti APP· 2025-08-15 03:09
Core Insights
- Fireworks AI has emerged as a unicorn valued at 28.8 billion yuan (roughly $4 billion), backed by prominent investors including Nvidia and AMD, signaling strong confidence in its business model and technology [1][14][17]
- Founder Lin Qiao has a strong background in AI and infrastructure, having previously led a large engineering team at Meta that developed PyTorch into a leading tool for AI developers [2][12]
- Fireworks AI aims to simplify AI deployment for startups by providing optimized access to powerful AI models through a pay-per-use API, addressing common pain points in the industry [5][12]

Company Overview
- Fireworks AI was founded in 2022 by Lin Qiao and a team of experts from PyTorch and Google, focusing on AI infrastructure and optimization technology [2][5]
- The company operates as an "AI computing central kitchen", renting Nvidia servers and pre-installing popular open-source models for easy access by clients [5][12]

Technology and Innovation
- Fireworks AI's competitive edge lies in proprietary optimization techniques that make AI models faster and cheaper to serve, making it more than a server-rental business [6][10]
- The company significantly sped up its client Cursor by applying techniques such as quantization and speculative execution (a toy sketch of speculative decoding follows this summary) [10][12]

Market Position and Competition
- Fireworks AI has attracted significant investment from top-tier venture capital firms and tech giants, establishing itself as a key player in the AI infrastructure market [13][14]
- Its relationship with Nvidia is complex: Nvidia both invests in Fireworks AI and competes in the same space, raising questions about conflicts of interest and market dynamics [15][17]
- Lin Qiao acknowledges the competitive landscape and the need for Fireworks AI to scale quickly and entrench its market position before facing direct competition from Nvidia [16][17]
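Speculative execution (speculative decoding), one of the techniques named above, works by letting a small draft model cheaply propose several tokens that the large target model then verifies in bulk, keeping the agreed prefix. The toy sketch below illustrates the idea with greedy exact-match acceptance; the stand-in "models" and the acceptance rule are illustrative assumptions, not Fireworks AI's actual implementation:

```python
# Toy greedy speculative decoding: a cheap draft model proposes k tokens,
# the expensive target model verifies them and keeps the matching prefix.
# Both "models" here are stand-in functions; real systems use LLM forward passes.

from typing import Callable, List

def speculative_decode(
    target_next: Callable[[List[int]], int],  # expensive: next token given context
    draft_next: Callable[[List[int]], int],   # cheap: next token given context
    prompt: List[int],
    max_new: int = 16,
    k: int = 4,                               # draft tokens proposed per round
) -> List[int]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) Draft model proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Target model verifies: accept the longest prefix it agrees with.
        #    (A real implementation scores all k positions in ONE batched pass.)
        n_accept, ctx = 0, list(out)
        for t in proposal:
            if target_next(ctx) != t:
                break
            n_accept += 1
            ctx.append(t)
        out.extend(proposal[:n_accept])
        # 3) On a mismatch, take one guaranteed token from the target model.
        if n_accept < k:
            out.append(target_next(out))
    return out[: len(prompt) + max_new]

# Tiny demo: the "target" counts upward; the "draft" agrees most of the time.
target = lambda ctx: (ctx[-1] + 1) % 100
draft = lambda ctx: (ctx[-1] + 1) % 100 if ctx[-1] % 7 else 0  # sometimes wrong
print(speculative_decode(target, draft, prompt=[1], max_new=10))
```

The speedup comes from step 2: one pass of the big model can validate several draft tokens at once, so the expensive model runs far fewer times than it would decoding token by token.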
Making Reinforcement Learning Lightning Fast: FlashRL Delivers Blazing Rollouts with One Command, Fully Open Source
机器之心· 2025-08-12 09:51
Core Viewpoint
- The article discusses the development and implementation of FlashRL, an open-source reinforcement learning solution that uses quantized rollouts without sacrificing downstream performance, addressing rollout-training mismatch through Truncated Importance Sampling (TIS) [4][16][37]

Group 1: DAPO and Rollout Challenges
- DAPO, developed by Tsinghua AIR and ByteDance, is an open-source SOTA system for large-scale LLM reinforcement learning, scoring 50 on the AIME 2024 benchmark with the Qwen2.5-32B model [1]
- The research team identified rollout generation as a major bottleneck in RL training, consuming approximately 70% of total training time [3]
- Applying 8-bit quantization during rollout generation, combined with TIS, significantly accelerates rollouts while maintaining downstream performance [3][4]

Group 2: FlashRL Implementation
- FlashRL is the first open-source RL implementation to apply INT8/FP8 in the rollout phase while matching BF16 with no performance loss [4][15]
- TIS mitigates the rollout-training mismatch, letting quantized-rollout training match BF16-rollout training and even surpass naive BF16 rollout training (a minimal sketch of the TIS correction follows this summary) [16][37]
- FlashRL supports online quantization and integrates with existing inference engines such as vLLM, extending them to models whose parameters update during training [22]

Group 3: Performance and Acceleration
- FlashRL's INT8 rollout delivers up to 1.7x throughput improvement while retaining the benefits of reinforcement learning [23]
- In standard environments, the speedup from 8-bit quantization grows with model size, reaching up to 1.75x for the 32B model versus BF16 [29]
- In memory-constrained environments, INT8 quantization can speed up generation by more than 3x, underscoring its potential for larger models [34]

Group 4: Validation and Usage
- FlashRL's effectiveness was validated by training the DAPO-32B model, where INT8 rollout markedly improved training speed without compromising AIME accuracy [36][37]
- FlashRL can be enabled with a single command, letting users adopt it in their RL training without code changes [41]
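TIS addresses the mismatch by reweighting each sampled token's loss with the probability ratio between the full-precision training policy and the quantized rollout policy, truncated from above by a constant. A minimal PyTorch sketch under those assumptions; the cap value, tensor shapes, and the plain REINFORCE-style loss are illustrative choices, not FlashRL's exact objective:

```python
import torch

def tis_policy_gradient_loss(
    logp_train: torch.Tensor,    # [B, T] log-probs of sampled tokens under the
                                 #        full-precision (BF16) training policy
    logp_rollout: torch.Tensor,  # [B, T] log-probs of the same tokens under the
                                 #        quantized (INT8/FP8) rollout policy
    advantages: torch.Tensor,    # [B, T] per-token advantage estimates
    c: float = 2.0,              # truncation cap C (illustrative value)
) -> torch.Tensor:
    # Per-token importance ratio between training and rollout policies.
    ratio = torch.exp(logp_train - logp_rollout)
    # Truncate from above: caps the variance contributed by tokens the
    # quantized rollout policy under-sampled, at the cost of a small bias.
    w = torch.clamp(ratio, max=c).detach()
    # Reweighted REINFORCE-style loss; gradients flow through logp_train only.
    return -(w * advantages * logp_train).mean()

# Smoke test with random inputs (log-probs are <= 0 by construction).
B, T = 4, 8
loss = tis_policy_gradient_loss(
    logp_train=torch.randn(B, T).abs().neg(),
    logp_rollout=torch.randn(B, T).abs().neg(),
    advantages=torch.randn(B, T),
)
print(loss.item())
```

Without the weight `w`, gradients would implicitly assume the tokens came from the training policy; the truncated ratio corrects for the quantized sampler while keeping the estimator's variance bounded.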
Exclusive Webinar: With US Stocks Riding a Rising Tide, How Can Quantitative Techniques Capture US Equity Opportunities?
彭博Bloomberg· 2025-07-18 05:43
Core Viewpoint
- The article discusses the recent strong performance of the US stock market, particularly the S&P 500 index, which has approached historical highs, and highlights the importance of understanding market dynamics and applying quantitative techniques to investment opportunities [1]

Group 1: Market Dynamics
- The US stock market has shown a strong upward trend, with the S&P 500 index nearing historical highs as of early July [1]
- Goldman Sachs has raised its target for the index to 6900 points for the second time since May, indicating a positive outlook [1]

Group 2: Key Issues to Address
- The article raises critical questions about the sources of the market's optimism and how it may evolve [1]
- It emphasizes systematically exploring and evaluating investment opportunities, from the macroeconomic outlook down to individual stocks [1]

Group 3: Investment Strategies
- The discussion covers the role of options strategies in managing risk and enhancing returns during portfolio adjustments [1]
- It highlights the practical application of technical indicators in investment analysis [1]

Group 4: Event Details
- The article promotes a webinar in which Bloomberg experts will analyze recent trends in the US stock and options markets and show how to run such analyses on the Bloomberg quantitative platform BQuant Desktop [1][4]
ICML 2025 | Massive Values in the Attention Mechanism: A Key to Unlocking Contextual Understanding in Large Language Models
机器之心· 2025-05-06 04:11
Core Insights - The article discusses a significant phenomenon in large language models (LLMs) related to the concentration of massive values in the self-attention mechanism, particularly in the query (Q) and key (K) representations, which is crucial for contextual knowledge understanding [1][3][4]. Research Highlights - The study reveals that massive values are highly concentrated in Q and K, which is contrary to the expectation of independent operations in each attention head. This consistency across multiple layers and heads is visually demonstrated [3][4]. - The phenomenon of massive values is specifically observed in models using Rotational Position Encoding (RoPE), such as LLaMA, Qwen, and Gemma, while models without RoPE, like GPT-2 and OPT, do not exhibit this pattern [4]. - The research establishes a direct link between the presence of massive values in Q and K and the ability to understand contextual knowledge [4]. Key Findings 1. **Concentration of Massive Values**: Massive values are found to be highly concentrated in specific regions of each attention head, indicating a surprising level of consistency [3][4]. 2. **Impact on Contextual Knowledge Understanding**: The study shows that the presence of massive values is critical for understanding contextual knowledge, as demonstrated through destructive experiments that reset these values to their average [5][6]. 3. **Quantization Techniques**: Specific quantization methods that address massive values, such as AWQ and SmoothQuant, are shown to better preserve contextual knowledge understanding compared to methods that do not focus on massive values [7]. 4. **Origin of Concentration Phenomenon**: The concentration of massive values is attributed to RoPE, which affects low-frequency regions in Q and K, leading to this phenomenon appearing from the early layers of the model [8]. Experimental Results - The experiments reveal a stark contrast in the impact of massive values on different knowledge tasks: - **Resilience in Parametric Knowledge Retrieval**: Tasks relying on parametric knowledge show a decline of only 15-20% in accuracy when massive values are disrupted, maintaining 76%-88% accuracy [10]. - **Catastrophic Decline in Contextual Knowledge Tasks**: Tasks requiring contextual understanding experience a drastic drop in performance, with accuracy in key retrieval tasks plummeting from 100% to near 0% when massive values are disrupted [11]. - **Control Experiments**: When only non-massive values are disrupted, task performance remains stable, confirming the unique importance of massive values in contextual understanding [12]. Future Directions - The research opens several avenues for further exploration, including enhancing or adjusting the distribution of massive values to improve contextual understanding, examining the universality of this phenomenon across different architectures, and designing targeted quantization methods to protect massive values related to contextual understanding [16].