Industry Investment Rating
- Buy (rating maintained) [1]

Core Views
- Current large models are at the Emerging AGI level: language models are the most mature, followed by multimodal models, while embodied-intelligence models remain in the exploratory phase [1]
- Scaling Law remains the best method for improving model performance in the short to medium term; OpenAI estimates it stays effective up to 88 trillion parameters [1]
- The backbone network architecture of models has not yet reached its final form; fine-tuning and sparse structures are important methods for enhancing model performance [1]
- Open-source models are improving faster than closed-source models, and finding application scenarios may be more important than model development in the current AI wave [2]
- The key factors affecting model performance are algorithms, data, and computing power, with related companies benefiting from the continuous advancement of large-model training [3]

Key Points by Section

Distance to AGI
- Language models are relatively mature: GPT-4, Gemini 1.5, and Claude 3 can handle text, image, and video inputs but still lack independent decision-making and execution capabilities [10]
- DeepMind classifies AGI into six levels; current top models sit at Level 1, Emerging AGI [10]
- Language models have developed rapidly since GPT-3, with Claude 3 Opus achieving over 85% accuracy on multiple test benchmarks [11]

Path to AGI
- Scaling Law shows that model performance improves with increases in model size, dataset size, and computational resources [22]
- OpenAI's research indicates that the Scaling Law will remain effective up to 88 trillion parameters, with GPT-5 expected to reach 10 trillion parameters [25]
- The Transformer architecture remains the backbone of most large models, with innovations in encoder-decoder choices, multimodal fusion, and self-attention mechanisms [27]

Commercialization
- Open-source models are catching up to closed-source models, and finding application scenarios may be more critical than model development [47]
- Scenarios with high human-replacement rates, such as chatbots and creative content generation, are more likely to see early adoption because they tolerate "hallucinations" [48]
- Scenarios with low tolerance for "hallucinations," such as autonomous driving and medical diagnosis, require further model improvements [49]

Investment Recommendations
- iFlytek, Kingsoft Office, and Tonghuashun are recommended for their AI capabilities in education, office automation, and finance respectively [3]
- Data-engineering suppliers and companies along the computing-power industry chain are also key beneficiaries of advances in large-model training [3]
- AI+ scenarios in education, enterprise services, and office automation are highlighted as areas with significant potential for AI integration [56]
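The Scaling Law cited above describes an empirical power-law relationship between test loss and model scale. A minimal sketch of the parameter-count form, using the approximate constants reported by Kaplan et al. (2020) for non-embedding parameters; the constants are illustrative and are not figures from this report:

```python
def predicted_loss(n_params: float,
                   n_c: float = 8.8e13,    # critical parameter scale (approx. Kaplan et al. fit)
                   alpha_n: float = 0.076  # power-law exponent (approx. Kaplan et al. fit)
                   ) -> float:
    """Cross-entropy loss predicted from model size alone: L(N) = (N_c / N)^alpha_N."""
    return (n_c / n_params) ** alpha_n

# Larger models yield lower predicted loss, but with diminishing returns:
for n in (1e9, 1e11, 1e13):
    print(f"N={n:.0e}  predicted loss={predicted_loss(n):.3f}")
```

The same power-law form applies to dataset size and compute budget, which is why the report treats all three as joint levers on model performance.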
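The Transformer backbone and self-attention mechanism mentioned above can be illustrated with a minimal single-head scaled dot-product attention sketch. This follows the standard formulation from "Attention Is All You Need" but omits masking, multiple heads, and learned projections for brevity; the shapes and random inputs are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted mix of value vectors

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Because the attention weights form a convex combination over the value vectors, each output row is a data-dependent blend of the inputs, which is the mechanism the report's "self-attention" innovations build on.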
Computer Industry Research: How to Achieve AGI: Current Status and Development Outlook for Large Models
Sinolink Securities (国金证券) · 2024-04-04 16:00