
Core Insights
- The introduction of AI-integrated machines has significantly reduced the cost for small and medium enterprises (SMEs) to deploy AI and large models, from millions of yuan to around 200,000 yuan, with further reductions to the tens-of-thousands range expected in the coming months [1]

Group 1: Product Development
- At its Tech World conference, Lenovo showcased its latest achievement in edge computing, the "Lenovo Inference Acceleration Engine," a platform designed for efficient AI PC inference [1]
- The engine, developed in collaboration with Tsinghua University and Wuneng Chip, enables a standard PC's local inference capability to rival OpenAI's o1-mini cloud model released last year [1]

Group 2: Market Impact
- Lenovo CEO Yang Yuanqing said that demand for AI is growing explosively, predicting that edge AI capabilities will triple within the next 12 months [2]
- The showcased desktop product, equipped with the acceleration engine, is priced around 40,000 yuan and is expected to enable local training of 32B large models, significantly lowering deployment costs for SMEs in sectors such as finance, education, and law [2]

Group 3: Cost Efficiency
- Traditional training solutions for 32B large language models cost around 2 million yuan and require at least eight NVIDIA graphics cards, making them prohibitively expensive for SMEs [2]
- Lenovo's solution, built on the "AI Studio" software platform, reduces local training costs by 98%, making it feasible for SMEs to train models on their own databases and personalized information [2]