Musk's New Company Is Called "MacroHard" (巨硬), Enough to Make "Microsoft" Spit Blood
36Kr · 2025-08-08 11:26
Core Viewpoint
- The article discusses Elon Musk's new company, named "MacroHard" as a direct challenge to Microsoft, highlighting the rivalry between Musk and Bill Gates in the tech industry [1][4][5]

Company Overview
- "MacroHard" officially filed for a trademark on August 1, with an application fee of $2,300, indicating serious intentions behind the name [4]
- The company aims to focus on developing AI agents for programming and image/video generation, potentially revolutionizing these sectors [13][14]

Business Strategy
- In programming, "MacroHard" plans to create a new ecosystem in which AI agents generate high-quality code from user requirements, significantly improving efficiency and reducing costs [13]
- For image and video generation, the company intends to develop AI agents capable of creating realistic, high-quality content, which could transform industries like film and advertising by shortening production times and lowering costs [14]

Competitive Landscape
- "MacroHard" will face competition not only from Microsoft but also from established players such as JetBrains in programming tools and Adobe in image/video processing [15]
- The potential capabilities of "MacroHard" could disrupt current market dynamics, prompting competitors to enhance their offerings [15]

Future Outlook
- The establishment of "MacroHard" injects excitement into the tech industry, with expectations of innovative developments that could reshape the landscape [16]
- The ongoing rivalry between Musk and Gates is anticipated to intensify, with the two figures representing different approaches to technology and business [18]
China Went HARD...
Matthew Berman · 2025-07-24 00:30
Model Performance & Capabilities
- Qwen3-Coder rivals Anthropic's Claude family in coding performance, scoring 69.6% on SWE-bench Verified versus Claude Sonnet 4's 70.4% [1]
- The most powerful variant, Qwen3-Coder-480B, is a mixture-of-experts model with 480 billion total parameters, of which 35 billion are active per token (see the routing sketch after this summary) [2][3]
- The model supports a native context length of 256K tokens, extendable to 1 million tokens via extrapolation methods, which strengthens tool calling and agentic use [4]

Training Data & Methodology
- The model was pre-trained on 7.5 trillion tokens with a 70% code ratio, improving coding ability while maintaining general and math skills [5]
- Qwen2.5-Coder was leveraged to clean and rewrite noisy data, significantly improving overall data quality [6]
- Code RL training was scaled across a broader set of real-world coding tasks, using task diversity to unlock the full potential of reinforcement learning [7][8]

Tooling & Infrastructure
- Qwen launched Qwen Code, a command-line tool adapted from Gemini CLI that enables agentic, multi-turn execution with planning [2][5][9]
- A scalable system was built to run 20,000 independent environments in parallel, leveraging Alibaba Cloud's infrastructure for self-play (see the parallel-rollout sketch after this summary) [10]

Open Source & Accessibility
- The model's weights are hosted on Hugging Face, making it free to download and try out (see the loading sketch after this summary) [11]
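
To make the 480B-total / 35B-active figure concrete, here is a minimal, illustrative sketch of top-k mixture-of-experts routing in Python. The layer sizes, expert count, and top_k value are toy assumptions, not Qwen3-Coder's actual configuration; the point is only that the router activates a few experts per token, so most parameters sit idle on any given forward pass.

    # Toy mixture-of-experts layer: only top_k experts run per token, so the
    # "active" parameter count is far below the total parameter count.
    # All sizes here are illustrative, not Qwen3-Coder's real configuration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyMoELayer(nn.Module):
        def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            # Each expert is a small feed-forward block.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )
            self.router = nn.Linear(d_model, n_experts)  # per-token expert scores

        def forward(self, x):  # x: (n_tokens, d_model)
            weights, idx = self.router(x).topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.top_k):                 # route each token to
                for e, expert in enumerate(self.experts):  # its chosen experts only
                    mask = idx[:, slot] == e
                    if mask.any():
                        w = weights[mask, slot].unsqueeze(-1)
                        out[mask] += w * expert(x[mask])
            return out

    layer = ToyMoELayer()
    total = sum(p.numel() for p in layer.parameters())
    expert = sum(p.numel() for p in layer.experts[0].parameters())
    router = sum(p.numel() for p in layer.router.parameters())
    print(f"total: {total}, active per token: {expert * layer.top_k + router}")
    print(layer(torch.randn(5, 64)).shape)  # torch.Size([5, 64])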
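The 20,000-environment system itself is not public, so as a loose illustration of the pattern, here is a toy Python sketch that fans independent rollout episodes across a process pool and aggregates rewards. The run_episode function and its reward logic are invented stand-ins, not Qwen's actual environment code.

    # Toy parallel-rollout runner: fan independent "environments" across a
    # process pool and collect rewards. The environment logic is a stand-in;
    # the real 20,000-environment infrastructure is not public.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def run_episode(env_id: int) -> dict:
        # Stand-in for running an agent against a real coding task and
        # scoring it, e.g. by the fraction of unit tests that pass.
        rng = random.Random(env_id)
        tests_passed = rng.randint(0, 10)
        return {"env": env_id, "reward": tests_passed / 10.0}

    if __name__ == "__main__":
        n_envs = 64  # the article describes 20,000 running in parallel
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(run_episode, range(n_envs)))
        mean = sum(r["reward"] for r in results) / n_envs
        print(f"episodes: {n_envs}, mean reward: {mean:.3f}")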
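Since the weights are on Hugging Face, the standard transformers loading pattern should apply. The repo ID below is an assumption based on Qwen's naming scheme (verify it on the model card at huggingface.co/Qwen), and the full 480B model realistically requires multi-GPU or hosted inference; the snippet shows the pattern rather than a practical single-machine setup.

    # Trying Qwen3-Coder via the transformers library. The repo ID is an
    # assumption based on Qwen's naming scheme; verify it on the model card.
    # The 480B model needs multi-GPU or hosted inference in practice.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed repo ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user",
                 "content": "Write a Python function that reverses a linked list."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))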