Deep Learning
From garage startup to Hong Kong IPO sprint: Magic View Intelligent has lost over RMB 660 million in three years
21 Shi Ji Jing Ji Bao Dao · 2025-09-28 10:42
Core Insights
- Magic View Intelligent Technology (Shanghai) Co., Ltd. has submitted its listing application to the Hong Kong Stock Exchange after completing eight rounds of financing, marking its entry into the capital market [1][3]
- Despite delivering over 3.3 million solutions across 92 vehicle models, the company has accumulated losses exceeding RMB 660 million over the past three years and has yet to achieve profitability [1][5]

Company Overview
- Founded in 2015, Magic View Intelligent is an AI-driven provider of intelligent driving solutions, offering integrated hardware-software products spanning L0-L4 autonomous driving capabilities [3][4]
- Founder Yu Zhenghua has extensive academic and industry experience, previously holding several prestigious roles, and recognized the potential of autonomous driving during his first entrepreneurial venture [3][4]

Market Position and Performance
- The company launched its first generation of deep-learning-based embedded ADAS in 2016 and has established partnerships with major automotive manufacturers such as BYD, Geely, and GAC [4][5]
- According to its prospectus, Magic View ranked eighth among third-party providers in China's intelligent driving solutions market by 2024 revenue, with a market share of approximately 0.4% [4][5]

Financial Performance
- Revenue grew from RMB 117.8 million in 2022 to RMB 356.8 million in 2024, a more-than-twofold increase, while net losses widened from RMB 200 million in 2022 to RMB 233 million in 2024 [6][7]
- The company reported revenue of RMB 189 million in the first half of 2025, up 76.4% year-on-year, but continues to post significant losses [7]

Industry Context
- China's market for L0-L2+ intelligent driving solutions expanded rapidly, from RMB 21.6 billion in 2020 to RMB 91.2 billion in 2024, a compound annual growth rate (CAGR) of 43.3% [5]
- As the automotive industry shifts from electrification to intelligence, competition is intensifying, with traditional manufacturers increasingly investing in in-house development of autonomous driving technologies [7]
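As a sanity check, the reported 43.3% CAGR follows directly from the 2020 and 2024 market-size endpoints. A minimal sketch (the function name and rounding are mine, not from the article):

```python
# Verify the reported CAGR: RMB 21.6B (2020) -> RMB 91.2B (2024),
# i.e. four annual compounding periods.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

rate = cagr(21.6, 91.2, 4)
print(f"{rate:.1%}")  # → 43.3%
```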
2025 Global Top 2% Scientists list released: Tsinghua first in China, Bengio in the global top ten
36Kr · 2025-09-28 03:32
Core Insights
- Stanford University and Elsevier jointly released the "Stanford 2025 Global Top 2% Scientists List," highlighting the achievements of Chinese scholars; Tsinghua University ranks fourth globally with 746 scholars included [1][2][3]

Overall Rankings
- A total of 1,435 individuals from China made the lifetime "Stanford 2025 Global Top 2% Scientists List," while 2,270 were included in the annual list [2]
- Tsinghua University ranks fourth globally, just behind the University of Oxford and ahead of Stanford University, with 746 scholars recognized [3][5]

Notable Individuals
- Zhou Zhihua of Nanjing University and Zhang Zhengyou of Tencent both entered the global top 1,000, ranked 526th and 969th respectively [5][6]
- Zhou Zhihua is noted for his contributions to artificial intelligence and machine learning, with over 100,000 citations on Google Scholar [9]
- Zhang Zhengyou, a prominent figure in computer vision and robotics, has over 80,000 citations and is recognized for his innovative contributions to the field [12][14]

Ranking Methodology
- The list identifies the top 2% of scientists based on standardized citation metrics across 22 scientific fields and 174 subfields, ensuring fair representation of research impact [20]
- The composite score (c-score) used for ranking combines multiple citation metrics, emphasizing meaningful impact rather than raw productivity [20]
With some deep learning background, how should you get started in autonomous driving?
自动驾驶之心· 2025-09-25 23:33
Group 1
- The core viewpoint emphasizes the rapid evolution of the autonomous driving technology stack, highlighting the need for continuous learning to avoid falling behind in the field [1]
- The company has established three platforms focused on autonomous driving, embodied intelligence, and large models, encouraging exploration and adaptation in a changing environment [2]
- The company is actively promoting industry advancement and has launched major promotional activities during the National Day and Mid-Autumn Festival holidays, offering course discounts [2][4]

Group 2
- The knowledge community for autonomous driving includes nearly 40 learning paths, covering cutting-edge topics such as VLA, world models, and closed-loop simulation [8]
- The community facilitates face-to-face interaction with industry leaders and offers seven premium courses aimed at beginners, fostering skill development [8]
From the Transformer to GPT-5: OpenAI scientist Lukasz on first-principles thinking about large models
AI科技大本营· 2025-09-23 02:11
Core Viewpoint
- The article discusses the revolutionary impact of the paper "Attention Is All You Need," which introduced the Transformer architecture and fundamentally changed the landscape of artificial intelligence and natural language processing [2][17]

Group 1: The Impact of the Transformer
- The paper has been cited 197,159 times on Google Scholar, underscoring its influence in the AI research community [3][26]
- Its authors, known as the "Transformer Eight," have become prominent figures in the AI industry, with seven of them founding their own companies [4][24]
- The Transformer architecture triggered a paradigm shift in AI, moving away from RNNs and enabling better handling of long-distance dependencies in language processing [17][18]

Group 2: Lukasz Kaiser's Journey
- Lukasz Kaiser, one of the authors, chose to join OpenAI rather than start a commercial venture, focusing on the pursuit of AGI [4][25]
- Kaiser has a strong academic background, holding dual master's degrees in computer science and mathematics, and has received prestigious awards for his research [7][8]
- His decision to leave a stable academic position for Google Brain in 2013 was driven by a desire to innovate in deep learning [11][12]

Group 3: The Evolution of AI Models
- Kaiser and his team introduced the attention mechanism to address the limitations of RNNs, leading to the development of the Transformer model [15][17]
- The Transformer's success spurred a wave of entrepreneurship in AI, with many authors of the original paper becoming CEOs and CTOs of successful startups [24][27]
- At OpenAI, Kaiser has contributed to cutting-edge models such as GPT-4 and GPT-5, working at the forefront of AI research [27]
Group 4: Future Directions in AI
- Kaiser predicts that the next phase of AI will focus on teaching models to think more deeply, emphasizing the generation of intermediate reasoning steps [29]
- The upcoming ML Summit 2025 will feature Kaiser discussing the history, present, and future of reasoning models, pointing to ongoing advances in AI technology [28][30]
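The attention mechanism the article credits with replacing RNN recurrence can be summarized in a few lines. Below is a minimal NumPy sketch of scaled dot-product attention, the core operation of "Attention Is All You Need"; the toy shapes and random inputs are illustrative only:

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise query-key similarity
    weights = softmax(scores, axis=-1)  # each query attends over all keys
    return weights @ V                  # weighted mix of value vectors

# toy example: 3 tokens, d_k = 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Unlike an RNN, every token attends to every other token in one step, which is exactly the long-distance-dependency advantage the article describes.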
Recommended market sentiment monitoring vendors: how to choose a cost-effective service provider
Sou Hu Cai Jing· 2025-09-18 02:55
Core Insights
- Market sentiment monitoring has become a crucial tool for corporate decision-making in the era of information explosion [1]
- Selecting a professional, reliable service provider is a focal point for many companies, with key considerations including technical strength, data coverage, and service flexibility [1]

Group 1: Data Monitoring Capabilities
- A provider's technical reserves often determine the depth of its services; Beijing Blue Pacific Technology Co., Ltd., for example, has established a unique technical barrier in the big-data field [3]
- Blue Pacific has built a nationwide monitoring network that enables efficient collection and analysis of internet information, giving companies market dynamics in real time [3]
- Timeliness and accuracy of data are the core values of sentiment monitoring; Blue Pacific leverages its self-built IDC data center and numerous data-detection nodes to ensure broad coverage and high precision [3]

Group 2: Innovative Service Models
- Blue Pacific integrates big-data technology with mobile internet applications, offering customized solutions that turn complex technology into practical tools for non-technical managers [4]
- Continuous optimization of its data models strengthens the analysis of vast information flows, helping businesses identify potential risks and uncover hidden market opportunities [4]
- Its data-support solutions for government evaluation demonstrate the broad applicability of its technology across industries [4]

Group 3: Sustainable Solutions
- Companies should assess whether a provider can offer sustainable solutions; Blue Pacific maintains sensitivity to cutting-edge technologies [4]
- Rapid technological iteration and deep industry engagement underpin its ability to provide reliable technical support in a fast-changing market [4]
The Google antitrust case reflects the transformation of the search industry
Jing Ji Ri Bao· 2025-09-14 21:46
Core Viewpoint
- Google achieved a significant victory in its five-year antitrust case, avoiding a forced breakup, with generative AI companies such as OpenAI playing a crucial role in the outcome [2]

Group 1: Antitrust Case and Market Impact
- The U.S. government has intensified antitrust scrutiny of Silicon Valley giants, with Google a key target, facing lawsuits since 2020 over its dominance of the search engine market [2]
- Judge Amit Mehta ruled that Google need not divest its Chrome browser or Android operating system, but must open more search result data to competitors and establish an antitrust technology committee [2]
- Following the ruling, Google's stock surged over 8%, reflecting renewed market confidence [2]

Group 2: Role of Generative AI
- The ruling highlighted the impact of generative AI, noting that more users are turning to AI chatbots like ChatGPT for information instead of traditional search engines, which weakened the case for a full breakup of Google [2]
- New AI browsers, such as Perplexity's Comet and OpenAI's upcoming browser, are redefining information retrieval through deep learning and natural language processing [3]
- Despite the rise of AI search engines, traditional search giants retain a strong competitive advantage thanks to their established ecosystems and integrated user data [3]

Group 3: Future of Search Engines
- Traditional search engines hold resources critical to the development of generative AI, including substantial computing power and vast amounts of data [4]
- The transition to AI-driven search is at a crossroads: can new AI search engines overcome cost and technical barriers, and can traditional giants successfully adapt to AI? [4]
- The ruling is considered one of the most consequential tech-industry court decisions of the century, offering a reference point for other companies facing antitrust scrutiny, such as Meta, Amazon, and Apple [4]
Stanford AI can accurately predict death: mysticism or big data?
Hu Xiu· 2025-09-11 13:04
Core Insights
- AI technology is being used to predict the time of death for terminally ill patients, with accuracy improving from 40% to 80% [1]
- Danish scientists have developed an AI model that predicts significant life events and dates of death using data from 5.96 million individuals across 280 label dimensions, achieving 78% accuracy [1]
- Concerns about potential misuse of the technology by insurance companies have made the researchers hesitant to release the algorithms publicly [1]
AI+HI Series, DecompGRNv1: a first exploration of an end-to-end model based on a linear RNN
Huachuang Securities· 2025-09-05 08:12
Quantitative Models and Construction Methods

1. Model Name: RNN-LIN
- **Model Construction Idea**: Simplify the traditional GRU model by using a linear RNN structure, reducing parameter complexity while maintaining competitive performance [2][17][20]
- **Model Construction Process**:
  - The model uses a linear RNN structure with only a forget gate and an output gate; the hidden state is updated without non-linear activation functions
  - Equations:

    $ h_{t} = f_{t} \otimes h_{t-1} + (1 - f_{t}) \otimes c_{t} $
    $ y_{t} = o_{t} \otimes h_{t} $
    $ f_{t} = Sigmoid(x_{t}W_{f}) $
    $ o_{t} = Sigmoid(x_{t}W_{o}) $
    $ c_{t} = SiLU(x_{t}W_{c}) $

    where $f_{t}$ is the forget gate, $o_{t}$ the output gate, and $c_{t}$ the candidate state [20][21]
  - The model uses roughly 50% fewer parameters than GRU [21]
- **Evaluation**: The linear RNN performs slightly below GRU but remains competitive; adding GLU modules improves its performance significantly [22][53]

2. Model Name: DecompGRN
- **Model Construction Idea**: Extend the linear RNN by integrating cross-sectional information directly into the RNN gating mechanism, enabling simultaneous modeling of temporal and cross-sectional data [2][50]
- **Model Construction Process**:
  - The first RNN layer outputs individual stock representations at each time step
  - Cross-sectional information is incorporated by grouping stocks by market capitalization and computing group de-meaned values
  - The second RNN layer combines temporal and cross-sectional information in the forget and output gates, using the same gating equations as RNN-LIN above [50][55]
- **Evaluation**: DecompGRN outperforms the GRU baseline on RankIC and RankICIR while using only 43% of GRU's parameter count [74][53]

Model Backtest Results

1. RNN-LIN
- **RankIC**: CSI All Share 0.13; CSI 300 0.10; CSI 500 0.09; CSI 1000 0.12 [36][37]
- **RankICIR**: CSI All Share 1.08; CSI 300 0.62; CSI 500 0.71; CSI 1000 0.96 [36][37]
- **IC Win Rate**: CSI All Share 0.88; CSI 300 0.74; CSI 500 0.78; CSI 1000 0.86 [36][37]
- **Annualized Return (Top Group)**: CSI All Share 42.59%; CSI 300 28.59%; CSI 500 23.68%; CSI 1000 32.81% [42]

2. DecompGRN
- **RankIC**: CSI All Share 0.141; CSI 300 0.099; CSI 500 0.098; CSI 1000 0.127 [55][58]
- **RankICIR**: CSI All Share 1.26; CSI 300 0.65; CSI 500 0.77; CSI 1000 1.08 [55][58]
- **IC Win Rate**: CSI All Share 0.89; CSI 300 0.74; CSI 500 0.78; CSI 1000 0.88 [55][58]
- **Annualized Return (Top Group)**: CSI All Share 57.68%; CSI 300 31.69%; CSI 500 26.9%; CSI 1000 40.35% [57][58]

Index Enhancement Test Results (DecompGRN)
- **Annualized Excess Return**: CSI 300 10.24%; CSI 500 10.05%; CSI 1000 19.58% [75][85]
- **Tracking Error**: CSI 300 5.07; CSI 500 6.1; CSI 1000 6.75 [75][85]
- **Cumulative Excess Return (as of 2025-08-27)**: CSI 300 3.93%; CSI 500 6.72%; CSI 1000 18.26% [75][85]
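As an illustration, the gated linear RNN recurrence described above can be sketched directly in NumPy. This is a toy re-implementation of the update rule only, not the report's trained model; the input size, hidden size, and random weights are made-up demo values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    return x * sigmoid(x)

def linear_rnn(X, Wf, Wo, Wc):
    """Run the gated linear RNN over a sequence X of shape (T, d_in).

    h_t = f_t * h_{t-1} + (1 - f_t) * c_t
    y_t = o_t * h_t
    The gates f, o and candidate c depend on x_t alone (no hidden-state
    feedback inside the gates), which makes the recurrence linear in h.
    """
    T, _ = X.shape
    d = Wf.shape[1]
    h = np.zeros(d)
    Y = np.zeros((T, d))
    for t in range(T):
        f = sigmoid(X[t] @ Wf)   # forget gate
        o = sigmoid(X[t] @ Wo)   # output gate
        c = silu(X[t] @ Wc)      # candidate state
        h = f * h + (1 - f) * c  # linear state update
        Y[t] = o * h
    return Y

# demo: 5 time steps, 8 input features, 16 hidden units
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wf, Wo, Wc = (rng.normal(size=(8, 16)) for _ in range(3))
print(linear_rnn(X, Wf, Wo, Wc).shape)  # (5, 16)
```

With only three weight matrices per layer and no hidden-to-hidden weights in the gates, it is easy to see where the roughly 50% parameter saving relative to GRU comes from.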
Guarding Our Focus (Jintai Essay)
Ren Min Ri Bao· 2025-09-04 22:57
Core Insights
- The article discusses the challenge of sustaining focus and deep learning in the digital age, a struggle driven by technological distractions and fast-paced lifestyles [1][2][4]
- It emphasizes shifting mindsets to prioritize depth over speed in learning and cultural experiences [2][3]

Group 1: Focus and Learning
- Many people experience declining concentration and a weakened ability to learn deeply amid the fast pace of modern life, leading to superficial engagement with content [1][2]
- The pursuit of efficiency can crowd out depth: quick consumption of media leaves no room to appreciate artistic or literary nuance [2][3]

Group 2: Curiosity and Engagement
- Reigniting curiosity is essential for enhancing focus, as it can trigger a chain reaction of inquiry and deeper exploration [3]
- Meaningful conversation, time in nature, and cultural exploration can foster a natural state of focus, in contrast to the self-discipline usually associated with sustaining attention [3]

Group 3: Digital Culture and Critical Thinking
- Cultivating a rich cultural life in the digital age requires skills in attention management, information discernment, and critical thinking [4]
- Meeting these challenges is crucial for fully enjoying the benefits of digital technology and enriching the spiritual and cultural dimensions of life [4]
Fei-Fei Li's classic Stanford CV course, 2025 CS231n, is now free to watch
机器之心· 2025-09-04 09:33
Core Viewpoint
- Stanford University's classic course "CS231n: Deep Learning for Computer Vision" has officially launched its Spring 2025 edition, focusing on deep learning architectures and visual recognition tasks such as image classification, localization, and detection [1][2]

Course Overview
- The course spans 10 weeks, teaching students to implement and train neural networks while surveying cutting-edge research in computer vision [3]
- By the end of the course, students will have the opportunity to train and apply neural networks with millions of parameters to real-world visual problems of their choice [4]
- Through practical assignments and projects, students acquire the toolset for deep learning tasks and the engineering techniques commonly used to train and fine-tune deep neural networks [5]

Instructors
- The course has four main instructors:
  - Fei-Fei Li: renowned scholar and Stanford professor, creator of the ImageNet project, which significantly advanced deep learning in computer vision [6]
  - Ehsan Adeli: assistant professor at Stanford, focusing on computer vision, computational neuroscience, and medical image analysis [6]
  - Justin Johnson: assistant professor at the University of Michigan, with research interests in computer vision and machine learning [6]
  - Zane Durante: third-year PhD student at Stanford, researching multimodal visual understanding and AI applications in healthcare [7]
Course Content
- The curriculum covers:
  - Image classification using linear classifiers
  - Regularization and optimization techniques
  - Neural networks and backpropagation
  - Convolutional Neural Networks (CNNs) for image classification
  - Recurrent Neural Networks (RNNs)
  - Attention mechanisms and Transformers
  - Object recognition, image segmentation, and visualization
  - Video understanding
  - Large-scale distributed training
  - Self-supervised learning
  - Generative models
  - 3D vision
  - Visual and language integration
  - Human-centered AI [16]

Additional Resources
- All 18 course videos are available for free on YouTube, with the first and last lectures delivered by Fei-Fei Li [12]
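As a taste of the curriculum's first topic, a linear classifier scores a flattened image with a single matrix multiply and turns the scores into class probabilities with a softmax. The sketch below is illustrative only, with made-up shapes and names; it is not course material:

```python
import numpy as np

def softmax_scores(X, W, b):
    """Linear classifier: class scores s = XW + b, mapped to probabilities."""
    s = X @ W + b
    s = s - s.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)

# toy batch: 4 flattened "images" of 12 pixels each, 3 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 12))
W = rng.normal(size=(12, 3)) * 0.01  # small random init
b = np.zeros(3)
probs = softmax_scores(X, W, b)
print(probs.shape)  # (4, 3); each row sums to 1
```

Training then amounts to adjusting W and b to minimize the cross-entropy between these probabilities and the true labels, the stepping stone to the neural networks covered later in the course.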