Deep Learning
Double Growth! Jiangsu Bank's Personal Finance Business Delivers a Stellar Report Card
Jing Ji Guan Cha Wang· 2025-04-28 10:23
Core Insights
- Personal financial services have become a key competitive battleground as banks pursue high-quality development in the financial industry [1]
- Jiangsu Bank reported impressive growth in retail deposits and loans: retail deposit balance reached 822.9 billion yuan, up 16.21% year-on-year, and retail loan balance reached 674.8 billion yuan, up 3.40% year-on-year [1]

Group 1: Personal Financial Services
- Jiangsu Bank centers on customer needs, using strong data and intelligent analysis to build a "layered + categorized" precision service system that tailors financial solutions to clients [2]
- The bank launched the "Enterprise Investor" comprehensive service brand for entrepreneurs, growing private wealth clients and assets by more than 20% annually [2][3]
- Its personal financial business received the "Shanghai Securities Financial Management" 2024 Annual Bank Wealth Management Brand Award, recognizing its professional capabilities [3]

Group 2: Consumer Activation Initiatives
- Jiangsu Bank launched a 2025 consumption-stimulus initiative with 20 measures aimed at revitalizing the consumer market, including a "Home Renovation Subsidy" service [4]
- The bank ran regional consumer-benefit actions serving over 57,000 people across multiple cities, including electric-bicycle trade-in subsidies and housing purchase subsidies [5]

Group 3: Financial and Non-Financial Ecosystem
- Jiangsu Bank is building an "8+1" smart-scene ecosystem, integrating technologies such as big data and AI into everyday scenarios including healthcare and cultural tourism [7]
- The "Credit Medical" service lets patients receive treatment without upfront payment at over 200 partner hospitals, improving the healthcare experience [7]
- The upgraded version 10.0 of the bank's App has over 7 million monthly active users, providing personalized financial services and deeper customer engagement [8]
Global Machine Vision Camera Market: Top 10 Manufacturers by Ranking and Market Share
QYResearch· 2025-04-11 09:06
Core Viewpoint
- The global machine vision camera market is projected to reach USD 4.92 billion by 2031, a compound annual growth rate (CAGR) of 8.5% over the coming years [2]

Market Overview
- Area scan cameras dominate the market, holding approximately 65.2% of market share [9]
- The electronics and semiconductor sectors are the largest downstream market, accounting for about 37% of demand [12]

Key Drivers
- Growing industrial-automation demand is driving adoption of machine vision cameras across industries, improving production efficiency and product quality [13]
- Advances in artificial intelligence (AI) and deep learning are expanding the capabilities of machine vision systems and fostering innovation [13]
- 5G and other high-speed communication technologies are improving the real-time processing capabilities of machine vision systems [14]
- Upgrades in the semiconductor and electronics industries are raising demand for high-resolution cameras for precision manufacturing [15]
- Applications in autonomous driving and intelligent transportation systems are contributing to market growth [16]
- Demand for high-precision machine vision cameras in medical imaging and pharmaceutical testing is rising [17]
- Government policies supporting smart manufacturing and Industry 4.0 provide financial backing and incentives for the industry [18]

Major Challenges
- High costs and uncertain return on investment (ROI) deter some companies from adopting high-end machine vision systems [19]
- Technological complexity and integration challenges with production lines present significant hurdles [20]
- Large data volumes from high-resolution cameras increase processing and storage requirements, raising costs [21]
- Intense competition, particularly in the low-end market, is compressing profit margins [22]
- A shortage of skilled professionals in computer vision, image processing, and automation is holding back growth [23]

Industry Opportunities
- The shift toward smart manufacturing and Industry 4.0 is driving widespread use of machine vision on production lines [24]
- AI integration is expanding application scenarios, enabling higher-precision defect detection and product classification [24]
- Rapid growth in emerging markets, particularly the Asia-Pacific region, is fueling demand for machine vision cameras [25]
- The development of unmanned factories and smart logistics is accelerating the need for machine vision technologies [26]
- Combining edge computing and cloud computing improves data-processing capability and reduces latency [27]
- New applications in healthcare and life sciences, such as microscopy and genetic testing, offer significant growth potential [28]
- The growth of electric vehicles and autonomous driving creates additional opportunities for machine vision applications [28]
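As a back-of-the-envelope check on the projection above, the 8.5% CAGR pins down an implied base-year market size. The base year is not stated in the summary; 2024 is an assumption used here for illustration.

```python
# Hypothetical sanity check of the projection above: if the machine vision
# camera market reaches USD 4.92 billion in 2031 while growing at an 8.5%
# CAGR, the implied base-year size is recovered by discounting backward.
# The base year (assumed here to be 2024) is NOT stated in the article.

def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Discount a final value back over `years` periods of compound growth."""
    return final_value / (1.0 + cagr) ** years

base_2024 = implied_base(4.92, 0.085, 2031 - 2024)
print(f"Implied 2024 market size: USD {base_2024:.2f} billion")
```

The same function run forward (multiplying instead of dividing) reproduces the 2031 figure, which is all a CAGR statement encodes.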
The Days Alongside Baidu's Autonomous Driving
雷峰网· 2025-04-08 10:07
" 十多年过去,百度自动驾驶变得更好了吗? " 作者丨吴彤 铁打的营盘,流水的兵。 01 余凯时代:无人车起步 2011年底,百度向在国际人工智能领域崭露头角的余凯伸出橄榄枝。 彼时,百度财力雄厚,李彦宏登顶中国首富,百度市值位居中国互联网企业之首。在这样的背景下,百度 有底气招揽世界顶级研究人员。当年春节前,李彦宏与余凯在北京见面,这次交流成为双方命运的关键一 步。 2012年春天,余凯正式加入百度,开启了他在百度的三年。在他加入之前,王海峰刚将视觉团队从搜索部 门独立,成立了VIS(视觉技术搜索)。余凯的老本行是视觉,他直接合并了视觉和语音,组建了多媒体 部。后来,余凯又将视觉拿到IDL,语音则还给了王海峰。 余凯 2012年10月,余凯加入百度半年后,辛顿团队的AlexNet算法在计算机视觉会议上震惊了学术界和产业 界。余凯意识到与辛顿合作对百度在深度学习领域弯道超车的重要性,立刻联系辛顿表达合作意愿,不过 结果未能如愿。 虽有遗憾,但这次竞标也让李彦宏深刻意识到深度学习的潜力与重要性。一个月后,百度宣布成立深度学 习研究院(IDL),李彦宏亲自挂帅,余凯出任常务副院长。 IDL的成立在中国互联网行业是 ...
After a 13-Year Wait, AlexNet Goes Open Source: The Original Code Written by Hinton's Team, Annotations and All
36Kr· 2025-03-24 11:38
Core Insights
- AlexNet's original source code has been open-sourced after 13 years, giving AI developers and deep learning enthusiasts access to the foundational code that revolutionized computer vision [1][10][11]
- The release is the original 2012 version written by Geoffrey Hinton's team, complete with annotations, offering insight into how early deep learning models were developed [1][11]

Group 1: Historical Context
- AlexNet emerged in 2012 at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), cutting the Top-5 error rate from 26.2% to 15.3% and marking a pivotal moment in computer vision [2][3]
- Before AlexNet, neural networks faced skepticism and were largely overlooked due to limits on computational power and data availability; an earlier resurgence in the 1980s had followed the rediscovery of the backpropagation algorithm [4][6]

Group 2: Technical Aspects
- AlexNet consists of 5 convolutional layers and 3 fully connected layers, totaling 60 million parameters and 650,000 neurons, and used GPU acceleration for training [2][3]
- Its success was enabled by the ImageNet dataset, crowdsourced into the largest image dataset of its time, and by advances in GPU technology, particularly NVIDIA's CUDA programming system [5][6]

Group 3: Development and Impact
- Open-sourcing the code was a collaborative effort between the Computer History Museum and Google that took five years to navigate licensing complexities [10][11]
- The AlexNet paper has drawn over 170,000 citations, establishing it as a seminal work in deep learning that shaped subsequent AI research and development [7][10]
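As a rough check on the architecture figures quoted above, the parameter count can be re-derived from the layer sizes. This sketch ignores the two-GPU channel grouping of the original network, so the total is an approximation (about 62 million versus the roughly 60 million quoted); the layer dimensions are the standard published AlexNet configuration.

```python
# Approximate re-derivation of AlexNet's parameter count from its layer
# sizes (5 convolutional + 3 fully connected layers). The original network
# split some conv layers across two GPUs; ignoring that grouping slightly
# overcounts, giving ~62M rather than the commonly quoted ~60M.

def conv_params(out_ch: int, in_ch: int, k: int) -> int:
    return out_ch * (in_ch * k * k + 1)  # k x k weights per channel + bias

def fc_params(out_f: int, in_f: int) -> int:
    return out_f * (in_f + 1)  # dense weights + bias

layers = [
    conv_params(96, 3, 11),        # conv1: 11x11 on RGB input
    conv_params(256, 96, 5),       # conv2
    conv_params(384, 256, 3),      # conv3
    conv_params(384, 384, 3),      # conv4
    conv_params(256, 384, 3),      # conv5
    fc_params(4096, 256 * 6 * 6),  # fc6: flattened 6x6x256 feature map
    fc_params(4096, 4096),         # fc7
    fc_params(1000, 4096),         # fc8: 1000 ImageNet classes
]
total = sum(layers)
print(f"total parameters ~ {total:,}")  # on the order of 60 million
```

Notably, the three fully connected layers hold the vast majority of the parameters, which is why later architectures worked to shrink or remove them.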
Big News! AlexNet's Source Code Is Now Open Source
半导体芯闻· 2025-03-24 10:20
Core Points
- The article covers the release of the source code for AlexNet, the groundbreaking neural network developed in 2012 that has significantly influenced modern AI methods [1][18]
- AlexNet was created by University of Toronto researchers including Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, primarily for image recognition tasks [2][15]

Group 1: Background of Deep Learning
- Geoffrey Hinton is recognized as one of the fathers of deep learning, the neural-network approach that forms the foundation of contemporary AI [4]
- The revival of neural network research in the 1980s was led by cognitive scientists who rediscovered the backpropagation algorithm, essential for training multilayer neural networks [5][6]

Group 2: ImageNet and GPU Development
- The ImageNet project, initiated by Stanford professor Fei-Fei Li, supplied the large dataset needed to train neural networks and contributed significantly to AlexNet's success [8][9]
- NVIDIA played a crucial role in making GPUs more versatile and programmable, which was essential for the computational demands of neural network training [9][12]

Group 3: Creation and Impact of AlexNet
- AlexNet combined deep neural networks, large datasets, and GPU computing to achieve groundbreaking results in image recognition [13]
- The 2012 AlexNet paper has been cited over 172,000 times, marking a pivotal moment in AI research [17]
- The Computer History Museum's (CHM) release of AlexNet's source code is a significant historical contribution to the field of artificial intelligence [18]
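Backpropagation, mentioned above as the key to training multilayer networks, is just the chain rule applied layer by layer. A minimal sketch on a tiny two-layer scalar network, with the analytic gradient verified against a finite difference:

```python
import math

# Minimal illustration of backpropagation on a two-layer scalar network:
#   y = w2 * tanh(w1 * x),  loss L = (y - t)^2
# The gradient w.r.t. w1 is computed via the chain rule (backprop) and
# checked against a numerical finite difference.

def forward(w1, w2, x, t):
    h = math.tanh(w1 * x)  # hidden activation
    y = w2 * h             # output layer
    return (y - t) ** 2    # squared-error loss

def grad_w1(w1, w2, x, t):
    h = math.tanh(w1 * x)
    y = w2 * h
    dL_dy = 2.0 * (y - t)          # derivative of the loss
    dy_dh = w2                     # through the output layer
    dh_dw1 = (1.0 - h * h) * x     # tanh'(z) = 1 - tanh(z)^2
    return dL_dy * dy_dh * dh_dw1  # chain rule = backpropagation

w1, w2, x, t = 0.5, -1.2, 0.8, 0.3
analytic = grad_w1(w1, w2, x, t)
eps = 1e-6
numeric = (forward(w1 + eps, w2, x, t) - forward(w1 - eps, w2, x, t)) / (2 * eps)
print(analytic, numeric)  # the two estimates agree closely
```

Real networks repeat this backward pass over vectors and many layers, but the mechanics are the same product of local derivatives.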
AlexNet, the Network Behind the GPU Miracle, Is Open-Sourced
半导体行业观察· 2025-03-22 03:17
Core Viewpoint
- AlexNet, developed in 2012, revolutionized artificial intelligence and computer vision by introducing a powerful neural network for image recognition [2][3]

Group 1: Background and Development of AlexNet
- AlexNet was created by Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever at the University of Toronto [3][4]
- Hinton is recognized as one of the fathers of deep learning, a foundational aspect of modern AI [5]
- The resurgence of neural networks in the 1980s was marked by the rediscovery of the backpropagation algorithm, essential for training multi-layer networks [6]
- The emergence of large datasets and sufficient computational power, particularly through GPUs, was crucial for the success of neural networks [7][9]

Group 2: ImageNet and Its Role
- The ImageNet dataset, completed in 2009 by Fei-Fei Li, provided the vast collection of labeled images necessary for training AlexNet [8]
- ImageNet was significantly larger than previous datasets, enabling breakthroughs in image recognition [8]
- The competition launched in 2010 aimed to improve image-recognition algorithms, but progress was minimal until AlexNet's introduction [8]

Group 3: Technical Aspects and Achievements
- AlexNet utilized NVIDIA GPUs and CUDA programming to train efficiently on the ImageNet dataset [12]
- Training involved extensive parameter tuning and was conducted on a computer with two NVIDIA cards [12]
- AlexNet's performance surpassed its competitors, a pivotal moment in AI, as noted by Yann LeCun [12][13]

Group 4: Legacy and Impact
- Following AlexNet, neural networks became ubiquitous in computer vision research [13]
- These advances led to significant developments in AI applications, including voice synthesis and generative art [13]
- The source code for AlexNet was made publicly available in 2020, highlighting its historical significance [14]
Nobel Prize Interview with Deep Learning Godfather Hinton: Within as Little as Five Years, AI Has a 50% Chance of Surpassing Humans, and Anyone Who Says "Everything Will Be Fine" Is Crazy
AI科技大本营· 2025-03-18 03:29
Author | The Nobel Prize organization

In the interview, Hinton expressed his concerns about the future of artificial intelligence. He believes AI could surpass human intelligence within as little as five years, and warned of the social risks this could trigger, such as mass unemployment and disinformation. More thought-provoking still, Hinton hinted that AI's potential risks may far exceed our current understanding.

Translated by | Wang Qilong
Produced by | AI 科技大本营 (ID: rgznai100)

Geoffrey Hinton, the scientist hailed as the "godfather of artificial intelligence," received the Nobel Prize in Physics last year, setting off a wave of discussion online.

Recently, Hinton sat down for an official Nobel Prize interview. Recalling the amusing moment he received the Nobel call, he said his first reaction was puzzlement, since his research is not physics (a puzzlement he shared with much of the internet).

As a pioneer of deep learning, Hinton is best known for neural networks. But many people do not know that he has described the Boltzmann machine theory he proposed with Terry Sejnowski as both the achievement he is "proudest of" and his "biggest failure." See: "Hinton, the Father of Deep Learning, in a 10,000-Word Interview: No Way Back in the China-US AI Race."

Their work, together with early research by other neural network pioneers such as fellow Nobel physics laureate John Hopfield, jointly ...
Global Frontier Innovation Special Report (III): AI Pharmaceutical Industry Report
CAITONG SECURITIES· 2025-03-12 06:28
Investment Rating
- The report maintains a "Positive" investment rating for the AI pharmaceutical industry [1]

Core Insights
- Integrating AI technology with biopharmaceutical development can accelerate drug discovery and development, reveal new biological mechanisms, and predict new drug targets, particularly for complex diseases [5]
- Investment in the AI pharmaceutical industry has grown sharply, with total investment reaching $60.3 billion by August 2023, a 27-fold increase over the past nine years [12]
- The industry shows a rapid growth trend, particularly in drug discovery and preclinical research, with an average annual growth rate of 36% from 2010 to 2021 [16]

Summary by Sections

AI Pharmaceutical Industry Overview
- AI addresses the high costs and low success rates of traditional drug development, which averages $2.6 billion and more than 10 years per drug [8]
- AI in pharmaceuticals has evolved through three phases: early theoretical development (1956-1980), the rise of computer-aided drug design (1981-2011), and rapid growth with increased capital investment since 2012 [9]

Market Size
- AI-driven pharmaceutical investment peaked at $13.68 billion in 2021, driven by the COVID-19 pandemic, then fell to $10.2 billion in 2022 amid the global economic downturn [12]
- The United States leads in AI pharmaceutical companies, accounting for 55.1% of the total, followed by Europe and the UK [13]

AI Pharmaceutical Technology Principles
- The three key components of AI are data, computing power, and algorithms; advances in GPUs and cloud computing significantly support AI pharmaceutical companies [29]
- AI algorithms, including machine learning and deep learning, are crucial for processing diverse data types and improving drug discovery processes [38]

Applications of AI in Pharmaceuticals
- AI is primarily utilized in the drug discovery and preclinical research stages, focusing on target discovery, compound validation, and drug design [41]
- AI techniques enhance drug-target identification by analyzing multi-omics data and applying computational methods to discover potential therapeutic targets [45]

AI Pharmaceutical Industry Chain and Policies
- The industry chain comprises upstream components (computing power, algorithms, data), midstream applications (AI + biotech, AI + CRO), and downstream traditional pharmaceutical companies [18][19]
- Regulatory policies supporting the sector are gradually emerging, with initiatives in the US, Europe, and China promoting AI applications in drug development [22][24]
Deep Learning Research Report: Multi-modal, Multi-scale Stock Price Prediction
GF SECURITIES· 2025-03-07 09:20
Quantitative Models and Factor Analysis Summary

Quantitative Models and Construction
- **Model Name**: Multi-modal Multi-scale Stock Price Prediction Model
- **Model Construction Idea**: The model integrates multi-modal features (chart data and time-series data) and multi-scale features (data at different frequencies) to enhance stock price prediction accuracy. It employs four independent deep time-series models and convolutional models for feature extraction, using both regression and classification losses for end-to-end training [14][17][18]
- **Model Construction Process**:
  1. **Multi-modal Features**: Combines time-series price-volume data with standardized price-volume charts. Time-series models capture abstract numerical relationships, while convolutional models identify chart patterns [17]
  2. **Multi-scale Features**: Incorporates 1-minute high-frequency data, daily data, and weekly data. The high-frequency data is condensed into 55 factor features, which are then input into the time-series models [18]
  3. **Lightweight Design**: Reduces each sub-model's parameter size to 1/4 of the initial version, minimizing overfitting and dependence on computational resources [18]
  4. **Multi-head Output**: Outputs include absolute future returns and categorical predictions (up, flat, down), using mean squared error and cross-entropy as the respective loss functions [19]
- **Model Evaluation**: The model demonstrates significant improvements in prediction accuracy and excess returns compared to the initial version [14][17][19]
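The multi-head output described above (a regression head trained with mean squared error plus an up/flat/down classification head trained with cross-entropy) can be sketched as a combined loss. This is a minimal illustration only: the function shapes and the equal weighting of the two loss terms are assumptions, not the report's actual implementation.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def multi_head_loss(pred_return, true_return, class_logits, true_class):
    """Combined loss for a two-headed model: MSE on the regression head
    plus cross-entropy on a 3-way (up / flat / down) classification head.
    Equal weighting of the two terms is an assumption; the report does
    not specify how the losses are weighted."""
    mse = (pred_return - true_return) ** 2
    probs = softmax(class_logits)
    ce = -math.log(probs[true_class])
    return mse + ce

# Example: the model predicts a +1.2% return and leans toward "up" (class 0)
loss = multi_head_loss(0.012, 0.010, [2.0, 0.5, -1.0], 0)
print(round(loss, 4))
```

Training both heads against one shared feature extractor is what makes the setup "end-to-end": gradients from both loss terms flow back into the same time-series and convolutional sub-models.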
Model Backtesting Results
- **RankIC Mean**: All Market 8.7%, CSI 300 7.9%, CSI 500 6.6%, CSI 800 6.9%, CSI 1000 8.2%, CNI 2000 8.7%, ChiNext 10.4% [21][116]
- **RankIC Win Rate**: All Market 86.7%, CSI 300 69.0%, CSI 500 73.5%, CSI 800 75.2%, CSI 1000 84.8%, CNI 2000 86.1%, ChiNext 89.2% [21][116]
- **Excess Annualized Returns**: All Market 12.97%, CSI 300 9.17%, CSI 500 5.30%, CSI 800 8.38%, CSI 1000 7.47%, CNI 2000 7.47%, ChiNext 11.52% [21][117]

Quantitative Factors and Construction
- **Factor Name**: Model-derived Factor
- **Factor Construction Idea**: Derived from the model's predictions, the factor captures both numerical relationships and chart patterns by leveraging multi-modal and multi-scale data [14][17][18]
- **Factor Construction Process**:
  1. Predictions from the time-series models and convolutional models are combined
  2. Multi-frequency data (1-minute, daily, weekly) is processed to extract features
  3. Factor values are generated from the model's outputs, including both the regression and classification results [14][17][18]
- **Factor Evaluation**: The factor shows low correlation with traditional Barra style factors, indicating its uniqueness [22][23]

Factor Backtesting Results
- **Correlation with Barra Factors**: Liquidity -18%, Volatility -16%, Size -8% [22][23]
- **RankIC Mean**, **RankIC Win Rate**, and **Excess Annualized Returns**: identical to the model backtesting results above, as the factor is the model's direct output [21][116][117]
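The RankIC reported above is the Spearman rank correlation between factor values and subsequent returns, computed cross-sectionally each period. A minimal single-period sketch, ignoring tie handling (which a production implementation would need):

```python
def rank(xs):
    """Rank values from 0..n-1 (no tie handling; for illustration only)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def rank_ic(factor, future_returns):
    """Spearman rank IC = Pearson correlation of the two rank vectors."""
    rf, rr = rank(factor), rank(future_returns)
    n = len(rf)
    mf, mr = sum(rf) / n, sum(rr) / n
    cov = sum((a - mf) * (b - mr) for a, b in zip(rf, rr))
    var_f = sum((a - mf) ** 2 for a in rf)
    var_r = sum((b - mr) ** 2 for b in rr)
    return cov / (var_f * var_r) ** 0.5

# A factor that perfectly orders future returns has RankIC = 1.0
print(rank_ic([0.1, 0.5, 0.3, 0.9], [1.0, 3.0, 2.0, 7.0]))  # -> 1.0
```

The reported "RankIC mean" averages this statistic over all backtest periods, and the "win rate" is the fraction of periods in which it is positive.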
[GF Quant] Neural Ordinary Differential Equations and Liquid Neural Networks
广发金融工程研究· 2025-03-06 00:16
GF Securities chief quantitative analyst An Ningning, anningning@gf.com.cn
GF Securities senior quantitative analyst Chen Yuanwen, chenyuanwen@gf.com.cn
Contact: GF Securities quantitative researcher Lin Tao, gflintao@gf.com.cn
GF Securities quantitative team of An Ningning and Chen Yuanwen

Abstract

Neural ordinary differential equations: At NeurIPS 2018, a top international machine learning conference, the paper "Neural Ordinary Differential Equations" by Chen et al. won the best paper award. In brief, a typical ResNet is built from residual blocks of the form h_{t+1} = f(h_t, θ_t) + h_t. In the conventional approach, each residual block's network parameters are fitted separately to the training data. The paper proposes that, if a ResNet's residual blocks are stacked without bound, the parameters of every block can instead be obtained by solving a single ordinary differential equation.

Liquid neural networks: Building on this work, Ramin Hasani and colleagues at MIT innovatively described the evolution of a recurrent network's hidden state in the form of an ordinary differential equation, proposing a class of models known as liquid neural networks; these results were published in leading international journals such as Nature Machine Intelligence. Such models ...
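The residual-block/ODE correspondence the abstract describes can be made concrete: a residual update h_{t+1} = h_t + f(h_t) is exactly one Euler step of dh/dt = f(h) with step size 1, and stacking more, thinner blocks approaches the continuous ODE solution. A minimal sketch with a toy dynamics function f, chosen here purely for illustration (it is not either paper's actual network):

```python
import math

# Sketch of the ResNet <-> ODE correspondence behind neural ODEs:
# each Euler step below plays the role of one residual block.

def f(h):
    return -0.5 * h  # toy dynamics: exponential decay toward 0

def euler(h0, steps, dt):
    """Integrate dh/dt = f(h) with `steps` Euler steps of size `dt`;
    each step is the residual update h <- h + dt * f(h)."""
    h = h0
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0, T = 1.0, 1.0
exact = h0 * math.exp(-0.5 * T)            # closed-form ODE solution
coarse = euler(h0, steps=4, dt=T / 4)      # a shallow "ResNet"
fine = euler(h0, steps=1024, dt=T / 1024)  # many thin residual blocks
print(exact, coarse, fine)  # the fine discretization approaches exact
```

A neural ODE replaces the fixed Euler loop with an adaptive ODE solver and learns the parameters of f, so depth becomes a property of the solver rather than of the architecture.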