TRS(300229)

拓尔思(300229) - Indicative Announcement on the Controlling Shareholder's Shareholding Change Reaching an Integer Multiple of 5% and the Disclosure of the Simplified Equity Change Report
2025-07-14 10:46
Securities code: 300229   Securities abbreviation: 拓尔思   Announcement No.: 2025-032

拓尔思信息技术股份有限公司 (TRS Information Technology Co., Ltd.)
Indicative Announcement on the Controlling Shareholder's Shareholding Change Reaching an Integer Multiple of 5% and the Disclosure of the Simplified Equity Change Report

The Company and all members of the Board of Directors warrant that the disclosed information is true, accurate and complete, with no false records, misleading statements or material omissions.

Special notes:
1. This equity change is a reduction of the Company's shares by the controlling shareholder; it does not trigger a tender offer and will not result in a change of the Company's controlling shareholder or actual controller.
2. After this equity change, the total number of Company shares held by the controlling shareholder, 信科互动科技发展有限公司 (hereinafter "信科互动"), and its parties acting in concert decreased from 221,504,436 shares to 218,405,136 shares, and their shareholding ratio decreased from 25.35% to 25.00%, touching an integer multiple of 5%.
3. This equity change will not result in a change of the Company's controlling shareholder or actual controller, and will not affect the Company's governance structure or its ability to continue as a going concern.

On May 20, 2025, 拓尔思信息技术股份有限公司 (hereinafter the "Company") disclosed on cninfo (www.cninfo.com.cn), the ChiNext information disclosure website designated by the China Securities Regulatory Commission, the Pre-disclosure Announcement on the Controlling Shareholder's Share Reduction Plan (Announcement No.: 2 ...
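As a quick arithmetic check of the figures above (my own calculation, not part of the announcement; it infers the total share count by assuming the 25.00% post-change ratio is exact), the reported numbers are internally consistent:

```python
# Consistency check on the disclosed share counts and percentages (illustrative only).
shares_before = 221_504_436   # shares held before the change (reported as 25.35%)
shares_after = 218_405_136    # shares held after the change (reported as 25.00%)

total_shares = shares_after / 0.25            # implied total share capital if 25.00% is exact
ratio_before = shares_before / total_shares   # should come out close to the reported 25.35%

print(f"implied total shares: {total_shares:,.0f}")        # 873,620,544
print(f"ratio before the change: {ratio_before:.2%}")       # ~25.35%
print(f"shares reduced: {shares_before - shares_after:,}")  # 3,099,300
```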
拓尔思(300229) - Simplified Equity Change Report
2025-07-14 10:46
拓尔思信息技术股份有限公司 (TRS Information Technology Co., Ltd.)
Simplified Equity Change Report

Listed company name: 拓尔思信息技术股份有限公司
Listing venue: Shenzhen Stock Exchange
Stock abbreviation: 拓尔思
Stock code: 300229

Information disclosure obligor 1: 信科互动科技发展有限公司
Mailing address: Room 201-1, 2/F, Enterprise Service Center, Industrial Park, Dazi District, Lhasa, Tibet Autonomous Region
Postal code: 850100
Telephone: 0891-64843223

Information disclosure obligor 2: 李渝勤
Mailing address: Building 3, Yard 6, Jianfeng Road (South Extension), Haidian District, Beijing
Postal code: 100089
Telephone: 010-64848899

Information disclosure obligor 3: 施水才
Mailing address: Building 3, Yard 6, Jianfeng Road (South Extension), Haidian District, Beijing
Postal code: 100089
Telephone: 010-64848899

Nature of the share change: decrease in shares (reduction of Company shares through block trades, lowering the percentage of Company shares held)
Date of signing of this Simplified Equity Change Report: July 14, 2025

Statement of the Information Disclosure Obligors
I. The information disclosure obligors have prepared this report in accordance with the Securities Law of the People's Republic of China (the "Securities Law"), the Administrative Measures for the Takeover of Listed Companies (the "Takeover Measures"), the Standards Concerning the Contents and Formats of Information Disclosure by Companies Offering Securities to the Public No. 15 — Equity Change Report ("Standard No. 15") and relevant ...
3.174 Billion Yuan of Net Main-Capital Inflow; Digital Currency Concept Up 2.31%
Sou Hu Cai Jing· 2025-07-11 09:24
Group 1
- The digital currency concept index rose by 2.31%, ranking 10th among concept sectors, with 78 stocks increasing in value [1][2]
- Notable gainers included Gu Ao Technology with a 20% limit up, and Jin Zheng Co., Ji Da Zheng Yuan, and Heng Bao Co. also hitting the limit up [1]
- The sector saw a net inflow of 3.174 billion yuan, with 50 stocks receiving net inflows and 15 stocks exceeding 100 million yuan in net inflow [2]

Group 2
- The top net inflow stocks included Heng Bao Co. with 470 million yuan, followed by Jin Zheng Co. and Yu Xin Technology with 453 million yuan and 435 million yuan respectively [2][3]
- Gu Ao Technology, Ji Da Zheng Yuan, and Ge Er Software had the highest net inflow ratios at 33.76%, 21.20%, and 16.52% respectively [3]
- The digital currency sector's performance was supported by significant trading volumes, with Heng Bao Co. and Jin Zheng Co. showing high turnover rates of 45.74% and 16.10% respectively [3][4]
898 Million Yuan of Net Main-Capital Inflow; MLOps Concept Up 3.05%
Zheng Quan Shi Bao Wang· 2025-07-11 09:06
Core Viewpoint
- The MLOps concept rose a significant 3.05%, ranking second among concept sectors, with notable stocks like StarRing Technology, Yuxin Technology, and TuoerSi leading the gains [1][2]

Market Performance
- The MLOps concept sector saw a net inflow of 899 million yuan, with 13 stocks experiencing net inflows and 5 stocks exceeding 30 million yuan in net inflow. Yuxin Technology led with a net inflow of 435 million yuan, followed by TuoerSi and Runhe Software with net inflows of 188 million yuan and 183 million yuan respectively [2][3]

Stock Performance
- Key stocks in the MLOps sector include:
  - Yuxin Technology: up 7.18% with a turnover rate of 16.64% and a net inflow of 435 million yuan, for a net inflow ratio of 12.38% [3]
  - TuoerSi: up 6.72% with a turnover rate of 11.30% and a net inflow of 188 million yuan, for a net inflow ratio of 10.11% [3]
  - Runhe Software: up 2.77% with a turnover rate of 5.61% and a net inflow of 183 million yuan, for a net inflow ratio of 8.31% [3]

Additional Insights
- Other stocks with notable performance in the MLOps sector include:
  - Dongfang Guoxin: up 4.00% with a net inflow of 54.9 million yuan and a net inflow ratio of 6.57% [3]
  - StarRing Technology: up 12.40% with a net inflow of 24.77 million yuan and a net inflow ratio of 5.20% [3]
55 Stocks Cross Above the Bull-Bear Dividing Line (Annual Moving Average) Today
Zheng Quan Shi Bao Wang· 2025-07-11 04:01
| Stock code | Stock name | Change today (%) | Turnover today (%) | Annual MA (yuan) | Latest price (yuan) | Deviation rate (%) |
| --- | --- | --- | --- | --- | --- | --- |
| 300207 | 欣旺达 | 9.13 | 5.12 | 20.03 | 21.16 | 5.64 |
| 000926 | 福星股份 | 4.31 | 5.52 | 2.33 | 2.42 | 3.70 |
| 601456 | 国联民生 | 6.66 | 3.68 | 10.87 | 11.21 | 3.15 |
| 300229 | 拓尔思 | 3.05 | 3.72 | 18.05 | 18.56 | 2.82 |
| 300462 | ST华铭 | 4.83 | 10.28 | 9.94 | 10.21 | 2.73 |
| 002340 | 格林美 | 3.01 | 2.26 | 6.35 | 6.50 | 2.32 |
| 603997 | 继峰股份 | 4.96 | 1.55 | 11.99 | 12.27 | 2.31 |

...
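For reference, the deviation rate (乖离率) column above is consistent with the gap between the latest price and the annual moving average, expressed as a percentage of that average. A minimal sketch of this calculation (my own check, assuming that definition; small rounding differences against the table are expected because the published MA is rounded to two decimals):

```python
def deviation_rate(latest_price: float, annual_ma: float) -> float:
    """Percentage gap between the latest price and the annual moving average (年线)."""
    return (latest_price - annual_ma) / annual_ma * 100


# Spot-check against two rows of the table above.
print(round(deviation_rate(18.56, 18.05), 2))  # 拓尔思 (300229): 2.83 vs. 2.82 in the table (rounding)
print(round(deviation_rate(21.16, 20.03), 2))  # 欣旺达 (300207): 5.64, matching the table
```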
Huawei Pangu Concept Falls 2.40%; 9 Stocks See Net Main-Capital Outflows of Over 30 Million Yuan
Zheng Quan Shi Bao Wang· 2025-06-10 09:23
Group 1
- The Huawei Pangu concept declined by 2.40%, ranking among the largest declines in the concept sectors as of June 10 [1]
- The concept sector saw a net outflow of 8.09 billion yuan, with 24 stocks experiencing net outflows and 9 stocks seeing outflows exceeding 30 million yuan [2]
- The stock with the highest net outflow was Tuowei Information, with a net outflow of 1.76 billion yuan [2]

Group 2
- The top gainers in the Huawei Pangu concept included Meiansen and Jiecheng Shares, which rose by 1.72% and 0.19% respectively [1][3]
- The concept sector's performance contrasted with other sectors, such as the Transgenic sector, which gained 3.15% [2]
- The stocks with the highest net inflows included Meiansen, Huakai Yibai, and Fanwei Network, with inflows of 285.5 million yuan, 6.71 million yuan, and 4.69 million yuan respectively [2][3]
Core Technologies of China's Multimodal Large Model Industry in 2025: The Keys Are Representation, Translation, Alignment, Fusion and Co-learning [Charts]
Qian Zhan Wang· 2025-06-03 05:12
Core Insights
- The article discusses the core technologies of multimodal large models, focusing on representation learning, translation, alignment, fusion, and collaborative learning [1][2][7][11][14]

Representation Learning
- Representation learning is fundamental for multimodal tasks, addressing challenges such as combining heterogeneous data and handling varying noise levels across different modalities [1]
- Prior to the advent of Transformers, different modalities required distinct representation learning models, such as CNNs for computer vision (CV) and LSTMs for natural language processing (NLP) [1]
- The emergence of Transformers has enabled the unification of multiple modalities and cross-modal tasks, leading to a surge in multimodal pre-training models post-2019 [1]

Translation
- Cross-modal translation aims to map source modalities to target modalities, such as generating descriptive sentences from images or vice versa [2]
- The use of syntactic templates allows for structured predictions, where specific words are filled in based on detected attributes [2]
- Encoder-decoder architectures are employed to encode source modality data into latent features, which are then decoded to generate the target modality [2]

Alignment
- Alignment is crucial in multimodal learning, focusing on establishing correspondences between different data modalities to enhance understanding of complex scenarios [7]
- Explicit alignment involves categorizing instances with multiple components and measuring similarity, using both unsupervised and supervised methods [7][8]
- Implicit alignment leverages latent representations for tasks without strict alignment, improving performance in applications like visual question answering (VQA) and machine translation [8]

Fusion
- Fusion combines multimodal data or features for unified analysis and decision-making, enhancing task performance by integrating information from various modalities [11]
- Early fusion merges features at the feature level, while late fusion combines outputs at the decision level, with hybrid fusion incorporating both approaches (a minimal code sketch follows this list) [11][12]
- The choice of fusion method depends on the task and data, with neural networks becoming a popular approach for multimodal fusion [12]

Collaborative Learning
- Collaborative learning uses data from one modality to enhance the model of another modality, and is categorized into parallel, non-parallel, and hybrid methods [14][15]
- Parallel learning requires direct associations between observations from different modalities, while non-parallel learning relies on overlapping categories [15]
- Hybrid methods connect modalities through shared datasets, allowing one modality to influence the training of another, applicable across various tasks [15]
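To make the early- vs. late-fusion distinction above concrete, here is a minimal PyTorch-style sketch (my own illustration, not code from the source article; the module names, feature dimensions, and the equal 0.5 decision weights are arbitrary assumptions): early fusion concatenates modality features before a shared classifier, while late fusion combines per-modality decisions.

```python
import torch
import torch.nn as nn


class EarlyFusionClassifier(nn.Module):
    """Concatenate modality features first, then classify the joint vector (feature-level fusion)."""

    def __init__(self, img_dim: int, txt_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([img_feat, txt_feat], dim=-1)  # merge at the feature level
        return self.head(joint)


class LateFusionClassifier(nn.Module):
    """Classify each modality separately, then average the scores (decision-level fusion)."""

    def __init__(self, img_dim: int, txt_dim: int, num_classes: int):
        super().__init__()
        self.img_head = nn.Linear(img_dim, num_classes)
        self.txt_head = nn.Linear(txt_dim, num_classes)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # combine per-modality logits with equal weights (an arbitrary choice here)
        return 0.5 * self.img_head(img_feat) + 0.5 * self.txt_head(txt_feat)


if __name__ == "__main__":
    img = torch.randn(4, 512)  # e.g. pooled image features from a vision encoder
    txt = torch.randn(4, 256)  # e.g. pooled text features from a language encoder
    print(EarlyFusionClassifier(512, 256, 10)(img, txt).shape)  # torch.Size([4, 10])
    print(LateFusionClassifier(512, 256, 10)(img, txt).shape)   # torch.Size([4, 10])
```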
Analysis of the Market Size, Industry Chain and Competitive Landscape of China's Multimodal Large Model Industry in 2025 and Outlook on Development Trends: Applications Will Become More Diverse, Deeper and Broader [Chart]
Chan Ye Xin Xi Wang· 2025-05-29 01:47
Core Insights
- The multi-modal large model market in China is projected to reach 15.63 billion yuan in 2024, an increase of 6.54 billion yuan from 2023, and is expected to grow to 23.48 billion yuan in 2025, indicating strong market demand and government support [1][6][19]

Multi-Modal Large Model Industry Definition and Classification
- Multi-modal large models are AI systems capable of processing and understanding various data forms, including text, images, audio, and video, using deep learning technologies like the Transformer architecture [2][4]

Industry Development History
- The multi-modal large model industry has evolved through several stages: a task-oriented phase, a visual-language pre-training phase, and the current multi-modal large model phase, which focuses on enhancing cross-modal understanding and generation capabilities [4]

Current Industry Status
- The multi-modal large model industry has gained significant attention due to its data processing capabilities and diverse applications, with a market size projected to grow substantially in the coming years [6][19]

Application Scenarios
- The largest application share of multi-modal large models is in the digital human sector at 24%, followed by gaming and advertising at 13% each, and smart marketing and social media at 10% each [8]

Industry Value Chain
- The industry value chain consists of upstream components like AI chips and hardware, midstream multi-modal large models, and downstream applications across various sectors including education, gaming, and public services [10][12]

Competitive Landscape
- Major players in the multi-modal large model space include institutions and companies like the Chinese Academy of Sciences, Huawei, Baidu, Tencent, and Alibaba, with various models being developed to optimize training costs and enhance capabilities [16][17]

Future Development Trends
- The multi-modal large model industry is expected to become more intelligent and humanized, providing richer and more personalized user experiences, with applications expanding across fields such as finance, education, and content creation [19]
Major Release! 2025 Summary and Interpretation of Multimodal Large Model Industry Policies in China and Selected Provinces and Cities (Complete): Policies Encourage Innovation in Multimodal Large Model Application Scenarios
Qian Zhan Wang· 2025-05-26 03:25
Core Insights
- The article discusses the development and support of the multimodal large model industry in China, highlighting various policies and initiatives at both national and local levels aimed at enhancing AI capabilities and applications [1][4][11]

Policy Development Timeline
- In 2023, local policies began to emerge, focusing on computing power to encourage the development of large model technology and innovative application scenarios, starting with Guangdong, Beijing, and Shanghai. By 2024, more regions are expected to introduce relevant policies aimed at improving administrative efficiency [1]
- By 2025, government work reports will emphasize the ongoing promotion of the "Artificial Intelligence +" initiative, with a focus on supporting the widespread application of large models [1]

National Policy Summary
- The Chinese government has implemented several measures to support the AI industry, particularly multimodal large models, which are seen as crucial products within the AI sector. The State Council has identified embodied intelligence as a future industry, promoting the integration of digital technology with manufacturing and market advantages [4][5]
- Key national policies include the "Guidelines for the Development of Artificial Intelligence Industry" and the "Three-Year Action Plan for Data Elements," which aim to enhance data utilization and promote high-quality economic development through data-driven initiatives [11][13]

Local Policy Highlights
- Various provinces have introduced specific policies to support the development of AI large models. For instance, Guangdong aims to develop a comprehensive technology system for large models with trillion-parameter capabilities, while Beijing targets the creation of 3-5 advanced, controllable foundational model products by the end of 2025 [13][15]
- Local initiatives also include the establishment of intelligent computing centers and the promotion of AI applications in various sectors, such as manufacturing, healthcare, and urban governance [13][14]

Key Development Directions
- Provinces like Guangdong, Beijing, and Shanghai have set ambitious goals for the development of large models, focusing on creating a robust ecosystem for AI innovation and application [15]
- The emphasis is on fostering collaboration between government, industry, and academia to drive advancements in AI technologies and their practical applications across different sectors [15]
拓尔思(300229) - Announcement on the Resolutions of the 20th Meeting of the 6th Board of Directors
2025-05-22 14:10
Securities code: 300229   Securities abbreviation: 拓尔思   Announcement No.: 2025-028

拓尔思信息技术股份有限公司 (TRS Information Technology Co., Ltd.)
Announcement on the Resolutions of the 20th Meeting of the 6th Board of Directors

The Company and all members of the Board of Directors warrant that the disclosed information is true, accurate and complete, with no false records, misleading statements or material omissions.

I. Convening of the board meeting
The 20th meeting of the 6th Board of Directors of 拓尔思信息技术股份有限公司 (hereinafter the "Company") was held on May 22, 2025 in the Company's meeting room, with voting conducted both on site and by communication. With the unanimous consent of all directors, the advance notice period for this board meeting was waived; the meeting notice was delivered on May 22, 2025 by telephone, email and personal delivery, and the convener explained this at the meeting. Seven directors were entitled to attend the meeting and seven actually attended. The meeting was chaired by Mr. 施水才, Chairman and General Manager, and some of the Company's senior management attended as non-voting participants. The convening and holding of the meeting complied with the Company Law of the People's Republic of China and other applicable laws, administrative regulations, departmental rules and regulatory documents, as well as the Articles of Association of 拓尔思信息技术股份有限公司 ...

II. Deliberations of the board meeting
This proposal has been reviewed and approved by the Remuneration and Appraisal Committee of the Board of Directors. Director Ms. 李琳 is an incentive recipient under this incentive plan and abstained from voting on this proposal. Voting result: 6 votes in favor, 0 against, 0 abstentions; the proposal was approved.

III. Documents available for inspection