As the "chip" of Chinese agriculture, domestic seeds also need their DeepSeek moment
Guan Cha Zhe Wang· 2025-07-04 09:56
Core Viewpoint
- The Chinese agricultural sector is experiencing a transformation with the introduction of innovative "super vegetables" that cater to local tastes while maintaining nutritional value [1][4]

Group 1: Introduction of New Vegetables
- The rise of health management has led to the popularity of foreign vegetables like kale, but they often do not align with Chinese culinary preferences [1]
- Domestic retail companies like Hema are pioneering the introduction of localized versions of these vegetables, such as the "Pro Max version" of kale, which has seen 80% month-on-month sales growth since its launch [1][4]

Group 2: Development Process of New Varieties
- The development of the new vegetable, African Wrinkled Leaf Cabbage, involved extensive research and trials, resulting in a product that is more suitable for Chinese consumers in terms of taste and cooking methods [4][9]
- The chairman of Yafei Seed Industry, He Yafei, emphasized the importance of controlling seed sources to ensure agricultural independence and innovation [6][9]

Group 3: Challenges in the Seed Industry
- The Chinese seed industry faces challenges, including heavy reliance on foreign seeds for certain vegetables, which can hinder agricultural development [6]
- Experts argue that to gain international influence and pricing power, China must develop its own seed varieties and enhance its research capabilities [6][9]

Group 4: Market Potential and Applications
- The new vegetable not only meets nutritional needs but also aligns with local taste preferences, making it a viable option for various culinary applications, including juice and salad [1][11]
- Market trials are ongoing to integrate African Wrinkled Leaf Cabbage into supply chains, such as using it in milk tea, which could enhance its market appeal [11]
DeepSeek posts job openings; Romoss responds that it has not gone bankrupt | Fresh Morning Tech
21 Shi Ji Jing Ji Bao Dao· 2025-07-04 03:34
Group 1: Technology Sector Developments
- OpenAI and Elon Musk denied any collaboration with Robinhood regarding a new trading product called "stock tokens," clarifying that any transfer of OpenAI equity requires their approval [2]
- DeepSeek has posted multiple job openings on LinkedIn, indicating an effort to attract top overseas AI talent [3]
- Amazon announced the launch of its new AI foundational model, DeepFleet, which aims to enhance the efficiency of its industrial mobile robot fleet by 10% [5]
- Microsoft announced a new round of layoffs affecting approximately 9,000 jobs, marking its second major layoff this year [6]

Group 2: Financial and Investment Activities
- Alibaba Group plans to issue zero-coupon exchangeable bonds totaling approximately HKD 12 billion, with proceeds aimed at cloud computing infrastructure and international e-commerce development [4]
- Shenzhen-based Romoss Technology confirmed it has not gone bankrupt, despite reports of a complete halt in operations and unpaid wages [11]
- Zhuhai Shenkepu Industrial Technology completed a B+ round of financing exceeding 100 million yuan, with funds allocated for technology upgrades and international expansion [15]

Group 3: Strategic Partnerships and Collaborations
- Baidu's Wenxin Intelligent Agent platform announced a deep collaboration with Xiaomi's app store to create a cross-end distribution model for AI agents [6]
- Pony.ai has initiated Robotaxi road tests in Luxembourg in collaboration with local transportation company Emile Weber [10]

Group 4: Industry Changes and Acquisitions
- Siemens announced the lifting of export restrictions on three major chip design software suppliers to China, restoring access for Chinese customers [12]
- China Resources has become the actual controller of Konka Group following a recent approval of a share transfer [13]
- Xiamen Silan Microelectronics reported progress on its 8-inch silicon carbide project, a key construction project in Fujian Province [14]
Ge Chenhao: DeepSeek and Hangzhou's "Six Little Dragons" are driving international investors to reassess China's new quality productive forces
Feng Huang Wang Cai Jing· 2025-07-04 02:11
Core Insights
- The "2025 China Enterprises Going Global Summit" was held in Shenzhen, focusing on creating a high-end platform for Chinese companies to address challenges in international expansion and explore collaborative transformation paths [1]

Group 1: Trends in Chinese Companies Going Public
- Chinese companies are currently in a recovery phase regarding listings in the U.S., facing challenges in attracting long-term international capital, particularly from Europe and the U.S. [3]
- There is a positive trend of international funds returning to Chinese assets, influenced by both internal and external factors [3]
- Internal factors include the Chinese government's increased focus on economic challenges and the introduction of supportive policies since September 24 of the previous year [3]
- The emergence of new Chinese production capabilities has led to a re-evaluation of the value of Chinese tech stocks [3]
- External factors involve changes in global asset allocation, with investors shifting focus from highly valued U.S. stocks to Chinese and European assets due to uncertainties in U.S. policies and currency risks [3]

Group 2: Market Recovery and IPO Activity
- Many Chinese companies have successfully completed IPOs or secondary financing, indicating that the market is on a recovery path [4]
DeepSeek joins the overseas talent war!
Zheng Quan Shi Bao· 2025-07-03 15:04
Core Viewpoint
- The competition in AI ultimately revolves around talent acquisition and retention, with major companies intensifying their efforts to attract top AI professionals [1][8]

Group 1: Talent Acquisition Trends
- DeepSeek has recently posted job openings on LinkedIn, targeting overseas talent for various positions, including front-end developers and deep learning researchers [3][5]
- Meta has been actively recruiting top talent from OpenAI, with reports indicating that it has hired eight core researchers from the company [9][10]
- Demand for AI talent is surging, with a 33.4% year-on-year increase in job seekers in the AI sector, making it the fastest-growing industry for job applications [6]

Group 2: Salary and Compensation
- DeepSeek offers competitive salaries for deep learning researchers, ranging from 50,000 to 80,000 RMB per month, translating to an annual salary of up to 1.12 million RMB for fresh graduates [5]
- The top 20% of AI talent can expect salary increases of 30% to 50% when switching jobs, highlighting the lucrative nature of AI roles [6]
- Meta's aggressive recruitment strategy includes substantial compensation packages, with reports of sign-on bonuses reaching up to 100 million RMB for some candidates [9][10]

Group 3: Industry Dynamics
- The AI talent market is experiencing an "arms race," with companies like Meta and Nvidia investing heavily to secure top-tier talent, indicating a shift in focus from hardware (like GPUs) to algorithmic expertise [8][10]
- The establishment of Meta's new department, Meta Superintelligence Labs, signifies a strategic move to consolidate AI efforts and attract specialized talent [9]
DeepSeek apologized over the "Wang Yibo case"? Fake news!
Hu Xiu· 2025-07-03 14:51
Core Viewpoint
- The news that DeepSeek apologized for associating Wang Yibo with the "Li Aiqing corruption case" is false; no official apology was found on any of DeepSeek's platforms [1][2]

Group 1: Incident Overview
- DeepSeek reportedly apologized for linking Wang Yibo to the corruption case due to content review oversights, claiming the association harmed his reputation [1]
- Despite widespread media coverage of the alleged apology, no official statement or evidence of such an apology was found on DeepSeek's official channels [1][2]

Group 2: AI Model Implications
- The incident highlights the challenges AI models face in separating truth from falsehood in an environment filled with misinformation, a "garbage in, garbage out" effect [8]
- AI's inability to effectively verify information can result in the propagation of false narratives, underscoring the need for improved accuracy in AI-generated content [8][9]
- The experience of news professionals indicates that relying on AI for content generation can reduce efficiency, as significant time must be spent verifying AI-generated information [8]
DeepSeek is hiring overseas
news flash· 2025-07-03 11:37
News from July 3: DeepSeek has recently been recruiting heavily on LinkedIn. Market observers believe DeepSeek may be hoping to attract talent from overseas. Over the past week, the Hangzhou-headquartered company posted 10 positions on LinkedIn, the Microsoft-owned job-hunting and social platform, its first job postings on the platform in months. (China Fund News)
DeepSeek joins the AI talent war, posting job listings on LinkedIn for the first time in months, aiming at top overseas talent
Hua Er Jie Jian Wen· 2025-07-03 07:22
Global competition for AI talent is heating up. Following OpenAI's and Meta's race to attract top AI talent, DeepSeek is now posting job listings on LinkedIn, likely seeking to draw talent from overseas.

As of Wednesday, the Hangzhou-headquartered company had posted 10 positions over the past week on LinkedIn, the Microsoft-owned job-hunting and social networking platform, its first postings there in months.

The listings include three positions focused on artificial general intelligence (AGI), based in Beijing and Hangzhou. All job descriptions are posted in Chinese.

[Screenshot: DeepSeek AI's LinkedIn page showing "10 results" for its global job listings, including front-end developer (前端开发工程师) and full-stack engineer (全栈工程师) roles based on-site in Hangzhou, Zhejiang]
Kimi and Minimax vie for "the next DeepSeek" mindshare
36Ke· 2025-07-01 08:41
Core Insights
- The emergence of DeepSeek has significantly altered the landscape of China's large model industry, shifting the focus from the previous "six small dragons" to the current "five major models" [1]
- Kimi and Minimax have recently made notable advancements, with Kimi launching the Kimi-Researcher model and Minimax introducing the Minimax-M1 inference model, both aiming to establish their presence in the competitive landscape [3][7]

Group 1: Kimi's Developments
- Kimi is focusing on agent technology, particularly deep research, targeting sectors like finance and academia, which allows it to differentiate from larger companies that focus on lifestyle services [3][7]
- The Kimi-Researcher model, based on end-to-end agentic reinforcement learning, has begun small-scale testing, showcasing its ability to conduct deep research tasks effectively [7][8]
- Kimi's model reportedly performs an average of 23 reasoning steps per task, plans 74 keywords, and identifies the top 3.2% of high-quality content from 206 websites, indicating a strong emphasis on practical utility and reliability [8][10]

Group 2: Minimax's Innovations
- Minimax has launched the Minimax-M1 model, which boasts one of the top two long-context understanding capabilities globally, with a total of 456 billion parameters and support for 1 million tokens of input length [11][20]
- The M1 model's performance in specialized context evaluations surpasses all open-source models, including DeepSeek-R1-0528 and Qwen3-235B, and is only slightly behind the state-of-the-art Gemini 2.5 Pro [11][20]
- Minimax is also making strides in agent and multimodal technologies, demonstrating practical applications such as AI-driven English learning content on social media platforms [13]

Group 3: Competitive Landscape and Future Outlook
- Competition in the large model sector is evolving, with Kimi and Minimax seeking to redefine their strategies in response to the dominance of larger players like DeepSeek [3][22]
- Both companies are aiming for a "turnaround" in the next phase of competition, focusing on their unique technological strengths and market positioning to capture user attention [22][30]
- The industry is witnessing a shift from mere parameter competition to a focus on capturing user perception and establishing a unique identity in the market [27][29]
A-shares close out the first half: BSE 50 Index up nearly 40% in six months, with the DeepSeek concept and military equipment restructuring concept leading first-half gains
Xin Hua Cai Jing· 2025-06-30 07:43
Market Performance
- The major stock indices in Shanghai and Shenzhen opened mixed on the 30th, with the Shanghai Composite Index slightly lower and the Shenzhen Component and ChiNext indices higher [1]
- By the close, the Shanghai Composite Index ended at 3444.43 points, up 0.59%, on trading volume of approximately 567.1 billion yuan [1]
- The Shenzhen Component Index closed at 10465.12 points, up 0.83%, on volume of approximately 919.7 billion yuan [1]
- The ChiNext Index closed at 2153.01 points, up 1.35%, on volume of approximately 462.2 billion yuan [1]
- The STAR Market Index closed at 1229.83 points, up 1.70%, on volume of approximately 112.8 billion yuan [1]
- The BSE 50 Index closed at 1447.18 points, up 0.52%, on volume of approximately 30.7 billion yuan [1]

Sector Performance
- Military industry stocks continued their strong performance, with the sector index rising for six consecutive trading days [1]
- The brain-computer interface sector opened significantly higher and climbed steadily during the morning session [1]
- Gaming stocks experienced volatility but held at high levels during the trading day [1]
- Other sectors, including photolithography machines, large aircraft, BC batteries, commercial aerospace, cultivated diamonds, exoskeleton robots, and electronic IDs, also saw significant gains [1]
- Financial stocks, including banks and securities firms, slipped slightly, but the overall decline was minimal [1]

Half-Year Performance
- The Shanghai Composite Index rose 2.76% in the first half of the year, while the Shenzhen Component Index gained 0.49% [2]
- The ChiNext Index and STAR Market Index rose 0.53% and 9.93%, respectively, over the same period [2]
- The BSE 50 Index posted a remarkable 39.45% gain in the first half of the year [2]
- Sectors such as the DeepSeek concept, military equipment restructuring, precious metals, controllable nuclear fusion, agricultural machinery, humanoid robots, the Xiaohongshu concept, brain-computer interfaces, AI agents, and rare earth permanent magnets showed strong performance year-to-date [2]

Institutional Insights
- Market volatility is expected to increase in July due to upcoming earnings, trade, and policy changes, presenting structural investment opportunities [3]
- Investors are advised to focus on sectors with high earnings certainty, such as semiconductor equipment and photovoltaic components, while also considering sectors that may benefit from policy support [3]
- Market sentiment is anticipated to keep improving, supported by domestic policy measures aimed at countering economic headwinds [3]
- A-share valuations remain attractive for medium- to long-term investment, with the current equity risk premium index at a favorable level [3]

Fundraising Trends
- Issuance of stock-based funds reached a near four-year high in the first half of the year, with 663 new funds established totaling 526.768 billion shares [5]
- The share of stock-based funds in total fund issuance has risen from 21.14% to 35.35% this year, while the share of bond funds has fallen significantly [5]

Futures Industry Performance
- In May, futures companies achieved a net profit of 820 million yuan, a year-on-year increase of 19.88% [6]
- Total operating income for futures companies in May was 3.172 billion yuan, up 2.03% year-on-year [6]
- For the first five months of 2025, futures companies reported cumulative operating income of 15.247 billion yuan, up 5.40% year-on-year, and net profit of 4.084 billion yuan, up 34.56% [6]
Choosing the right large language model: Llama, Mistral, and DeepSeek
36Ke· 2025-06-30 05:34
Core Insights
- Large Language Models (LLMs) have gained popularity and are foundational to AI applications, with uses ranging from chatbots to data analysis [1]
- The article analyzes and compares three leading open-source LLMs: Llama, Mistral, and DeepSeek, focusing on their performance and technical specifications [1]

Group 1: Model Specifications
- Each model series offers different parameter sizes (7B, 13B, up to 65-70B), with the number of parameters directly determining the computational cost (FLOPs) of inference [2]
- For instance, the 7B Llama and Mistral models require approximately 14 billion FLOPs per generated token, while the larger Llama-2-70B requires about 140 billion FLOPs per token, making it ten times more computationally intensive [2]
- DeepSeek offers a 7B version and a larger 67B version, with computational requirements similar to Llama's 70B model [2]

Group 2: Hardware Requirements
- Smaller models (7B-13B) can run on a single modern GPU, while larger models require multiple GPUs or specialized hardware [3][4]
- For example, Mistral 7B requires about 15GB of GPU memory, while Llama-2-13B needs approximately 24GB [3]
- The largest models (65B-70B) need 2-4 GPUs or dedicated accelerators due to their high memory requirements [4]

Group 3: Memory Requirements
- The raw memory required for inference grows with model size: 7B models occupy around 14-16GB and 13B models around 26-30GB [5]
- Fine-tuning requires additional memory for optimizer states and gradients, often 2-3 times the memory of the model weights [6]
- Techniques like LoRA and QLoRA are popular for reducing memory usage during fine-tuning: they freeze most weights and train a small number of additional parameters [7]

Group 4: Performance Trade-offs
- In production, there is a trade-off between latency (the time for a single input to produce a result) and throughput (the number of results produced per unit of time) [9]
- For interactive applications like chatbots, low latency is crucial, while batch processing tasks prioritize high throughput [10][11]
- Smaller models (7B, 13B) generally have lower per-token latency than larger models (70B), which may generate only a few tokens per second due to their higher computational demands [10]

Group 5: Production Deployment
- All three models are compatible with mainstream open-source tooling and have active communities [12][13]
- Deployment options include local GPU servers, cloud inference on platforms like AWS, and even high-end CPUs for the smaller models [14][15]
- The models support quantization techniques, enabling efficient deployment and integration with various serving frameworks [16]

Group 6: Safety Considerations
- Open-source models lack the robust safety features of proprietary models, so deployments need their own safety layers [17]
- This may include content filtering systems and rate limiting to prevent misuse [17]
- Community efforts are under way to improve the safety of open models, but they still lag behind proprietary counterparts in this regard [17]

Group 7: Benchmark Performance
- Despite their smaller size, these models perform well on standard benchmarks, with Llama-3-8B achieving around 68.4% on MMLU, 79.6% on GSM8K, and 62.2% on HumanEval [18]
- Mistral 7B scores approximately 60.1% on MMLU and 50.0% on GSM8K, while DeepSeek excels with 78.1% on MMLU and 85.5% on GSM8K [18][19][20]
- The performance of these models reflects significant advances in model design and training techniques, allowing them to compete with much larger models [22][25]
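The compute and memory figures quoted above follow standard rules of thumb rather than measured engine numbers: roughly 2 FLOPs per parameter per generated token for a dense model, and weight memory equal to parameter count times bytes per parameter. A minimal sketch of that arithmetic, assuming fp16 weights and ignoring activation and KV-cache overhead:

```python
# Back-of-envelope sizing for dense LLM inference. Assumptions (rules of
# thumb, not measured numbers):
#   - forward pass costs ~2 FLOPs per parameter per generated token
#   - weight memory = parameter count x bytes per parameter (fp16 = 2 bytes),
#     ignoring activations and the KV cache

def flops_per_token(n_params: float) -> float:
    """Approximate FLOPs to generate one token with a dense model."""
    return 2.0 * n_params

def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GB needed just to hold the weights (fp16 by default)."""
    return n_params * bytes_per_param / 1e9

for name, n in [("7B", 7e9), ("13B", 13e9), ("70B", 70e9)]:
    print(f"{name}: ~{flops_per_token(n) / 1e9:.0f} GFLOPs/token, "
          f"~{weight_memory_gb(n):.0f} GB weights (fp16)")
```

With these estimates, a 7B model comes out at ~14 GFLOPs per token and ~14 GB of weights, matching the article's figures, and a 70B model at ten times both, which is why the 65B-70B tier needs 2-4 GPUs while 7B-13B models fit on one.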