Algorithm Problems
An intern used AI to cheat in an interview and was spotted at a glance.
猿大侠 · 2025-06-10 04:54
This article is reposted from the 数据结构和算法 (Data Structures and Algorithms) account, by 博哥. Wang Yibo, author of 《算法秘籍》, focuses on hot topics at major internet companies and algorithm problem walkthroughs.

Recently a netizen posted that their team had long been short of interns. A few days ago they finally got a candidate, and the interview seemed fine, but the candidate was exposed during the cross-interview: the reflection in their glasses revealed they were using AI to cheat. The interviewer's eyes were sharp indeed.

Video interviews are gradually becoming the norm, which I think is a good thing since it saves candidates the back-and-forth travel, but they also have a downside: cheating is easier. One netizen asked whether simply taking the glasses off would get you through. In practice, cheating with AI in an interview is still hard, because natural conversation differs quite a bit from reading off a script; any reasonably attentive interviewer will notice.

-------------- Today's algorithm problem --------------

Today's problem is LeetCode 1281: Subtract the Product and Sum of Digits of an Integer, difficulty Easy.

Given an integer n, compute and return the difference between the product of its digits and the sum of its digits.

Example 1:
Input: n = 234
Output: 15
Explanation:
Product of digits = 2 * 3 * 4 = 24
Sum of digits = 2 + 3 + 4 = 9
Result = 24 - 9 = 15

Example 2:
Input: n ...
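The computation in the example above is a simple digit loop; a minimal sketch in Python (the function name is mine, not from the article):

```python
def subtract_product_and_sum(n: int) -> int:
    """Return (product of digits of n) - (sum of digits of n)."""
    product, total = 1, 0
    while n > 0:
        digit = n % 10      # peel off the lowest digit
        product *= digit
        total += digit
        n //= 10
    return product - total

print(subtract_product_and_sum(234))  # 2*3*4 - (2+3+4) = 24 - 9 = 15
```

Extracting digits with `% 10` and `// 10` avoids converting the integer to a string, and runs in O(number of digits).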
So they had been fully staffed all along; the interviews they kept scheduling were just "walking the dog" (stringing candidates along).
猿大侠 · 2025-05-31 12:55
This article is also reposted from 数据结构和算法 (Data Structures and Algorithms), by 博哥, author of 《算法秘籍》.

Some people who can't find a job fall into self-pity and blame everyone around them; they start doubting themselves, some give up entirely, and some grow so anxious that it develops into depression. In reality, not landing the job is not always your fault: the company may already be fully staffed and keeps recruiting mainly as publicity for itself. In that case you could never pass; even Einstein would not get the offer. A netizen below revealed the inside story of campus recruiting: it was all a routine.

-------------- Today's algorithm problem --------------

Today's problem is LeetCode 209: Minimum Size Subarray Sum.

Problem description
Source: LeetCode problem 209
Difficulty: Medium

Given an array of n positive integers and a positive integer target, find the minimal-length contiguous subarray whose sum is greater than or equal to target, and return its length. If no such subarray exists, return 0.

Example 1:
Input: targ ...
Explanation: the subarray [4,3] is the shortest subarray meeting the condition.

Example 2:
Input: target = 4, nums = [1,4,4]
Output: 1
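Since all elements are positive, this problem is commonly solved with a sliding window; a minimal sketch (function and variable names are mine, not from the article):

```python
def min_sub_array_len(target: int, nums: list[int]) -> int:
    """Shortest contiguous subarray with sum >= target, or 0 if none exists."""
    left = 0
    window_sum = 0
    best = len(nums) + 1  # sentinel: longer than any valid subarray
    for right, value in enumerate(nums):
        window_sum += value
        # Shrink from the left while the window still satisfies the target.
        while window_sum >= target:
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return best if best <= len(nums) else 0

print(min_sub_array_len(4, [1, 4, 4]))  # 1 (the subarray [4] already suffices)
```

Each index enters and leaves the window at most once, so the running time is O(n) despite the nested loop.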
A school cracks down on students staying out overnight and strictly forbids off-campus internships.
猿大侠 · 2025-05-27 03:14
Group 1
- The article discusses the challenges faced by students from non-prestigious universities in securing internships, particularly in the context of strict school policies against external internships [1]
- Many students express frustration over the school's restrictions, which prevent them from gaining the practical experience necessary for employment [1]
- The author reflects on their own experience, noting that while their school did not encourage internships, it also did not prohibit them, recognizing the importance of practical experience for job placement [1]

Group 2
- The article presents a coding problem from LeetCode, specifically problem 1509, which involves minimizing the difference between the maximum and minimum values in an array after performing at most three operations [2][12]
- Each operation allows changing any element of the array to any value, with the goal of achieving the smallest possible difference between the maximum and minimum values [12]
- The solution sorts the array and considers the different ways of discarding up to three elements from the two ends to minimize the difference, which matters only when the array length exceeds four [13]
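The sort-and-drop strategy described in Group 2 can be sketched as follows, assuming the LeetCode 1509 statement (at most three elements may be changed; the function name is mine):

```python
def min_difference(nums: list[int]) -> int:
    """Smallest (max - min) achievable after changing at most 3 elements."""
    if len(nums) <= 4:
        return 0  # change all but one element to equal the remaining one
    nums = sorted(nums)
    # Changing 3 elements is equivalent to discarding i of the smallest
    # values and 3 - i of the largest, for i in 0..3; take the best split.
    return min(nums[-4 + i] - nums[i] for i in range(4))
```

Sorting dominates the cost, so the running time is O(n log n); the final scan checks only four candidate splits.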
Shock upset! ByteDance Seed solved only one easy "check-in" problem in the CCPC finals, while DeepSeek R1 scored zero?
AI前线 · 2025-05-16 07:48
Core Viewpoint
- The performance of large language models (LLMs) in algorithm competitions, specifically the China Collegiate Programming Contest (CCPC), has revealed significant limitations: while these models excel at certain tasks, they struggle with the unique, creative problem-solving that competitive programming requires [10][11]

Group 1: Competition Overview
- The 10th China Collegiate Programming Contest (CCPC) recently took place, with ByteDance's Seed sponsoring and participating through Seed-Thinking, which managed to solve only a single easy "check-in" problem [1][3]
- The number of problems in a CCPC final typically ranges from 10 to 13, but specific details about this year's problem set have not been disclosed [1]

Group 2: Model Performance
- Various models, including Seed-Thinking, o3, o4, Gemini 2.5 Pro, and DeepSeek R1, took part; most struggled significantly, and DeepSeek R1 failed to solve any problem at all [5][9]
- The models' performances were evaluated against expectations based on their previous ratings, and many observers expressed surprise at how low the scores were [3][11]

Group 3: Model Architecture and Training
- Seed-Thinking employs an MoE architecture with 200 billion total parameters and 20 billion active parameters, integrating various training methods for STEM problems and logical reasoning [8]
- o3 features a specialized reasoning architecture with a 128-layer Transformer, while o4-mini is optimized for efficiency, reducing parameters significantly while maintaining performance [8]
- Gemini 2.5 Pro supports multi-modal inputs and has a large context window, allowing it to handle long documents and codebases [8]

Group 4: Insights on Model Limitations
- The CCPC results indicate that large models have inherent weaknesses in solving novel algorithmic problems, weaknesses their training may not adequately address [10][11]
- Competitive programming demands problem-solving skills that differ from the models' training data, making it hard for them to perform well [11][12]

Group 5: Comparative Analysis
- A benchmark test by Microsoft on various models showed that while all performed well on known problems, their success rates dropped sharply on unseen problems, particularly in the medium and hard categories [14][17]
- Models running in reasoning modes clearly outperformed their base versions, highlighting the importance of reasoning capabilities for complex algorithmic challenges [17][18]