The landmark in today's digital landscape | Ananya Kadam | TEDxUniversity of Birmingham Dubai
TEDx Talks· 2025-07-01 15:55
Resilience. When you hear the word resilience, what you're probably thinking of is bouncing back, falling down six times, and getting up seven. It's the story of the underdog who refuses to quit, the phoenix that rises from the ashes, the comeback kid who proves everyone wrong. And you're absolutely spot on. That is resilience. We often relate this word to great examples like JK Rowling, who got rejected by 12 publishers before Harry Potter even got accepted, or Steve Jobs, who got kicked out of the company he f ...
X @BREAD | ∑:
BREAD | ∑:· 2025-07-01 14:25
"Most viewers stopped watching at 0:01." Zoomer attention span is worse than SOL shitter hold time 😭 https://t.co/El0ZN4AGwn
Quoting BREAD | ∑: (@0xBreadguy): So beginneth the tiktok arc. https://t.co/1orP9nSmfT https://t.co/wxNFiwM4ie ...
Rewired Minds: ADHD, Attention, and the AI Generation | Komila Alikhodjaeva | TEDxYouth@TKA
TEDx Talks· 2025-06-30 16:13
Good afternoon. Hello everyone. My name is Camila. And it just took me approximately 8 seconds to get your attention. According to statistics, I have 47 more seconds of your focus. After that, you likely won't be listening to me. So I'd better start. Have you ever sat down to write an email, then you see a notification from Instagram and you're like, "Okay, let me check it real fast." You see the cute friendship reels and you're like, "Oh, I haven't responded to that video message from my friends on Tel ...
IAS Launches First-to-Market AI-Powered Social Attention Measurement for Snap
Prnewswire· 2025-06-30 12:00
Core Insights
- Integral Ad Science (IAS) has formed a strategic partnership with Snap Inc. and Lumen Research to introduce a customized attention measurement tool for Snapchat campaigns, enabling advertisers to gain social attention metrics through a unique Snapchat attention score within the IAS Signal platform [1][3][5]

Company Developments
- The Snap Attention Measurement combines Lumen's eye-tracking technology with IAS's AI-powered media quality data, allowing advertisers to move beyond traditional viewability metrics and obtain deeper insights into consumer behavior [4][6]
- IAS CEO Lisa Utzschneider emphasized the importance of understanding consumer engagement with media, highlighting the partnership's role in providing a comprehensive view of attention to enhance media performance on social platforms [3][5]

Industry Impact
- This partnership is seen as a significant advancement in the Attention Economy, giving advertisers on Snapchat the ability to measure how attention influences consumer actions [5]
- The new attention measurement tool will be integrated into IAS Signal, which is designed to deliver essential data and insights for optimizing digital campaigns across various channels [5][9]
From Language to Consciousness: One Small Step, but How Far Does AI Really Have to Go?
Tencent Research Institute· 2025-06-26 07:58
The following article is from nextquestion (追问nextquestion). By George Musser; compiled and translated by Zhang Xuhui. The ultimate dream of artificial intelligence has never been limited to building a game engine that can defeat a chess grandmaster, or designing a silver-tongued chatbot that beguiles its users. Its true mission is to serve as a mirror of human intelligence, helping us understand ourselves more deeply. Nor is the goal of researchers merely narrow AI; what they pursue is artificial general intelligence (AGI), an intelligent system with human-like adaptability and creativity. Admittedly, the problem-solving abilities of today's large language models (LLMs) have impressed most researchers, but the models still have obvious shortcomings, such as the lack of continual learning: once training on books, web text, and other material is complete, their knowledge base is frozen and can no longer be "updated." As Ben Goertzel of the AI company SingularityNET vividly puts it: "You can't send a large language model to college; it couldn't even get into kindergarten." They cannot pass the comprehensive test dubbed the "robot gaokao." Having "mastered" language, how far are they from simulating thought? In language processing, today's LLMs do exhibit what experts call the "formal competence" of AGI: even if you provide ...
Xiaomi's Xiao AI: High-Performance On-Device LLM Inference Under Resource Constraints
AI Frontline (AI前线)· 2025-06-25 04:15
Interviewee: Yang Yongjie, head of on-device AI for Xiaomi's Xiao AI. Editor: Luo Yanshan. InfoQ recently spoke with Yang Yongjie about how his team drives the engineering of large models on device across three layers: architecture, system, and algorithms. With a self-developed inference framework, the team reached real-time inference at 180 tokens/s, used LoRA plug-ins on top of a shared base model to support reuse across multiple business lines, and pushed optimization of both inference performance and resource footprint. Looking ahead, Yang believes breakthroughs for on-device LLMs will hinge on two things: hardware improvements targeted at large-model workloads, and the evolution of model architectures, such as Linear Attention. On June 27-28, at the AICon Global AI Development and Application Conference in Beijing, Yang will give the talk "Xiao AI's Practice in High-Performance On-Device LLM Inference," sharing how his team's self-developed inference framework has been deployed in real business. It will cover architecture design, quantization strategy, parallel decoding, cross-chip compatibility, and hot-update strategy, drawing on real system-optimization pain points to map the key path to commercializing on-device LLMs. Details: https://aicon.infoq.cn/2025/beijing/presentation/6444 InfoQ: On-device large models ...
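The "LoRA plug-ins plus a shared base model" idea mentioned in the interview can be sketched roughly as follows: one frozen base weight matrix is shared by every business line, and each business loads only a small low-rank adapter. This is a generic illustration of the LoRA pattern, not Xiaomi's actual framework; all sizes and names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64     # hidden size of the shared base model (illustrative)
RANK = 4   # LoRA rank: each adapter adds only 2*D*RANK parameters per matrix

# One frozen weight matrix, shared across all business lines.
W_base = rng.standard_normal((D, D))

class LoRAAdapter:
    """Per-business low-rank delta: effective weight = W_base + B @ A."""
    def __init__(self, seed: int):
        r = np.random.default_rng(seed)
        self.A = r.standard_normal((RANK, D)) * 0.01
        self.B = r.standard_normal((D, RANK)) * 0.01

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Base path plus a low-rank correction; W_base itself never changes.
        return x @ W_base.T + (x @ self.A.T) @ self.B.T

# Two hypothetical businesses reuse the same base weights via different adapters.
asr_adapter = LoRAAdapter(seed=1)
chat_adapter = LoRAAdapter(seed=2)

x = rng.standard_normal((1, D))
print(asr_adapter.forward(x).shape)   # (1, 64)

# Memory comparison: a full per-business weight copy vs. adapter-only storage.
full_params = D * D
adapter_params = 2 * D * RANK
print(f"adapter is {adapter_params / full_params:.1%} of a full matrix copy")
```

Swapping adapters at request time is what makes multi-business reuse cheap: only the small A and B matrices differ per business, while the large base weights stay resident once.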
Rebuilding Concentration: How Can Schools Help? | Michelle Lee | TEDxLANNA Intl School Youth
TEDx Talks· 2025-06-23 15:39
Attention Span & Social Media Impact
- The average human attention span is reportedly 8.25 seconds, shorter than a goldfish's [1]
- Social media, particularly through dopamine reward loops, significantly contributes to the decline in attention spans [2]
- Infinite scrolling, exemplified by its inventor's regret, highlights the addictive nature of quick, constant change [3]

Educational Implications & Proposed Solutions
- Declining attention spans pose a problem in schools, necessitating proactive interventions [4]
- Schools should incorporate scheduled breaks to optimize student concentration and improve working memory [6][7]
- Three 20-minute breaks throughout the day are suggested: snack/free time, movement/stretching, and lunch/free time [8]
- Mandated meditation sessions, even for just 5 minutes, can enhance academic performance and regulate emotions [9][10][11]
- Training the brain through interactive methods, like journaling and doodling, can improve focus and information recall [12][13][14]

Call to Action for Schools
- Schools are urged to prioritize student mental well-being and concentration by shifting focus from appearance to mental health [15]
- Small changes, such as more breaks with meditation and snacks, can significantly impact student well-being and concentration [15]
The Age of Distraction | Mariam Neri | TEDxYouth@OCSA
TEDx Talks· 2025-06-23 15:34
Hello everyone. My name is Mariam Neri, and today I will be talking about the age of distraction. Are we consuming, or being consumed? Now I want you guys, the audience, to picture this. You're just sitting down and chilling, and you get a notification on your phone. One video leads to another when you're scrolling on TikTok. One episode becomes another episode. Next thing you know, it's been hours, the sun's gone down, and your brain feels like mush. By show of hands, how many times has this happened to you? Oh, ...
MiniMax Takes the Fight to DeepSeek
Jing Ji Guan Cha Wang· 2025-06-18 11:32
Core Viewpoint
- MiniMax has launched its self-developed MiniMax M1 model, which competes directly with DeepSeek R1 and Google's Gemini 2.5 Pro in key technical specifications, architecture design, context-processing capability, and training cost [1][2]

Group 1: Model Specifications
- MiniMax M1 supports a context length of 1 million tokens, 8 times DeepSeek R1's 128,000 tokens and only slightly behind Google's Gemini 2.5 Pro [1]
- MiniMax M1 has 456 billion total parameters, with 45.9 billion activated per token, while DeepSeek R1 has 671 billion total parameters but activates only 37 billion per token [1]

Group 2: Cost Efficiency
- MiniMax M1 consumes only 25% of the floating-point operations of DeepSeek R1 when generating 100,000 tokens, and requires less than half the computational power for inference tasks of 64,000 tokens [2]
- The training cost for MiniMax M1 was only $535,000, significantly lower than initial expectations and much less than the $5-6 million GPU cost of training DeepSeek R1 [2]

Group 3: Pricing Strategy
- MiniMax M1 has a tiered pricing model for its API services based on the number of input or output tokens; the first tier charges 0.8 yuan per million input tokens and 8 yuan per million output tokens, lower than DeepSeek R1's pricing [3]
- The pricing for the first two tiers of MiniMax M1 is lower than that of DeepSeek R1, and the third tier for long text is currently not covered by DeepSeek [3]

Group 4: Technology Innovations
- MiniMax M1's capabilities are supported by two core technologies: the linear attention mechanism (Lightning Attention) and the reinforcement learning algorithm CISPO, which enhances efficiency and stability in training [2]
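The efficiency claims above rest on linear attention, whose cost grows linearly with sequence length because a fixed-size running state replaces the full attention matrix. The sketch below shows a generic causal linear-attention recurrence with the common feature map phi(x) = elu(x) + 1; it is an illustration of the technique's shape, not MiniMax's actual Lightning Attention kernel.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention: O(n) time, O(1) state per step."""
    # Positive feature map phi(x) = elu(x) + 1.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qf, Kf = phi(Q), phi(K)
    d_k, d_v = Qf.shape[-1], V.shape[-1]
    S = np.zeros((d_k, d_v))   # running sum of phi(k_t) v_t^T
    z = np.zeros(d_k)          # running sum of phi(k_t), for normalization
    out = np.empty_like(V)
    for t in range(Q.shape[0]):
        # Constant-size state update; no n x n attention matrix is formed.
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (16, 8)
```

Because the state (S, z) has a fixed size regardless of how many tokens have been seen, generating token 100,000 costs the same per step as token 100, which is the structural reason long-context generation can be far cheaper than with softmax attention.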
A 20-Billion-Yuan AI Unicorn Fights Back: MiniMax's First Reasoning Model Targets DeepSeek, With a Compute Cost of Only $530,000
Hua Er Jie Jian Wen· 2025-06-17 11:57
Core Insights
- MiniMax, a Chinese AI startup valued at 20 billion RMB, has launched its first inference model, M1, which challenges leading models like DeepSeek with significantly lower training costs and superior efficiency [1][6]

Performance and Efficiency
- M1 outperforms domestic closed-source models and approaches the performance of the best overseas models, surpassing DeepSeek, Alibaba, ByteDance, OpenAI, Google, and Anthropic in certain tasks [1]
- In terms of efficiency, M1 consumes less than 50% of the computational power of DeepSeek R1 when generating 64K tokens, and only 25% for 100K tokens [7]
- The model has a total of 456 billion parameters and supports context inputs of up to 1 million tokens, eight times that of DeepSeek R1 [3]

Cost Efficiency
- The entire training process for M1 used 512 NVIDIA H800 GPUs over three weeks, with a rental cost of approximately $537,400 (around 3.8 million RMB), an order of magnitude lower than initially expected [6]
- MiniMax developed a new reinforcement learning algorithm named CISPO, which achieved double the speed of ByteDance's recent DAPO algorithm, requiring only 50% of the training steps to reach similar performance [6]

Market Positioning
- MiniMax adopted a tiered pricing strategy for its API, making M1 more cost-effective than DeepSeek R1, especially in the input-length ranges of 0-32K and 32K-128K tokens [8]
- M1 is positioned as a "price killer" in the market, receiving positive feedback from developers for its cost-performance ratio [8]

Future Developments
- M1 is just the first product in a series of releases planned by MiniMax, which aims to introduce intelligent-agent applications and further updates to video and music model capabilities [9]
- The company believes M1's efficient architecture will provide unique advantages in future intelligent-agent applications that require extensive reasoning and integration of long-context information [9]
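A quick back-of-envelope check of the quoted training bill, using only the figures reported above (512 H800 GPUs, "over three weeks" taken as 21 days, ~$537,400 total); the implied hourly rental rate is a derived estimate, not a figure from the article.

```python
gpus = 512
days = 21                          # "over three weeks", taken literally
gpu_hours = gpus * days * 24       # total GPU-hours consumed
total_usd = 537_400

print(gpu_hours)                        # 258048 GPU-hours
print(round(total_usd / gpu_hours, 2))  # implied rate: ~$2.08 per GPU-hour
```

An implied rate of roughly two dollars per H800-hour is in the plausible range for bulk cloud GPU rental, which is consistent with the article's framing of the cost as unexpectedly low rather than anomalous.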