Forward Future Live | 10/31/25
Matthew Berman· 2025-10-31 16:37
Download Humanity's Last Prompt Engineering Guide (free) 👇🏼 https://bit.ly/4kFhajz
Download The Matthew Berman Vibe Coding Playbook (free) 👇🏼 https://bit.ly/3I2J0YQ
Join My Newsletter for Regular AI Updates 👇🏼 https://forwardfuture.ai
Discover The Best AI Tools 👇🏼 https://tools.forwardfuture.ai
My Links 🔗
👉🏻 X: https://x.com/matthewberman
👉🏻 Forward Future X: https://x.com/forward_future_
👉🏻 Instagram: https://www.instagram.com/matthewberman_ai
👉🏻 Discord: https://discord.gg/xxysSXBxFW
👉🏻 TikTok: https://www ...
AI News: 1x Neo Robot, Extropic TSU, Minimax M2, Cursor 2, and more!
Matthew Berman· 2025-10-30 20:16
Robotics & Automation
- 1X's Neo robot is available for pre-order at $20,000, or $499 per month, with availability expected in early 2026 [1][2]
- Neo weighs 66 pounds and can lift 150 pounds, with 22 degrees of freedom of hand movement, and operates at 22 dB [2][3]
- The promise of humanoid robots is autonomy: running 24 hours a day [4]
Computing & AI
- Extropic is developing a thermodynamic computing platform (the TSU) that it claims is up to 10,000 times more efficient than traditional CPUs and GPUs [7][8]
- MiniMax's M2, an open-source model from China, achieved a new high intelligence score with only 10 billion active parameters out of 200 billion total [10]
- IBM released Granite 4.0 Nano, a family of small language models with 1.5 billion and 350 million parameters, designed for edge and on-device applications [19][20]
- Cursor 2.0 introduces Composer, a faster model for low-latency agentic coding, along with a multi-agent interface [26][27]
Semiconductor Industry
- Substrate, a US-based startup, is building a next-generation foundry that uses advanced X-ray lithography to print features at the 2-nanometer node and below [30][31]
Corporate Strategy & Employment
- Nvidia took a billion-dollar stake in Nokia, lifting Nokia's shares 22%, and the companies are partnering to develop 6G technology [17]
- Amazon is laying off 14,000 corporate employees, partly attributed to efficiency gains from AI but also seen as a correction for overhiring [34][37]
- Tesla could potentially leverage the compute of its idle cars, estimated at roughly 1 kilowatt per car, to create a giant distributed inference fleet [23][24]
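The Tesla fleet-compute idea above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming the video's figure of ~1 kW of usable compute per idle car; the fleet size used in the example is a hypothetical illustration, not a number from the video:

```python
# Back-of-envelope estimate of a distributed inference fleet built from
# idle cars, at an assumed ~1 kW of usable compute per vehicle.
def fleet_power_gw(num_cars: int, kw_per_car: float = 1.0) -> float:
    """Total fleet compute power in gigawatts (kW -> GW)."""
    return num_cars * kw_per_car / 1_000_000

# e.g. a hypothetical 5 million idle cars at 1 kW each:
print(fleet_power_gw(5_000_000))  # prints 5.0 (GW)
```

For scale, 5 GW would rival several large dedicated data-center campuses, which is why the idea keeps coming up despite the obvious networking and latency obstacles.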
Sam Altman reveals exact date of intelligence explosion
Matthew Berman· 2025-10-29 19:01
AI Development Timeline
- OpenAI estimates an intern-level AI research assistant by September 2026 and a legitimate AI researcher by March 2028 [1][2][3][23]
- The industry anticipates that automated AI research will lead to an intelligence explosion, rapidly advancing toward superintelligence [4][5]
AI Task Capabilities
- AI can currently complete tasks autonomously over seconds, minutes, or hours; the industry is aiming for days, weeks, months, and years [7]
- Efficiency in token usage and compute over a task's duration is as important as the duration itself [8][9]
AI Model Trustworthiness
- OpenAI is exploring ways to keep models aligned with human incentives by letting them think freely without intervention, to gain insight into their thought processes [15][17][18][20][21]
- OpenAI emphasizes controlled privacy for AI models so researchers retain the ability to understand their inner processes [19][20]
Infrastructure and Investment
- OpenAI's infrastructure plan includes building a factory that produces AI factories, with a potential output of a gigawatt per week [25]
- OpenAI's current infrastructure projects are valued at $1.4 trillion [24]
Organizational Structure
- OpenAI's structure consists of the OpenAI Foundation (a nonprofit) governing the OpenAI Group (a public benefit corporation), with the nonprofit owning 26% of the PBC's equity [28][29]
- The OpenAI Foundation has a $25 billion commitment to health/curing diseases and AI resilience [29]
Concerns and Future Development
- OpenAI acknowledges concerns about the addictive potential of AI products like Sora and chatbots [30][31][32][33]
- OpenAI plans to continue supporting GPT-4o while developing better models [35][36]
- OpenAI expects significant advancements in model capability within six months [40]
Forward Future Live | 10/24/25
Matthew Berman· 2025-10-24 16:50
Inside the World's FASTEST Data Center | Cerebras
Matthew Berman· 2025-10-23 20:12
You open your AI chatbot. You type in your prompt and you hit enter. What happens next? We're pulling back the veil on the hidden backbone behind every AI response you see. Beneath the Oklahoma sky sits an unassuming concrete building: an AI factory built for one purpose. Speed. I'm standing in front of Cerebras' brand-new data center, which they just did the ribbon cutting for, and now they are serving 44 exaflops of new compute power to their customers. It is the fastest AI infrastructure on Earth ...
New DeepSeek just did something crazy...
Matthew Berman· 2025-10-22 17:15
DeepSeek OCR Key Features
- DeepSeek OCR is a novel approach to image recognition that compresses text by 10x while maintaining 97% accuracy [2]
- The model uses a vision language model (VLM) to compress text into an image, fitting roughly 10 times more text into the same token budget [6][11]
- The method achieves 96%+ OCR decoding precision at 9-10x text compression, ~90% at 10-12x, and ~60% at 20x [13]
Technical Details
- The model splits the input image into 16x16 patches [9]
- It uses SAM, an 80-million-parameter model, to capture local details [10]
- It uses CLIP, a 300-million-parameter model, to encode how the patches fit together [10]
- The output is decoded by DeepSeek-3B, a 3-billion-parameter mixture-of-experts model with 570 million active parameters [10]
Training Data
- The model was trained on 30 million pages of diverse PDF data from the internet, covering approximately 100 languages [21]
- Chinese and English account for approximately 25 million pages; other languages account for the remaining 5 million [21]
Potential Impact
- This technology could potentially 10x the context window of large language models [20]
- Andrej Karpathy suggests that pixels might be better inputs to LLMs than text tokens [17]
- An entire encyclopedia could be compressed into a single high-resolution image [20]
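The token savings implied by the compression figures above can be sketched directly. A minimal illustration, assuming that a page costing N text tokens is instead carried by N / ratio vision tokens; the function name is illustrative, not from the DeepSeek codebase:

```python
# Sketch of the token budget math behind optical text compression:
# text that would cost `text_tokens` tokens is rendered to an image
# and encoded as roughly text_tokens / compression_ratio vision tokens.
def vision_tokens(text_tokens: int, compression_ratio: float) -> int:
    """Vision tokens needed to carry text_tokens at a given compression."""
    return max(1, round(text_tokens / compression_ratio))

# Quoted operating points: ~10x at 96%+ precision, ~20x at ~60% precision.
for ratio, precision in [(10, "96%+"), (20, "~60%")]:
    print(f"{ratio}x -> {vision_tokens(10_000, ratio)} tokens at {precision}")
```

The trade-off is visible immediately: doubling the compression from 10x to 20x halves the token count again, but decoding precision drops from 96%+ to around 60%.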
OpenAI just changed web browsing forever... (ChatGPT Atlas)
Matthew Berman· 2025-10-21 20:23
OpenAI just dropped their web browser and it is incredibly impressive. They have completely reimagined what the web browser should be, from the search and URL bar all the way to what an actual assistant controlling your browser will be like. So in this video I'm going to break it all down for you. Let's get started. This is ChatGPT Atlas, an AI-native browser from OpenAI. They just finished the live stream. It was Sam Altman and a bunch of the folks who were responsible for creating Atlas, and they showed o ...
Andrej Karpathy devastates AI optimists...
Matthew Berman· 2025-10-20 21:22
AGI Timelines and Agent Development
- Andrej Karpathy believes AGI (Artificial General Intelligence) is still more than 10 years away [1]
- The industry broadly sees 2025-2035 as the decade of agents, but making agents genuinely usable across the whole economy still requires a great deal of development work [1]
- LLMs (Large Language Models) have made enormous progress in recent years, yet substantial groundwork remains: integration work, sensors and actuators for the physical world, societal work, safety work, and research [1]
Learning Approaches and Model Capabilities
- Karpathy argues that LLMs learn more like "ghosts" than animals; animals are born with large amounts of intelligence pre-wired by evolution [1][2]
- He is skeptical of reinforcement learning (RL), arguing it extracts a poor learning signal per unit of compute, and favors agentic interaction: giving agents a "playground" in which to experiment and learn [2]
- The industry is exploring system prompt learning, a new learning paradigm that shapes model behavior by editing the system prompt, similar to a human taking notes [2][3]
Model Size and Memorization
- The industry trend is for model sizes to grow and then shrink; the "cognitive core" concept strips away an LLM's encyclopedic knowledge to make it better at generalization [3]
- Karpathy criticizes the current agent industry for over-investing in tooling relative to present capability levels, and stresses collaborating with LLMs by combining human strengths with LLM strengths [3]
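The system prompt learning idea above can be made concrete with a toy sketch: rather than updating weights, the agent accumulates "lessons" in its own system prompt. Everything here is a hypothetical illustration of the paradigm, not code from Karpathy or any real agent framework:

```python
# Toy sketch of "system prompt learning": the agent 'learns' by
# appending distilled lessons to its system prompt, like note-taking,
# instead of by gradient updates to model weights.
class NoteTakingAgent:
    def __init__(self, base_prompt: str):
        self.base_prompt = base_prompt
        self.lessons: list[str] = []

    def record_lesson(self, lesson: str) -> None:
        # In a real system the model itself would distill the lesson
        # from the interaction transcript; here we append it verbatim.
        self.lessons.append(lesson)

    def system_prompt(self) -> str:
        notes = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"{self.base_prompt}\nLessons learned so far:\n{notes}"

agent = NoteTakingAgent("You are a coding assistant.")
agent.record_lesson("Run the test suite before claiming a fix works.")
print(agent.system_prompt())
```

The appeal of the scheme is that the "learning" is inspectable and reversible: the accumulated notes are plain text that a human can read, edit, or delete, unlike knowledge baked into weights.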
AI News: NVIDIA DGX-1, GPT-6 2025, Claude Skills, Waymo DDOS, Datacenters in Space, and more!
Matthew Berman· 2025-10-18 15:34
This video is brought to you by Stack AI. More on them later. GPT-6 might be coming by the end of the year. This guy on CNBC said he just got done talking to Brad Gerstner, a prominent figure in Silicon Valley, who said GPT-6 is coming by the end of this year. That's two and a half months from now. That comes right on the heels of GPT-5, and honestly, I don't think it's going to happen. It would be very weird to have this massive launch, GPT-5, really a fundamental shift in the way users interact wit ...
Anthropic Founder says we should be afraid....
Matthew Berman· 2025-10-17 14:30
Make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine. This is from Anthropic co-founder Jack Clark. He recently published some of his comments from a talk he gave in Berkeley, in which he conveys his fear of this steady march towards artificial general intelligence. So we're going to go over what he's so afraid of. Then we're going to give the flip side, to show who thinks this is just fear-mongering and regulatory capture. Now, before I get ...