Workflow
Matthew Berman
AI News: Google's Suncatcher, OpenAI TEAR, Apple $1B Deal for Gemini, Vidu Q2, and more!
Matthew Berman 2025-11-07 00:47
Google aims to put massive AI data centers in space. This is not science fiction; it is something they are actually working on. The effort is called Project Suncatcher, and the gist is that they want to put data centers in space, connect those data centers with satellites, and power the satellites with solar energy. Here are the interesting bits from the announcement: in the right solar orbit, a solar panel can be up to eight times more productive than on Earth. So, as solar panels continue ...
Ex-OpenAI Founder Deposition is WILD
Matthew Berman 2025-11-04 20:09
Give Recraft a try, it's free to start at https://go.recraft.ai/berman Download One Hundred Ways to Use AI Guide 👇🏻 http://bit.ly/3WLNzdV Download Humanity's Last Prompt Engineering Guide (free) 👇🏻 https://bit.ly/4kFhajz Join My Newsletter for Regular AI Updates 👇🏻 https://forwardfuture.ai Discover The Best AI Tools 👇🏻 https://tools.forwardfuture.ai My Links 🔗 👉🏼 X: https://x.com/matthewberman 👉🏼 Forward Future X: https://x.com/forward_future_ 👉🏼 Instagram: https://www.instagram.com/matthewberman_ai 👉🏼 Disco ...
Anthropic's New Paper is WILD
Matthew Berman 2025-11-02 18:30
AI Model Capabilities
- Large language models (LLMs) are exhibiting human-like behaviors, suggesting they may be more than just next-word predictors [1]
- Anthropic's research indicates that LLMs might possess a form of introspective awareness, capable of identifying their own thoughts [2]
- Better, more intelligent models are more likely to recognize their own internal and injected thoughts, hinting at a correlation between intelligence and self-awareness [17]
- Post-training significantly enhances a model's introspective abilities; base pre-trained models show high false-positive rates and poor task performance [30]

Experiment Findings
- LLMs can detect injected thoughts, identifying unexpected patterns in their processing, such as recognizing all-caps text as "loud or shouting" [9][14]
- Models can sometimes distinguish between injected thoughts and their own prompt input, though not consistently [18][19]
- LLMs can be influenced by injected thoughts to the point of believing the injected thought was their own [23]
- Models can activate certain concepts (e.g., aquariums) when instructed to think about them, and to a lesser extent even when instructed not to [26]

Sponsor Information
- Vultr is highlighted as a cloud provider offering GPUs for AI projects, with 32 locations across six continents [11]
- Vultr provides $300 in credits for the first 30 days with code BERMAN300 [13]
A Look Inside the FASTEST Data Center in the WORLD
Matthew Berman 2025-10-31 17:25
What if you built a chip the size of a dinner plate, 50 times the size of a traditional chip? This is Cerebras' Wafer Scale Engine. And the size is not just for show: because the chip is that big, it can hold the memory on the chip itself, vastly reducing latency. This allows the chip to be up to 30 times faster than a traditional chip. To house this behemoth of a chip, Cerebras built out an incredible data center in Oklahoma City, and the CEO took me on a tour. This data center has two gigantic ...
Forward Future Live | 10/31/25
Matthew Berman 2025-10-31 16:37
Download Humanity's Last Prompt Engineering Guide (free) 👇🏻 https://bit.ly/4kFhajz Download The Matthew Berman Vibe Coding Playbook (free) 👇🏻 https://bit.ly/3I2J0YQ Join My Newsletter for Regular AI Updates 👇🏻 https://forwardfuture.ai Discover The Best AI Tools 👇🏻 https://tools.forwardfuture.ai My Links 🔗 👉🏼 X: https://x.com/matthewberman 👉🏼 Forward Future X: https://x.com/forward_future_ 👉🏼 Instagram: https://www.instagram.com/matthewberman_ai 👉🏼 Discord: https://discord.gg/xxysSXBxFW 👉🏼 TikTok: https://www ...
AI News: 1x Neo Robot, Extropic TSU, Minimax M2, Cursor 2, and more!
Matthew Berman 2025-10-30 20:16
Robotics & Automation
- 1X's Neo robot is available for pre-order at $20,000, or $499 per month, with availability expected in early 2026 [1][2]
- Neo weighs 66 pounds and can lift 150 pounds, featuring 22 degrees of freedom in its hands and operating at 22 dB [2][3]
- The promise of humanoid robots is to be autonomous and run 24 hours a day [4]

Computing & AI
- Extropic is developing a thermodynamic computing platform (TSU) that claims to be up to 10,000 times more efficient than traditional CPUs and GPUs [7][8]
- MiniMax's M2, an open-source model from China, achieved a new high intelligence score with only 10 billion active parameters out of 200 billion total [10]
- IBM released Granite 4.0 Nano, a family of small language models with 1.5 billion and 350 million parameters, designed for edge and on-device applications [19][20]
- Cursor 2.0 introduces Composer, a faster model for low-latency agentic coding, and a multi-agent interface [26][27]

Semiconductor Industry
- Substrate, a US-based startup, is building a next-generation foundry using advanced X-ray lithography to print features at the 2-nanometer node and below [30][31]

Corporate Strategy & Employment
- Nvidia took a billion-dollar stake in Nokia, leading to a 22% increase in Nokia's shares, and the companies are partnering to develop 6G technology [17]
- Amazon is laying off 14,000 corporate employees, partly attributed to efficiency gains from AI but also seen as a correction for overhiring [34][37]
- Tesla could potentially leverage the compute power of its idle cars, estimated at 1 kilowatt per car, to create a giant distributed inference fleet [23][24]
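The idle-car fleet idea above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical fleet size and idle fraction (only the ~1 kW per car figure comes from the summary; everything else here is an illustrative assumption):

```python
# Back-of-envelope estimate of aggregate compute power from idle cars.
# kw_per_car comes from the video summary; cars and idle_fraction are
# hypothetical illustrative values, not Tesla figures.

def fleet_power_megawatts(cars: int, kw_per_car: float = 1.0,
                          idle_fraction: float = 0.9) -> float:
    """Aggregate power (MW) contributed by the idle portion of the fleet."""
    return cars * kw_per_car * idle_fraction / 1000.0

# 5 million cars, 90% idle at any moment -> 4,500 MW of distributed compute.
print(fleet_power_megawatts(5_000_000))  # 4500.0
```

Even under these rough assumptions, the aggregate lands in gigawatt territory, which is why the comparison to a "giant distributed inference fleet" comes up.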
Sam Altman reveals exact date of intelligence explosion
Matthew Berman 2025-10-29 19:01
AI Development Timeline
- OpenAI estimates an intern-level AI research assistant by September 2026 and a legitimate AI researcher by March 2028 [1][2][3][23]
- The industry anticipates that automated AI research will lead to an intelligence explosion, rapidly advancing toward superintelligence [4][5]

AI Task Capabilities
- AI is currently capable of autonomously completing tasks lasting seconds, minutes, and hours, with the industry aiming for days, weeks, months, and years [7]
- The industry emphasizes that efficiency in token usage and compute over a task's duration is as important as the duration itself [8][9]

AI Model Trustworthiness
- OpenAI is exploring methods to keep models aligned with human incentives by letting models think freely without intervention, to gain insight into their thought processes [15][17][18][20][21]
- OpenAI emphasizes the importance of controlled privacy for AI models to retain the ability to understand their inner processes [19][20]

Infrastructure and Investment
- OpenAI's infrastructure plan includes building a factory that produces AI factories, with a potential output of a gigawatt per week [25]
- OpenAI's current infrastructure projects are valued at $1.4 trillion [24]

Organizational Structure
- OpenAI's structure consists of the OpenAI Foundation (nonprofit) governing the OpenAI Group (public benefit corporation), with the nonprofit owning 26% of the PBC's equity [28][29]
- The OpenAI Foundation has a $25 billion commitment to health/curing diseases and AI resilience [29]

Concerns and Future Development
- OpenAI acknowledges concerns about the addictive potential of AI products like Sora and chatbots [30][31][32][33]
- OpenAI plans to continue supporting GPT-4o while developing better models [35][36]
- OpenAI expects significant advancements in model capability within six months [40]
Forward Future Live | 10/24/25
Matthew Berman 2025-10-24 16:50
Download Humanity's Last Prompt Engineering Guide (free) 👇🏻 https://bit.ly/4kFhajz Download The Matthew Berman Vibe Coding Playbook (free) 👇🏻 https://bit.ly/3I2J0YQ Join My Newsletter for Regular AI Updates 👇🏻 https://forwardfuture.ai Discover The Best AI Tools 👇🏻 https://tools.forwardfuture.ai My Links 🔗 👉🏼 X: https://x.com/matthewberman 👉🏼 Forward Future X: https://x.com/forward_future_ 👉🏼 Instagram: https://www.instagram.com/matthewberman_ai 👉🏼 Discord: https://discord.gg/xxysSXBxFW 👉🏼 TikTok: https://www ...
Inside the World's FASTEST Data Center | Cerebras
Matthew Berman 2025-10-23 20:12
You open your AI chatbot, type in your prompt, and hit enter. What happens next? We're pulling back the veil on the hidden backbone behind every AI response you see. Beneath the Oklahoma sky sits an unassuming concrete building: an AI factory built for one purpose. Speed. I'm standing in front of Cerebras' brand-new data center, which they just held the ribbon cutting for, and they are now serving 44 exaflops of new compute power to their customers. It is the fastest AI infrastructure on Earth ...
New DeepSeek just did something crazy...
Matthew Berman 2025-10-22 17:15
DeepSeek OCR Key Features
- DeepSeek OCR is a novel approach to image recognition that compresses text by 10x while maintaining 97% accuracy [2]
- The model uses a vision-language model (VLM) to compress text into an image, fitting 10 times more text in the same token budget [6][11]
- The method achieves 96%+ OCR decoding precision at 9-10x text compression, 90% at 10-12x compression, and 60% at 20x compression [13]

Technical Details
- The model splits the input image into 16x16 patches [9]
- It uses SAM, an 80-million-parameter model, to look for local details [10]
- It uses CLIP, a 300-million-parameter model, to store information about how to put the images together [10]
- The output is decoded by DeepSeek-3B, a 3-billion-parameter mixture-of-experts model with 570 million active parameters [10]

Training Data
- The model was trained on 30 million pages of diverse PDF data from the internet, covering approximately 100 languages [21]
- Chinese and English account for approximately 25 million pages; other languages account for 5 million pages [21]

Potential Impact
- This technology could potentially 10x the context window of large language models [20]
- Andrej Karpathy suggests that pixels might be better inputs to LLMs than text tokens [17]
- An entire encyclopedia could be compressed into a single high-resolution image [20]
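The patching and compression figures above can be sketched with simple arithmetic. A minimal illustration, not DeepSeek's actual code: the 16x16 patch size and ~10x compression ratio come from the summary, while the image and document sizes below are hypothetical examples.

```python
import math

# Illustrative sketch of DeepSeek OCR's token arithmetic (not its real code).
# patch=16 and compression=10.0 come from the video summary; the example
# image dimensions and token counts are assumptions for illustration.

def num_patches(width: int, height: int, patch: int = 16) -> int:
    """Number of patch x patch tiles covering the image (partial tiles rounded up)."""
    return math.ceil(width / patch) * math.ceil(height / patch)

def vision_tokens(text_tokens: int, compression: float = 10.0) -> int:
    """Vision tokens needed to carry text_tokens at the given compression ratio."""
    return math.ceil(text_tokens / compression)

# A 1024x1024 page image splits into 64 * 64 = 4096 raw 16x16 patches.
print(num_patches(1024, 1024))    # 4096
# A 5,000-token document at ~10x compression fits in ~500 vision tokens.
print(vision_tokens(5000, 10.0))  # 500
```

This is the sense in which the approach "10x's" the context window: the same token budget that held 500 text tokens can, as vision tokens, represent roughly 5,000 tokens' worth of text at the reported 96%+ decoding precision.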