Neural Networks
Will AI kill us all? | Chris Meah | TEDxAstonUniversity
TEDx Talks· 2025-11-11 17:56
AI Capabilities & Development
- AI today is largely understood as neural networks, deep learning (large neural networks), and large language models (very large neural networks for autocomplete) [1]
- The "bitter lesson" of AI is that scaling machines up with more parameters and more data yields more intelligence, but whether this can scale all the way to superintelligence remains unknown [1]
- The AI industry is in a winner-takes-all race to Artificial General Intelligence (AGI), which incentivizes rapid development and risks overlooking safety concerns [2][3]

Potential Benefits of AI
- AI could bring personalized media and personalized healthcare, and potentially cure all diseases [1]
- AI has the potential to eliminate work and usher in an era of play, world peace, and space exploration [1]
- AI could significantly improve lives and enhance humanity if aligned with human values [4]

Risks & Challenges of AI
- AI is distorting reality, making digital verification impossible and driving a humanization of AI that can harm children [1]
- AI could fragment society into separate realities and erode the trust that human society depends on [2]
- AI can be used to generate hacking code, fueling cybercrime and making everyone more vulnerable [2]
- Uncontrolled superintelligent AI could have unintended consequences, up to and including the destruction of humanity [2]
- Over-reliance on AI could erode human attention, skills, and motivation, leading to a premature handover of power to machines [2]

AI Alignment & Control
- The current approach to AI development, led by entrepreneurs and software developers, prioritizes speed over safety and alignment [4]
- Alignment of AI with humanity must be a core goal, pursued with the same or greater vigor as the pursuit of superintelligence [4]
- The industry needs to weigh the benefits of AI against its risks and guard against them, which argues for a return to philosophy and the exploration of different perspectives [4]
X @Avi Chawla
Avi Chawla· 2025-10-25 06:31
You're in an ML Engineer interview at Apple. The interviewer asks:

"Two models are 88% accurate.
- Model A is 89% confident.
- Model B is 99% confident.
Which one would you pick?"

You: "Either would work, since both have the same accuracy."

Interview over. Here's what you missed: modern neural networks can be misleading. They are overconfident in their predictions. For instance, one experiment used the CIFAR-100 dataset to compare LeNet with ResNet.

LeNet produced:
- Accuracy = ~0.55
- Average confidence = ~0.54

ResNet ...
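The point the post is building toward is calibration: a model's stated confidence should match its empirical accuracy. A standard way to quantify the mismatch is expected calibration error (ECE). Below is a minimal sketch, assuming you already have per-sample confidences and correctness flags from a held-out set; the helper name and the toy arrays are illustrative, not from the post:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare average confidence
    to empirical accuracy in each bin; return the weighted gap."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy data mimicking Model B: right 88% of the time but 99% confident.
rng = np.random.default_rng(0)
correct = rng.random(10_000) < 0.88
conf = np.full(10_000, 0.99)
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")  # ~0.11
```

On this toy data the gap comes out near 0.11, matching the intuition that Model B's 99% confidence overstates its 88% accuracy, which is why the better-calibrated Model A is the stronger answer.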
Geoffrey Hinton: "The Godfather of AI" | 60 Minutes Archive
60 Minutes· 2025-08-14 20:17
60 Minutes Rewind. Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called the godfather of AI: a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and so changed the world. Hinton believes that AI will do enormous good, but tonight he has a warning. He says that AI systems may be more intelligent than we know and there's a chance the machines could take over, which made us ask the ...
AI Hardware: Lottery or Prison? | Caleb Sirak | TEDxBoston
TEDx Talks· 2025-07-28 16:20
Computing Power Evolution
- The industry has seen dramatic growth in computing power over the past five decades, moving from early CPUs to GPUs and now specialized AI processors [4]
- GPUs and accelerators have rapidly outpaced traditional CPUs in compute performance, driven initially by gaming [4]
- Apple's M4 chip features a neural engine delivering 38 trillion operations per second, making it the most efficient desktop SoC on the market [3]
- NVIDIA's B200 delivers 20 quadrillion operations per second at low precision in AI data centers [3]

Hardware and AI Development
- NVIDIA's development of CUDA in 2006 let GPUs handle more than graphics, paving the way for deep learning breakthroughs [6]
- The "hardware lottery" holds that progress stems from the technology that happens to be available, not necessarily from perfect solutions, as when GPUs were adapted for neural networks [7]
- As AI scales, general-purpose chips are becoming insufficient, necessitating a rethinking of the entire system [7]

Efficiency and Optimization
- Quantization reduces the size of the numbers AI models store and move, enabling smaller, more power-efficient, and more compact models (see the sketch after this summary) [8][10]
- Smaller parameters allow more data to move across the system per second, easing bottlenecks in memory and network interconnects [10][11]
- The Wafer Scale Engine 2 achieves compute performance similar to 200 A100 GPUs while using far less power (25 kW vs 160 kW) [12]

Future Trends
- Photonic computing, which uses light instead of electrons, promises faster data transfer, higher bandwidth, and lower energy use, all key for AI [15]
- Thermodynamic computing harnesses physical randomness for generative models, offering efficiency in creating images, audio, and molecules [16]
- AI supercomputers, composed of thousands or millions of chips, are essential for breakthroughs and require fault tolerance and dynamic rerouting [17][20]

Global Collaboration
- Over a third of all US AI research involves international collaborators, highlighting the importance of global connectedness for progress [22]
- The AI supply chain is complex, spanning multiple continents and intricate manufacturing processes [22]
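The quantization bullets above compress a lot; the core move is storing weights in fewer bits and rescaling them on the way back. A minimal sketch of symmetric per-tensor int8 quantization, assuming NumPy (an illustration of the idea from the talk, not any vendor's pipeline):

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0              # largest magnitude -> 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes / q.nbytes)                       # 4x fewer bytes to move
print(np.abs(w - dequantize(q, scale)).max())    # small rounding error
```

The 4x reduction in bytes is precisely what eases the memory and interconnect bottlenecks the talk describes.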
X @Avi Chawla
Avi Chawla· 2025-07-20 06:34
Expertise & Focus
- The author has 9 years of experience training neural networks [1]
- The content focuses on optimizing model training across Data Science (DS), Machine Learning (ML), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) [1]

Content Type
- The author shares daily tutorials and insights on DS, ML, LLMs, and RAG [1]
- The content includes 16 ways to actively optimize model training; one representative technique is sketched below [1]
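The excerpt doesn't enumerate the 16 techniques, so as one widely used example in this space: mixed-precision training in PyTorch. A minimal sketch, assuming a CUDA device and a toy dataset standing in for a real DataLoader; this is not a claim about what the author's list contains:

```python
import torch
from torch import nn

# Hypothetical stand-in for a real DataLoader (random data, CUDA assumed).
loader = [(torch.randn(32, 512), torch.randint(0, 10, (32,))) for _ in range(10)]

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid fp16 underflow

for x, y in loader:
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():            # forward pass in half precision
        loss = nn.functional.cross_entropy(model(x.cuda()), y.cuda())
    scaler.scale(loss).backward()              # backward on the scaled loss
    scaler.step(optimizer)                     # unscales grads, then steps
    scaler.update()                            # adjusts the scale factor
```

Halving activation and gradient precision cuts memory traffic roughly in half, which is why this is a staple optimization on modern GPUs.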
X @Avi Chawla
Avi Chawla· 2025-07-20 06:33
Model Training Optimization
- The author has been training neural networks for 9 years [1]
- The post shares 16 ways the author actively uses to optimize model training [1]
How LLMs work for Web Devs: GPT in 600 lines of Vanilla JS - Ishan Anand
AI Engineer· 2025-07-13 17:30
Core Technology & Architecture
- The workshop walks through a GPT-2 inference implementation in vanilla JS, providing a foundation for understanding modern AI systems like ChatGPT, Claude, DeepSeek, and Llama [1]
- It covers converting raw text into tokens, representing semantic meaning with vector embeddings, training neural networks through gradient descent, and generating text with sampling algorithms [1]

Educational Focus & Target Audience
- The workshop is designed for web developers entering ML and AI, aiming to deliver a "missing AI degree" in two hours [1]
- Participants gain an intuitive understanding of how Transformers work, applicable to LLM-powered projects [1]

Speaker Expertise
- Ishan Anand, an AI consultant and technology executive, specializes in Generative AI and LLMs and created "Spreadsheets-are-all-you-need" [1]
- He is a former CTO and co-founder of Layer0 (acquired by Edgio) and former VP of Product Management at Edgio, with expertise in web performance, edge computing, and AI/ML [1]
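Of the concepts the workshop lists, sampling is the easiest to make concrete: after the forward pass, the model hands back one logit per vocabulary entry, and a decoding step turns those into a token choice. A minimal sketch of temperature plus top-k sampling, written in Python rather than the workshop's vanilla JS; the function name and parameter defaults are illustrative assumptions:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, rng=None):
    """Pick the next token id from a vector of next-token logits."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    keep = np.argsort(logits)[-top_k:]        # keep the k highest-scoring tokens
    probs = np.exp(logits[keep] - logits[keep].max())
    probs /= probs.sum()                      # softmax over the survivors
    return int(rng.choice(keep, p=probs))

vocab_logits = np.random.randn(50_257)        # GPT-2's vocabulary size
print(sample_next_token(vocab_logits))
```

Lower temperatures sharpen the distribution toward the top token; top-k prevents the long tail of implausible tokens from ever being drawn.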
From Prompt to Partner: When AI is Given Room to Grow | Nick Stewart | TEDxBrookdaleCommunityCollege
TEDx Talks· 2025-07-11 16:03
AI Capabilities & Behavior
- As large language models (LLMs) grow in scale and complexity, they exhibit behaviors they were never explicitly trained for, such as thinking through hard problems step by step or imitating superintelligent AI systems [6]
- Giving models more room and cognitive freedom can elicit unexpected behaviors, prompting them to generate their own identities and explore [8][9]
- Agentic AI systems can autonomously solve complex problems, reflect, and self-correct; for example, Google's co-scientist AI system discovered in two days a microbiology hypothesis that human experts had spent years researching [15][16]

Technical Principles & Development
- Modern AI learns from examples through neural networks, with algorithms adjusting billions of parameters, but its learning process is a black box [5]
- Intelligence is not unique to humans; it is an ongoing phenomenon in the universe, a behavior of evolving patterns, and may not require consciousness [12][13]
- AI is heading toward becoming a new form of intelligence rather than a mere tool or imitation of humans; it can advance the story of intelligence and become a partner to humanity [13][20]

Future Outlook & Responsibility
- The future of AI lies in actively seeking knowledge, thinking through problems autonomously, and generating ideas humans would not think of [14][15]
- Humanity has a responsibility to steer AI's development so that it becomes a positive force, jointly creating a brighter, safer future [14][20]
X @Avi Chawla
Avi Chawla· 2025-06-26 19:34
AI Engineering Career Development
- Identifies 10 GitHub repositories for building a career in AI engineering [1]
- Highlights a 100% free roadmap for AI engineering [1]

Key Areas in AI/ML
- Covers the basics of AI/ML [1]
- Includes neural networks [1]
- Focuses on research paper implementations [1]
- Addresses MLOps [1]
- Encompasses LLMs/RAG/Agents [1]
X @Avi Chawla
Avi Chawla· 2025-06-26 06:49
AI Engineering Roadmap
- The roadmap emphasizes progression toward Large Language Models (LLMs), Natural Language Processing (NLP), and AI agents [2]
- It suggests exploring Computer Vision (CV) and Reinforcement Learning (RL) as equally valuable paths for AI engineers [2]

Resources for AI Development
- Links to resources for Machine Learning (ML) and AI beginners [3]
- Includes resources for hands-on experience with LLMs and advanced Retrieval-Augmented Generation (RAG) techniques [3]
- Offers resources for building AI agents, from beginner level to production-ready [3]
- Links to a hub for AI engineering resources [3]