Neural Nets

Understanding Neural Nets: Mechanistic Interpretability w/ Goodfire CEO Eric Ho #ai #machinelearning
Sequoia Capital· 2025-07-08 18:44
Feasibility of Understanding Large Language Models
- The field of mechanistic interpretability has a significant advantage due to perfect access to neurons, parameters, weights, and attention patterns in neural networks [1]
- Understanding large language models is deeply necessary and critical for the future [2]
- Establishing a norm of explaining a percentage of the network by reconstructing it and extracting its concepts and features is crucial [2] (see the sketch after this list)

Approaches to Understanding
- Progress can be made by trying to understand all aspects of the network [2]
- A baseline, rudimentary understanding can be built upon to explain more of the network [3]
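The summary does not name a specific technique, but one common way to make "reconstructing the network and extracting its concepts and features" concrete is a sparse autoencoder trained on a model's internal activations. The sketch below is an illustrative assumption, not Goodfire's actual method: it uses PyTorch, toy dimensions, and random tensors standing in for real residual-stream activations.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder: reconstructs activations from an
    overcomplete dictionary whose directions are candidate 'features'."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activation -> feature coefficients
        self.decoder = nn.Linear(d_features, d_model)  # feature coefficients -> reconstruction

    def forward(self, acts: torch.Tensor):
        codes = torch.relu(self.encoder(acts))         # non-negative, mostly-zero codes
        recon = self.decoder(codes)
        return recon, codes

def loss_fn(recon, acts, codes, l1_coeff=1e-3):
    # L2 reconstruction error plus an L1 penalty that pushes most
    # feature coefficients to zero (the sparsity term).
    mse = (recon - acts).pow(2).mean()
    sparsity = codes.abs().mean()
    return mse + l1_coeff * sparsity

# Hypothetical training step; dimensions and data are stand-ins.
sae = SparseAutoencoder(d_model=768, d_features=8192)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, 768)                            # would be real model activations
recon, codes = sae(acts)
loss = loss_fn(recon, acts, codes)
loss.backward()
opt.step()
```

The L1 penalty is what makes the learned dictionary sparse, so each recovered "feature" tends to fire on a narrow, potentially interpretable pattern; the fraction of activation variance the reconstruction captures is one way to quantify the "percentage of the network explained" mentioned above.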
X @Herbert Ong
Herbert Ong· 2025-06-26 18:41
RT Amy (@_SFTahoe): MORE DIVERSE SENSORS IS NOT SAFER. Why should Tesla add sensors? Adding more sensors doesn't necessarily make a system safer; in fact, it can be less safe. More sensors introduce multiple sources of truth, and if one fails, the computer must decide which to trust. Aviation has used multiple sensors for years, but the most critical ones are installed in triplicate of the same sensor (a bad unit can easily be isolated because it doesn't agree with the other two), and the pilot provides the tie-break. Waymo doesn't follow thi ...
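The triplicate scheme the thread describes amounts to median voting over identical sensors: with three copies, a single faulty unit is the one that disagrees with the other two. A minimal sketch, with the function name, tolerance, and readings purely illustrative (not from any avionics codebase):

```python
def vote_triplex(readings, tolerance):
    """Triple-redundancy voting: return the median of three identical
    sensors and flag any unit that disagrees with it beyond tolerance."""
    median = sorted(readings)[1]
    faulty = [i for i, r in enumerate(readings) if abs(r - median) > tolerance]
    return median, faulty

# Example: sensor 2 has drifted; the other two agree, so it is isolated.
value, suspect = vote_triplex([101.2, 100.9, 88.4], tolerance=2.0)
print(value, suspect)  # -> 100.9 [2]
```

With only two disagreeing sensors of different types there is no such majority, which is the thread's point about multiple sources of truth needing a tie-breaker.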