Dropout
Rejection ≠ Failure! These High-Impact Papers Were All Rejected by Top Conferences
具身智能之心· 2025-12-12 01:22
Core Insights
- Waymo has released an in-depth blog post detailing its AI strategy centered on its foundation model, emphasizing the use of distillation to create high-efficiency models for onboard operation [1][2]
- Jeff Dean highlighted the significance of knowledge distillation, citing the creation of the Gemini Flash model as an example of how distillation drives AI model efficiency [1][2]

Historical Context of Rejected Papers
- Many foundational AI technologies, such as optimizers for large models and core computer vision techniques, were initially rejected by top conferences, a recurring pattern of failing to recognize groundbreaking innovations [6]
- Notable figures in AI, including Geoffrey Hinton and Yann LeCun, have had pioneering work rejected that was later recognized as transformative [6]

Case Studies of Rejected Innovations
- LSTM, a milestone for sequence-data processing, was rejected by NIPS in 1996 but later became crucial to speech recognition and machine translation, highlighting how long its value went unrecognized [7][10]
- SIFT, a dominant algorithm in computer vision, was rejected by ICCV and CVPR as overly complex, yet proved vital in real-world image processing [11][13]
- Dropout, a key regularization method for deep neural networks, was initially rejected as too radical but later became essential for training deep networks effectively [17][19]
- Word2Vec, despite being rejected at ICLR, became a cornerstone of NLP thanks to its efficiency and practicality, eventually receiving recognition for its impact [20][24]
- YOLO transformed object detection by prioritizing speed over precision; it was rejected for its perceived shortcomings but later became a widely adopted framework in industry [28][30]

Reflection on Peer Review Limitations
- The peer-review system often fails to recognize disruptive innovations, producing a systematic cognitive lag in evaluating groundbreaking research [40][41]
- The tendency to equate mathematical complexity with research contribution can block the acceptance of simpler yet effective methods [41]
- These historical examples show that the true measure of a piece of research is not its initial peer-review outcome but its long-term relevance and problem-solving power [43][47]
X @Avi Chawla
Avi Chawla· 2025-09-22 19:59
Dropout Mechanism
- During training, the average input to a neuron is significantly lower than at inference, and this activation-scale misalignment can cause numerical instability [1]
- Dropout (in its standard "inverted" form) addresses this by multiplying surviving inputs during training by a factor of 1/(1-p), where p is the dropout rate; see the sketch after this entry [2]
- For example, with a dropout rate of 50%, an input of 50 is scaled to 100 (50 / (1 - 0.5) = 100) [2]
- This scaling keeps the network's activations coherent between the training and inference stages [2]

Training vs Inference
- Consider a layer with 100 neurons, each with an activation value of 1 and a weight of 1 to neuron 'A' in the next layer [2]
- With a 50% dropout rate, approximately 50 neurons are active during training [2]
- During inference, all 100 neurons are active, since Dropout is disabled [2]
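A minimal NumPy sketch of the inverted-dropout scaling described above; the layer size, seed, and function name are illustrative, not taken from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, p, training=True):
    """Zero each unit with probability p; scale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return x  # inference: all units active, no scaling needed
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1-p
    return x * mask / (1.0 - p)       # rescale so the expected output matches x

# 100 upstream neurons, all activations 1, all weights to neuron 'A' equal 1
activations = np.ones(100)
p = 0.5

train_input_to_A = inverted_dropout(activations, p, training=True).sum()
infer_input_to_A = inverted_dropout(activations, p, training=False).sum()

print(train_input_to_A)  # ~100 on average: ~50 survivors, each scaled by 2
print(infer_input_to_A)  # exactly 100
```

Because each surviving activation is doubled, the expected training-time input to 'A' matches the inference-time value of 100.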
X @Avi Chawla
Avi Chawla· 2025-09-22 06:39
Here's a hidden detail about Dropout that many people don't know.

Assume that:
- There are 100 neurons in a layer, and all activation values are 1.
- The weight from each of the 100 neurons to a neuron ‘A’ in the next layer is 1.
- Dropout rate = 50%

Computing the input of neuron ‘A’:
- During training → approx. 50 (since ~50% of values will be dropped).
- During inference → 100 (since we don't use Dropout during inference).

So essentially, during training, the average neuron input is significantly lower than that during infer ...
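To make the arithmetic in the tweet concrete, here is a small simulation of naive dropout with no rescaling, which exposes the train/inference mismatch being described; the variable names and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

activations = np.ones(100)  # 100 neurons, all activation values 1
weights = np.ones(100)      # weight of 1 from each neuron to neuron 'A'
p = 0.5                     # dropout rate

# Training: drop each neuron with probability p, with NO rescaling
mask = rng.random(100) >= p
train_input = (activations * mask) @ weights  # ~50 on average

# Inference: Dropout disabled, every neuron contributes
infer_input = activations @ weights           # exactly 100

print(f"training ≈ {train_input:.0f}, inference = {infer_input:.0f}")
```

The 1/(1-p) scaling from the previous entry is exactly what closes this roughly 2x gap.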
A Large Model with a "Slightly Worse Memory" Is Actually Smarter: Goldfish Loss Randomly Drops Tokens So AI Stops Rote Memorization
36Kr· 2025-09-03 23:54
Core Idea
- The article discusses a new method called "Goldfish Loss" that lets large language models avoid memorizing training data verbatim while still learning language patterns [1][2]

Group 1: Methodology
- Goldfish Loss randomly excludes a small fraction of tokens from the loss computation, preventing the model from memorizing the training data verbatim; see the sketch after this list [2][3]
- A hashing-based masking strategy ensures the same tokens are excluded on every pass over the same text, so the model must "guess" those tokens rather than reproduce them [3][7]
- This contrasts with Dropout-style random masking, where a different subset is dropped on each pass, so across repeated epochs the model still sees, and can memorize, every token [5][7]

Group 2: Experimental Results
- Experiments covered two scenarios: an extreme one with repeated training on a small sample, and a standard one simulating typical batch training [8][10]
- In the extreme scenario, standard training led the model to memorize 84 of 100 articles verbatim, while Goldfish Loss produced no verbatim memorization [8][10]
- Models trained with Goldfish Loss performed comparably to those trained with the standard loss, indicating that text-generation ability was not significantly affected [12]

Group 3: Implications
- Because Goldfish Loss ignores the gradients of certain tokens, the model may need to process more data to compensate for the dropped supervision, which can affect computational efficiency [13]
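A minimal PyTorch sketch of the idea under stated assumptions: the drop rate 1/k, the context width h, and the toy rolling hash below are illustrative stand-ins for the paper's actual hashing scheme:

```python
import torch
import torch.nn.functional as F

def goldfish_loss(logits, labels, k=4, h=13):
    """Causal LM loss that deterministically ignores ~1/k of tokens.

    A token is dropped when a toy hash of the h preceding token ids
    is 0 mod k, so the SAME tokens are masked on every pass over the
    same text (unlike Dropout-style random masking). k, h, and the
    hash itself are illustrative assumptions, not the paper's scheme.
    """
    B, T = labels.shape
    keep = torch.ones_like(labels, dtype=torch.bool)  # first h tokens always kept
    coeffs = torch.arange(1, h + 1, device=labels.device)
    for t in range(h, T):
        ctx_hash = (labels[:, t - h:t] * coeffs).sum(dim=-1)  # hash of local context
        keep[:, t] = (ctx_hash % k) != 0                      # drop ~1/k of tokens

    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (B*T, vocab)
        labels.reshape(-1),                   # (B*T,)
        reduction="none",
    ).reshape(B, T)
    return per_token[keep].mean()             # average only over kept tokens

# Toy usage: random logits/labels just to show the call shape
logits = torch.randn(2, 32, 1000)
labels = torch.randint(0, 1000, (2, 32))
print(goldfish_loss(logits, labels))
```

The key design point is determinism: because the mask is a function of the text itself, repeated epochs never reveal the dropped tokens, which is what separates this from Dropout-style masking.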