Avi Chawla
X @Avi Chawla
Avi Chawla· 2025-12-02 13:20
Regularization Techniques
- L2 regularization is commonly used to reduce overfitting in models [1]
- L2 regularization serves as an effective solution for multicollinearity [1]
Key Insight
- L2 regularization is not just a regularization technique [1]
X @Avi Chawla
Avi Chawla· 2025-12-02 06:50
In fact, this is where “ridge regression” also gets its name from:
Using an L2 penalty eliminates the RIDGE in the likelihood function of a linear model.
Check this👇 https://t.co/h07l36upoQ ...
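For reference, the ridge objective the post alludes to can be written in its standard closed form (a textbook formulation, not taken from the post itself):

```latex
\hat{\beta}_{\text{ridge}}
  = \arg\min_{\beta}\; \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2
  = (X^\top X + \lambda I)^{-1} X^\top y
```

When features are collinear, $X^\top X$ is (near-)singular and the likelihood surface has a flat ridge of near-optimal solutions; adding $\lambda I$ makes the matrix well-conditioned, so the optimum becomes unique.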
X @Avi Chawla
Avi Chawla· 2025-12-02 06:50
Few people know this about L2 regularization:
(Hint: it is NOT just a regularization technique)
Most practitioners use L2 regularization for just one thing:
↳ Reduce overfitting.
However, L2 regularization is also a great remedy for multicollinearity.
Multicollinearity arises when:
→ Two (or more) features are highly correlated, OR,
→ Two (or more) features can predict another feature.
To understand how L2 regularization addresses multicollinearity, consider a dataset with two features and a dependent variable (y):
→ ...
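The effect described above can be demonstrated in a few lines. This is a minimal sketch (the dataset, coefficients, and `alpha` value are made up for illustration): we build two nearly identical features, then compare plain least squares with ridge. OLS can split the weight between the collinear pair arbitrarily, while the L2 penalty pulls the estimate toward a stable, roughly even split.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # x2 is almost a copy of x1 -> multicollinearity
X = np.column_stack([x1, x2])
y = 3 * x1 + 2 * x2 + rng.normal(scale=0.1, size=n)

ols = LinearRegression().fit(X, y)          # unpenalized fit
ridge = Ridge(alpha=1.0).fit(X, y)          # L2-penalized fit

# Only the SUM of the two coefficients (~5) is well identified by the data;
# ridge resolves the ambiguity by sharing the weight roughly evenly.
print("OLS coefficients:  ", ols.coef_)
print("Ridge coefficients:", ridge.coef_)
```

The key observation is not the individual OLS values (those vary with the noise) but that ridge returns nearly equal coefficients whose sum still matches the identifiable quantity.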
X @Avi Chawla
Avi Chawla· 2025-12-01 20:25
Technical Insights
- Shared a @3blue1brown-style animation of the KMeans clustering algorithm [1]
- The field is paying growing attention to visual presentations of machine learning algorithms [1]
X @Avi Chawla
Avi Chawla· 2025-12-01 06:37
A @3blue1brown style animation of KMeans clustering: https://t.co/Irb2yzUCR1 ...
X @Avi Chawla
Avi Chawla· 2025-11-30 12:18
Research Methodology
- Randomly splitting data can lead to significant errors in research papers [1]
- Andrew Ng's team made a mistake in a research paper due to random data splitting [1]
Insights & Resources
- Tutorials and insights on DS (Data Science), ML (Machine Learning), LLMs (Large Language Models), and RAGs (Retrieval-Augmented Generation) are shared daily [1]
X @Avi Chawla
Avi Chawla· 2025-11-30 06:47
A few days later, Andrew Ng's team updated the paper after using the same group shuffle split strategy to ensure the same patients did not end up in both the training and validation sets.
👉 Over to you: Have you faced this issue before? https://t.co/GES6FESMZm ...
X @Avi Chawla
Avi Chawla· 2025-11-30 06:47
First, we import GroupShuffleSplit from sklearn and instantiate the object.
Next, the split() method of this object lets us perform group splitting. It returns a generator, and we can unpack it to get the following output:
- The data points in groups “A” and “C” are together in the training set.
- The data points in group “B” are together in the validation/test set.
Check this 👇 ...
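The steps described above can be sketched as follows. The groups “A”, “B”, “C” and the toy data here are made up to mirror the post's example; which group lands in the test set depends on the random seed, so treat the split shown as one possible outcome rather than a fixed result.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# toy data: each row belongs to a group (e.g., a patient ID)
X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
groups = np.array(["A", "A", "B", "B", "C", "C"])

# test_size is a fraction of GROUPS here, not of rows:
# with 3 groups and test_size=0.3, one whole group goes to the test set
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups))

# every row of a given group lands on the same side of the split
print("train groups:", set(groups[train_idx]))
print("test groups: ", set(groups[test_idx]))
```

This is exactly the guarantee the paper fix relied on: no group (patient) ever appears in both the training and the validation/test set.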
X @Avi Chawla
Avi Chawla· 2025-11-30 06:47
Andrew Ng's team once made a big mistake in a research paper.
And it happened due to randomly splitting the data.
Here's exactly what happened (with solution): ...