  1. What is regularization in plain english? - Cross Validated

    Is regularization really ever used to reduce underfitting? In my experience, regularization is applied on a complex/sensitive model to reduce complexity/sensitivity, but never on a …

  2. How does regularization reduce overfitting? - Cross Validated

    Mar 13, 2015 · A common way to reduce overfitting in a machine learning algorithm is to use a regularization term that penalizes large weights (L2) or non-sparse weights (L1) etc. How can …
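
A worked form of the penalty this snippet describes, written in generic notation that is not taken from the linked answer ($L(w)$ is the unpenalized data-fit loss, $w$ the weight vector, $\lambda \ge 0$ the regularization strength):

$$\min_w \; L(w) + \lambda \lVert w \rVert_2^2 \quad \text{(L2 / ridge: shrinks large weights)} \qquad \min_w \; L(w) + \lambda \lVert w \rVert_1 \quad \text{(L1 / lasso: drives weights to exactly zero)}$$

Raising $\lambda$ constrains the fit more strongly, reducing variance (and hence overfitting) at the cost of some bias.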

  3. L1 & L2 double role in Regularization and Cost functions?

    Mar 19, 2023 · Regularization - penalty for the cost function: L1 as Lasso & L2 as Ridge. Cost/loss function - L1 as MAE (Mean Absolute Error) and L2 as MSE (Mean Square Error) …
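
Spelling out the double role the question asks about, with generic symbols not quoted from the thread ($y_i$ targets, $\hat y_i$ predictions, $w$ weights, $\lambda$ penalty strength):

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat y_i \rvert \;\; \text{(L1 norm of the residuals)} \qquad \text{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat y_i)^2 \;\; \text{(squared L2 norm of the residuals)}$$

$$\text{lasso penalty} = \lambda \lVert w \rVert_1 \qquad \text{ridge penalty} = \lambda \lVert w \rVert_2^2$$

The same norms appear in both roles, but as a loss they act on the residuals, while as a regularizer they act on the weights.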

  4. When should I use lasso vs ridge? - Cross Validated

    The regularization can also be interpreted as a prior in a maximum a posteriori (MAP) estimation method. Under this interpretation, the ridge and the lasso make different assumptions on the class of …
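
A compact statement of the prior interpretation mentioned in the snippet (a standard result, not a quote from the answer; $w$ are the weights and the prior scale plays the role of $\lambda$):

$$\hat{w}_{\text{MAP}} = \arg\max_w \; \big[ \log p(y \mid X, w) + \log p(w) \big], \qquad p(w) \propto e^{-\lambda \lVert w \rVert_2^2} \;\Rightarrow\; \text{ridge}, \qquad p(w) \propto e^{-\lambda \lVert w \rVert_1} \;\Rightarrow\; \text{lasso}$$

A Gaussian prior on the weights gives ridge; a Laplace prior gives the lasso, which is one way to see why the lasso produces exactly-zero (sparse) coefficients while ridge only shrinks them.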

  5. What are Regularities and Regularization? - Cross Validated

    Is regularization a way to ensure regularity? i.e., capturing regularities? Why do ensembling methods like dropout, and normalization methods, all claim to be doing regularization?

  6. neural networks - L2 Regularization Constant - Cross Validated

    Dec 3, 2017 · When implementing a neural net (or other learning algorithm), we often want to regularize our parameters $\theta_i$ via L2 regularization. We do this usually by adding a …
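
A minimal NumPy sketch of the recipe this snippet alludes to: adding $\lambda \sum_i \theta_i^2$ to a data-fit loss and including its gradient in the update. The toy data, names, and constants below are illustrative assumptions, not taken from the linked thread.

```python
# Ridge-style L2 penalty added to a squared-error loss, fitted by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def l2_regularized_loss(theta, X, y, lam):
    residual = X @ theta - y
    data_fit = 0.5 * np.mean(residual ** 2)   # ordinary squared-error term
    penalty = lam * np.sum(theta ** 2)        # the added L2 term: lam * sum(theta_i^2)
    return data_fit + penalty

def gradient(theta, X, y, lam):
    residual = X @ theta - y
    return X.T @ residual / len(y) + 2.0 * lam * theta

theta = np.zeros(3)
for _ in range(500):                          # plain gradient descent
    theta -= 0.1 * gradient(theta, X, y, lam=0.01)
print(theta)                                  # roughly the true weights, slightly shrunk toward zero
```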

  7. Difference between weight decay and L2 regularization

    Apr 6, 2025 · I'm reading [Ilya Loshchilov's work] [1] on decoupled weight decay and regularization. The big takeaway seems to be that weight decay and $L^2$ norm …
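
A minimal sketch of the distinction that thread discusses, shown with plain SGD updates so the two rules can be compared side by side; the learning rate and coefficients are illustrative assumptions. With plain SGD the two coincide up to a rescaling of the coefficient, which is why the difference only becomes visible with adaptive optimizers such as Adam, the point of Loshchilov & Hutter's decoupled weight decay (AdamW).

```python
# Side-by-side comparison of L2-in-the-loss vs. decoupled weight decay for one SGD step.
import numpy as np

def sgd_step_l2_in_loss(w, grad_loss, lr=0.1, lam=0.01):
    # L2 regularization: the penalty's gradient (2 * lam * w) is folded into
    # the loss gradient before the optimizer update.
    return w - lr * (grad_loss + 2.0 * lam * w)

def sgd_step_decoupled_decay(w, grad_loss, lr=0.1, decay=0.02):
    # Decoupled weight decay: shrink the weights directly, independently of
    # the loss gradient (this is what AdamW does on top of Adam).
    return w - lr * grad_loss - lr * decay * w

w = np.array([1.0, -2.0])
g = np.array([0.5, 0.5])
print(sgd_step_l2_in_loss(w, g))       # identical here because 2 * lam == decay
print(sgd_step_decoupled_decay(w, g))  # the two diverge once the gradient is rescaled adaptively, as in Adam
```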

  8. machine learning - Why use regularisation in polynomial …

    Aug 1, 2016 · Compare, for example, a second-order polynomial without regularization to a fourth-order polynomial with it. The latter can posit big coefficients for the third and fourth powers so …
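
A minimal scikit-learn sketch of the comparison the snippet gestures at: an unregularized second-order fit next to a fourth-order fit kept in check by a ridge penalty. The toy data, degrees, and alpha are illustrative assumptions.

```python
# Unregularized quadratic fit vs. ridge-regularized quartic fit on noisy quadratic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30).reshape(-1, 1)
y = 1.5 * x.ravel() ** 2 + 0.1 * rng.normal(size=30)

plain_quadratic = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(x, y)
ridge_quartic = make_pipeline(PolynomialFeatures(4), Ridge(alpha=1.0)).fit(x, y)

# The ridge penalty keeps the (unneeded) cubic and quartic coefficients small
# instead of letting them chase the noise.
print(ridge_quartic.named_steps["ridge"].coef_)
print(plain_quadratic.score(x, y), ridge_quartic.score(x, y))
```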

  9. regularization - Why is logistic regression particularly prone to ...

    Why does regularization work? You can solve it with regularization, but you should have some good ways to know/estimate to what extent you wish to regularize. In the high-dimensional …
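
One standard way to estimate "to what extent you wish to regularize" is to cross-validate the penalty strength; a minimal scikit-learn sketch, where the synthetic high-dimensional dataset and the choice of an L2 penalty are assumptions of this example, not of the linked answer.

```python
# Cross-validating the inverse regularization strength C of a logistic regression.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
clf = LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=1000).fit(X, y)
print(clf.C_)   # the cross-validated inverse regularization strength
```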

  10. What is the meaning of regularization path in LASSO or related …

    Does it mean the regularization path is how to select the coordinate that could give faster convergence? I'm a little confused, although I have heard about sparsity often. In addition, …
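
For concreteness, the regularization path is simply the set of fitted coefficient vectors traced out as the penalty strength varies; a minimal scikit-learn sketch on synthetic data (the dataset is an assumption, not from the thread).

```python
# The lasso regularization path: coefficient vectors over a decreasing grid of penalty strengths (alphas).
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)
alphas, coefs, _ = lasso_path(X, y)

# Each column of `coefs` is the fitted coefficient vector at one alpha; as
# alpha decreases, more coefficients enter the model and sparsity drops.
print(alphas.shape, coefs.shape)
print((coefs != 0).sum(axis=0))   # number of non-zero coefficients per alpha
```

Reading the path from large to small alpha shows coefficients entering the model one at a time, which is where the connection to sparsity comes from.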