The classic explanation for the effectiveness of regularization in NNs is that it prevents overfitting by simplifying the model (Occam’s razor). I have an additional explanation and am curious to know the community’s thoughts.

Imagine a weight such that on some batches it has a positive gradient and on others a negative one, but on average the gradient is near zero. Training the network without regularization would not cause this weight to converge to its optimal value; rather, it would remain near its initial value or wander about randomly. At test time, this weight will on average decrease the accuracy of the model. Regularization pushes such a weight toward zero, preventing its negative interference.
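
To make the thought experiment concrete, here is a minimal sketch (a toy simulation, not a claim about real networks) of plain SGD on a single scalar weight whose per-batch gradient is zero-mean noise, run with and without an L2 penalty. The helper `run_sgd` and the constants in it are illustrative assumptions.

```python
import random

random.seed(0)

def run_sgd(l2_lambda, steps=10_000, lr=0.01, w0=0.5):
    """One weight whose per-batch data gradient is zero-mean noise."""
    w = w0
    for _ in range(steps):
        noisy_grad = random.gauss(0.0, 1.0)       # per-batch gradient, averages to ~0
        grad = noisy_grad + 2.0 * l2_lambda * w   # L2 penalty lambda*w^2 contributes 2*lambda*w
        w -= lr * grad
    return w

print("without regularization:", run_sgd(l2_lambda=0.0))  # random walk, typically far from 0
print("with L2 regularization:", run_sgd(l2_lambda=1.0))  # pulled toward and held near 0
```

The exact numbers depend on the seed, but the unregularized weight drifts like a random walk while the regularized one is held near zero, which is the negative interference described above.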

This differs from the classic explanation of regularization: it claims that in some cases the weights are not overfitting but rather failing to converge.

Questions:

  1. Do such weights exist in practice?

  2. Does such a random-walking weight (which will not converge under SGD) decrease the accuracy of the model?



Source: https://www.reddit.com/r//comments/9i708f/d_an__explanation_to_regularization/
