Abstract: Dropout has been one of the standard approaches to training deep neural networks, and it is known to regularize large models to avoid overfitting. The effect of dropout has been explained as avoiding co-adaptation. In this paper, however, we propose a new explanation of why dropout works and propose a new technique to design better activation functions. First, we show that dropout is an optimization technique that pushes the input towards the saturation area of the nonlinear activation function by accelerating gradient flow even in the saturation area during backpropagation. Based on this explanation, we propose a new technique for activation functions, gradient acceleration in activation functions (GAAF), that accelerates gradients to flow even in the saturation area. Then, the input to the activation function can climb onto the saturation area, which makes the network more robust because the model converges on a flat region. Experiment results support our explanation of dropout and confirm that the proposed GAAF technique improves performance with the expected properties.
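To make the idea concrete, here is a minimal PyTorch sketch of the general principle the abstract describes (not the paper's exact GAAF formulation): an activation whose backward pass adds a small hypothetical "boost" term, so gradients keep flowing even where the ordinary sigmoid derivative saturates to nearly zero and the input can keep moving onto the saturation region.

```python
import torch

class SaturationFriendlySigmoid(torch.autograd.Function):
    """Sigmoid forward pass with an accelerated backward pass (illustrative only)."""

    @staticmethod
    def forward(ctx, x, boost=0.1):
        y = torch.sigmoid(x)
        ctx.save_for_backward(y)
        ctx.boost = boost  # assumed constant acceleration term, not from the paper
        return y

    @staticmethod
    def backward(ctx, grad_output):
        (y,) = ctx.saved_tensors
        # Ordinary sigmoid gradient plus a small acceleration term, so gradient
        # information still flows deep in the saturation area.
        local_grad = y * (1.0 - y) + ctx.boost
        return grad_output * local_grad, None  # no gradient for `boost`

x = torch.tensor([-6.0, 0.0, 6.0], requires_grad=True)
y = SaturationFriendlySigmoid.apply(x)
y.sum().backward()
print(x.grad)  # nonzero gradients even at |x| = 6, well inside the saturation area
```

The constant `boost` here is purely an assumption for illustration; the paper's GAAF defines its own acceleration function, but the effect sketched above is the same: the local gradient no longer vanishes in the flat region of the activation.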



Source link: https://www.reddit.com/r//comments/8xi4kt/r_dropout_does_not_prevent_coadaption_and_a/
