Title: Variance Networks: When Expectation Does Not Meet Your Expectations

Authors: Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

Abstract: Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture uncertainty, prevent overfitting, and slightly boost performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layer. Each weight of a variance layer follows a zero-mean distribution and is parameterized only by its variance. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks, and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such a zero-mean parameterization leads to a much better objective than conventional parameterizations where the mean is being learned.
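
The construction described in the abstract translates directly into code. Below is a minimal PyTorch-style sketch of a variance layer, written as our own illustration rather than the authors' reference implementation: the layer's only weight parameter is a log-variance tensor (no mean parameter exists), and the zero-mean Gaussian pre-activation is sampled via the local reparameterization trick. The class name `VarianceLinear` and the initialization constant are our assumptions.

```python
import torch
import torch.nn as nn

class VarianceLinear(nn.Module):
    """Fully connected variance layer: each weight follows a zero-mean
    Gaussian parameterized only by its variance, w_ij ~ N(0, sigma_ij^2)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # log sigma^2 is the only weight parameter -- there is no mean.
        # The -4.0 init (sigma^2 ~ 0.018) is an illustrative choice.
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -4.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local reparameterization: with zero-mean weights the
        # pre-activation y_j = sum_i w_ij * x_i is Gaussian with mean 0
        # and variance sum_i x_i^2 * sigma_ij^2, so sample it directly.
        var = x.pow(2) @ self.log_sigma2.exp().t()
        eps = torch.randn_like(var)
        return eps * var.clamp_min(1e-12).sqrt() + self.bias
```

Note the consequence of the zero-mean parameterization: the expected output of such a layer carries no signal, so all information travels through the noise, and predictions are obtained by averaging several stochastic forward passes, matching the test-time averaging the abstract refers to.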

Source: https://www.reddit.com/r//comments/8wevkc/r_variance_networks_when_expectation_does_not/
