In the Chen, Kingma et al. "Variational Lossy Autoencoder" paper, the authors discuss the possibility of a VAE ignoring the latent code.

On p. 4 they write:

“one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = prod_i p(x_i|z)”

where "putting information into the code" means putting it into the latent, z.

My question: why does using a factorized decoder encourage the latent to be used?

In their notation, I believe x_i are individual dimensions of the output, such as individual pixels of an image.
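To make the quoted factorization concrete, here is a minimal sketch of a factorized Bernoulli decoder over pixels, where log p(x|z) = sum_i log p(x_i|z). All names, shapes, and the single linear "network" are illustrative assumptions, not the paper's actual architecture:

```python
# Sketch of a factorized decoder: given a latent z, a toy network outputs
# one Bernoulli parameter per pixel, and log p(x|z) is a sum over pixels.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_pixels = 8, 16  # illustrative sizes

# Toy "decoder network": a single linear layer mapping z to per-pixel logits.
W = rng.normal(scale=0.1, size=(n_pixels, latent_dim))
b = np.zeros(n_pixels)

def log_p_x_given_z(x, z):
    """log p(x|z) = sum_i log p(x_i|z) for a factorized Bernoulli decoder."""
    logits = W @ z + b                    # one logit per pixel x_i
    log_p1 = -np.logaddexp(0.0, -logits)  # log sigmoid(logit)  = log p(x_i=1|z)
    log_p0 = -np.logaddexp(0.0, logits)   # log(1-sigmoid(...)) = log p(x_i=0|z)
    # Because the decoder is factorized, the joint log-likelihood is just a
    # sum of independent per-pixel terms: no pixel conditions on any other.
    return np.sum(x * log_p1 + (1 - x) * log_p0)

z = rng.normal(size=latent_dim)
x = rng.integers(0, 2, size=n_pixels).astype(float)
print(log_p_x_given_z(x, z))
```

The point the quote is gesturing at: since no x_i can condition on other pixels, any correlation between pixels can only be explained through z, so the model is pushed to put information into the latent.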
