In the Chen, Kingma et al. paper on the Variational Lossy Autoencoder, the authors discuss the possibility of a VAE ignoring the latent code.

On p. 4 it says this:

“one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = prod_i p(x_i|z)”

where “putting information into the code” means putting it into the latent variable, z.

see screenshot at https://imgur.com/a/J79sEPR

My question: can anyone explain why using a factorized decoder encourages the latent to be used?

In their notation, I believe the x_i are individual dimensions of the output, such as individual pixels of an image.
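To make the setup concrete, here is a minimal sketch of what a factorized decoder looks like: each pixel x_i is an independent Bernoulli given z, so the log-likelihood is a sum of per-pixel terms and any correlation between pixels can only come from z. The linear decoder, shapes, and names below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny decoder: a linear map from the latent z to
# per-pixel Bernoulli logits (shapes chosen for illustration only).
latent_dim, n_pixels = 4, 16
W = rng.normal(size=(n_pixels, latent_dim))
b = np.zeros(n_pixels)

def factorized_log_likelihood(x, z):
    """log p(x|z) = sum_i log p(x_i|z).

    Each pixel is an independent Bernoulli given z, so the model
    has no way to capture pixel-to-pixel correlations on its own:
    whatever structure x has must be carried by z."""
    logits = W @ z + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    return np.sum(x * np.log(probs) + (1 - x) * np.log(1 - probs))

z = rng.normal(size=latent_dim)
x = (rng.uniform(size=n_pixels) < 0.5).astype(float)
print(factorized_log_likelihood(x, z))
```

Note that the sum over i is exactly the "prod_i p(x_i|z)" from the quote, taken in log space.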



Source: https://www.reddit.com/r//comments/9o7dih/d_question_about_variational_lossy_/
