I’m trying to remember a paper I stumbled across on arXiv in the last few months. I only vaguely remember the gist, but it was about variational autoencoders, and the authors proposed a way to decouple the dimensionality of the latent layer from the visualization of the learned representation.

I think I remember them basically claiming that with their model they weren’t restricted to only two dimensions in the latent layer, but could still extract useful 2-D latent visualizations. Sorry the memory is so vague; I only skimmed the paper, but it recently popped back into my head. Hoping someone else noticed it and kept a better record of it.
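To make the vague description a bit more concrete, here’s a minimal sketch of the kind of setup I mean. This is my own illustration, not the paper’s actual method: a standard PyTorch-style VAE with a latent layer wider than two dimensions, plus an auxiliary linear head that projects latent codes down to 2-D purely for visualization. All names and sizes below are assumptions.

```python
# Illustrative sketch only (not from the paper I'm looking for):
# a VAE with a 16-D latent and a separate 2-D projection head,
# so visualization no longer forces latent_dim == 2.
import torch
import torch.nn as nn

class VAEWithViewHead(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )
        # Auxiliary head: maps the latent code to 2-D coordinates for plotting.
        self.view_head = nn.Linear(latent_dim, 2)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar, self.view_head(z)

model = VAEWithViewHead()
x = torch.randn(8, 784)
recon, mu, logvar, points_2d = model(x)
print(points_2d.shape)  # torch.Size([8, 2]) -- scatter-plottable coordinates
```

The point of the sketch is just that the 16-D latent keeps enough capacity for good reconstructions while the 2-D head gives you something to scatter-plot; if that rings a bell for anyone, it’s probably the paper.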

Thanks, r/ML.



Source: https://www.reddit.com/r//comments/7xlmk2/d__a_paper_with__/
