I’m trying to track down a paper I stumbled across on arXiv in the last few months. I only vaguely remember the gist, but it was about variational autoencoders, and the authors had proposed a way to decouple the dimensionality of the latent layer from the visualization of the learned representation.
I think I remember them claiming that with their model they could avoid restricting the latent layer to just two dimensions during training, yet still extract useful 2-D visualizations of the latent space. Sorry the memory is so vague; I only skimmed the paper, but it recently popped back into my head. Hoping someone else noticed it and kept a better record of it.