I’ve spent the last couple of days researching generative models, but I’m honestly getting a tad overwhelmed. I’d love any input/insight from people who have used any of these (or have others to recommend).

Essentially what I’d like to do is interpolate between two faces, generating realistic-looking faces in-between.
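What I have in mind looks roughly like this. It's a pure-numpy sketch: the latent vectors would really come from a trained encoder, and each frame would be fed through a decoder, neither of which exists here, so those parts are placeholders.

```python
import numpy as np

def slerp(z_a, z_b, t):
    """Spherical interpolation between two latent vectors; often gives
    smoother in-between samples than straight linear interpolation
    in high-dimensional latent spaces."""
    cos_omega = np.dot(z_a / np.linalg.norm(z_a), z_b / np.linalg.norm(z_b))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * z_a + t * z_b
    return (np.sin((1.0 - t) * omega) * z_a + np.sin(t * omega) * z_b) / np.sin(omega)

# z_a and z_b stand in for the encodings of two face images (dummy values here).
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=128), rng.normal(size=128)

# 8 evenly spaced points along the latent path between the two faces.
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
# In a real pipeline, each frame would be decoded back to an image.
```

The endpoints of the path are exactly the two original latents, so the first and last decoded images would be the (reconstructed) input faces.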

I know a little about VAEs, so that was my main starting point, but they have the issue of producing blurry results. I stumbled upon VAE-GANs and was pretty impressed by the results in their paper, but I noticed there haven't been many posts or discussions about them in the past year or so, and I haven't seen anything outside the original paper with results that looked that good. The idea of combining VAEs and GANs does seem intuitively appealing, though.
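For what it's worth, my reading of the VAE-GAN idea (Larsen et al., 2016) is that it keeps the usual KL and GAN terms but measures reconstruction error in the discriminator's feature space instead of pixel space, which is what's supposed to fix the blurriness. Here is a toy numpy sketch of just the loss arithmetic; all network outputs are dummy arrays, so this is not a working model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for network outputs (assumptions; a real model produces these).
mu, logvar = rng.normal(size=64), rng.normal(size=64)   # encoder's posterior params
feat_real = rng.normal(size=256)                        # discriminator features of x
feat_recon = feat_real + 0.1 * rng.normal(size=256)     # same features for x_tilde
d_real, d_recon, d_sampled = 0.9, 0.3, 0.2              # discriminator probabilities

# KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior (standard VAE term).
kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# Reconstruction error measured in discriminator feature space, not pixel space.
recon = 0.5 * np.sum((feat_real - feat_recon) ** 2)

# Standard GAN discriminator loss over real, reconstructed, and sampled images.
gan = -(np.log(d_real) + np.log(1.0 - d_recon) + np.log(1.0 - d_sampled))

total = kl + recon + gan
# In the paper each term updates a different subset of {encoder, decoder,
# discriminator}; summing them here is just to show the pieces.
```

The feature-space reconstruction term is the interesting bit: pixel-wise losses average over plausible outputs (hence blur), while matching discriminator features only requires the reconstruction to look like the input.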

InfoGAN looked really cool as well, but I'm just generally confused by its relationship with VAEs. It's said to learn a disentangled representation: how does that compare with the latent representations that VAEs give you? Can you use it to interpolate between different samples in the same way that you can with autoencoders?
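My rough understanding, for comparison: InfoGAN has no encoder at all. It maximizes a lower bound on the mutual information between a chosen latent code c and the generated image, via an auxiliary head Q that tries to recover c from the output; for a categorical code that bound reduces to a cross-entropy term. A toy numpy sketch of just that term, with the Q head's logits faked as dummy values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)

# A 10-way categorical code c fed to the generator (illustrative assumption).
true_code = 3

# Logits from the auxiliary Q head applied to the generated image (dummy values);
# we nudge the true class up to mimic Q mostly recovering the code.
q_logits = rng.normal(size=10)
q_logits[true_code] += 3.0

# The mutual-information lower bound reduces to this cross-entropy term,
# which generator and Q both minimize so c stays recoverable from the image.
mi_loss = -np.log(softmax(q_logits)[true_code])
```

So the "disentangled" codes aren't a posterior over your data the way a VAE's latents are; there's nothing to encode an existing image with out of the box. But you can still sweep or interpolate c (and the noise z) to walk between generated samples.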

OpenAI's Glow was producing some amazing images, but I was a bit thrown off by people mentioning it took hundreds of GPU-days to train. I'm wondering if it would be possible to scale down the images and make the model shallower, to around 1–2 GPU-days of training time, and still achieve results competitive with the options above.

Honestly, though, any insight or perspective here would help me out a ton. Thanks.

Source link: https://www.reddit.com/r//comments/9fc0ff/d_has_anyone_had__with_generative_models/

