Laplacian Pyramid of Conditional Variational Autoencoders

Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill Campbell, Simon Prince, Ivor Simpson

Research output: Contribution to conference › Paper › peer-review



Variational Autoencoders (VAEs) learn a latent representation of image data that allows natural image generation and manipulation. However, they struggle to generate sharp images. To address this problem, we propose a hierarchy of VAEs analogous to a Laplacian pyramid. Each network models a single pyramid level and is conditioned on the coarser levels. The Laplacian architecture allows for novel image editing applications that take advantage of the coarse-to-fine structure of the model. Our method achieves lower reconstruction error in terms of MSE, even though MSE is the loss function directly minimised by the VAE and not by our model. Furthermore, the reconstructions generated by the proposed model are preferred over those from the VAE by human evaluators.
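The abstract describes modelling each level of a Laplacian pyramid with a separate network. The underlying decomposition can be sketched as follows; this is a generic Laplacian pyramid in NumPy (using simple average-pooling downsampling and nearest-neighbour upsampling as stand-ins for the blur-and-resample operators), not the authors' VAE architecture:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling: a simple stand-in for blur-and-subsample
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling back to twice the resolution
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_laplacian_pyramid(img, levels):
    # Each level stores the residual detail lost when moving to the
    # next coarser scale; the final level is the coarsest image.
    pyramid = []
    current = img
    for _ in range(levels - 1):
        coarse = downsample(current)
        pyramid.append(current - upsample(coarse))
        current = coarse
    pyramid.append(current)
    return pyramid

def reconstruct(pyramid):
    # Coarse-to-fine reconstruction: upsample and add each residual
    img = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        img = upsample(img) + residual
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = build_laplacian_pyramid(img, 3)
assert np.allclose(reconstruct(pyr), img)
```

With these operators the reconstruction is exact by construction, which is what makes the pyramid attractive for coarse-to-fine generation: in the paper's setting, each residual level would be produced by a VAE conditioned on the coarser levels rather than stored directly.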
Original language: English
Number of pages: 9
Publication status: Published - 11 Dec 2017
Event: CVMP 2017: The European Conference on Visual Media Production
Duration: 11 Dec 2017 - 12 Dec 2017




Keywords

  • Deep Neural Networks
  • Generative Models
  • Faces

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
