Laplacian Pyramid of Conditional Variational Autoencoders

Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill Campbell, Simon Prince, Ivor Simpson

Research output: Contribution to conference › Paper › peer-review


Abstract

Variational Autoencoders (VAEs) learn a latent representation of image data that allows natural image generation and manipulation. However, they struggle to generate sharp images. To address this problem, we propose a hierarchy of VAEs analogous to a Laplacian pyramid. Each network models a single pyramid level and is conditioned on the coarser levels. The Laplacian architecture allows for novel image editing applications that exploit the coarse-to-fine structure of the model. Our method achieves lower reconstruction error in terms of MSE, even though MSE is the loss function of the VAE and is not directly minimised by our model. Furthermore, human evaluators prefer the reconstructions generated by the proposed model over those from the VAE.
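
To make the architecture concrete, the following is a minimal sketch, not the authors' implementation, of a Laplacian pyramid of conditional VAEs in PyTorch. The decomposition routine, the tiny per-level networks, and names such as laplacian_pyramid, LevelCVAE and latent_dim are illustrative assumptions; the actual model, capacities and training losses follow the paper.

# Minimal illustrative sketch (assumed names and architectures, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def laplacian_pyramid(x, levels=3):
    """Decompose an image batch into band-pass levels plus a coarse residual."""
    bands = []
    current = x
    for _ in range(levels - 1):
        down = F.avg_pool2d(current, 2)
        up = F.interpolate(down, scale_factor=2, mode="bilinear", align_corners=False)
        bands.append(current - up)   # high-frequency detail at this scale
        current = down
    bands.append(current)            # coarsest (low-pass) level
    return bands                     # finest band first, coarsest last


class LevelCVAE(nn.Module):
    """One pyramid level: a VAE conditioned on the upsampled coarser reconstruction."""

    def __init__(self, channels=3, latent_dim=32):
        super().__init__()
        # The encoder sees the band-pass image concatenated with its conditioning.
        self.enc = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * latent_dim, 3, stride=2, padding=1),
        )
        # The decoder maps the latent (plus downsampled conditioning) back to a band.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(latent_dim + channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1),
        )

    def forward(self, band, conditioning):
        h = self.enc(torch.cat([band, conditioning], dim=1))
        mu, logvar = h.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        cond = F.interpolate(conditioning, size=z.shape[-2:], mode="bilinear",
                             align_corners=False)
        recon_band = self.dec(torch.cat([z, cond], dim=1))
        return recon_band, mu, logvar


if __name__ == "__main__":
    x = torch.randn(4, 3, 64, 64)              # toy image batch
    bands = laplacian_pyramid(x, levels=3)     # spatial sizes 64, 32, 16
    models = [LevelCVAE() for _ in bands[:-1]]

    # Coarse-to-fine reconstruction: condition each finer level's CVAE on the
    # upsampled reconstruction assembled from all coarser levels.
    recon = bands[-1]                          # coarsest level taken as given here
    for band, model in zip(reversed(bands[:-1]), models):
        recon = F.interpolate(recon, scale_factor=2, mode="bilinear", align_corners=False)
        detail, mu, logvar = model(band, recon)
        recon = recon + detail                 # add the predicted detail band
    print(recon.shape)                         # torch.Size([4, 3, 64, 64])

In this sketch the reconstruction proceeds coarse to fine: each level's network sees only its own band-pass detail and the upsampled reconstruction of the coarser levels, which is the structure the abstract refers to when describing scale-by-scale editing applications.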
Original language: English
Number of pages: 9
DOIs
Publication status: Published - 11 Dec 2017
Event: CVMP 2017: The European Conference on Visual Media Production
Duration: 11 Dec 2017 – 12 Dec 2017

Conference

Conference: CVMP 2017: The European Conference on Visual Media Production
Period: 11/12/17 – 12/12/17

Keywords

  • Deep Neural Networks
  • Generative Models
  • Faces

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
