Laplacian Pyramid of Conditional Variational Autoencoders

Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill Campbell, Simon Prince, Ivor Simpson

Research output: Contribution to conference › Paper

Abstract

Variational Autoencoders (VAE) learn a latent representation of image data that allows natural image generation and manipulation. However, they struggle to generate sharp images. To address this problem, we propose a hierarchy of VAEs analogous to a Laplacian pyramid. Each network models a single pyramid level, and is conditioned on the coarser levels. The Laplacian architecture allows for novel image editing applications that take advantage of the coarse-to-fine structure of the model. Our method achieves lower reconstruction error in terms of MSE, which is the loss function of the VAE and is not directly minimised in our model. Furthermore, the reconstructions generated by the proposed model are preferred over those from the VAE by human evaluators.
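
The abstract describes an architecture in which each Laplacian pyramid level is modelled by its own VAE, conditioned on the coarser levels. As a rough illustration only, the sketch below (PyTorch, not the authors' code) builds a two-level Laplacian decomposition and attaches a small convolutional conditional VAE to each level; the layer sizes, spatial latent maps, KL weight and the plain per-level MSE term are illustrative assumptions, not the loss formulation of the paper.

# Minimal sketch (NOT the authors' implementation) of a Laplacian pyramid of
# conditional VAEs: each level's residual is modelled by a VAE conditioned on
# the upsampled coarser image. Sizes, spatial latents and the simple MSE+KL
# per-level loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def laplacian_pyramid(x, levels=2):
    """Decompose a batch of images into per-level residuals plus the coarsest image."""
    residuals, current = [], x
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)                 # coarser level
        up = F.interpolate(down, scale_factor=2, mode="bilinear", align_corners=False)
        residuals.append(current - up)                              # high-frequency detail
        current = down
    return residuals, current


class ConditionalLevelVAE(nn.Module):
    """VAE for one level's residual, conditioned on the upsampled coarser reconstruction."""

    def __init__(self, channels=3, hidden=32, latent=8):
        super().__init__()
        # Encoder sees the residual concatenated with the conditioning image.
        self.enc = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * latent, 3, padding=1),            # mean and log-variance maps
        )
        # Decoder sees the latent sample concatenated with the conditioning image.
        self.dec = nn.Sequential(
            nn.Conv2d(latent + channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, residual, condition):
        mu, logvar = self.enc(torch.cat([residual, condition], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)     # reparameterisation trick
        recon = self.dec(torch.cat([z, condition], dim=1))
        rec_loss = F.mse_loss(recon, residual)                      # plain MSE term: a simplification
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, rec_loss + 1e-3 * kl


if __name__ == "__main__":
    x = torch.rand(4, 3, 64, 64)                                    # dummy image batch
    residuals, coarsest = laplacian_pyramid(x, levels=2)
    vaes = [ConditionalLevelVAE() for _ in residuals]

    # Coarse-to-fine reconstruction: upsample, then add each level's modelled residual.
    recon, total_loss = coarsest, 0.0
    for vae, res in zip(reversed(vaes), reversed(residuals)):
        recon = F.interpolate(recon, scale_factor=2, mode="bilinear", align_corners=False)
        modelled_res, level_loss = vae(res, recon)
        recon = recon + modelled_res
        total_loss = total_loss + level_loss
    print(recon.shape, total_loss.item())

Reconstruction runs coarse-to-fine: the coarsest image is upsampled and each level's VAE adds back a modelled residual, which is the structure that supports the level-wise editing applications mentioned in the abstract.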

Conference

Conference: CVMP 2017: The European Conference on Visual Media Production
Period: 11/12/17 – 12/12/17

Keywords

  • Deep Neural Networks
  • Generative Models
  • Faces

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

Cite this

Dorta, G., Vicente, S., Agapito, L., Campbell, N., Prince, S., & Simpson, I. (2017). Laplacian Pyramid of Conditional Variational Autoencoders. Paper presented at CVMP 2017: The European Conference on Visual Media Production. https://doi.org/10.1145/3150165.3150172

@conference{542165584bd348818e087d42aa8e69c6,
title = "Laplacian Pyramid of Conditional Variational Autoencoders",
abstract = "Variational Autoencoders (VAE) learn a latent representation of image data that allows natural image generation and manipulation. However, they struggle to generate sharp images.To address this problem, we propose a hierarchy of VAEs analogous to a Laplacian pyramid. Each network models a single pyramid level, and is conditioned on the coarser levels. The Laplacian architecture allows for novel image editing applications that take advantage of the coarse to fine structure of the model. Our method achieves lower reconstruction error in terms of MSE, which is the loss function of the VAE and is not directly minimised in our model. Furthermore, the reconstructions generated by the proposed model are preferred over those from the VAE by human evaluators.",
keywords = "Deep Neural Networks, Generative Models, Faces",
author = "Garoe Dorta and Sara Vicente and Lourdes Agapito and Neill Campbell and Simon Prince and Ivor Simpson",
year = "2017",
month = "12",
day = "11",
doi = "10.1145/3150165.3150172",
language = "English",
note = "CVMP 2017: The European Conference on Visual Media Production ; Conference date: 11-12-2017 Through 12-12-2017",

}
