Unsupervised Attention-guided Image-to-Image Translation

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

26 Citations (Scopus)

Abstract

Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene. Motivated by the important role of attention in human perception, we tackle this limitation by introducing unsupervised attention mechanisms that are jointly adversarially trained with the generators and discriminators. We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches.
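As a rough illustration of the mechanism the abstract describes, the sketch below shows how a learned attention mask can restrict translation to the relevant foreground while passing the background through unchanged. It is a minimal PyTorch-style sketch under our own assumptions: the class and module names (AttentionGuidedTranslator, attention_net, generator) are illustrative, not the authors' released code.

    # Illustrative sketch of attention-guided image-to-image translation.
    # An attention network predicts a soft foreground mask; only the attended
    # region is replaced by the generator's output, so the background is
    # carried over from the input unchanged.
    import torch
    import torch.nn as nn

    class AttentionGuidedTranslator(nn.Module):
        def __init__(self, attention_net: nn.Module, generator: nn.Module):
            super().__init__()
            self.attention_net = attention_net  # predicts a mask in [0, 1], shape (B, 1, H, W)
            self.generator = generator          # maps source-domain images to the target domain

        def forward(self, source: torch.Tensor) -> torch.Tensor:
            mask = self.attention_net(source)    # soft attention over the input image
            translated = self.generator(source)  # full-image translation
            # Composite: translated foreground + untouched background.
            return mask * translated + (1.0 - mask) * source

In a setup like this, the composite image (rather than the raw generator output) is what the discriminator judges during adversarial training, which encourages the attention network to isolate the regions that actually distinguish the two domains, without any mask supervision.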
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 31 (NIPS), 2018
Editors: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett
Publisher: Neural Information Processing Systems Foundation, Inc.
Pages: 1-22
Number of pages: 22
Publication status: Published - 31 Dec 2018
Event: NIPS 2018 - 32nd Conference on Neural Information Processing Systems
Duration: 3 Dec 2018 - 8 Dec 2018

Publication series

Name: NIPS Proceedings
Publisher: Neural Information Processing Systems Foundation, Inc.
ISSN (Electronic): 1049-5258

Conference

Conference: NIPS 2018 - 32nd Conference on Neural Information Processing Systems
Period: 3/12/18 - 8/12/18

Cite this

Alami Mejjati, Y., Richardt, C., Tompkin, J., Cosker, D., & Kim, K. I. (2018). Unsupervised Attention-guided Image-to-Image Translation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 31 (NIPS), 2018 (pp. 1-22). (NIPS Proceedings). Neural Information Processing Systems Foundation, Inc. https://arxiv.org/pdf/1806.02311.pdf