Example-guided image synthesis aims to synthesize an image from a semantic label map and an exemplary image indicating style. We use the term “style” in this problem to refer to implicit characteristics of images, for example: in portraits “style” includes gender, racial identity, age, and hairstyle; in full-body pictures it includes clothing; in street scenes it refers to weather, time of day, and the like. A semantic label map in these cases indicates facial expression, full-body pose, or scene segmentation. We propose a solution to the example-guided image synthesis problem using conditional generative adversarial networks with style consistency. Our key contributions are (i) a novel style consistency discriminator to determine whether a pair of images are consistent in style; (ii) an adaptive semantic consistency loss; and (iii) a training data sampling strategy, for synthesizing results that are style-consistent with the exemplar. We demonstrate the effectiveness of our method on face, dance, and street view synthesis tasks.
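The abstract describes a discriminator that judges whether a *pair* of images share a style, trained on sampled consistent and inconsistent pairs. The sketch below is only our own minimal illustration of that pairing idea, not the authors' architecture: a real style consistency discriminator would be a convolutional network, whereas here a single sigmoid-activated linear scorer over the channel-wise concatenation of the two images stands in for it, and the pair-sampling scheme (perturbed crops of one image as "consistent", unrelated images as "inconsistent") is a hypothetical stand-in for sampling frames of the same subject.

```python
import numpy as np

def style_consistency_score(img_a, img_b, w, b):
    """Score a pair of images for style consistency in (0, 1).

    Illustrative only: the two images are concatenated along the
    channel axis and passed through a single linear layer plus a
    sigmoid, standing in for a learned pair discriminator.
    """
    pair = np.concatenate([img_a, img_b], axis=-1)  # shape (H, W, 2C)
    logit = pair.reshape(-1) @ w + b
    return float(1.0 / (1.0 + np.exp(-logit)))

rng = np.random.default_rng(0)
H, W, C = 8, 8, 3
w = 0.01 * rng.normal(size=H * W * 2 * C)  # untrained toy weights
b = 0.0

# "Consistent" pair: one image plus a slightly perturbed copy,
# standing in for two frames of the same subject or scene.
base = rng.normal(size=(H, W, C))
consistent = (base, base + 0.05 * rng.normal(size=(H, W, C)))
# "Inconsistent" pair: two unrelated images.
inconsistent = (base, rng.normal(size=(H, W, C)))

s_pos = style_consistency_score(*consistent, w, b)
s_neg = style_consistency_score(*inconsistent, w, b)
print(s_pos, s_neg)  # both in (0, 1); untrained, so not yet informative
```

During adversarial training such a scorer would be optimized so that consistent pairs receive high scores and inconsistent pairs low ones, giving the generator a signal to match the exemplar's style.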
|Name||2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)|
|Country/Territory||United States|
|Period||16/06/19 → 20/06/19|