Research has shown that convolutional neural networks for object recognition are vulnerable to changes in depiction because learning is biased towards the low-level statistics of texture patches. Recent works concentrate on improving robustness by applying style transfer to training examples to mitigate over-fitting to a single depiction style. These approaches improve performance, but they ignore the geometric variation in object shape that real art exhibits: artists deform and warp objects for artistic effect. Motivated by this observation, we propose a method that reduces bias by jointly increasing the texture and geometry diversity of the training data. In effect, we extend the visual object class to include examples with the kinds of shape changes that artists use. Specifically, we learn the distribution of warps that covers each given object class. Together with augmenting textures drawn from a broad distribution of styles, we show experimentally that our method improves performance on several cross-domain benchmarks.
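The abstract describes two augmentation channels applied jointly: a geometric warp sampled per object class and a textural (style) perturbation. The sketch below is a minimal, hypothetical illustration of that pipeline in NumPy, not the paper's implementation: the smooth random displacement field stands in for sampling from a learned per-class warp distribution, and a per-channel colour jitter stands in for full style transfer. All function names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_warp(img, strength=0.05, grid=4):
    """Smooth random geometric warp (stand-in for sampling from a
    learned distribution of artistic warps). Expects H and W to be
    divisible by `grid`."""
    h, w = img.shape[:2]
    # Coarse displacement field, upsampled by block repetition
    dx = np.kron(rng.normal(0, strength * w, (grid, grid)),
                 np.ones((h // grid, w // grid)))
    dy = np.kron(rng.normal(0, strength * h, (grid, grid)),
                 np.ones((h // grid, w // grid)))
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(ys + dy, 0, h - 1).astype(int)
    src_x = np.clip(xs + dx, 0, w - 1).astype(int)
    return img[src_y, src_x]

def texture_jitter(img, strength=0.3):
    """Crude textural perturbation (stand-in for style transfer):
    per-channel affine jitter plus additive noise."""
    gain = 1 + rng.uniform(-strength, strength, (1, 1, img.shape[2]))
    bias = rng.uniform(-strength, strength, (1, 1, img.shape[2]))
    noise = rng.normal(0, strength * 0.1, img.shape)
    return np.clip(img * gain + bias + noise, 0.0, 1.0)

def augment(img):
    """Joint geometric + textural augmentation of one training image."""
    return texture_jitter(random_warp(img))

img = rng.uniform(size=(64, 64, 3))  # dummy RGB image in [0, 1]
out = augment(img)
print(out.shape)  # (64, 64, 3)
```

In practice the paper learns the warp distribution from examples of each object class and uses style transfer over a broad style distribution; a practical training loop would apply `augment` on the fly to each mini-batch.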
Original language: English
Publication status: Published - 28 Mar 2022
Event: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022
Duration: 19 Jun 2022 – 24 Jun 2022




Title: Geometric and Textural Augmentation for Domain Gap Reduction