Abstract
Despite the recent impressive breakthroughs in text-to-image generation, generative models have difficulty capturing the data distribution of underrepresented attribute compositions while over-memorizing overrepresented attribute compositions, which raises public concerns about their robustness and fairness. To tackle this challenge, we propose ACTIG, an attribute-centric compositional text-to-image generation framework. We present an attribute-centric feature augmentation and a novel image-free training scheme, which greatly improve the model's ability to generate images with underrepresented attributes. We further propose an attribute-centric contrastive loss to avoid overfitting to overrepresented attribute compositions. We validate our framework on the CelebA-HQ and CUB datasets. Extensive experiments show that ACTIG achieves outstanding compositional generalization and outperforms previous works in terms of image quality and text-image consistency. The source code and trained models are publicly available at https://github.com/yrcong/ACTIG.
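For illustration only, the sketch below shows a generic attribute-conditioned contrastive loss in the InfoNCE style, which matches the high-level idea described in the abstract (aligning image features with their own attribute composition while pushing away other compositions). It is not the exact ACTIG formulation; the function name `attribute_contrastive_loss` and the temperature `tau` are hypothetical choices.

```python
# Hypothetical sketch of an attribute-conditioned contrastive (InfoNCE-style) loss.
# NOT the exact ACTIG loss; it only illustrates pulling image features toward their
# own attribute-text embeddings and pushing them away from other compositions in the batch.
import torch
import torch.nn.functional as F

def attribute_contrastive_loss(img_feats: torch.Tensor,
                               attr_feats: torch.Tensor,
                               tau: float = 0.07) -> torch.Tensor:
    """img_feats: (B, D) image features; attr_feats: (B, D) attribute-text features."""
    img_feats = F.normalize(img_feats, dim=-1)
    attr_feats = F.normalize(attr_feats, dim=-1)
    logits = img_feats @ attr_feats.t() / tau          # (B, B) similarity matrix
    targets = torch.arange(img_feats.size(0), device=img_feats.device)
    # Symmetric cross-entropy: each image should match its own attribute composition.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```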
| Original language | English |
|---|---|
| Journal | International Journal of Computer Vision |
| Early online date | 13 Mar 2025 |
| Publication status | E-pub ahead of print - 13 Mar 2025 |
Data Availability Statement
The data used during the current study are publicly available. The source code and trained models are publicly available at https://github.com/yrcong/ACTIG.

Funding
This work has been supported by the Federal Ministry of Education and Research (BMBF) under the project LeibnizKILabor (grant no. 01DD20003), the Center for Digital Innovations (ZDIN), the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122), and EU HORIZON-CL42023-HUMAN-01-CNECT XTREME (grant no. 101136006). We sincerely appreciate the valuable comments and suggestions from all reviewers, which helped us improve the quality of the paper.