Abstract

CLIP is a widely used foundation vision-language model for zero-shot image recognition and other image-text alignment tasks. We demonstrate that CLIP is vulnerable to degradation in image quality under compression. This surprising result is further analysed using an attribution method, Integrated Gradients. Using this attribution method, we are able to understand, both quantitatively and qualitatively, exactly how compression affects the zero-shot recognition accuracy of the model. We evaluate this extensively on CIFAR-10 and STL-10. Our work provides a basis for understanding this vulnerability of CLIP and can help develop more effective methods to improve the robustness of CLIP and other vision-language models.
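The abstract describes measuring how CLIP's zero-shot accuracy changes under image compression. Below is a minimal sketch of such an evaluation, not the authors' exact protocol: it uses the Hugging Face `transformers` CLIP checkpoint, torchvision's CIFAR-10 test split, and a simple JPEG round-trip at several quality levels; the prompt template, sample count, and quality grid are illustrative assumptions.

```python
# Sketch: CLIP zero-shot accuracy on CIFAR-10 under JPEG compression.
# Assumptions: openai/clip-vit-base-patch32 weights, "a photo of a {class}"
# prompts, and an arbitrary quality grid; none of this is taken from the paper.
import io
import torch
from PIL import Image
from torchvision.datasets import CIFAR10
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

dataset = CIFAR10(root="data", train=False, download=True)
prompts = [f"a photo of a {c}" for c in dataset.classes]

def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip a PIL image through JPEG at the given quality setting."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

@torch.no_grad()
def zero_shot_accuracy(quality: int, n_samples: int = 1000) -> float:
    """Fraction of compressed test images whose top CLIP match is the true class."""
    correct = 0
    for i in range(n_samples):
        img, label = dataset[i]
        img = jpeg_compress(img, quality)
        inputs = processor(text=prompts, images=img,
                           return_tensors="pt", padding=True)
        logits = model(**inputs).logits_per_image  # shape (1, num_classes)
        correct += int(logits.argmax(dim=-1).item() == label)
    return correct / n_samples

for q in (95, 75, 50, 25, 10):
    print(f"JPEG quality {q}: accuracy {zero_shot_accuracy(q):.3f}")
```

A per-image attribution analysis in the spirit of the paper could then be layered on top, for example by applying an Integrated Gradients implementation (such as Captum's) to the image embedding pathway for the original and compressed versions of the same image.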
Original language: English
Publication status: Published - 23 Nov 2023
Event: Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models at NeurIPS 2023 (R0-FoMo)
Duration: 15 Dec 2023 → …
