Vision Transformer Adapters for Generalizable Multitask Learning

Deblina Bhattacharjee, Sabine Süsstrunk, Mathieu Salzmann

Research output: Chapter or section in a book/report/conference proceeding › Chapter in a published conference proceeding

13 Citations (SciVal)

Abstract

We introduce the first multitasking vision transformer adapters that learn generalizable task affinities which can be applied to novel tasks and domains. Integrated into an off-the-shelf vision transformer backbone, our adapters can simultaneously solve multiple dense vision tasks in a parameter-efficient manner, unlike existing multitasking transformers that are parametrically expensive. In contrast to concurrent methods, we do not require retraining or fine-tuning whenever a new task or domain is added. We introduce a task-adapted attention mechanism within our adapter framework that combines gradient-based task similarities with attention-based ones. The learned task affinities generalize to the following settings: zero-shot task transfer, unsupervised domain adaptation, and generalization without fine-tuning to novel domains. We demonstrate that our approach outperforms not only the existing convolutional neural network-based multitasking methods but also the vision transformer-based ones. Our project page is at https://ivrl.github.io/VTAGML.
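As a rough illustration of the parameter-efficient setup the abstract describes, the sketch below freezes a generic transformer backbone, interleaves small trainable bottleneck adapters, and derives a gradient-based task-affinity matrix from per-task losses. This is a minimal sketch under stated assumptions: the names (BottleneckAdapter, AdaptedBackbone, gradient_task_affinity), the bottleneck width, and the cosine-similarity formulation are illustrative, not the authors' released code, and the task-adapted attention mechanism that combines gradient-based and attention-based similarities is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project.

    A generic parameter-efficient adapter in the spirit of the paper; the
    authors' module additionally carries task-adapted attention, omitted here.
    """

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual path leaves the frozen backbone's features intact;
        # only the small bottleneck is trained.
        return x + self.up(self.act(self.down(x)))


class AdaptedBackbone(nn.Module):
    """Freeze a stack of transformer blocks; train only interleaved adapters."""

    def __init__(self, blocks: nn.ModuleList, dim: int):
        super().__init__()
        self.blocks = blocks
        for p in self.blocks.parameters():
            p.requires_grad = False  # backbone stays off-the-shelf
        self.adapters = nn.ModuleList(BottleneckAdapter(dim) for _ in blocks)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) from a frozen patch embedding.
        for block, adapter in zip(self.blocks, self.adapters):
            tokens = adapter(block(tokens))
        return tokens


def gradient_task_affinity(task_losses, adapter_params):
    """Cosine similarity between per-task gradients on the shared adapters.

    A common proxy for gradient-based task affinity; the paper combines such
    similarities with attention-based ones in its own formulation.
    """
    grads = []
    for loss in task_losses:
        g = torch.autograd.grad(loss, adapter_params, retain_graph=True)
        grads.append(torch.cat([gi.flatten() for gi in g]))
    G = F.normalize(torch.stack(grads), dim=1)
    return G @ G.T  # (num_tasks, num_tasks) affinity matrix
```

For instance, `blocks` could be an `nn.ModuleList` of `nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)` modules standing in for a pretrained ViT encoder, and `adapter_params` would be `list(model.adapters.parameters())`, so that only the adapters receive gradients while the backbone stays fixed.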
Original language: English
Title of host publication: 2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Place of publication: U.S.A.
Publisher: IEEE
Pages: 18969-18980
ISBN (electronic): 979-8-3503-0718-4
DOIs:
Publication status: Published - 1 Oct 2023

