Vision Transformer Adapters for Generalizable Multitask Learning

Deblina Bhattacharjee, Sabine Süsstrunk, Mathieu Salzmann

Research output: Working paper / Preprint


Abstract

We introduce the first multitasking vision transformer adapters that learn generalizable task affinities which can be applied to novel tasks and domains. Integrated into an off-the-shelf vision transformer backbone, our adapters can simultaneously solve multiple dense vision tasks in a parameter-efficient manner, unlike existing multitasking transformers that are parametrically expensive. In contrast to concurrent methods, we do not require retraining or fine-tuning whenever a new task or domain is added. We introduce a task-adapted attention mechanism within our adapter framework that combines gradient-based task similarities with attention-based ones. The learned task affinities generalize to the following settings: zero-shot task transfer, unsupervised domain adaptation, and generalization without fine-tuning to novel domains. We demonstrate that our approach outperforms not only the existing convolutional neural network-based multitasking methods but also the vision transformer-based ones. Our project page is at https://ivrl.github.io/VTAGML.
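To make the task-adapted attention idea in the abstract more concrete, below is a minimal PyTorch sketch of a per-task bottleneck adapter whose outputs are mixed by a task-affinity matrix blending an attention-based similarity (from learned task embeddings) with a gradient-based one supplied externally. All names (TaskAdaptedAdapter, grad_affinity, alpha) and design choices here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a multitask adapter with task-adapted attention.
# Per-task bottleneck adapters are combined via a task-affinity matrix that
# blends an attention-based similarity (learned task embeddings) with a
# gradient-based similarity assumed to be estimated outside this module.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskAdaptedAdapter(nn.Module):
    def __init__(self, dim: int, num_tasks: int, bottleneck: int = 64):
        super().__init__()
        self.num_tasks = num_tasks
        # One lightweight bottleneck adapter per task (parameter-efficient).
        self.down = nn.ModuleList([nn.Linear(dim, bottleneck) for _ in range(num_tasks)])
        self.up = nn.ModuleList([nn.Linear(bottleneck, dim) for _ in range(num_tasks)])
        # Learned task embeddings for the attention-based task similarity.
        self.task_emb = nn.Parameter(torch.randn(num_tasks, dim) * 0.02)
        # Gradient-based task similarity, assumed to be computed elsewhere
        # (e.g. cosine similarity of task-loss gradients) and stored as a buffer.
        self.register_buffer("grad_affinity", torch.eye(num_tasks))
        # Learnable mixing weight between the two similarity sources.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def task_affinity(self) -> torch.Tensor:
        # Attention-based similarity between task embeddings.
        attn_affinity = F.softmax(
            self.task_emb @ self.task_emb.t() / self.task_emb.shape[-1] ** 0.5, dim=-1
        )
        grad_affinity = F.softmax(self.grad_affinity, dim=-1)
        a = torch.sigmoid(self.alpha)
        return a * attn_affinity + (1.0 - a) * grad_affinity  # (T, T)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) features from a frozen ViT block.
        per_task = torch.stack(
            [up(F.gelu(down(tokens))) for down, up in zip(self.down, self.up)], dim=1
        )  # (B, T, N, dim)
        affinity = self.task_affinity()  # (T, T)
        # Each task's output is an affinity-weighted mix of all task adapters.
        mixed = torch.einsum("st,btnd->bsnd", affinity, per_task)
        return tokens.unsqueeze(1) + mixed  # residual, one stream per task


if __name__ == "__main__":
    adapter = TaskAdaptedAdapter(dim=192, num_tasks=3)
    x = torch.randn(2, 197, 192)  # e.g. ViT-Tiny tokens
    print(adapter(x).shape)  # torch.Size([2, 3, 197, 192])
```

In this sketch, adding a new task would only require a new bottleneck pair and task embedding while the backbone stays frozen, which is one way a parameter-efficient design of this kind could avoid retraining the whole model.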
Original language: English
Publisher: arXiv
Publication status: Published - 23 Aug 2023

Bibliographical note

Accepted to ICCV 2023

Keywords

  • cs.CV
  • cs.CL
