Abstract
We introduce the first multitasking vision transformer adapters that learn generalizable task affinities which can be applied to novel tasks and domains. Integrated into an off-the-shelf vision transformer backbone, our adapters can simultaneously solve multiple dense vision tasks in a parameter-efficient manner, unlike existing multitasking transformers that are parametrically expensive. In contrast to concurrent methods, we do not require retraining or fine-tuning whenever a new task or domain is added. We introduce a task-adapted attention mechanism within our adapter framework that combines gradient-based task similarities with attention-based ones. The learned task affinities generalize to the following settings: zero-shot task transfer, unsupervised domain adaptation, and generalization without fine-tuning to novel domains. We demonstrate that our approach outperforms not only the existing convolutional neural network-based multitasking methods but also the vision transformer-based ones. Our project page is at https://ivrl.github.io/VTAGML.
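The abstract does not spell out the formulation of the task-adapted attention, so the following is only a minimal illustrative sketch of the general idea it describes: blending an attention-based task-similarity matrix with a precomputed gradient-based one inside an adapter block. All names (`TaskAdaptedAttention`, `grad_affinity`, the pooling and mixing choices) are hypothetical assumptions, not the paper's actual implementation; it assumes PyTorch.

```python
# Hypothetical sketch of a task-adapted attention adapter (not the paper's code).
import torch
import torch.nn as nn


class TaskAdaptedAttention(nn.Module):
    """Cross-task attention whose weights are blended with a task-affinity prior."""

    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        # Learnable mixing weight between attention-based and gradient-based affinities (assumption).
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.num_tasks = num_tasks

    def forward(self, task_feats: torch.Tensor, grad_affinity: torch.Tensor) -> torch.Tensor:
        # task_feats: (B, T, N, D) per-task token features from a shared ViT backbone.
        # grad_affinity: (T, T) similarity of per-task gradients (e.g. cosine similarity
        # of gradients on shared parameters), precomputed outside this module.
        B, T, N, D = task_feats.shape
        # Pool tokens into one descriptor per task and compute task-to-task attention.
        task_desc = task_feats.mean(dim=2)                                            # (B, T, D)
        q, k = self.q(task_desc), self.k(task_desc)                                   # (B, T, D)
        attn_affinity = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)   # (B, T, T)
        grad_prior = torch.softmax(grad_affinity, dim=-1).expand(B, -1, -1)           # (B, T, T)
        # Blend the two notions of task similarity.
        affinity = self.alpha * attn_affinity + (1 - self.alpha) * grad_prior
        # Mix per-task features according to the blended affinities.
        v = self.v(task_feats)                                                        # (B, T, N, D)
        mixed = torch.einsum("bts,bsnd->btnd", affinity, v)
        return task_feats + mixed                                                     # residual, adapter-style update


if __name__ == "__main__":
    torch.manual_seed(0)
    adapter = TaskAdaptedAttention(dim=64, num_tasks=3)
    feats = torch.randn(2, 3, 16, 64)           # batch of 2, 3 tasks, 16 tokens each
    grads = torch.randn(3, 3).softmax(dim=-1)   # stand-in gradient-similarity matrix
    out = adapter(feats, grads)
    print(out.shape)                            # torch.Size([2, 3, 16, 64])
```

Because the affinity matrix is learned over tasks rather than tied to a fixed output head, a block of this shape could in principle be reused for novel tasks without retraining, which is the generalization property the abstract emphasizes.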
Original language | English
---|---
Title of host publication | 2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Place of Publication | U.S.A.
Publisher | IEEE
Pages | 18969-18980
ISBN (Electronic) | 979-8-3503-0718-4
DOIs | |
Publication status | Published - 1 Oct 2023