TransRank: Self-supervised Video Representation Learning via Ranking-based Transformation Recognition

Haodong Duan, Nanxuan Zhao, Kai Chen, Dahua Lin

Research output: Chapter in a published conference proceeding

8 Citations (SciVal)


Recognizing the transformation types applied to a video clip (RecogTrans) is a long-established paradigm for self-supervised video representation learning, yet in recent works it performs much worse than instance discrimination approaches (InstDisc). However, based on a thorough comparison of representative RecogTrans and InstDisc methods, we observe the great potential of RecogTrans on both semantic-related and temporal-related downstream tasks. Based on hard-label classification, existing RecogTrans approaches suffer from noisy supervision signals in pre-training. To mitigate this problem, we develop TransRank, a unified framework for recognizing Transformations in a Ranking formulation. TransRank provides accurate supervision signals by recognizing transformations relatively, consistently outperforming the classification-based formulation. Meanwhile, the unified framework can be instantiated with an arbitrary set of temporal or spatial transformations, demonstrating good generality. With a ranking-based formulation and several empirical practices, we achieve competitive performance on video retrieval and action recognition. Under the same setting, TransRank surpasses the previous state-of-the-art method [28] by 6.4% on UCF101 and 8.3% on HMDB51 for action recognition (Top-1 Acc), and improves video retrieval on UCF101 by 20.4% (R@1). The promising results validate that RecogTrans remains a paradigm worth exploring for video self-supervised learning. Code will be released.
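The abstract's core idea — recognizing the applied transformation *relatively* via ranking, rather than through hard-label classification — can be illustrated with a minimal hinge-ranking sketch. This is not the paper's actual loss; the function name, margin value, and input layout below are hypothetical assumptions chosen for illustration only.

```python
def transrank_loss(logits, applied, margin=0.5):
    """Hedged sketch of a ranking-based transformation-recognition loss.

    Instead of hard-label cross-entropy over transformation classes, rank
    the score of the transformation actually applied to each clip above
    the scores of the other candidate transformations of the *same* clip
    by at least `margin` (a hypothetical choice, not from the paper).

    logits:  per-clip lists of scores over T candidate transformations
    applied: per-clip index of the transformation actually applied
    """
    total = 0.0
    for scores, pos_idx in zip(logits, applied):
        pos = scores[pos_idx]
        # Hinge: penalize any non-applied transformation whose score
        # comes within `margin` of the applied transformation's score.
        viol = [max(0.0, s - pos + margin)
                for i, s in enumerate(scores) if i != pos_idx]
        total += sum(viol) / len(viol)
    return total / len(logits)


# A well-separated positive incurs zero loss; a close negative is penalized.
print(transrank_loss([[2.0, 0.0, 0.0]], [0]))  # 0.0
print(transrank_loss([[1.0, 0.8, 0.0]], [0]))  # 0.15
```

Because each clip is only compared against itself, the supervision signal stays meaningful even when a transformation's absolute appearance varies across clips — which is the intuition behind the "relative recognition" claim in the abstract.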

Original language: English
Title of host publication: Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Number of pages: 11
ISBN (Electronic): 9781665469463
Publication status: Published - 24 Jun 2022
Event: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 - New Orleans, United States
Duration: 19 Jun 2022 - 24 Jun 2022

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919


Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Country/Territory: United States
City: New Orleans

Bibliographical note

Funding Information:
To conclude, we demonstrate the great potential of RecogTrans-based video self-supervised learning by introducing a unified framework named TransRank. We have shown its effectiveness through extensive ablation studies and comparisons with state-of-the-art methods. Given the initial success of marrying RecogTrans with InstDisc [9, 27, 57], how to use TransRank to further boost this research line is also worth exploring. We will release our code and pre-trained models to facilitate future research.

Broader Impact. Self-supervised learning is a data-hungry task that consumes expensive computational resources, though it mitigates the effort and expense of collecting annotations. Since we have verified our model on multiple aspects and downstream tasks, we hope our released code and models can serve as a solid baseline for RecogTrans methods and deliver good initializations that benefit downstream tasks. Besides, data-driven methods often risk learning biases and preserving them in downstream tasks. We encourage users to carefully consider the consequences of these biases when adopting our model.

Acknowledgement. This study is supported by the General Research Funds (GRF) of Hong Kong (No. 14203518) and the Shanghai Committee of Science and Technology, China (No. 20DZ1100800).


Keywords

  • Representation learning
  • Self- & semi- & meta- & unsupervised learning
  • Video analysis and understanding

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition


