Eclectic domain mixing for effective adaptation in action spaces

Arshad Jamal, Dipti Deodhare, Vinay Namboodiri, K. S. Venkatesh

Research output: Contribution to journal › Article › peer-review


Abstract

Although videos appear to be very high-dimensional in terms of duration × frame-rate × resolution, temporal smoothness constraints ensure that their intrinsic dimensionality is much lower. In this paper, we use this idea to investigate Domain Adaptation (DA) in videos, an area that remains under-explored. An approach that has worked well for image DA is based on subspace modeling of the source and target domains, under the assumption that the two domains share a latent subspace in which the domain shift can be reduced or eliminated. We first extend three subspace-based image DA techniques to human action recognition and then combine them with our proposed Eclectic Domain Mixing (EDM) approach to improve the effectiveness of the adaptation. Further, we use discrepancy measures such as Symmetrized KL Divergence and Target Density Around Source for an empirical study of the proposed EDM approach. While this work focuses mainly on domain adaptation in videos, for completeness we evaluate our approach comprehensively on both object and action datasets. We achieve consistent improvements over the chosen baselines and obtain state-of-the-art results on some of the datasets.
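
As an illustration of one of the discrepancy measures named in the abstract, the minimal sketch below computes a Symmetrized KL Divergence between source- and target-domain feature sets by fitting a Gaussian to each set and summing the two directed KL divergences. The Gaussian modelling, the regularization term, and the function names (gaussian_kl, symmetrized_kl) are assumptions made for this example only and are not taken from the paper's implementation.

    import numpy as np

    def gaussian_kl(mu0, cov0, mu1, cov1):
        # Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians.
        k = mu0.shape[0]
        cov1_inv = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (
            np.trace(cov1_inv @ cov0)
            + diff @ cov1_inv @ diff
            - k
            + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
        )

    def symmetrized_kl(source_feats, target_feats, eps=1e-6):
        # Symmetrized KL divergence KL(S||T) + KL(T||S) between Gaussian fits
        # of the two feature sets (rows are samples, columns are dimensions).
        # eps * I regularizes the covariances so they stay invertible.
        d = source_feats.shape[1]
        mu_s, mu_t = source_feats.mean(axis=0), target_feats.mean(axis=0)
        cov_s = np.cov(source_feats, rowvar=False) + eps * np.eye(d)
        cov_t = np.cov(target_feats, rowvar=False) + eps * np.eye(d)
        return (gaussian_kl(mu_s, cov_s, mu_t, cov_t)
                + gaussian_kl(mu_t, cov_t, mu_s, cov_s))

    # Toy usage on synthetic features, e.g. after projecting both domains
    # onto a shared low-dimensional subspace.
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(200, 10))
    tgt = rng.normal(0.5, 1.2, size=(200, 10))
    print(symmetrized_kl(src, tgt))

A lower value indicates that the source and target feature distributions are better aligned, which is how such a measure can be read as a proxy for remaining domain shift.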

Original language: English
Pages (from-to): 29949-29969
Number of pages: 21
Journal: Multimedia Tools and Applications
Volume: 77
Issue number: 22
Early online date: 23 Jun 2018
DOIs
Publication status: Published - 1 Nov 2018

Keywords

  • Domain adaptation
  • Human action recognition
  • Subspace learning

ASJC Scopus subject areas

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications
