Videoscapes: exploring sparse, unstructured video collections

James Tompkin, Kwang In Kim, Jan Kautz, Christian Theobalt

Research output: Contribution to journal › Article › peer-review

38 Citations (SciVal)

Abstract

The abundance of mobile devices and digital cameras with video capture makes it easy to obtain large collections of video clips that contain the same location, environment, or event. However, such an unstructured collection is difficult to comprehend and explore. We propose a system that analyzes collections of unstructured but related video data to create a Videoscape: a data structure that enables interactive exploration of video collections by visually navigating -- spatially and/or temporally -- between different clips. We automatically identify transition opportunities, or portals. From these portals, we construct the Videoscape, a graph whose edges are video clips and whose nodes are portals between clips. Now structured, the videos can be interactively explored by walking the graph or by geographic map. Given this system, we gauge preference for different video transition styles in a user study, and generate heuristics that automatically choose an appropriate transition style. We evaluate our system using three further user studies, which allows us to conclude that Videoscapes provides significant benefits over related methods. Our system leads to previously unseen ways of interactive spatio-temporal exploration of casually captured videos, and we demonstrate this on several video collections.
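The graph described in the abstract (portals as nodes, clips as edges) could be sketched roughly as follows. This is an illustrative reading of the abstract only, not the paper's implementation; all class and function names here are hypothetical.

```python
from collections import defaultdict

class Videoscape:
    """Minimal sketch of the structure described in the abstract:
    nodes are portals (transition opportunities between clips) and
    edges are the video clips that connect the portals they pass
    through. Names are illustrative, not from the paper."""

    def __init__(self):
        # portal id -> list of (neighboring portal id, clip id)
        self.adj = defaultdict(list)

    def add_clip(self, clip_id, portal_a, portal_b):
        # A clip passing through two portals becomes an undirected
        # edge between those portal nodes.
        self.adj[portal_a].append((portal_b, clip_id))
        self.adj[portal_b].append((portal_a, clip_id))

    def clips_from(self, portal):
        # Clips a viewer could transition into at this portal.
        return [clip for _, clip in self.adj[portal]]

# Usage: two clips that share portal P1 allow a visual transition there.
vs = Videoscape()
vs.add_clip("clipA", "P1", "P2")
vs.add_clip("clipB", "P1", "P3")
print(vs.clips_from("P1"))  # -> ['clipA', 'clipB']
```

Interactive exploration by "walking the graph" then amounts to following an edge (playing a clip) and choosing among the outgoing clips at each portal node reached.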
Original language: English
Article number: 68
Number of pages: 12
Journal: ACM Transactions on Graphics
Volume: 31
Issue number: 4
DOIs
Publication status: Published - Jul 2012

