Abstract
We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered interactively. Like audio, video is easy to sample from the real world but hard to segment into clean reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration.
We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because a) users want essentially semantic-level control over the synthesized video content, and b) automatic loop-finding is brittle and leaves users limited opportunity to work through problems. We propose a human-in-the-loop system where adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring videos and video performances.
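The automatic loop-finding the abstract refers to can be illustrated with a minimal sketch (not the authors' method): scan a static-camera clip for a pair of frames that look nearly identical, so playback can jump from the later frame back to the earlier one without a visible seam. The OpenCV/NumPy code, the `find_loop` name, the downsampled frame-difference cost, and the `min_gap` parameter below are all illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np


def find_loop(video_path, min_gap=15):
    """Return (start, end) frame indices whose frames match most closely,
    so that playing start..end-1 repeatedly gives a near-seamless loop.
    A purely illustrative sketch, not the paper's algorithm."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downsample and convert to grayscale to keep the comparison cheap.
        small = cv2.cvtColor(cv2.resize(frame, (64, 36)), cv2.COLOR_BGR2GRAY)
        frames.append(small.astype(np.float32))
    cap.release()

    if len(frames) <= min_gap:
        raise ValueError("Clip too short to contain a loop of the requested length.")

    best = (0, min_gap)
    best_cost = float("inf")
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            # Mean squared difference between the candidate start and end frames.
            cost = float(np.mean((frames[i] - frames[j]) ** 2))
            if cost < best_cost:
                best_cost = cost
                best = (i, j)
    return best


if __name__ == "__main__":
    start, end = find_loop("clip.mp4")  # hypothetical input file
    print(f"Loop from frame {start} to frame {end}")
```

Brute-force frame matching like this is exactly what the abstract calls brittle: a low frame-difference score does not guarantee a perceptually clean loop, which is the motivation for the paper's human-in-the-loop workflow.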
| Original language | English |
| --- | --- |
| Publication status | Acceptance date: 30 Jan 2017 |
| Event | ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017), Colorado Convention Center, Denver, Colorado, 6 May 2017 → 11 May 2017, https://chi2017.acm.org |
Conference
| Conference | ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017) |
| --- | --- |
| Abbreviated title | CHI |
| City | Denver, Colorado |
| Period | 6/05/17 → 11/05/17 |
| Internet address | https://chi2017.acm.org |
Projects
1 Finished
Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA)
Cosker, D. (PI), Bilzon, J. (CoI), Campbell, N. (CoI), Cazzola, D. (CoI), Colyer, S. (CoI), Fincham Haines, T. (CoI), Hall, P. (CoI), Kim, K. I. (CoI), Lutteroth, C. (CoI), McGuigan, P. (CoI), O'Neill, E. (CoI), Richardt, C. (CoI), Salo, A. (CoI), Seminati, E. (CoI), Tabor, A. (CoI) & Yang, Y. (CoI)
Engineering and Physical Sciences Research Council
1/09/15 → 28/02/21
Project: Research council