Responsive Action-Based Video Synthesis

Corneliu Ilisescu, Aytac Kanaci, Matteo Romagnoli, Neill D. F. Campbell, G.J. Brostow

Research output: Contribution to conference › Paper

Abstract

We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered interactively. Like audio, video is easy to sample from the real world but hard to segment into clean, reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration.

We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because (a) users want essentially semantic-level control over the synthesized video content, and (b) automatic loop-finding is brittle and leaves users limited opportunity to work through problems. We propose a human-in-the-loop system where adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring videos and video performances.
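
While the paper's own synthesis pipeline is not reproduced here, the loop-finding idea the abstract refers to can be illustrated with the classic frame-matching baseline from the video-textures literature: compare pairs of frames in a static-camera video and cut from a frame back to a visually similar earlier one. The sketch below is a minimal, hypothetical illustration of that baseline, not the authors' method; the function name find_loop, the downsampling size, and the min_len threshold are all assumptions. It uses OpenCV and NumPy.

# Minimal loop-finding sketch (hypothetical baseline, not the authors' pipeline).
# Finds a pair of visually similar frames (i, j) in a static-camera video so
# that frames i..j-1 can be played repeatedly as a seamless loop.
import cv2
import numpy as np

def find_loop(path, min_len=30, scale=(64, 36)):
    """Return (start, end) frame indices of the best loop candidate."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downsample to grayscale so exhaustive frame comparison stays cheap.
        small = cv2.cvtColor(cv2.resize(frame, scale), cv2.COLOR_BGR2GRAY)
        frames.append(small.astype(np.float32).ravel())
    cap.release()

    feats = np.stack(frames)  # (n_frames, n_pixels)
    best, best_dist = None, np.inf
    for i in range(len(feats)):
        for j in range(i + min_len, len(feats)):
            # L2 distance between candidate loop endpoints; a small distance
            # means cutting from frame j back to frame i is less likely to
            # produce a visible seam.
            d = np.linalg.norm(feats[i] - feats[j])
            if d < best_dist:
                best, best_dist = (i, j), d
    return best

Exhaustive per-frame matching like this is exactly the brittle automation the abstract describes: two frames can be close in raw pixel distance while a subject is mid-action, producing a visible seam with no easy recourse for the user, which motivates the paper's human-in-the-loop controls.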

Conference

Conference: ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017)
Abbreviated title: CHI
City: Denver, Colorado
Period: 6/05/17 – 11/05/17
Internet address: https://chi2017.acm.org

Cite this

Ilisescu, C., Kanaci, A., Romagnoli, M., Campbell, N. D. F., & Brostow, G. J. (Accepted/In press). Responsive Action-Based Video Synthesis. Paper presented at ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017), Denver, Colorado.

@conference{f1cba0202fe4449abc5154793d6777e8,
title = "Responsive Action-Based Video Synthesis",
abstract = "We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered, interactively. Like audio, video is easy to sample from the real world but hard to segment into clean reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration.We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because a) users want essentially semantic-level control over the synthesized video content, and b) automatic loop-finding is brittle and leaves users limited opportunity to work through problems. We propose a human-in-the-loop system where adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring of videos and video-performances.",
author = "Corneliu Ilisescu and Aytac Kanaci and Matteo Romagnoli and Campbell, {Neill D. F.} and G.J. Brostow",
year = "2017",
month = "1",
day = "30",
language = "English",
note = "ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017), CHI ; Conference date: 06-05-2017 Through 11-05-2017",
url = "https://chi2017.acm.org",

}

TY - CONF

T1 - Responsive Action-Based Video Synthesis

AU - Ilisescu, Corneliu

AU - Kanaci, Aytac

AU - Romagnoli, Matteo

AU - Campbell, Neill D. F.

AU - Brostow, G.J.

PY - 2017/1/30

Y1 - 2017/1/30

N2 - We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered interactively. Like audio, video is easy to sample from the real world but hard to segment into clean, reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration. We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because (a) users want essentially semantic-level control over the synthesized video content, and (b) automatic loop-finding is brittle and leaves users limited opportunity to work through problems. We propose a human-in-the-loop system where adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring videos and video performances.

AB - We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered interactively. Like audio, video is easy to sample from the real world but hard to segment into clean, reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration. We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because (a) users want essentially semantic-level control over the synthesized video content, and (b) automatic loop-finding is brittle and leaves users limited opportunity to work through problems. We propose a human-in-the-loop system where adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring videos and video performances.

M3 - Paper

ER -