Dynamic Mixed-Reality Compositing with Unity

Research output: Contribution to conference › Paper

Abstract

We present a system for dynamic mixed-reality compositing, or how to insert dynamic computer-generated (CG) elements into live-action video footage in real time. The goal of compositing is to combine visual content from different sources, such as live-action footage, still images and animations, so that they match each other in colour, lighting, scale, perspective, camera movement and timing. Most of these aspects can be matched using geometric calibration of the camera and mixed-reality rendering techniques. To ensure that both sources of visual content are composited seamlessly, our approach combines the accuracy of off-line camera tracking with real-time mixed-reality rendering performed in the Unity game engine.
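The two ingredients the abstract names — geometric camera calibration and compositing — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a standard pinhole camera model (intrinsics K, extrinsics [R | t], as produced by off-line calibration/tracking) and the standard alpha "over" operator for layering a rendered CG image onto a video frame:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X to pixel coordinates: x ~ K [R | t] X."""
    x_cam = R @ X + t            # world -> camera coordinates
    x_img = K @ x_cam            # camera -> homogeneous image coordinates
    return x_img[:2] / x_img[2]  # perspective divide

def composite_over(cg_rgba, frame_rgb):
    """Alpha-'over' composite a rendered CG layer onto a video frame.
    cg_rgba: HxWx4 floats in [0, 1]; frame_rgb: HxWx3 floats in [0, 1]."""
    alpha = cg_rgba[..., 3:4]
    return cg_rgba[..., :3] * alpha + frame_rgb * (1.0 - alpha)

# Illustrative calibration: 800 px focal length, principal point (320, 240),
# camera looking at the world origin from 5 units away.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

# A CG point at the world origin lands on the principal point.
print(project(K, R, t, np.array([0.0, 0.0, 0.0])))  # -> [320. 240.]
```

In the paper's pipeline the per-frame [R | t] would come from off-line camera tracking, while Unity applies the equivalent projection and renders the CG layer that `composite_over` blends with the footage.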

Conference

CVMP 2017: The European Conference on Visual Media Production
Period: 11/12/17 – 12/12/17


Cite this

Tarko, J., Richardt, C., & Hall, P. (2017). Dynamic Mixed-Reality Compositing with Unity. Paper presented at CVMP 2017: The European Conference on Visual Media Production.

@conference{b7b89c1412614f7db4a0254fdfc5497d,
title = "Dynamic Mixed-Reality Compositing with Unity",
abstract = "We present a system for dynamic mixed-reality compositing, or how to insert dynamic computer-generated (CG) elements into live-action video footage in real time. The goal of compositing is to combine visual content from different sources, such as live-action footage, still images and animations, so that they match each other in colour, lighting, scale, perspective, camera movement and timing. Most of these aspects can be matched using geometric calibration of the camera and mixed-reality rendering techniques. To ensure that both sources of visual content are composited seamlessly, our approach combines the accuracy of off-line camera tracking with real-time mixed-reality rendering performed in the Unity game engine.",
author = "Joanna Tarko and Christian Richardt and Peter Hall",
year = "2017",
month = "12",
day = "11",
language = "English",
note = "CVMP 2017: The European Conference on Visual Media Production; Conference date: 11-12-2017 Through 12-12-2017",

}
