Abstract
We present a fast, novel image-based technique for reverse engineering woven fabrics at the yarn level. These models can be used in a wide range of interior design and visual special effects applications. To recover our pseudo-BTF, we estimate the 3D structure and a set of yarn parameters (e.g. yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern and from this build a data set. In contrast, however, we use a combination of image-space analysis and frequency-domain analysis, and in challenging cases we match image statistics with those from previously captured known patterns. Our method determines, from a single digital image captured with a DSLR camera under controlled uniform lighting, the woven cloth structure, depth and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure, and we therefore use standard approaches to render the results.
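To make the frequency-domain cues concrete, the following sketch estimates the dominant yarn spacing of a fabric photograph from the strongest peak of a 1D power spectrum. The function name and parameters are illustrative assumptions of ours, not the authors' implementation:

```python
import numpy as np

def estimate_yarn_spacing(gray, axis=0):
    """Estimate the dominant yarn spacing (in pixels) along one image axis
    from the strongest low-frequency peak of the 1D power spectrum.

    `gray` is a 2D float array (a grayscale crop of the fabric photo).
    Hypothetical stand-in for the paper's frequency-domain cues.
    """
    # Average rows (or columns) to get a 1D brightness profile across yarns.
    profile = gray.mean(axis=axis)
    profile = profile - profile.mean()        # remove the DC component

    spectrum = np.abs(np.fft.rfft(profile)) ** 2
    freqs = np.fft.rfftfreq(profile.size)     # cycles per pixel

    # Skip the near-DC bin, then pick the dominant periodic component.
    k = 1 + np.argmax(spectrum[1:])
    return 1.0 / freqs[k]                     # period in pixels ~ yarn pitch

# Usage: synthetic stripes with a 16-pixel period recover a spacing of ~16.
x = np.arange(256)
fake_cloth = np.tile(np.sin(2 * np.pi * x / 16.0), (256, 1))
print(estimate_yarn_spacing(fake_cloth))
```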
Our pipeline first estimates the weave pattern, yarn characteristics and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the yarn centers, their variable width (and hence the volume occupied by the yarns), and their colors.
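As a minimal sketch of the first-order Markov step, a yarn's center-line depth can be sampled crossing by crossing, each depth depending only on the previous one plus noise drawn from an estimated model. The AR(1) form and the parameters `rho` and `sigma` are our own illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sample_yarn_depth(n_crossings, target, rho=0.8, sigma=0.02, rng=None):
    """Sample a plausible yarn center-line depth profile as a first-order
    Markov (AR(1)) process around the ideal weave-pattern depth.

    `target[i]` is the ideal depth at crossing i (e.g. +h for yarn-over,
    -h for yarn-under, from the recovered weave pattern); `rho` and `sigma`
    stand in for noise statistics estimated from the photograph.
    """
    rng = np.random.default_rng() if rng is None else rng
    depth = np.empty(n_crossings)
    depth[0] = target[0] + sigma * rng.standard_normal()
    for i in range(1, n_crossings):
        # First-order Markov: each depth depends only on the previous one,
        # pulled toward the ideal weave depth, plus estimated noise.
        deviation = depth[i - 1] - target[i - 1]
        depth[i] = target[i] + rho * deviation + sigma * rng.standard_normal()
    return depth

# Usage: a plain-weave yarn alternates over/under at each crossing.
pattern = np.array([+1.0, -1.0] * 8) * 0.1   # ideal depths for 16 crossings
print(sample_yarn_depth(16, pattern))
```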
We demonstrate the efficacy of our approach through comparison images of test scenes rendered using (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern, and (d) the rendered result.
Original language | English |
---|---|
Article number | 165 |
Pages (from-to) | 1-13 |
Number of pages | 13 |
Journal | ACM Transactions on Graphics |
Volume | 36 |
Issue number | 5 |
Early online date | 31 Oct 2017 |
Publication status | Published - 31 Oct 2017 |
Keywords
- Appearance Modeling
- Textiles
- Weave Pattern
- Depth Map
- Pseudo BTF