Bunny Basket

Texture Synthesis and Reconstruction

Ongoing Project


Synopsis

Photorealism in computer graphics can only be achieved with a detailed representation of the geometric and photometric properties of a scene. In interactive applications we are traditionally limited to a polygonal representation, which becomes very inefficient at the macro scale of surface materials. The standard solution to this problem is to approximate geometric detail with textures. Various texture representations offer a trade-off between visual realism and rendering/storage costs, and the choice of representation is usually dictated by the complexity of the modeled material. The relatively simple geometry of rough materials such as concrete, wood, or painted surfaces can be visually reproduced with a normal map. More complex geometry with visible macro-scale features requires a more advanced representation for the same level of realism. Adding a displacement texture provides a way to represent materials whose geometry can be described by a depth map. More complicated materials typically require volumetric representations. For rendering, the volume can be re-sampled into a stack of semi-transparent textures simulating the original material. Although efficient algorithms for rendering volumetric textures have been known for years, capturing the richness of real-life volumetric materials remains a challenging problem. Even though various methods have been proposed to render volumetric textures, there has been very little work on how to reconstruct them from image data. Existing rendering algorithms typically use computer-generated representations or, on rare occasions, volumes scanned with 3-D scanning techniques. We propose a technique for generating a volumetric representation of a complex 3-D texture with unknown reflectance and structure. From texture images acquired with an ordinary camera under controlled conditions, the proposed algorithm creates an efficient volumetric representation in the form of a stack of semi-transparent layers, each representing a slice through the texture's volume. In addition to negligible storage requirements, this representation is ideally suited for hardware-accelerated real-time rendering.

Project Details

Volumetric Surface Texture Reconstruction
Modeling, synthesis, and visualization.


In general, the appearance of a surface material can be described by the Bidirectional Texture Function (BTF). This 6-D function of surface position and of the light and viewing directions describes the apparent reflectance of a surface material when projected onto a plane, and therefore implicitly encapsulates both the geometry and the reflectance. The BTF of a real material can be sampled using a stage with a controlled camera and light source. Although a sampled BTF can be rendered directly by re-sampling the original data, doing so in real-time applications requires efficient compression and rendering algorithms, and despite recent progress this problem remains open. Our approach is instead to model, rather than resample, the original BTF data. We observe that, for many materials, the visual complexity is the result of geometry rather than of the actual reflectance. This is illustrated by the diagram below, which shows examples of textures with varying geometric and photometric complexity:
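
Concretely, a sampled BTF can be thought of as a table of rectified texture images indexed by the discretized light and view directions. The sketch below illustrates this with a hypothetical nearest-neighbour lookup; the array layout, class name, and sampling density are illustrative assumptions, not the actual acquisition format used by the project.

    import numpy as np

    class SampledBTF:
        """Toy container for BTF measurements: one rectified texture image per
        measured (light direction, view direction) pair."""
        def __init__(self, images, light_dirs, view_dirs):
            self.images = images                      # (n_light, n_view, H, W, 3)
            self.light_dirs = np.asarray(light_dirs)  # (n_light, 3) unit vectors
            self.view_dirs = np.asarray(view_dirs)    # (n_view, 3) unit vectors

        def lookup(self, x, y, light, view):
            # Nearest-neighbour lookup over the measured directions; a practical
            # renderer would interpolate between neighbouring samples instead.
            li = int(np.argmax(self.light_dirs @ light))
            vi = int(np.argmax(self.view_dirs @ view))
            return self.images[li, vi, y, x]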

Consider the examples in the lower right corner of the diagram. Although these materials have relatively simple reflectance, their BTFs will be very complex due to geometry alone. Our approach is to separate the geometry from the reflectance to obtain a much more compact texture representation than the one offered by a sampled BTF. As a second benefit, extracting the geometry also allows for more realistic rendering of materials at grazing angles, where BTF re-sampling methods cannot reproduce the geometry on the occluding contour. We have developed a novel algorithm that generates volumetric representations from a set of images taken with an ordinary camera and parametrized as a BTF. In addition to geometry, the representation captures reflectance properties as a chosen parametric BRDF model, so the compact representation effectively compresses the BTF. The algorithm focuses on the most general class of materials: those that can only be faithfully reproduced with a volumetric representation. The model used by our algorithm consists of a stack of semi-transparent slices representing the original BTF data. At each layer point, a vector of BRDF parameters approximates the local reflectance, and the geometry is represented by volumetric attenuation sampled into each layer. At runtime, the layers are rendered over a surface, giving the impression of a 3-D texture.
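
To make the model concrete, the sketch below shades a single texture point by compositing the slices front to back. Purely for illustration it assumes a Lambertian BRDF per texel, whereas the actual representation stores the parameters of whichever parametric BRDF model is chosen; at runtime each slice would typically be drawn as a shell offset from the base surface, with equivalent per-texel shading evaluated in a fragment shader.

    import numpy as np

    def shade_point(layers, u, v, normal, light_dir, light_color):
        """Shade one texture point (u, v) through the layer stack.
        layers: ordered outermost-to-innermost; each layer is a dict with
          'alpha'  : (H, W) volumetric attenuation sampled into the slice,
          'albedo' : (H, W, 3) per-texel reflectance parameters (here: diffuse)."""
        n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)
        color = np.zeros(3)
        transmittance = 1.0                      # fraction of the view ray not yet blocked
        for layer in layers:                     # front-to-back alpha compositing
            a = float(layer['alpha'][v, u])
            reflected = layer['albedo'][v, u] * light_color * n_dot_l
            color += transmittance * a * reflected
            transmittance *= 1.0 - a
            if transmittance < 1e-3:             # stop once the stack is effectively opaque
                break
        return color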

Below are some examples of reconstructed volumetric surface textures from our texture database tiled on a cylinder and rendered in real-time. The yellow line indicates the light direction. Note the appearance of the texture on the silhouettes:

Volumetric textures modeled using our compact representation can also be used to synthesize textures over large polygonal meshes. Examples shown below were generated using our fast patch-based texture synthesis algorithm and were rendered in real-time:
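
For intuition, the sketch below shows generic patch-based synthesis in 2-D: it tiles an output image with overlapping patches copied from an exemplar, choosing each patch to minimize a boundary error. The patch size, overlap, and random candidate search are illustrative assumptions only; the project's actual algorithm synthesizes over polygonal meshes and keeps all layers of the volumetric texture consistent, which this toy example does not attempt.

    import numpy as np

    def synthesize(sample, out_size, patch=32, overlap=8, candidates=50, rng=None):
        """Toy 2-D patch-based synthesis: place patches from 'sample' on a grid,
        picking each one to minimize the squared error over the regions that
        overlap previously placed patches."""
        rng = np.random.default_rng() if rng is None else rng
        step = patch - overlap
        out = np.zeros((out_size, out_size, sample.shape[2]), dtype=np.float64)
        max_y = sample.shape[0] - patch
        max_x = sample.shape[1] - patch
        for oy in range(0, out_size - patch + 1, step):
            for ox in range(0, out_size - patch + 1, step):
                best, best_err = None, np.inf
                for _ in range(candidates):      # evaluate random candidate patches
                    sy = int(rng.integers(0, max_y + 1))
                    sx = int(rng.integers(0, max_x + 1))
                    cand = sample[sy:sy + patch, sx:sx + patch].astype(np.float64)
                    err = 0.0
                    if ox > 0:                   # left overlap with already-written pixels
                        err += np.sum((cand[:, :overlap] - out[oy:oy + patch, ox:ox + overlap]) ** 2)
                    if oy > 0:                   # top overlap with already-written pixels
                        err += np.sum((cand[:overlap, :] - out[oy:oy + overlap, ox:ox + patch]) ** 2)
                    if err < best_err:
                        best, best_err = cand, err
                out[oy:oy + patch, ox:ox + patch] = best
        return out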

Key advantages of our algorithm can be summarized as follows:

  • Addresses the open problem of geometry extraction from BTFs.
  • Works for textures with complex geometry, reflectance, and shadowing.
  • Separates the geometry from the reflectance into a very compact representation.
  • Can reproduce the appearance of the texture on silhouettes.
  • The resulting volumetric texture model is easy to use in real-time applications, and rendering can be implemented entirely in hardware shaders.


Non-UCSD Vision People Also Involved

Sebastian Magda (UIUC)