Henrik Wann Jensen
Biography
My area of interest is computer graphics with a focus on realistic image synthesis, in particular global illumination, appearance modeling, and commercial rendering.
Global illumination involves the simulation of all types of light scattering in a 3D model. My early research involved understanding how to simulate caustics (with a focus on caustics through a glass of cognac :-) This led to the development of photon mapping, which was the first general method capable of simulating global illumination including specular-diffuse-specular paths in complex scenes (including scenes where path tracing and bidirectional path tracing would fail). Photon mapping is a two-pass method: the first pass traces photons into the model, and the second pass renders the model utilizing the photons for caustics, path guiding, global illumination, and scattering media (including volume caustics). The accuracy of the original photon map is limited by the number of photons stored in the map. To work around this limitation, my research group developed progressive photon mapping (PPM) and stochastic progressive photon mapping (SPPM), which are the first global illumination methods capable of simulating global illumination with specular-diffuse-specular light transport in general scenes to any desired accuracy. To increase the efficiency of progressive photon mapping, we combined it with bidirectional path tracing using multiple importance sampling applied to a path space extension (also known as VCM). We also worked on improving the efficiency of sampling methods in computer graphics by going beyond traditional image-plane sampling and developing a multi-dimensional sampling and reconstruction technique.
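The two-pass idea can be sketched in a few lines of Python. This is a minimal illustration, not the actual implementation: a single point light above a diffuse floor, with a brute-force nearest-neighbor search standing in for the kd-tree used in practice, and all names are my own.

```python
import math
import random

def emit_photons(light_pos, light_power, n):
    """Pass 1: trace photons from a point light onto the diffuse floor y = 0."""
    photons = []  # list of (hit_point, photon_power)
    for _ in range(n):
        # rejection-sample a random direction in the downward hemisphere
        while True:
            d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
            l2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2
            if 1e-6 < l2 <= 1.0 and d[1] < 0.0:
                break
        inv_len = 1.0 / math.sqrt(l2)
        d = (d[0] * inv_len, d[1] * inv_len, d[2] * inv_len)
        # intersect the ray with the floor plane y = 0
        t = -light_pos[1] / d[1]
        hit = (light_pos[0] + t * d[0], 0.0, light_pos[2] + t * d[2])
        photons.append((hit, light_power / n))  # each photon carries an equal share
    return photons

def radiance_estimate(photons, x, k=50):
    """Pass 2: density estimation -- gather the k nearest photons around x
    and divide their total power by the area of the gather disc."""
    near = sorted(photons,
                  key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][2] - x[2]) ** 2)[:k]
    r2 = (near[-1][0][0] - x[0]) ** 2 + (near[-1][0][2] - x[2]) ** 2
    total_power = sum(p[1] for p in near)
    return total_power / (math.pi * r2)  # irradiance estimate (power / area)
```

A real implementation would store photons in a kd-tree for fast lookups, follow multiple bounces with Russian roulette, and keep separate maps for caustics and indirect illumination; the density-estimation step, however, is exactly this divide-by-disc-area idea.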
Appearance modeling involves understanding and simulating the appearance of materials and the world around us. My primary research has been in the area of simulating translucent materials such as human skin, milk, and marble. This involved developing models for simulating subsurface scattering using techniques such as photon mapping (used to simulate weathered stone), path tracing (used to simulate wet materials), and the much faster dipole diffusion model (used to simulate translucent materials such as human skin and milk). The diffusion model was first used on Gollum in Lord of the Rings and Dobby in Harry Potter, and subsequently in numerous visual effects movies - and I am the recipient of an Academy Award for technical achievement for pioneering research in rendering translucent materials. My research also involved developing a physically based hair appearance model (modeling hair as an elliptical dielectric cylinder with cuticle scales - adding the important secondary highlight) and an artist-friendly physically based hair model (separating and normalizing the lobes in the physically based model). The hair model has been extended to fur by adding a medulla (the double-cylinder model - leading to a more diffuse-like appearance). My group developed a practical microcylinder cloth model capable of simulating a wide variety of textiles. Additionally, we have used Lorenz-Mie theory to predict the appearance of a mixture of various particles (this theory can be used to simulate milk and has been applied widely in the food industry for quality control). Later, we went beyond Lorenz-Mie theory and used general ray tracing techniques to simulate rainbows (including the first simulation of the elusive double primary bow).
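The dipole model mentioned above evaluates the diffuse reflectance R_d(r) at a distance r from the point of illumination by placing a real source below the surface and a mirrored virtual source above it. A Python sketch of the published dipole approximation (function and parameter names are my own; scattering coefficients are per unit length):

```python
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d(r) of the dipole diffusion approximation
    for a semi-infinite translucent medium with relative index eta."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coeff.
    # diffuse Fresnel reflectance (rational approximation) and boundary factor A
    Fdr = -1.440 / eta ** 2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + Fdr) / (1.0 - Fdr)
    z_r = 1.0 / sigma_t_prime                  # depth of the real source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)          # height of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)         # distance from x to the real source
    d_v = math.sqrt(r * r + z_v * z_v)         # distance from x to the virtual source
    return alpha_prime / (4.0 * math.pi) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r ** 3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v ** 3)
```

Because R_d depends only on the distance r, light entering at one point can exit the surface some distance away, which is what gives skin and milk their characteristic soft, translucent look; the falloff of R_d with r controls how far the light bleeds.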
Commercial rendering is the use of rendering techniques for movies, architecture, and product visualization in industry. I worked at mental images and helped add photon mapping and shader graphs to mental ray (which was used on a number of movies including Titanic). I later worked at Pixar and implemented a dipole subsurface scattering model in PRMan using deep shadow maps for rendering Nemo in Finding Nemo. I worked on Avatar at Weta helping with subsurface scattering on the Na'vi with a focus on Neytiri (using the multi-layered dipole diffusion model). I also helped with clouds, wet surfaces, smoke, and beams of light in the trees using the beam radiance estimate. My group worked with Disney to develop the hair appearance model for Tangled. I developed the rendering engine for Velux Daylight Visualizer to simulate daylighting in buildings with different skylight windows. It is the first rendering engine that was verified by CIE to pass all the CIE 171:2006 test cases dedicated to natural lighting. I have worked with DuPont (now Axalta) to develop the first computer simulation of their car paint (fully validated to match actual physical samples). I developed the rendering engine for HyperShot, which was one of the first commercial interactive progressive global illumination engines shown in public in 2006 (using photon mapping and path tracing). HyperShot and the DuPont paint model enabled Ford to transition from painting full-size clay models of cars to using computer graphics to design new vehicles. The renderer also enabled several companies to start using virtual photography as mentioned in this 2008 Wired article. HyperShot was later renamed to KeyShot, and it has become the leading rendering engine for industrial design and product visualization. Here is a 2013 Wired article describing some of the uses of KeyShot.
I was a full professor in computer science at UC San Diego from 2002 to 2018. Since 2018 I have been Professor Emeritus at UC San Diego. In 2019 I became Honorary Professor at Aarhus University. I am the Chief Scientist of Luxion, the makers of KeyShot. Here is a short biography.