Light Fields

Light fields are a form of image-based rendering: instead of rendering an image directly from a virtual model, the image is reconstructed from a set of other, pre-rendered images. In this light field project, a 3D model of a chestnut tree is rendered using a set of several hundred to a couple thousand images of the model captured in Maya. The images were taken from cameras all equidistant from the model, each pointing directly at its center. The cameras were arranged as the vertices of a triangular mesh forming a hemisphere around the object.
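
The write-up doesn't say how the triangular camera mesh was built, but midpoint subdivision of a polyhedron is a natural fit, and one particular choice lines up with the numbers here: subdividing the top four faces of an octahedron and projecting each new vertex onto the sphere yields exactly 545 vertices after four levels and 2113 after five, matching the two sample-set sizes. A minimal host-side sketch of that (assumed) construction:

// Sketch: build the triangulated camera hemisphere by midpoint-subdividing
// the top half of an octahedron and projecting each new vertex onto the
// sphere. The construction is an assumption (the project doesn't name its
// method), but four subdivision levels give exactly 545 vertices and five
// give 2113, matching the two sample-set sizes.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 onSphere(Vec3 v) {  // project a point onto the unit sphere
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

int main() {
    // Top half of an octahedron: four triangles sharing the +Y pole.
    std::vector<Vec3> verts = {
        {0, 1, 0}, {1, 0, 0}, {0, 0, 1}, {-1, 0, 0}, {0, 0, -1}
    };
    std::vector<std::array<int, 3>> tris = {
        {0, 1, 2}, {0, 2, 3}, {0, 3, 4}, {0, 4, 1}
    };

    const int levels = 4;  // 4 levels -> 545 cameras, 5 -> 2113
    for (int l = 0; l < levels; ++l) {
        std::map<std::pair<int, int>, int> midOf;  // cache shared-edge midpoints
        auto mid = [&](int i, int j) {
            std::pair<int, int> key = std::minmax(i, j);
            auto it = midOf.find(key);
            if (it != midOf.end()) return it->second;
            verts.push_back(onSphere({ (verts[i].x + verts[j].x) * 0.5f,
                                       (verts[i].y + verts[j].y) * 0.5f,
                                       (verts[i].z + verts[j].z) * 0.5f }));
            return midOf[key] = (int)verts.size() - 1;
        };
        std::vector<std::array<int, 3>> next;
        for (auto& t : tris) {  // split each triangle into four
            int ab = mid(t[0], t[1]), bc = mid(t[1], t[2]), ca = mid(t[2], t[0]);
            next.insert(next.end(), { {t[0], ab, ca}, {t[1], bc, ab},
                                      {t[2], ca, bc}, {ab, bc, ca} });
        }
        tris = std::move(next);
    }

    // Each vertex is a camera position (scaled out to the capture radius);
    // every camera is aimed at the origin, where the model sits.
    std::printf("%zu cameras, %zu triangles\n", verts.size(), tris.size());
}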

The original model contains several hundred thousand triangles and is self-shadowed, so each Maya render takes several minutes. Image-based rendering, on the other hand, has a per-frame cost that depends only on the number of pixels rendered, not on the complexity of the model, so this light field runs in real time (the exact frame rate varies from computer to computer, but 60 fps is not unusual).

The process of rendering each frame is very similar to ray tracing, which makes it an ideal candidate for a CUDA implementation in which each pixel's color is computed by a separate thread. Each pixel in the image plane of the viewing camera sends out a query ray. If the query ray does not intersect the light field sphere, the pixel is drawn transparent. If it does intersect the sphere, the spherical coordinates of the intersection are computed and the three closest sample cameras are found: the vertices of the intersected mesh triangle. For each of these cameras, the sample rays that best match the query ray are found, and the three cameras' contributions are weighted by the barycentric coordinates of the point where the query ray crosses the triangle. To find the best sample rays for a camera, its image plane is projected into the center of the sphere, and the four sample rays closest to the query ray are averaged. This approximates the query ray intersecting the object at the correct depth within the sphere; because it is only an approximation, however, some ghosting occurs.
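
Below is a minimal CUDA sketch of this per-pixel kernel; it is not the project's actual code. The interface is assumed (sample images packed into one array, a 3x4 world-to-pixel matrix per camera, the camera mesh as flat vertex and triangle arrays), and a brute-force triangle search stands in for the spherical-coordinate lookup described above. Only the overall algorithm follows the description.

// Per-pixel light-field kernel: a minimal sketch under the assumptions
// named above, not the project's actual implementation.
#include <cuda_runtime.h>

__device__ float  dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
__device__ float3 cross3(float3 a, float3 b) { return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x); }
__device__ float  det3(float3 a, float3 b, float3 c) { return dot3(a, cross3(b, c)); }
__device__ float3 norm3(float3 v) { float l = sqrtf(dot3(v, v)); return make_float3(v.x/l, v.y/l, v.z/l); }
__device__ float3 mad3(float3 a, float t, float3 b) { return make_float3(a.x + t*b.x, a.y + t*b.y, a.z + t*b.z); }  // a + t*b
__device__ float3 toF(uchar4 c) { return make_float3(c.x, c.y, c.z); }
__device__ float3 lerp3(float3 a, float3 b, float t) { return mad3(a, t, make_float3(b.x - a.x, b.y - a.y, b.z - a.z)); }

// Find the camera-mesh triangle crossed by the ray from the sphere center
// through `hit`, plus the barycentric weights at the crossing. The real code
// presumably uses the spherical coordinates for a direct lookup; this
// brute-force search just keeps the sketch short.
__device__ bool lookupTriangle(float3 hit, const int3* tris, int numTris,
                               const float3* camPos, int cams[3], float w[3]) {
    for (int t = 0; t < numTris; ++t) {
        float3 A = camPos[tris[t].x], B = camPos[tris[t].y], C = camPos[tris[t].z];
        // Signed volumes: the center ray crosses ABC iff all three agree in sign.
        float a = det3(hit, B, C), b = det3(A, hit, C), c = det3(A, B, hit);
        if ((a >= 0.f && b >= 0.f && c >= 0.f) ||
            (a <= 0.f && b <= 0.f && c <= 0.f)) {
            float s = a + b + c;
            if (det3(A, B, C) * s <= 0.f) continue;  // behind the center, or degenerate
            cams[0] = tris[t].x; cams[1] = tris[t].y; cams[2] = tris[t].z;
            w[0] = a / s; w[1] = b / s; w[2] = c / s;
            return true;
        }
    }
    return false;  // the hit point lies outside the hemisphere mesh
}

// "Average the four closest sample rays" = a bilinear fetch around (px, py).
__device__ float3 bilinearFetch(const uchar4* img, int w, int h, float px, float py) {
    int x0 = min(max((int)floorf(px), 0), w - 2);
    int y0 = min(max((int)floorf(py), 0), h - 2);
    float fx = fminf(fmaxf(px - x0, 0.f), 1.f);
    float fy = fminf(fmaxf(py - y0, 0.f), 1.f);
    float3 top = lerp3(toF(img[y0*w + x0]),     toF(img[y0*w + x0 + 1]),     fx);
    float3 bot = lerp3(toF(img[(y0+1)*w + x0]), toF(img[(y0+1)*w + x0 + 1]), fx);
    return lerp3(top, bot, fy);
}

__global__ void renderLightField(
        uchar4* out, int outW, int outH,
        float3 eye, float3 right, float3 up, float3 fwd,  // viewing camera basis;
                                                          // right/up scaled by frustum extents
        float sphereR,
        const uchar4* images, int imgW, int imgH,         // packed sample images
        const int3* tris, int numTris, const float3* camPos,
        const float* camMat)                              // 3x4 world->pixel per camera
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= outW || y >= outH) return;

    // One query ray per pixel through the viewing camera's image plane.
    float u = 2.f * (x + 0.5f) / outW - 1.f;
    float v = 2.f * (y + 0.5f) / outH - 1.f;
    float3 d = norm3(make_float3(right.x*u + up.x*v + fwd.x,
                                 right.y*u + up.y*v + fwd.y,
                                 right.z*u + up.z*v + fwd.z));

    // Query ray vs. the light-field sphere (centered at the origin);
    // a miss is drawn fully transparent.
    float b = dot3(eye, d);
    float disc = b*b - dot3(eye, eye) + sphereR*sphereR;
    if (disc < 0.f) { out[y*outW + x] = make_uchar4(0, 0, 0, 0); return; }
    float3 hit = mad3(eye, -b - sqrtf(disc), d);

    // Three closest sample cameras = the vertices of the intersected mesh
    // triangle, weighted by the barycentric coordinates of the hit point.
    int cams[3]; float w[3];
    if (!lookupTriangle(hit, tris, numTris, camPos, cams, w)) {
        out[y*outW + x] = make_uchar4(0, 0, 0, 0);
        return;
    }

    float3 color = make_float3(0.f, 0.f, 0.f);
    for (int i = 0; i < 3; ++i) {
        // Project the sample image plane through the sphere center: intersect
        // the query ray with the plane through the origin perpendicular to the
        // sample camera's axis (every camera looks at the center).
        float3 n = camPos[cams[i]];
        float3 p = mad3(eye, -dot3(eye, n) / dot3(d, n), d);
        // World point -> pixel coordinates via the camera's 3x4 matrix.
        const float* M = camMat + 12 * cams[i];
        float pu = M[0]*p.x + M[1]*p.y + M[2]*p.z  + M[3];
        float pv = M[4]*p.x + M[5]*p.y + M[6]*p.z  + M[7];
        float pw = M[8]*p.x + M[9]*p.y + M[10]*p.z + M[11];
        float3 s = bilinearFetch(images + cams[i]*imgW*imgH, imgW, imgH,
                                 pu / pw, pv / pw);
        color = mad3(color, w[i], s);  // accumulate barycentric-weighted sample
    }
    out[y*outW + x] = make_uchar4(color.x + 0.5f, color.y + 0.5f, color.z + 0.5f, 255);
}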

This project works on image sets with resolutions of 128x128 and 256x256 (only results from the 256x256 images are shown here) and cardinalities of 545 and 2113. Renders using the larger set are noticeably sharper and show much less ghosting. Because this rendering technique requires per-pixel computation, it would take a great deal of work to make it run in real time without a CUDA implementation.
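
Because each pixel is computed independently, the CUDA mapping is direct. Continuing the hypothetical interface from the kernel sketch above, the host side launches one thread per output pixel:

// Host-side launch for the kernel sketched earlier: one thread per pixel.
// All pointers are device allocations; the argument list matches the
// assumed kernel signature above.
void launchRender(uchar4* dOut, int outW, int outH,
                  float3 eye, float3 right, float3 up, float3 fwd, float sphereR,
                  const uchar4* dImages, int imgW, int imgH,
                  const int3* dTris, int numTris,
                  const float3* dCamPos, const float* dCamMat)
{
    dim3 block(16, 16);                        // 256 threads per block
    dim3 grid((outW + block.x - 1) / block.x,  // round up so every pixel
              (outH + block.y - 1) / block.y); // gets a thread
    renderLightField<<<grid, block>>>(dOut, outW, outH, eye, right, up, fwd,
                                      sphereR, dImages, imgW, imgH,
                                      dTris, numTris, dCamPos, dCamMat);
}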

Below are some of the images rendered in Maya that my program uses as samples.

[Images: sample renders captured in Maya]

These images were generated by the light field from the set of 545 256x256 images. Note the ghosting: the greater spacing between sample cameras gives poorer interpolation than in the 2113-image set.

[Images: light field renders from the 545-image set]

These renders are from the set of 2113 256x256 images. Notice the reduced amount of ghosting.

[Images: light field renders from the 2113-image set]

This video uses the set of 2113 256x256 images and shows the light field rendering the tree model in real time (60 fps).