The app is available on the Steam VR site for the HTC Vive, Oculus Rift and Windows Mixed Reality headsets. I don’t have one of these headsets, so I can’t offer a personal perspective on what they show, but let me try to describe what the app offers.
What is a Light Field?
First of all, we should define a light field. Without getting too technical, light field data captures or models a scene the way it appears in the real world. In a real environment, head movement not only creates different perspectives of the scene, but light also reflects off surfaces in different ways as you move your head. Textures can change, and diffuse and specular reflections are altered too. These are very subtle but extremely effective cues that tell the brain “this is a real scene”.
As Debevec put it, “With light fields, nearby objects seem near to you—as you move your head, they appear to shift a lot. Far-away objects shift less and light reflects off objects differently, so you get a strong cue that you’re in a 3D space.”
A light field capture system records all the rays of light entering a volume of space. Many inside-out VR camera arrays can do this, but with a limited number of cameras, only a limited set of rays can be captured. To capture images with much higher fidelity, Debevec and Google developed a new light field capture camera.
The new camera consists of 16 modified GoPro Odyssey Jump cameras mounted in an arc. The arc of cameras is then swept around a central axis. A complete scan takes about a minute and acquires roughly 1,000 outward-facing images, covering a volume of light field data about 2 feet (60 cm) in diameter. We presume this is the volume that can be explored when wearing a VR headset with the new app Google has released.
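To make the geometry concrete, here is a small sketch of what such a spinning-arc capture pattern might look like. The arc extent, number of azimuth stops, and radius are our assumptions chosen to roughly match the figures in the article (16 cameras, about 1,000 images, an approximately 60 cm diameter volume), not Google's actual rig parameters.

```python
import numpy as np

def sweep_positions(n_arc=16, n_steps=63, radius=0.3):
    """Hypothetical spinning-arc capture pattern: n_arc cameras on a
    vertical semicircular arc, rotated through n_steps azimuth stops,
    sampling n_arc * n_steps positions on a sphere of the given radius
    (meters). 16 x 63 = 1008, close to the ~1000 images described."""
    elev = np.linspace(-np.pi / 2, np.pi / 2, n_arc)        # pole to pole
    azim = np.linspace(0, 2 * np.pi, n_steps, endpoint=False)
    e, a = np.meshgrid(elev, azim)
    # Standard spherical-to-Cartesian conversion for each (elev, azim) pair
    xyz = radius * np.stack([np.cos(e) * np.cos(a),
                             np.cos(e) * np.sin(a),
                             np.sin(e)], axis=-1)
    return xyz.reshape(-1, 3)
```

Every returned position lies on the capture sphere, which is why the swept arc yields ray samples over the whole surface of the roughly 2-foot volume.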
To be clear, no VR headset can render a light field directly; it must be presented as two 2D images. But Debevec says they don’t simply pick the two captured images that might correspond to the position of each eye in the position-tracked headset. “To render views for the headset, rays of light are sampled from the camera positions on the surface of the sphere to construct novel views as seen from inside the sphere to match how the user moves their head,” said Debevec. “They’re aligned and compressed in a custom dataset file that’s read by special rendering software we’ve implemented as a plug-in for the Unity game engine.”
For example, notice in the graphic above that the viewer sees rays coming from multiple cameras, and that these rays, and the cameras supplying them, change with position.
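The selection step described above can be illustrated with a toy version of the idea: for a desired eye ray, find the capture position on the sphere whose stored ray best matches it. This is only a minimal sketch of the principle; Google's actual renderer blends and resamples many rays rather than picking a single nearest camera, and all names here are our own.

```python
import numpy as np

def nearest_captured_ray(eye_pos, pixel_dir, cam_positions):
    """Toy light field lookup: return the index of the capture position
    that lies closest to the eye ray (eye_pos + t * pixel_dir, t > 0).
    A real renderer would interpolate several nearby rays instead."""
    d = pixel_dir / np.linalg.norm(pixel_dir)
    to_cams = cam_positions - eye_pos        # vectors from eye to cameras
    t = to_cams @ d                          # distance along the ray
    closest_pts = eye_pos + np.outer(t, d)   # projection onto the ray
    dist = np.linalg.norm(cam_positions - closest_pts, axis=1)
    dist = np.where(t > 0, dist, np.inf)     # only cameras in front
    return int(np.argmin(dist))
```

With the eye at the center of a ring of capture positions, a ray pointing toward a given camera selects that camera, and as the eye moves, the selected cameras change, which is exactly the position-dependent behavior the graphic illustrates.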
So far, Google has used the rig to capture light field images at the Gamble House in Pasadena, California, the Mosaic Tile House and St. Stephen’s Church in Granada Hills, California, and the flight deck of the Space Shuttle Discovery.
Google previously developed the Yi Halo 360º VR capture rig, featuring 16 cameras arranged in a ring plus one upward-facing camera, capable of producing a stitched 8192 x 8192 image at 30 fps or a 5760 x 5760 image at 60 fps.
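As a quick back-of-the-envelope check (our arithmetic, not a Google figure), the two output modes deliver almost the same pixel throughput, which suggests the choice is a resolution-versus-frame-rate trade under a roughly fixed processing budget:

```python
# Pixel throughput of the two Yi Halo output modes listed above
rate_8k = 8192 * 8192 * 30   # pixels/second at 8192 x 8192, 30 fps
rate_6k = 5760 * 5760 * 60   # pixels/second at 5760 x 5760, 60 fps
ratio = rate_8k / rate_6k    # ~1.01, i.e. within about 1% of each other
```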
Google also developed the GoPro Odyssey rig that has the same configuration as the Yi Halo, but uses GoPro cameras, of course. The new light field rig reconfigures these elements in the spinning arc format.
And most recently, Google is reported to have purchased the assets of Lytro, which developed both a cinematic light field capture camera and a VR light field capture camera. It looks like Google is serious about light field capture. – CC
This article first appeared on www.displaydaily.com