The flash is one of the most overlooked parts of the camera: the illumination that brightens the scene and allows a good photograph to be captured. In the past, the flash was just that: a simple bulb that briefly shone light onto the scene, a bit like a flash of lightning.
Computational cameras revamp the idea of illumination by borrowing from coding theory. For instance, projecting spatially varying patterns such as checkerboards onto the scene allows the separation of direct light, which bounces off a surface just once on its way to the camera, from global light, which has bounced around the scene more than once.
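As a rough illustration of how pattern-based separation can work, the sketch below assumes a stack of images captured while a half-on, half-off checkerboard is shifted across the scene, in the spirit of Nayar et al.'s fast separation method. Per pixel, the brightest observation contains the direct light plus roughly half of the global light, while the darkest contains only roughly half of the global light, so a simple max/min combination recovers both components. The array names and the 50% pattern assumption are illustrative, not a specific camera's pipeline.

```python
import numpy as np

def separate_direct_global(images):
    """Estimate direct and global light from images captured under
    shifted high-frequency patterns (assumed ~50% of pixels lit).

    images: array of shape (num_patterns, height, width), grayscale.
    """
    stack = np.asarray(images, dtype=np.float64)
    l_max = stack.max(axis=0)   # direct light + ~half of the global light
    l_min = stack.min(axis=0)   # ~half of the global light only
    direct = l_max - l_min
    global_light = 2.0 * l_min
    return direct, global_light
```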
Spatially varying patterns can also help infer object shape and depth. This technique, known as structured light, enables high-resolution scans of objects for industrial applications.
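A hedged sketch of how a structured-light scanner might recover depth: a projector displays a sequence of Gray-code stripe patterns, the camera records one image per pattern, each camera pixel is decoded to the projector column that lit it, and depth follows from triangulation. The function names, the rectified camera-projector geometry, and the thresholding scheme here are assumptions made for the example.

```python
import numpy as np

def decode_gray_code(captures, thresholds):
    """Decode which projector column lit each camera pixel.

    captures: (num_bits, H, W) images, one per projected Gray-code pattern.
    thresholds: (H, W) per-pixel threshold (e.g. mean of all-on/all-off frames).
    Returns an (H, W) array of projector column indices.
    """
    bits = (np.asarray(captures) > thresholds).astype(np.uint32)
    # Convert Gray code to binary: b[0] = g[0], b[i] = b[i-1] XOR g[i].
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = binary[i - 1] ^ bits[i]
    # Pack the bit planes (most significant first) into a column index.
    columns = np.zeros(bits.shape[1:], dtype=np.uint32)
    for plane in binary:
        columns = (columns << 1) | plane
    return columns

def depth_from_columns(columns, pixel_x, baseline, focal_length):
    """Triangulate depth for a rectified camera-projector pair:
    depth = baseline * focal_length / disparity."""
    disparity = np.abs(pixel_x - columns.astype(np.float64))
    disparity[disparity < 1e-6] = np.nan  # avoid division by zero
    return baseline * focal_length / disparity
```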
One of the most popular techniques in Hollywood is the light stage, which captures the reflectance field of a human actor. The actor is surrounded by an array of light sources, and the camera records how light from each source reflects off them. This information is then used to digitize their appearance, allowing CGI effects to be created in a physically realistic manner, such as casting accurate shadows when the actor interacts with virtual, CGI characters in movies.
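Because light adds linearly, the relighting step behind a light stage can be summarized compactly: if the actor is photographed once per light source (one-light-at-a-time, or OLAT, images), any new lighting environment can be approximated as a weighted sum of those basis photographs. The sketch below shows that linear combination; the array shapes and the source of the weights are assumptions made for the example.

```python
import numpy as np

def relight(olat_images, light_weights):
    """Relight a subject from one-light-at-a-time (OLAT) captures.

    olat_images: (num_lights, H, W, 3) photos, one per light-stage source.
    light_weights: (num_lights, 3) RGB intensity of each source in the
        target environment (e.g. sampled from an environment map).
    Returns an (H, W, 3) image of the subject under the new lighting.
    """
    olat = np.asarray(olat_images, dtype=np.float64)
    weights = np.asarray(light_weights, dtype=np.float64)
    # Light transport is linear, so the relit image is a weighted sum
    # of the basis images: sum_i weights[i] * olat[i].
    return np.einsum('nhwc,nc->hwc', olat, weights)
```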
Varying the illumination temporally, but invisibly to the human eye, enables time-of-flight imaging, which measures how long light takes to travel to an object and back. This opens up new applications in depth mapping, seeing through scattering media, and non-line-of-sight imaging, even allowing cameras to see around a corner.
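To make the timing idea concrete, here is a small sketch of the arithmetic used by one common style of continuous-wave time-of-flight sensor: the scene is lit by light whose intensity is modulated at a known frequency, the sensor samples the returning signal at four phase offsets per pixel, and the phase shift between outgoing and returning light gives the round-trip time and hence the depth. The four-sample variable names and the single-frequency assumption are illustrative.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(a0, a1, a2, a3, modulation_hz):
    """Depth from a continuous-wave time-of-flight measurement.

    a0..a3: per-pixel samples of the returning signal taken at
        0, 90, 180, and 270 degrees of the modulation cycle.
    modulation_hz: modulation frequency of the illumination.
    """
    # Phase shift accumulated over the light's round trip.
    phase = np.arctan2(a3 - a1, a0 - a2)
    phase = np.mod(phase, 2.0 * np.pi)
    # The round trip covers phase / (2*pi) of one modulation wavelength;
    # halve it to get the one-way distance to the object.
    wavelength = SPEED_OF_LIGHT / modulation_hz
    return (phase / (2.0 * np.pi)) * wavelength / 2.0
```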
Computational cameras take the camera flash from something that simply reveals the two-dimensional scene in front of the lens to something that lets us build 3D models and see objects that are not just obscured by darkness, but blocked from view entirely.