ASU Learning Sparks

Camera Optics: The Evolution of The Camera Lens

Cameras have evolved quite a bit over the past few centuries in the way they capture, record, and interpret light. From pinhole cameras to lensed cameras and now computational cameras, camera optics are quickly evolving, allowing us to produce higher-quality, more useful photos than ever before.

The first part of a camera that interacts with photons of light from the environment is its optics. Of course, the first camera had no optics or lenses at all, just a single small hole (called a pinhole) that blocked out all rays of light except a narrow bundle passing through the opening. This formed an inverted image inside a darkened room and gave rise to the term "camera obscura," Latin for "dark chamber." Pinhole cameras were used extensively throughout the Renaissance.
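The geometry behind the inverted image is just similar triangles: a point above the pinhole axis projects below it on the screen behind the hole. A minimal sketch (the function name and the specific numbers are illustrative, not from the original):

```python
# Pinhole projection by similar triangles: an object point at height y,
# a distance d in front of the hole, lands at height -y * (f / d) on a
# screen a distance f behind the hole. The minus sign is the inversion.
def pinhole_project(y: float, d: float, f: float) -> float:
    """Return the (inverted) image height for a single object point."""
    return -y * (f / d)

# A point 1.0 m above the axis, 2.0 m from the hole, screen 0.1 m behind it:
print(pinhole_project(1.0, 2.0, 0.1))  # -0.05: a 5 cm image, upside down
```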

However, pinhole cameras had a severe problem: they admitted very little light, making them difficult to use in dark environments or when an object in the scene is moving. The lens was the upgrade over the pinhole: it focuses rays of light rather than blocking them out, forming a bright, focused image. This came at a cost: you could no longer keep the whole scene in focus, since the bending of light is governed by the lens design. Still, the lens is the workhorse of the modern camera, and its size and shape govern the optical properties of the system.
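The focus trade-off follows from the thin-lens equation, 1/f = 1/d_o + 1/d_i: only objects at one particular distance d_o form a sharp image on the sensor; nearer or farther points focus in front of or behind it. A small sketch of that calculation (the function name and numbers are illustrative):

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i. Given the focal length f and
# the object distance d_o, solve for where the sharp image forms (d_i).
def image_distance(f: float, d_o: float) -> float:
    """Image distance behind the lens for an in-focus object (meters)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# A 50 mm lens focused on an object 2 m away:
print(image_distance(0.050, 2.0))   # ~0.0513 m behind the lens
# An object 5 m away focuses at a different plane, so it blurs on a
# sensor fixed at the 2 m focus position:
print(image_distance(0.050, 5.0))   # ~0.0505 m: not the same plane
```

Because each object distance demands its own image plane, a single sensor position can only render one depth perfectly sharp, which is why a lensed camera cannot keep the whole scene in focus at once.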

Computational cameras go beyond the traditional optics of a lensed system to recover more information from the visual environment. Changing the optics of the camera can include introducing new optical layers that mix rays of light in special patterns which can be decoded later. Light scatters, bends, and reflects at these optical interfaces, and this physical knowledge can be used to infer the paths light traveled through the scene to the camera.
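The "mix, then decode" idea can be shown in a toy one-dimensional simulation. This is a hedged sketch, not any specific camera's pipeline: the optical layer is modeled as circular convolution with a known kernel, and because the kernel is known, the computer can invert the mixing in the Fourier domain (real systems add noise and use regularized 2-D inversion instead of plain division).

```python
import numpy as np

# Toy computational-camera sketch: an optical layer mixes light from a
# 1-D "scene" by circular convolution with a known kernel, and the
# decoder undoes the mixing because the kernel's pattern is known.
n = 64
rng = np.random.default_rng(1)
scene = rng.random(n)                 # unknown scene intensities

# A known mixing kernel whose Fourier transform is nonzero everywhere
# (geometric decay), so exact inversion is possible in this noise-free toy.
kernel = 0.5 ** np.arange(n)
H = np.fft.fft(kernel)

# What the sensor records: the scene scrambled by the optical layer.
measured = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Computational decoding: divide out the known transfer function.
decoded = np.real(np.fft.ifft(np.fft.fft(measured) / H))

print(np.allclose(decoded, scene))    # True: the scrambling is undone
```

The design point is that the optics no longer need to form a human-readable image; they only need to encode the scene in a way the decoder can invert.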

For instance, lensless cameras place diffractive masks or coded apertures above the sensor, allowing the computer to refocus a blurry image after capture and enabling applications such as 3D photography. These cameras can be made ultrathin and are useful for space-constrained applications where a lens would be too bulky to deploy.

Another example is using natural phenomena such as water droplets on a pane of glass. Researchers at the University of Bonn were able to reconstruct 4D information about light from how it bends through water droplets, allowing them to recreate new views of the scene from different angles.

Computation is the new lens: it lets us put more unusual lenses and optical layers in our cameras while extracting not just pretty pictures, but even more useful information from the resulting captures.