Google’s Choon Chng Explains Why Computational Photography Is The Future

We can’t defy the laws of physics, but we can sneak around them with computing power.

That essentially is the idea behind computational photography, a technique that will increasingly be adopted by mobile phone makers to improve the resolution and performance of their products.

“Computing is disrupting optics,” said Choon Chng, an engineering manager at Google, speaking at Future Proof Storage, our annual summit on mobile technology that took place recently in Shenzhen. “Pixels aren’t perfect (in devices like phones). The optics aren’t perfect, but we can compensate with bits.”

The need for computational techniques derives largely from the small form factor of phones. Total track length, or the distance between the lens and the digital imager, is a key determinant of picture quality and is proportional to the size of the imager. The longer the track length, the larger the imager can be; and the larger the imager, the more room there is for additional pixels and/or better overall photo quality.

Unfortunately, total track length in phones is fixed for all practical purposes. Increasing the track length would increase the thickness, or Z height, of the phone. (Manufacturers have managed to increase track length in a virtual manner through thinner components, but that route may be reaching its end as well, as this academic paper points out.)

That means sensor size in phones is effectively fixed, said Chng. To increase resolution, manufacturers can squeeze more pixels into the same finite space, but that won’t necessarily improve picture quality: more, smaller pixels generate more noise, and each pixel captures fewer photons.
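The noise penalty follows from photon statistics: a pixel’s light capture scales with its area, while shot noise grows only as the square root of the photon count. A minimal sketch of that relationship (the photon density figure is an illustrative assumption, not from the talk):

```python
import math

def pixel_snr(pixel_pitch_um, photons_per_um2=100):
    """Shot-noise-limited SNR for a square pixel.

    Captured photons scale with pixel area (pitch squared), and shot
    noise is the square root of the photon count, so SNR ~ sqrt(photons).
    The photon density is an arbitrary illustrative value.
    """
    photons = photons_per_um2 * pixel_pitch_um ** 2
    return math.sqrt(photons)

# Halving the pixel pitch quarters the light gathered and halves the SNR.
print(pixel_snr(1.4))  # ~14, at a typical phone pixel pitch of 1.4 microns
print(pixel_snr(0.7))  # ~7
```

So doubling the pixel count per row within the same sensor area roughly halves each pixel’s signal-to-noise ratio, which is the trade-off Chng describes.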

Enter computational photography. Rather than squeeze more pixels into a given space or further thin already-small components, computational photography leverages information contained in images, such as the spatial relationships between objects, to construct an image equivalent to what you’d achieve with a larger lens and larger imager. Effects such as shallow depth of field and RAW output are also made possible.
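Chng didn’t detail specific algorithms, but one widely used example of the idea is burst stacking: capture several quick frames, align them, and average them, so noise falls by roughly the square root of the frame count, mimicking the cleaner output of a larger sensor. A minimal sketch on simulated flat-grey frames (the scene values and noise level are illustrative assumptions):

```python
import random

def average_burst(frames):
    """Average aligned frames pixel-wise; noise drops ~ sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate 8 noisy captures of a flat grey scene (true value 100).
random.seed(0)
frames = [[100 + random.gauss(0, 10) for _ in range(1000)]
          for _ in range(8)]

single_err = sum(abs(p - 100) for p in frames[0]) / 1000
stacked = average_burst(frames)
stacked_err = sum(abs(p - 100) for p in stacked) / 1000
print(single_err, stacked_err)  # stacked error is roughly sqrt(8)x smaller
```

Real pipelines must also align frames to compensate for hand shake and subject motion before averaging, which is where much of the computing cost lies.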

Pictures improve in quality, phones stay the same size, and consumers are happy.

Computational photographic techniques, in fact, can accomplish things ordinary cameras can’t, such as creating 3D images by wrapping 2D images around a CAD-like drawing of a scene or object. Companies like Lytro, Matterport and Light, with its 16-lens camera, are early pioneers in computational photography.

Chng, however, added that computational photography isn’t easy. It requires substantial computing assets underneath the bezel.

Author

Michael Kanellos: Michael has written about advanced technologies and Silicon Valley for more than 20 years, with a resume that includes CNET, Greentech Media and Forbes.
