Assessing the Market Impact of Computational Photography

Computational photography represents the next wave of innovation in consumer and professional imaging. It allows manufacturers to improve a camera's low-light sensitivity without increasing the size or cost of the sensor or the device itself. Because it can measure light in depth, not just horizontally and vertically, it can also create a 3D experience with high user value. However, because computational photography is software-based, IDC predicts that the storage requirements for capturing images, and especially video, in a computational photography context will be enormous, requiring much higher mobile device storage capacities and more device memory. This will result in larger internal storage and additional demand for micro SD cards, driving a greater number of mobile device vendors to include a micro SD slot on-device, particularly on tablets. In this way, consumers will have the flexibility to add their own cards and expand storage.

The following questions were posed by SanDisk to Christopher Chute, research vice-president for IDC's Global SMB Cloud and Mobility and Digital Imaging practices, on behalf of SanDisk's customers.

Q. The imaging market is in a period of transition.  What will the next five to ten years look like for consumer imaging?

A. Mobility has already had a profound impact on consumer imaging. Today, seven out of ten consumer images are captured with mobile devices. Whether capturing, sharing, storing, or viewing their images, consumers now see photography as originating from the mobile device, not from a standalone camera. IDC predicts that this mindset and behavior will continue to grow, relegating the traditional camera market to one centered on professionals and enthusiasts.

Over the next five to ten years, IDC predicts that consumer imaging will also revolve more around the ability to capture video and images and share them in a more creative context.  For instance, using software and hardware enhancements, computational photography will allow devices to capture images in three-dimensional form by measuring the length of light waves rather than just by the number of pixels. While we are still in the early stages, IDC predicts that over the next five to ten years there will be substantial developments in computational photography that will allow consumers to capture “living images,” or images that can be constantly manipulated to form new impressions. Furthermore, computational photography will allow for 3D image and video capture without the need for two lens assemblies.

Q. How do computational photography techniques address some of the technical challenges associated with mobile imaging?

A. Computational photography involves the ability to capture image depth with software in a way that was never possible before. Because mobile devices have increasingly powerful processors, larger on-board storage capacities, and larger, higher-quality screens, manufacturers can now offer features based on computational photography that they could not offer in devices with less processing power, such as traditional cameras. For instance, many mobile phone manufacturers use software to let users capture an image and then refocus it in playback by placing the focal point anywhere inside the picture. This brings new value to every image and allows the user to be much more creative.
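To make the refocus idea concrete, the sketch below shows one common way depth-based refocusing can be implemented, assuming the device stores a per-pixel depth map alongside the RGB frame. The function name, parameters, and blending approach are illustrative, not any particular vendor's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth_map, focal_depth, max_blur=6.0):
    """Synthetically refocus an RGB image using a per-pixel depth map.

    Pixels whose depth is close to `focal_depth` stay sharp; pixels far
    from it are blended toward a blurred copy of the image, approximating
    a shallow depth of field chosen after capture.
    """
    # Normalized distance of every pixel from the chosen focal plane.
    distance = np.abs(depth_map - focal_depth)
    distance = distance / (distance.max() + 1e-8)

    # Pre-blur the whole frame once, then blend per pixel by distance.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=max_blur) for c in range(3)],
        axis=-1,
    )
    weight = distance[..., None]  # broadcast over the three color channels
    return (1.0 - weight) * image + weight * blurred

# Example of tap-to-refocus at pixel (row, col) after capture, with
# hypothetical data: image is an HxWx3 float array, depth is an HxW array.
# focal = depth[row, col]
# result = refocus(image, depth, focal)
```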

Software-driven computational photography allows manufacturers to improve the photography user experience while maintaining the same slim, attractive device form factor that consumers have come to expect. Furthermore, computational photography allows manufacturers to improve the camera's low-light sensitivity without having to increase the size (and cost) of the sensor and the device itself. Improved low-light sensitivity also gives users a much higher level of satisfaction, resulting in higher rates of device usage and image sharing, which in turn can drive average revenue per user (ARPU).
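One widely used software technique behind such low-light gains is multi-frame (burst) merging. The sketch below assumes the burst frames are already aligned and simply averages them; real pipelines add frame alignment and motion rejection, and the names here are illustrative.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned low-light frames to suppress sensor noise.

    Averaging N frames reduces random noise by roughly sqrt(N) without a
    larger sensor; alignment and motion rejection are omitted for brevity.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Example with a hypothetical capture_frame() helper returning 8-bit frames:
# burst = [capture_frame() for _ in range(8)]
# merged = merge_burst(burst)
```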

Q. Consumer imaging has always been enabled by high host device storage capacities. How will computational photography affect host device storage requirements?

A. Consumer imaging has been enabled by high storage capacities on mobile devices, most notably flash memory cards, for well over a decade. Flash memory allowed users to engage with devices much more quickly and the market to grow much more rapidly than it would have otherwise. We saw the same pattern as traditional photography transitioned from standalone cameras to mobile phones. The ability to purchase a device with up to 128 gigabytes of internal storage has let users capture as many pictures as they choose without having to manage device storage by deleting unwanted pictures. Furthermore, the fact that many mobile phone manufacturers link their devices to online file sync and share (FSS) services lets users continue to take pictures when on-board memory is full by offloading content to free up space.

Because computational photography is software-based, it will likely increase device storage and RAM requirements. While IDC predicts that, over time, internal device memory will increase due to usage, there will be an even stronger impact on storage requirements from features associated with computational photography. IDC predicts that vendors will meet this need in one of two ways. The first is increasing host internal storage capacity and performance to allow for advanced applications driven by computational photography. The second is an increase in demand for micro SD cards so vendors can offload this cost to users. More vendors will include a micro SD slot, particularly on tablets. In this way, consumers will enjoy the flexibility of being able to add their own cards and expand storage to meet their individual needs.
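As a rough, hypothetical illustration of why the requirements grow, the back-of-envelope calculation below adds a compressed per-frame depth stream to a conventional 1080p video. The bit rate, depth precision, and compression ratio are assumptions for illustration only, not IDC figures.

```python
# Illustrative assumptions: 1080p, 30 fps video plus a 16-bit depth map per frame.
width, height, fps = 1920, 1080, 30
rgb_bitrate_mbps = 17          # assumed compressed 1080p30 video bit rate
depth_bytes_per_pixel = 2      # assumed 16-bit depth value per pixel
depth_compression = 0.25       # assume depth compresses to ~25% of raw size

minutes = 10
rgb_mb = rgb_bitrate_mbps / 8 * 60 * minutes
depth_mb = (width * height * depth_bytes_per_pixel * fps
            * 60 * minutes * depth_compression / 1e6)

print(f"RGB video for {minutes} min:    ~{rgb_mb:,.0f} MB")
print(f"Depth stream for {minutes} min: ~{depth_mb:,.0f} MB")
print(f"Total:                          ~{rgb_mb + depth_mb:,.0f} MB")
```

Under these assumptions, ten minutes of conventional video is on the order of 1 GB, while the added depth stream is many times larger, which is the kind of pressure on host storage described above.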

Q. What new experiences will the ability to measure depth enable?

A. As we stated, computational photography provides the ability to measure light not just horizontally and vertically but also in depth. This will manifest in a couple of different capabilities. For instance, Google's Project Tango is a good example of how computational photography techniques can create a 3D experience that has high value for the user. It does this by letting users render a 3D map of internal spaces with a camera, something GPS is unable to do. This could allow Google to extend Google Earth to indoor locations and provide additional value for those looking to map more intimate spaces.
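As a sketch of the underlying geometry, the snippet below back-projects a depth image into a 3D point cloud using camera intrinsics, which is the basic building block of indoor mapping systems of this kind. The intrinsics values and function name are illustrative.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud.

    Each pixel (u, v) with depth d maps to camera-space coordinates
    X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d.
    Fusing such clouds from many camera poses yields an indoor 3D map.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)
    return points[depth > 0]          # drop pixels with no depth reading

# Example with illustrative intrinsics for a 640x480 depth sensor:
# cloud = depth_to_point_cloud(depth_image, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```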

Computational photography and 3D mapping will also extend mobile device use beyond consumer applications. IDC predicts that the ability to map internal spaces using computational photography will appeal to business and government organizations. For instance, law enforcement, security, and emergency personnel will be able to draw on internal maps in the same way they use GPS. Being able to map internal spaces in real time, such as public facilities and private residences, will allow first responders to better serve the public.

Q. How will computational photography impact consumer and professional video capture and production?

A. The ability to capture 3D video creates a new experience that goes beyond capturing still pictures. Capturing a video stream this way allows the user to adjust the focal point during playback or work in a third dimension while editing. This will bring a completely new user experience to what is essentially a 100-year-old technology.

IDC predicts that directors of photography, video production designers, and videographers will start to look at computational photography as a new way to be creative. Experimental techniques are already being used to fuse reality with virtual reality sets, and it is very difficult to discern the difference. Computational photography could drive sales of next-generation television sets, monitors, devices, and other types of services in a way that was not possible with the 3D initiatives rolled out several years ago. Computational photography will give users both the ability to capture still pictures more creatively and an immersive video experience. Finally, the storage requirements for capturing video within a computational photography context will be enormous, requiring much higher mobile device storage capacities and increasing device memory requirements.

About This Publication: This publication was produced by IDC Custom Solutions. The opinion, analysis, and research results presented herein are drawn from more detailed research and analysis independently conducted and published by IDC, unless specific vendor sponsorship is noted. IDC Custom Solutions makes IDC content available in a wide range of formats for distribution by various companies. A license to distribute IDC content does not imply endorsement of or opinion about the licensee. 

Copyright and Restrictions: Any IDC information or reference to IDC that is to be used in advertising, press releases, or promotional materials requires prior written approval from IDC. For permission requests contact the Custom Solutions information line at 508-988-7610 or gms@idc.com. Translation and/or localization of this document require an additional license from IDC.

This post is a contribution by Christopher Chute, a vice-president with IDC's Global Small/Mid-Sized Business and Digital Imaging practices. His research portfolio focuses on transformative technology trends impacting global small and medium business customers and the digital imaging space. Mr. Chute's domain includes SMB adoption of mobile and cloud IT solutions and DI/video trends across professional, broadcast/creative, and consumer markets.
