MIT and Adobe Bring Down the (Image) Noise

There is no getting away from the fact that smartphones are becoming ever more like pocket computers. They pack substantially more processing power than the computers that took man to the moon, but there are areas where all that power still falls short. One such area is image processing.

Since smartphones are now the de facto camera of choice for the majority of the population, mobile imaging technologies are growing rapidly to deliver ever better images. Along with that comes massive growth in mobile image-processing applications: a quick look at any of the app stores will show that imaging apps are among the most popular, and most populated, categories.

The challenge, however, is that as image quality improves, and as individual photos capture more data via RAW mode or simply higher resolutions, image editing becomes ever more computationally intensive. That can mean longer processing times and faster battery drain. Waiting for edits to finish and watching the battery level plummet are two things smartphone users will not accept, so any technology that alleviates these issues is likely to be snapped up by end users.

The combined brains of MIT and Adobe might just have such a solution. At the SIGGRAPH Asia conference they demonstrated a system that reduces the load on the phone's processor by shipping the heavy data crunching off to a cloud server. This, they say, cuts power consumption by up to 85 percent and, thanks to their methodology, reduces bandwidth by as much as 98.5 percent.

Their solution uses noise: image noise, the kind photographers normally spend a long time trying to avoid. In essence, the phone creates a very low-resolution version of the file and sends it to the processing server, which adds high-frequency noise to recreate the detail.
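The two halves of that step can be pictured with a short sketch. The snippet below is a minimal illustration rather than the actual MIT/Adobe implementation: it downsamples a photo into a small proxy for upload, then adds Gaussian noise on the server side. The proxy size, noise level, and function names are all assumptions made for the example.

```python
# Minimal sketch of the proxy-image idea, using NumPy and Pillow.
# Proxy size, noise amplitude, and function names are illustrative
# assumptions, not the published MIT/Adobe pipeline.
import numpy as np
from PIL import Image

def make_proxy(path: str, proxy_size=(256, 256)) -> Image.Image:
    """Phone side: downsample the full-resolution photo into a small
    proxy that is cheap to upload."""
    return Image.open(path).resize(proxy_size, Image.LANCZOS)

def add_synthetic_detail(proxy: Image.Image, noise_sigma: float = 8.0) -> np.ndarray:
    """Server side: inject high-frequency Gaussian noise so the proxy
    regains texture-like detail for the editing algorithm to work with."""
    pixels = np.asarray(proxy, dtype=np.float32)
    noise = np.random.normal(0.0, noise_sigma, pixels.shape)
    return np.clip(pixels + noise, 0, 255).astype(np.uint8)
```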

The image is then broken into blocks, each of which is edited by a machine-learning algorithm. Currently they use around 25 parameters to adjust each block: things like luminance, saturation, sharpening and so on. Then, instead of sending the edited image back, the server simply sends back instructions, based on those parameters, for how to edit the high-resolution image stored locally on the smartphone.
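To make the idea concrete, here is a hedged sketch of what applying such a per-block recipe on the phone might look like. It uses just two parameters per block, a luminance gain and a saturation scale, as stand-ins for the roughly 25 the researchers describe; the 64-pixel block size and all the names here are assumptions for illustration only.

```python
# Illustrative per-block "recipe" application: the server returns only a
# small parameter grid, and the phone applies it to its local high-res
# image. Two parameters per block stand in for the ~25 described above.
import numpy as np

BLOCK = 64  # block size in pixels; an assumed value for this sketch

def apply_recipe(image: np.ndarray, recipe: np.ndarray) -> np.ndarray:
    """Apply one (luminance gain, saturation scale) pair per block to an
    HxWx3 float image with values in [0, 1]."""
    out = image.copy()
    h, w, _ = image.shape
    for bi in range(0, h, BLOCK):
        for bj in range(0, w, BLOCK):
            gain, sat = recipe[bi // BLOCK, bj // BLOCK]
            block = out[bi:bi + BLOCK, bj:bj + BLOCK]
            mean = block.mean(axis=2, keepdims=True)   # rough per-pixel luminance
            block = mean + sat * (block - mean)        # scale color away from gray
            out[bi:bi + BLOCK, bj:bj + BLOCK] = gain * block
    return np.clip(out, 0.0, 1.0)

# Example: brighten every block by 10% and boost saturation by 20%.
img = np.random.rand(256, 384, 3).astype(np.float32)   # stand-in photo
recipe = np.tile(np.array([1.1, 1.2], dtype=np.float32), (4, 6, 1))
edited = apply_recipe(img, recipe)
```

Even at this block size, a parameter grid for a multi-megapixel photo amounts to kilobytes rather than megabytes, which is where the bandwidth savings come from.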

While this still leaves some computational work on the phone, it is neither as power-hungry nor as bandwidth-hungry as shipping a high-resolution file back and forth. It also does not remove the need for high-capacity local storage, since the high-resolution files stay on the device.

At the moment, they say, the system works well for Instagram-style filters and global adjustments applied to the whole image, but it is less effective with edits that change the content, for example removing an element from the image.

Mobile imaging is here to stay, and this is just the next step in improving the image quality smartphones can produce and in giving end users more power in the palms of their hands.
