What is computational photography?

In the context of Android smartphones and iPhones, we hear more and more about computational photography: but what is it really about?

This term covers a range of image-processing techniques capable of making a single shot look better than what was actually captured by the lens. In practical terms, this is done by taking several photographs and combining them: through dedicated algorithms, a sort of “definitive photograph” is produced, bringing together the best features of each individual shot.

As stated at the beginning, today’s mobile devices (thanks to extremely high precision cameras) have made this technology practically within everyone’s reach, even those who are not particularly familiar with photo editing software.

Image: a Xiaomi telescopic camera

Computational photography: techniques increasingly within everyone’s reach

Some smartphone makers, such as Apple and Google, continually improve the photographic capabilities of their devices year after year without ever drastically changing the camera’s physical sensors.

The way a camera digitally “captures” a photo can be roughly divided into two parts: the actual photographic capture and the image processing. The first stage is the work of the lens and sensor that capture the photograph. This is where factors such as sensor size, lens speed (aperture), and focal length come into play. In this stage, a traditional camera (such as a DSLR) manages to have an edge over a mobile device which, however high-end, is somehow “adapted” to photography.
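
As a rough illustration of how these physical factors interact, here is a minimal Python sketch using the standard crop-factor and exposure-value formulas; the sensor size, aperture and shutter speed used below are purely illustrative numbers, not values from any specific phone.

```python
import math

# A 35 mm "full frame" sensor has a diagonal of about 43.3 mm; the crop factor
# tells us how much smaller a given sensor is, and therefore how its focal
# length compares in 35 mm-equivalent terms.
FULL_FRAME_DIAGONAL_MM = 43.3

def crop_factor(sensor_diagonal_mm: float) -> float:
    return FULL_FRAME_DIAGONAL_MM / sensor_diagonal_mm

def equivalent_focal_length(focal_mm: float, sensor_diagonal_mm: float) -> float:
    """Focal length expressed in 35 mm-equivalent terms."""
    return focal_mm * crop_factor(sensor_diagonal_mm)

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Illustrative numbers: a phone sensor with a ~9.5 mm diagonal
# behind a 6 mm f/1.8 lens, shooting at 1/120 s.
print(equivalent_focal_length(6.0, 9.5))   # ~27 mm equivalent
print(exposure_value(1.8, 1 / 120))        # ~8.6 EV
```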

The second part is image processing. This phase begins when the software uses computational techniques to enhance a photo. The techniques in question vary from device to device and from manufacturer to manufacturer. Typically, though, both stages work together to create an impressive photograph.

High-end phones tend to have rather small sensors and relatively underperforming lenses, because their size must inevitably be kept down. This is why today’s smartphones have to rely on image processing methods to create great-looking photos. Computational photography is not necessarily more or less important than physical optics: they are simply two different phases.

Obviously, a traditional camera has a significant strategic advantage over a smartphone, even the most expensive one. This is mainly because, on a camera, the sensors and lenses are much larger, leading to an obvious hardware advantage.

That said, smartphones also have advantages over cameras, and this is where computational photography comes into play.

Image: a person uses a smartphone while driving

The main techniques of computational photography

There are some remarkable computational photography techniques used by smartphones to create impressive images. One of the most famous and widely used is stacking.

It is a process in which the camera takes multiple photos at different times, with different exposures or focal lengths. The various shots are then combined by the software to preserve the best details of each image.
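
A minimal sketch of the idea, assuming the burst frames have already been captured, aligned, and loaded as NumPy arrays of the same size (real pipelines also align and weight the frames, which is omitted here):

```python
import numpy as np

def stack_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Naive stacking: average a burst of aligned frames to reduce noise.

    Each frame is expected as an HxWx3 uint8 array. Random sensor noise
    tends to average out across the shots, while real scene detail is kept.
    """
    stacked = np.mean(np.stack(frames).astype(np.float32), axis=0)
    return np.clip(stacked, 0, 255).astype(np.uint8)

# Usage (hypothetical loader): frames = [load_frame(i) for i in range(8)]
# result = stack_frames(frames)
```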

Stacking is responsible for most of the huge advances made in mobile photography software in recent years and is used in most modern smartphones. It is also the technology on which HDR (High Dynamic Range) photography is based.

Since the dynamic range of a photograph is limited by the exposure of that specific shot, HDR captures the image at several different exposure levels. It then combines the deepest shadows and the brightest highlights to create a photo with a wider tonal range. HDR is a key feature of any high-end smartphone camera.
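
To give an idea of the principle, here is a simplified exposure-fusion sketch in Python: each bracketed shot contributes more where its pixels are well exposed (close to mid-gray) and less where they are crushed or blown out. Real implementations work on multi-scale pyramids and handle alignment; this flat version only illustrates the weighting idea.

```python
import numpy as np

def fuse_exposures(shots: list[np.ndarray]) -> np.ndarray:
    """Simplified exposure fusion of bracketed shots (HxWx3 uint8 arrays).

    Each pixel is weighted by its "well-exposedness": values near mid-gray
    get a high weight, values near pure black or pure white get a low one.
    """
    imgs = [s.astype(np.float32) / 255.0 for s in shots]
    weights = []
    for img in imgs:
        gray = img.mean(axis=2)
        # Gaussian-shaped weight centred on 0.5 (mid-gray)
        w = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6
        weights.append(w)
    total = np.sum(weights, axis=0)
    fused = sum(img * (w / total)[..., None] for img, w in zip(imgs, weights))
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)
```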

Pixel binning is another process, used by smartphone cameras with high-megapixel sensors. Rather than “stacking” different photos on top of each other, it combines adjacent pixels on the sensor. The final output is a smaller, lower-resolution image that is nonetheless more detailed and less noisy. Unless you want to print a poster, the result still looks perfectly good to the human eye.
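
A minimal NumPy sketch of 2×2 binning on a single-channel, raw-style readout, assuming four physical pixels are simply averaged into one output pixel (real sensors bin in hardware and handle the color filter pattern, which is ignored here):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of sensor pixels into a single output pixel.

    Takes an HxW single-channel array (e.g. a 48 MP readout) and returns an
    (H/2)x(W/2) array: a quarter of the resolution, but less noise per pixel.
    """
    h, w = raw.shape
    h -= h % 2  # drop a trailing row/column if the size is odd
    w -= w % 2
    blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```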

The cameras of today’s smartphones are often calibrated through a neural network, which is a series of algorithms that process data. These neural networks can recognize what constitutes a good photo, so the software can create an image that is pleasing to our eyes.

Other rather common techniques

Virtually every photograph we take with our smartphone uses computational photography to improve the image, to a greater or lesser extent. In recent years, features that have become household names have gradually been introduced: we use them almost without realizing it, yet they all belong to this family of techniques. Below we list the main ones.

Night mode, for example, is perhaps the most common. This process uses HDR-style techniques to combine photos taken over a range of different exposure lengths to enhance an image shot in low-light conditions. The final photo contains more detail and appears more adequately lit than one taken with a single exposure.
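
Conceptually this is close to the stacking sketch shown earlier: many short, dark exposures are averaged to keep noise down and then brightened. A simplified toy version, assuming already-aligned uint8 frames and with the gain and gamma values chosen arbitrarily for illustration:

```python
import numpy as np

def night_mode(frames: list[np.ndarray], gain: float = 4.0, gamma: float = 0.8) -> np.ndarray:
    """Toy night mode: average short exposures, then apply gain and a gamma lift.

    Averaging suppresses random noise; the gain and gamma brighten the shadows
    while clipping keeps the highlights from blowing out completely.
    """
    avg = np.mean(np.stack(frames).astype(np.float32), axis=0) / 255.0
    bright = np.clip(avg * gain, 0.0, 1.0) ** gamma
    return (bright * 255).astype(np.uint8)
```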

Slightly different is astrophotography mode, offered on Google Pixel phones. This mode is entirely devoted to photographs of the night sky and gives its best with stars and other celestial bodies.

Portrait mode is also very much in vogue. It allows the photo to focus on a single subject, blurring everything in the background. It uses software to analyze the depth of an object relative to other objects in the image, then blurs those that appear farther away. Panorama mode is also quite widespread. It allows you to compose images side by side and then combine them into one large, high-resolution image.
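
A rough sketch of the portrait-mode idea, assuming a depth map of the scene is already available (on real phones it comes from dual cameras, dedicated sensors, or machine-learning depth estimation). Pixels whose estimated depth differs from the subject’s are replaced by a blurred copy of the image; OpenCV is used here only for the Gaussian blur, and the threshold values are arbitrary.

```python
import cv2
import numpy as np

def portrait_blur(image: np.ndarray, depth: np.ndarray, subject_depth: float,
                  tolerance: float = 0.1, sigma: float = 12.0) -> np.ndarray:
    """Blur everything whose depth differs noticeably from the subject's depth.

    image: HxWx3 uint8 photo; depth: HxW float map normalised to [0, 1].
    """
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    # 1.0 where the pixel belongs to the subject, 0.0 where it is background
    mask = (np.abs(depth - subject_depth) < tolerance).astype(np.float32)[..., None]
    out = image.astype(np.float32) * mask + blurred.astype(np.float32) * (1.0 - mask)
    return out.astype(np.uint8)
```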

Deep Fusion is a relatively recent mode, first introduced with the iPhone 11. This process uses neural network technology to significantly reduce noise and improve detail in the shots. It is especially useful for capturing images indoors in low to medium light.

Color tone, instead, is a process smartphones use to automatically optimize the tones of any photo taken. It is applied before any manual changes the user makes through filters or other editing tools.
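
As an illustration of automatic tone optimization, here is a very simple sketch: a percentile-based contrast stretch plus a mild gamma tweak. Actual manufacturer pipelines are far more sophisticated and often scene-aware; this only shows the kind of adjustment applied before the user touches anything, and the percentile and gamma values are arbitrary.

```python
import numpy as np

def auto_tone(image: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0,
              gamma: float = 0.95) -> np.ndarray:
    """Stretch the tonal range between two percentiles, then apply a mild gamma."""
    img = image.astype(np.float32)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (stretched ** gamma * 255).astype(np.uint8)
```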

The functions related to computational photography vary in effectiveness depending on the manufacturer and the specific smartphone model on which they are applied. For example, Color Tone on Google devices generally takes a more “natural” approach, while Samsung models tend toward higher saturation and more pronounced contrast.
