Computational Zoom Will Let Photographers Control Perspective And Focal Length Of Photos
While smartphone cameras have improved year over year, there's still a big gap between them and a professional camera. The iPhone 7 Plus introduced a dual-camera setup that enabled depth-of-field effects, at least to some extent. Many smartphones have followed the same path, but a lot of work remains. Now, researchers have developed a brand new photography technique called computational zoom. The method lets a photographer adjust the composition of an image after it has been taken, creating "physically unattainable" photos. So let's dive in to see some more details on the new technique and how it works.
Algorithm Lets Photographers Virtually Adjust Depth Of Images
Spotted by DPReview, researchers at the University of California, Santa Barbara, working with NVIDIA, have detailed the technique. Computational zoom is a technology that adjusts the perspective and focal length of an image after it has been shot. So basically, the photographer can tweak the image after it has been taken, altering its composition in post-processing.
To achieve this, photographers take a stack of photos at the same focal length, moving the camera slightly closer to the subject with each shot. Once the photos have been captured, the computational zoom algorithm produces a 3D rendering of the scene with multiple views derived from the photo stack. When all of that information has been collected, it is "used to synthesize multi-perspective images which have novel compositions through a user interface". At this point, photographers can alter a photo's composition through the software's algorithm. Do take note that all the processing and rendering is done in real time. For more details, check out the video embedded below.
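To get an intuition for what "multi-perspective" means here, consider a minimal sketch of the idea. Everything below is illustrative and hypothetical, not the researchers' actual code: it models two aligned grayscale shots of the same scene, taken at the same focal length but from different camera distances, as flat lists of pixel values, and uses a depth mask to keep foreground pixels from the near shot and background pixels from the far shot, yielding a composite no single camera position could capture.

```python
# Hypothetical sketch only: all names here are illustrative assumptions,
# not part of the actual computational zoom implementation.

def synthesize_multi_perspective(near_shot, far_shot, depth_mask, threshold=0.5):
    """Blend two views into one composition: foreground pixels come from
    the near shot, background pixels from the far shot."""
    if not (len(near_shot) == len(far_shot) == len(depth_mask)):
        raise ValueError("all inputs must be the same length")
    composite = []
    for near_px, far_px, depth in zip(near_shot, far_shot, depth_mask):
        # A small depth value means the pixel is close to the camera,
        # i.e. part of the foreground subject.
        composite.append(near_px if depth < threshold else far_px)
    return composite

# Toy scene of 6 pixels: the first three belong to a foreground subject,
# the rest to the background.
near = [200, 210, 220, 40, 50, 60]   # near view: subject looks large
far  = [100, 110, 120, 80, 90, 95]   # far view: background compressed
mask = [0.1, 0.2, 0.3, 0.8, 0.9, 0.7]

print(synthesize_multi_perspective(near, far, mask))
# The subject keeps the near shot's perspective while the background
# keeps the far shot's look.
```

The real system works on full 3D reconstructions with per-pixel view selection chosen interactively, but the core trade it makes is the same: different parts of the frame are drawn from different viewpoints.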
According to the researchers, the generated compositions are physically unattainable and give photographers greater control over elements like depth and object size. In the end, the final photo is a combination of many shots rather than a single exposure. The team suggests the technology will be made available as plug-ins for photographers.
That's it for now, folks. What are your thoughts on the new computational zoom technique? Will it be available to photographers anytime soon? Share your views with us in the comments.