Mobile phone photography has come a long way from the days of 0.3-megapixel VGA cameras. Over the past few years, smartphone camera technology has grown exponentially. From having one rear camera, we now have multiple cameras on both the rear and the front of a phone. But is it just adding more cameras that's making all the difference? Or is there more than meets the eye? Let's find out.
Nowadays, we hear a lot of smartphone manufacturers talking about artificial intelligence (AI) and machine learning being implemented in their phones. But what does this really mean? And how do these technologies affect our smartphone photography experience?
What is machine learning & artificial intelligence?
It's simple: machine learning is a process by which a machine learns from experience, using algorithms that recognize patterns in data and act according to those patterns. Artificial intelligence, on the other hand, is the broader idea of exhibiting intelligence, including abstract thinking, creativity, strategy, and context. So, what role do they play in smartphone photography? AI and machine learning in a smartphone camera can recognize different types of scenes and switch modes accordingly. They can also adjust aspects of the image, such as ISO, white balance, and exposure, even before you press the shutter.
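To make the idea concrete, here is a toy sketch of scene-aware settings. Everything here is hypothetical: real phones use trained neural networks running on dedicated hardware, not hand-written rules, and the setting values are invented for illustration.

```python
# Hypothetical settings a camera app might pick per recognized scene.
SCENE_SETTINGS = {
    "landscape": {"iso": 100,  "white_balance": "daylight", "exposure_ev": 0.0},
    "night":     {"iso": 1600, "white_balance": "auto",     "exposure_ev": 1.0},
    "portrait":  {"iso": 200,  "white_balance": "auto",     "exposure_ev": 0.3},
}

def classify_scene(mean_brightness, face_detected):
    """Stand-in for an ML scene classifier, using crude hand-written rules."""
    if face_detected:
        return "portrait"
    if mean_brightness < 40:  # very dark frame -> assume a night scene
        return "night"
    return "landscape"

def auto_configure(mean_brightness, face_detected):
    """Pick camera settings for the detected scene before the shot is taken."""
    scene = classify_scene(mean_brightness, face_detected)
    return scene, SCENE_SETTINGS[scene]

scene, settings = auto_configure(mean_brightness=25, face_detected=False)
print(scene, settings["iso"])  # night 1600
```

The point is the shape of the pipeline, not the rules: the classifier's output drives the camera's configuration automatically, with no input from the user.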
Now, let's talk about the elephant in the room. While everyone else in the smartphone industry is implementing more and more cameras on their phones, Google is adamant about using a single camera and still beating everyone at the smartphone photography game. Google's Pixel smartphones are known to have some of the best cameras on the market, and they manage all this with just one camera on the back.
What’s their secret?
They're achieving this feat with the help of extraordinary software. Ever since Google ditched the Nexus series to make way for the flagship Pixel line-up that could compete with iPhones, it has taken smartphone cameras and photography very seriously. So much so that the Pixel series has become synonymous with great cameras; it's in the name, after all. Google wants us all to understand that smartphone photography is not all about extra cameras and megapixels. A great deal depends on the Image Signal Processor (ISP), the silicon used in the phone, and computational photography. What is computational photography, you may ask? It refers to digital image capture and processing techniques that use digital computation instead of optical processes. The software game in smartphone photography is gaining so much traction that even major players like Apple, Samsung, and Huawei are adopting it to improve their cameras.
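A classic computational-photography trick illustrates the idea: instead of a bigger lens, average a burst of noisy frames in software to reduce noise. This is a heavily simplified, single-pixel sketch; real pipelines also align frames and merge them robustly.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

TRUE_PIXEL = 128  # the "real" brightness of one pixel in the scene

def noisy_frame():
    # Simulate one captured value with random sensor noise.
    return TRUE_PIXEL + random.gauss(0, 20)

# Capture a burst of 16 frames and merge them by simple averaging.
burst = [noisy_frame() for _ in range(16)]
merged = sum(burst) / len(burst)

print(round(merged, 1))  # close to 128, with far less noise than any one frame
```

Averaging N independent noisy samples shrinks the noise by a factor of roughly the square root of N, which is why burst merging can substitute computation for optics.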
Now, let's talk about the features Google has introduced with the help of great software. The Pixel 2 and 2 XL were considered among the best smartphone cameras of last year, so the recently launched Pixel 3 and 3 XL were expected to lead the smartphone photography game. This year, Google brought a lot of new features: Top Shot, Night Sight, Photobooth, Super Res Zoom, Playground mode, and Motion Auto Focus. Let's go over each of them briefly.
Top Shot
Top Shot captures the perfect action shot by recommending the best frame from the moment you captured. The mode uses machine learning to pick out the best images as you press the shutter key. To be more specific, it watches for smiles (with eyes open) and checks gaze, focus, and blur when picking out the top snaps.
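The selection step can be sketched as scoring each frame in a burst and keeping the best one. The frames and weights below are hypothetical; the real feature uses on-device ML models to detect smiles, open eyes, and blur.

```python
def score(frame):
    # Higher is better: reward smiles and open eyes, penalize blur.
    # These weights are invented for illustration.
    return 2.0 * frame["smile"] + 1.5 * frame["eyes_open"] - 3.0 * frame["blur"]

# A pretend burst of frames with per-frame attribute estimates in [0, 1].
burst = [
    {"id": 1, "smile": 0.2, "eyes_open": 1.0, "blur": 0.10},
    {"id": 2, "smile": 0.9, "eyes_open": 1.0, "blur": 0.05},  # best candidate
    {"id": 3, "smile": 0.9, "eyes_open": 0.0, "blur": 0.00},  # eyes closed
]

top_shot = max(burst, key=score)
print(top_shot["id"])  # 2
```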
Night Sight
This feature blew everyone away when it was announced at the keynote. It uses "state of the art techniques in computational photography and AI" to help you capture detailed shots in low light without any external light or flash. In fact, Google's Liza Ma says the mode relies on machine learning to choose the correct colours for the scene.
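Google's learned color correction is proprietary, so as a stand-in, here is the classical "gray world" white-balance heuristic: assume the average scene color should be gray, and scale each channel toward that average. The pixel values are invented for illustration.

```python
def gray_world_gains(pixels):
    """pixels: list of (r, g, b) tuples. Returns per-channel gain factors."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    gray = (avg_r + avg_g + avg_b) / 3
    # Scale each channel so its average lands on the neutral gray level.
    return (gray / avg_r, gray / avg_g, gray / avg_b)

# A warm, orange-tinted low-light frame (hypothetical values).
frame = [(200, 120, 80), (180, 110, 70), (220, 130, 90)]
gr, gg, gb = gray_world_gains(frame)
balanced = [(p[0] * gr, p[1] * gg, p[2] * gb) for p in frame]
```

After correction, the three channel averages are equal, removing the overall color cast. A learned model can do much better because it knows which scenes really are tinted and which should stay that way.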
Photobooth
This feature is much simpler to understand than the others on the list. It lets you take a picture just by smiling at the camera. Of course, you need to turn the feature on from the settings menu first.
Super Res Zoom
With this feature, Google aims to bring improved digital zoom to the table. It lets you zoom in on a subject without the pixelation that occurs on most phones by taking a burst of photos and merging them all to deliver better levels of detail. It even takes advantage of your natural hand shake: the tiny movements between frames sample the scene at slightly different positions, letting the software artificially add in pixels for better-quality zoom.
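A one-dimensional toy shows why those tiny shifts help: two low-resolution frames offset by a fraction of a "pixel" sample different points of the scene and can be interleaved into a higher-resolution result. This is heavily simplified; the real pipeline aligns and merges 2-D frames robustly.

```python
scene = [10, 20, 30, 40, 50, 60, 70, 80]  # the "true" fine-grained scene

def capture(offset):
    # Low-res sensor: sees only every other sample, starting at `offset`.
    return scene[offset::2]

frame_a = capture(0)  # [10, 30, 50, 70]
frame_b = capture(1)  # [20, 40, 60, 80] -- shifted by "hand jitter"

# Interleave the two shifted low-res frames back into the full resolution.
merged = [v for pair in zip(frame_a, frame_b) for v in pair]
print(merged == scene)  # True
```

No single frame contains the full detail, but the pair together does, which is the core insight behind multi-frame super-resolution.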
Playground
AI and machine learning are not the only technologies gaining hype these days; AR (augmented reality) is getting quite popular, too. This mode lets you add AR-powered characters, stickers, and more to scenes, and these "Playmoji" characters will react to your expressions, too.
Motion Auto Focus
As you can guess from the name, this feature keeps a moving subject in focus, whether you are using the front camera or the rear one.
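Keeping a subject in focus requires tracking it between frames. As a toy illustration, here is the simplest possible tracker: slide a small patch of the subject over the next frame and find the position with the lowest sum of squared differences (SSD). Real motion autofocus uses far more robust, hardware-accelerated tracking; the 1-D "frames" here are invented.

```python
def best_match(template, row):
    """Return the index in `row` where `template` matches best (lowest SSD)."""
    k = len(template)
    best_i, best_ssd = 0, float("inf")
    for i in range(len(row) - k + 1):
        ssd = sum((row[i + j] - template[j]) ** 2 for j in range(k))
        if ssd < best_ssd:
            best_i, best_ssd = i, ssd
    return best_i

frame1 = [0, 0, 9, 8, 9, 0, 0, 0, 0, 0]  # bright subject around index 2
subject = frame1[2:5]                     # the patch to keep in focus
frame2 = [0, 0, 0, 0, 0, 9, 8, 9, 0, 0]  # subject moved to index 5

print(best_match(subject, frame2))  # 5
```

Once the camera knows where the subject landed in the new frame, it can re-aim the autofocus there, which is the essence of the feature.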
All this is solid evidence for the point Google is trying to prove: we don't, in fact, need a bunch of cameras to improve smartphone photography. Computational photography is leading us toward the day when we can take DSLR-grade pictures using just our smartphone cameras.