Nobody Behind The Camera
What happens when computers can see
Photography has always required a photographer. For nearly 200 years, there’s been someone in charge of the shutter button. In 1900, the Kodak Brownie popularised amateur photography; a century later, the rise of the smartphone has seen photo taking soar to 1.2 trillion per year. That surge in quantity has been matched by huge gains in quality, as focus and exposure controls improve and post-processing lets images be “fixed” with the press of an icon.
But we’re just at the start of a new revolution in photography — one where we’re no longer always the ones taking the photos. With all the recent talk about robots and Artificial Intelligence (AI) replacing humans, who would have thought that photography would be among the activities in the firing line so soon? Yet three product launches in recent months point to a suddenly impending change where we’re no longer the ones composing the best images.
No More Shutter
While humans are taking more photos than ever before, a new question arises: how many photos are now being taken without human intervention?
In December 2017, the Research at Google team quietly released the innocuous-looking Selfissimo app. With barely any interface at all, this app snaps selfies of you automatically when it detects a change in your pose. It’s a camera app with no shutter button — it decides when to take the photos.
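The underlying idea — capture automatically when the scene changes enough — can be sketched in a few lines. This is a toy illustration, not Selfissimo’s actual algorithm: frames are simplified to flat lists of grayscale pixel values, and the threshold is an arbitrary stand-in for a real pose-change model.

```python
def frame_difference(a, b):
    """Mean absolute pixel difference between two same-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def auto_capture(frames, threshold=30):
    """Return indices of frames a shutterless camera would capture:
    the first frame, then any frame that differs enough from the
    last captured one."""
    captured = []
    last = None
    for i, frame in enumerate(frames):
        if last is None or frame_difference(frame, last) > threshold:
            captured.append(i)
            last = frame
    return captured

# Toy frames: a static scene, then a sudden change of "pose".
still = [100] * 16
moved = [160] * 16
print(auto_capture([still, still, moved, moved]))  # → [0, 2]
```

A real implementation would run a vision model rather than raw pixel differences, but the control flow — watch continuously, fire the shutter only on a meaningful change — is the same.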
Heavily funded start-up Skydio unveiled their first product — an autonomous drone that can follow a person (even when they run) while dodging any obstacles in its way. Travelling at up to 25 miles per hour, it can track a person and record them in high-quality video with a smooth aerial perspective once the preserve of Hollywood movies. Although it’s rather pricey at $2,500, I would expect such functionality to become more affordable very rapidly.
At the end of February, the Google Hardware team finally launched the Clips “camera” that was announced back in October ‘17. This is a first-of-a-kind product that’s a little hard to describe. It seems insufficient to call it a camera; it’s more accurately described as a photographer and camera in one. It decides when to take the photos, and its purpose is to recognise and unobtrusively capture moments that you wouldn’t otherwise get. Moments where you are in the photo rather than taking the photo. Moments that your phone would ruin. It’s surprisingly small in real life, which definitely makes it unobtrusive but may add to the reaction of “it’s creepy”. At $250, it’s not especially cheap, but may appeal to parents or pet lovers seeking images a little different from the bland predictability that populates Facebook and Instagram feeds.
The pitch from Google: “Google Clips is smart enough to recognize great expressions, lighting and framing. So the camera captures beautiful, spontaneous images. And it gets smarter over time.”
The Clips is best thought of as an experiment and a showcase. Although it features a shutter button to reassure hesitant folk for whom a camera device without a button is a step too far, it’s really not intended to be used; it’s more a vestigial sop to a bygone era. It’s not a mass market device that will appeal to everyone but more a signal of what’s to come — intelligent devices that are watching us; in this instance, a benevolent device seeking to capture precious moments for us that we might otherwise miss. Amazon too are experimenting in this space, with their Echo Look product — a camera-focused version of their popular Alexa line that offers fashion advice by taking photos (via voice command) and applying AI to them.
For anyone who has seen the Tom Hanks & Emma Watson movie, The Circle, the notion of an artificially intelligent miniature camera will arouse immediate suspicion. Already fairly common in surveillance and security, the “leisure” applications of computers that can see are starting to emerge. The Google Clips is essentially constantly thinking: Is that person important? Do they look happy? Will this make a good photo? It can learn which faces are important if you take a close-up or allow it to learn from people tagged in an existing Google Photos album.
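To make that running judgement concrete, here is a hypothetical scoring sketch. None of this is Google’s actual model: the input signals (face count, a smile score, sharpness, and a bonus for a recognised face) are hand-written stand-ins for what on-device vision models would produce.

```python
def frame_score(faces, smile, sharpness, known_face_bonus=0.0):
    """Combine simple signals into one 'is this worth keeping?' score.
    The weights here are arbitrary illustrations."""
    return faces * 1.0 + smile * 2.0 + sharpness * 0.5 + known_face_bonus

def best_moments(frames, keep=2):
    """Keep only the top-scoring frames, like an on-device curator
    deciding which clips ever reach the user."""
    ranked = sorted(frames, key=lambda f: frame_score(**f), reverse=True)
    return ranked[:keep]

frames = [
    {"faces": 0, "smile": 0.0, "sharpness": 0.9},   # sharp but empty room
    {"faces": 2, "smile": 0.8, "sharpness": 0.7},   # a happy pair
    {"faces": 1, "smile": 0.1, "sharpness": 0.2},   # blurry frown
    {"faces": 1, "smile": 0.9, "sharpness": 0.8,
     "known_face_bonus": 1.0},                      # a recognised, happy face
]
for f in best_moments(frames):
    print(f)  # the recognised happy face first, then the happy pair
```

The interesting design choice this mirrors is that curation happens before storage: most frames are scored, discarded, and never seen by anyone.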
There are already plenty of home security cameras that can detect faces, or tell people from animals so as not to set off an alarm. While premium devices such as the Nest Camera or the new Lighthouse camera retail at $299, Wyze, created by former Amazon employees, offer an HD camera with motion detection for just $20. While these cameras are designed to be active primarily when you’re out of the house to detect intruders or adverse events, the application of AI to devices intended to photograph ourselves may be a tough sell to many people. Do you want an invisible, candid camera in your own home? Is it creepy or useful?
Google have been at pains to point out that the Clips device does not need an internet connection to function. The AI happens right on the device and your photos aren’t shared unless you choose to upload them. That should reassure most people, but as with voice assistants like Alexa and the Google Assistant, quite a few people remain skeptical about putting listening devices in their homes — and now we’re talking about watching devices too.
“In other words, we’re just starting to see the very earliest glimmers of what it might mean that computers will be able to see.”
This really is the beginning of a new era. We’re moving from enhanced photography to enhanced photographers. These are the early days of computers that can make qualitative judgements on the photographic merits of what they are observing. Computers were blind for the first few decades of their existence. Now, they are starting to see. And we may never see the world the same way again.