Blog posts, for me, are like kidney stones. Or at least how I imagine kidney stones to be, since I’ve never had them personally. They usually begin with a question or idea, and then tumble around in my head until they crystallize in a way that compels me to write, for to *not* do so would almost be painful. Hours, days, months can elapse before this moment occurs. Or sometimes it never does — either the topic loses my interest, or even better, someone else covers the same ground and I can just point to theirs. (I say “even better” in these cases, but there’s probably a 10% “damn, I wanted to write that myself,” if I’m being truly honest.)
“Apple and AI and Cameras” had been on my list since last autumn. And I’d been saving pictures to accompany the post — of realtor signs, pieces of art, store windows. All examples of pictures where richer data links could be automatically inserted into the metadata or literally overlaid on the photo itself. The static image in the photo roll turned into something richer: captured phone numbers become click-to-call, identifiable items show prices and ordering information, operating hours and other business info display automatically, and artist bios and other works stack as images behind the museum piece I snapped.
If cameras are a platform — and I believe they are — I’m surprised how little innovation has occurred in our camera rolls. Not auto-organizing and facial recognition, but using an understanding of the image contents to supplement photos with information and applicable actions. Enough utility would even change what we photograph and why (for example, snap a whole wall of books at a bookstore and have it tell me which five I’d like most).
But the post just never came out of me. So I was excited (at least 90% excited, that is) when my friend M.G. penned something similar in his newsletter. “The Camera as the App Layer” says: “What I really want in a mobile OS is the ability to fire up the camera, take a picture, and launch apps and/or services from there based on that picture.” Exactly.