Introducing Reader Vision
The concept stems from a simple problem I see all the time: difficulty reading menus in dark restaurants and bars. One particular story sticks out.
My father was visiting San Francisco and we found ourselves at Anchor & Hope, a seafood spot in SoMa. The lighting was a typical ‘romantic’ vibe, warm and a bit dim. When handed the menu, my father pulled out his glasses and attempted to read it. Then his phone came out, flashlight on. And he still couldn’t decipher it! I resigned myself to just reading the items out loud for him. Even the woman at the next table, who was probably 20 years younger, turned to us and said “I had to do the same thing.”
I’ve seen this happen plenty of times. People, especially as they age, have difficulty reading things in dimly lit situations; they pull out their phones and still struggle. Reader Vision alleviates this problem by using camera filters & flash to make text in dimly lit situations more legible.
The goal
I’ve chosen to release Reader Vision for free for several reasons. There are a number of apps that do a similar thing, but all that I’ve tried are outdated, poorly designed, don’t work half as well (or at all), and often cost a couple dollars. Sure, I could sell it for a dollar, but I think the bigger picture matters here. I want to leverage Reader Vision as a mechanism for feedback and insights into the aging population.
Everybody today is making apps for millennials or younger crowds. Building software for an older population is not touted as hot and sexy. However, there’s a huge population of baby boomers that now have smartphones. But they’re not out there using Snapchat. I believe they are currently an underserved demographic, and I’m curious what the future of technology holds for them.
Reader Vision will remain on the App Store for free, and I have no intention of monetizing it. I’ll run a few experiments and campaigns with the goal of learning from its users. Hopefully it will act as a gateway into the needs of older mobile users.
Feature-wise, an idea I played around with a lot initially was using OCR to actually decipher and display the text. There are a lot of cool and exciting technologies out there, such as Tesseract and Apple’s CoreML and VisionKit. A couple of good guides: https://www.appcoda.com/vision-framework-introduction/ & http://www.neurosurg.de/2017/10/17/part-1-how-simple-is-it-ocr-without-tesseract-yeah/. However, I did not find the results consistent enough to be useful to the user at the moment.
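For the curious, here is a minimal sketch of what an OCR prototype along those lines can look like with Apple’s Vision framework. This assumes iOS 13+ and `VNRecognizeTextRequest`; the function and variable names are my own illustration, not the shipped app’s code.

```swift
import UIKit
import Vision

// Hypothetical helper: run on-device text recognition over a captured
// frame and hand back the recognized lines of text.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            completion([])
            return
        }
        // Take the top candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    // Favor accuracy over speed -- menu type in dim light is small and noisy.
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Even with a setup like this, low light, glare, and stylized menu fonts degrade recognition quality quickly, which is why the filter-and-flash approach shipped instead.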