Around Thanksgiving 2016 I designed and built a little always-listening personal assistant for my phone that would constantly scrape my IRL conversations for data: something that could just listen all day as I moved about with my AirPods (this was built in anticipation of the AirPods release).
The conceptual ideal centered on Ashford eventually knowing what information to pre-emptively send to your phone, without you taking any action or issuing an explicit command. It knows who you need to meet, and when. It keeps everything you need to remember (phone numbers, emails, and so on) saved and pushes it to you at relevant times.
Eventually I got Ashford working to a point where it'd automatically save things like phone numbers, addresses, and reference numbers as I said them, with a push to confirm. It could also define complex words without intervention. It just listens as I speak with my phone tucked away in my pocket; if it picks up on something relevant, it saves it and sends me a push, and if it picks up on something esoteric, it defines it and sends me a push with the definition.
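To give a flavor of the "save the relevant bits" step, here's a minimal sketch of scanning a transcript chunk for pushable details. The patterns and function names are purely illustrative assumptions, not Ashford's actual implementation:

```python
import re

# Hypothetical patterns for two kinds of "relevant" details Ashford saves.
# Real phone/address/reference detection would need far more robust rules
# (or a proper entity recognizer); this only shows the shape of the scan.
PHONE_RE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")
REFERENCE_RE = re.compile(
    r"\b(?:confirmation|reference)\s+(?:number\s+)?(?:is\s+)?([A-Z0-9]{6,})\b",
    re.IGNORECASE,
)

def scan_transcript(text):
    """Return (kind, value) pairs worth saving and pushing to the phone."""
    found = []
    for match in PHONE_RE.finditer(text):
        found.append(("phone", match.group()))
    for match in REFERENCE_RE.finditer(text):
        found.append(("reference", match.group(1)))
    return found

print(scan_transcript("My confirmation number is X7K92QD, call me at 555-867-5309"))
```

In the real pipeline something like this would run over each transcribed utterance, and anything it catches would be queued as a confirm-to-save push notification.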
The service was codenamed Ashford (later changed to Apollo) and consisted of a Django backend that handled all the querying, recommendations, logic, and auth stuff. Entity queries were done with Wikipedia as a primary source and WordNet (via PyDictionary) as a fallback for dictionary definitions.
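The Wikipedia-primary, WordNet-fallback lookup described above can be sketched as a simple fallback chain. Both lookup functions below are stubbed with static data so the control flow is runnable without the real APIs; the stub contents and function names are my assumptions, not Ashford's code:

```python
def wikipedia_summary(term):
    # Stand-in for querying Wikipedia as the primary source
    # (e.g. fetching the first sentence of a page summary).
    stub = {"Django": "Django is a high-level Python web framework."}
    return stub.get(term)

def wordnet_definition(term):
    # Stand-in for the WordNet (via PyDictionary) dictionary fallback.
    stub = {"esoteric": "understood by only a small, specialized group"}
    return stub.get(term)

def define(term):
    """Try Wikipedia first; fall back to the dictionary if it has nothing."""
    return wikipedia_summary(term) or wordnet_definition(term)

print(define("Django"))    # served from the Wikipedia path
print(define("esoteric"))  # falls through to WordNet
```

The `or` chain is the whole design: entity-like terms tend to have Wikipedia pages, while plain vocabulary words don't, so the dictionary only fires when the encyclopedia comes up empty.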
Transcription was handled by Google's Cloud Speech API, streamed from the iOS app (written in Swift 3).
Ashford is currently just a (fragile) toy that I poke at once in a while; it has many hurdles to overcome before I could actually release it on the App Store (which I have no plans to do). Even if it weren't rejected outright by App Review, it'd (with current technology) probably decimate the battery life of both your AirPods and your phone. It's something I'll probably keep improving iteratively over time for fun.