Until relatively recently, if someone were to ask for your opinion on “The Edge” you’d probably default to talking about why he’s no Bono but U2 really couldn’t function without him. However, over the last couple of years there has been an increasing buzz around a different Edge: “Edge Computing”.
In a nutshell, this non-U2-related Edge refers to pushing large amounts of compute down from the cloud (so 2016, darling) to the devices themselves. From the center of a theoretical network diagram to the edge.
So why is this a thing, given we’ve all been so excited about cloud computing for so long? In the longer term, as smart devices like robots, drones, autonomous vehicles and various IoT thingies become ubiquitous, they will need to be able to make intelligent decisions through processing data incredibly quickly — much faster than networks can send information to and from the cloud for central processing. Although this won’t make the cloud completely redundant, investors like Peter Levine from A16Z believe that the vast majority of processing will soon become decentralized.
While talk of self-driving cars and robots may make it seem like this shift is some way off, we’re already seeing mobile phones — the original edge device — pick up a lot of the processing for complex machine learning tasks: Snapchat and Facebook Messenger with their fun face lenses, Prisma’s cool photo transformations, placing furniture in your room in the IKEA app, and Siri’s and Google Assistant’s voice-based natural language processing. These applications may seem frivolous, but they represent the beginning of a shift as huge as the move from mainframe computing to client-server, and the subsequent move to the cloud.
However, the challenge with running these compute tasks at the edge is that phones (and other such devices) are far less powerful than the servers these complex models were designed to run on, and waiting on a remote server to process the data seriously degrades the user experience. Imagine a noticeable lag every time you stick your tongue out while using that cool Snapchat dog filter. That’s not fun. Or a self-driving car that needs a round-trip to the cloud to make a decision about an object in the road? Downright dangerous.
Enter Fritz — a developer platform that helps software teams optimize ML models, easily deploy them to the edge, update with one click, and get all sorts of cool analytics and tools for A/B testing. Fritz sits on top of leading platforms like Core ML and TensorFlow Lite (with more coming) and has gained traction with developers working on computer vision including image and scene recognition, object detection and artistic style transfer. We envision many other categories like UX personalization and big data processing coming down the line.
I’ve written in the past about the idea of letting every software team act like the best in the world by putting powerful tools in their hands — Fritz enables any business to enjoy the same kind of tooling and power as the computer vision and machine learning teams inside companies like Snap, Facebook, and Tesla.
We invested in Q3 2017 alongside our friends at Eniac and Hack.vc and we’re thrilled to officially welcome Jameson, Dan and the rest of the Fritz team to the Uncork family (and look forward to making bad U2 jokes for the foreseeable future). Oh, and check out their community newsletter Heartbeat, which covers all manner of cool machine learning and edge computing projects.