I consider myself very lucky. After working on recommendation algorithms and a FinTech chatbot platform, and starting the Budapest Bots Meetup group, I've ended up in one of Silicon Valley's most prestigious incubators with Bicycle AI. We are working on a human + AI hybrid solution for customer service.
These past couple of months have sobered me up from the AI and chatbot hype. The Bay can be a tough place: people will tell you how it is, and you will be all the better for it. So here are the 3 most important lessons I learned working at a conversational AI startup in Silicon Valley. And just to put things into context, I have a psychology background.
1. Don’t call it AI if it is not
So what is AI anyway? We can't really agree on this even within the company. One common definition:
“AI is the capability of a machine to imitate intelligent human behavior”
So is classifying potentially cancerous lesions in the lungs imitating intelligent human behavior? Is understanding a customer's questions and suggesting accurate answers from a large data set what we call AI? We are still far away from an Artificial General Intelligence (AGI) and the Technological Singularity, but what we can do with today's AI is quite staggering.
What should we call AI then? Artificial Intelligence has indeed become a buzzword; at CES this year we saw an explosion of "AI" shoehorned into every imaginable consumer electronic device, from toothbrushes to washing machines. Please don't call this AI. I also wouldn't call it AI when you rely on one-size-fits-all 3rd party algorithms like NLP engines and cloud vision APIs. Go for open source alternatives, tweak them, test them, and make them work the best for your specific use case.
So why do I say that out-of-the-box solutions are not good enough to build your business on? First and foremost, they are one-size-fits-all, so a properly tuned proprietary model will always outperform them. This was the case for us as well: we started out with IBM's Watson, then built our own solution, ran them side by side for a while, and once ours performed better, we switched entirely. Second, you have basically zero control over what you get back. In the case of NLP APIs, you send a string and you get back a snippet: great, but not nearly enough for most use cases. And there is a reason these services are free or priced competitively for now: they need data, your data. I'd say it is fine to start out with 3rd party services, but build fast to replace them. And if your team doesn't have the capability, hire someone who does, and fast.
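The side-by-side evaluation described above can be sketched in a few lines. This is a toy illustration, not our actual setup: `watson_classify` and `inhouse_classify` are hypothetical stand-ins for the 3rd-party service and the proprietary model, and the keyword matching is just a placeholder for real classifiers.

```python
def watson_classify(text):
    # Stand-in for a 3rd-party intent API: you send a string, you get a label back.
    return "billing" if "invoice" in text else "other"

def inhouse_classify(text):
    # Stand-in for a proprietary model tuned on your own data.
    keywords = {"invoice": "billing", "refund": "refunds", "password": "account"}
    for word, intent in keywords.items():
        if word in text:
            return intent
    return "other"

def accuracy(classify, labeled_examples):
    """Fraction of held-out examples the classifier labels correctly."""
    hits = sum(1 for text, intent in labeled_examples if classify(text) == intent)
    return hits / len(labeled_examples)

# A labeled holdout set of real customer messages you never train on.
holdout = [
    ("where is my invoice", "billing"),
    ("i want a refund", "refunds"),
    ("reset my password", "account"),
    ("hello there", "other"),
]

# Run both side by side; switch entirely once the in-house model wins.
use_inhouse = accuracy(inhouse_classify, holdout) > accuracy(watson_classify, holdout)
```

The point is the process, not the code: keep a holdout set, measure both systems on it continuously, and only cut over when your own model demonstrably outperforms the rented one.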
When it comes to starting your own business in the AI space, I'd suggest you go over this Y Combinator blog post before claiming you have an AI company. Basically, none of the topics in that post should make you uncomfortable when a VC partner asks about them. And believe me, they will.
2. Don’t do AI for AI’s sake
AI is the latest buzzword, even though it has been around and enjoyed periodic hype cycles since the 1950s. Full-stack AI companies solving real problems will thrive; others will ride the wave, get money, burn it, and die, eroding VC trust.
Some are calling AI the Cleantech of today, and with good reason. You can have the best AI algorithms in your arsenal, but if you can't apply them in a way that solves real problems, then what is the point? Great AI applied in a useful manner is the key, just as with any other technological innovation. Look at Google Photos: it uses Google's powerful Cloud Vision APIs to great effect, yet you don't see it touted at every turn.
I work with conversational AI, and for many, AI means assistants like Alexa or the Google Assistant. Today, with chatbots on the Messenger Platform, it is easier than ever to throw together something that resembles a conversational AI agent. Yet is it worth building the 10th restaurant search chatbot, even if it would have world-class AI? Apply AI to fields where it can generate real value, like the already mentioned lung cancer diagnostics (you can even win a million dollars doing it, which doesn't hurt).
3. You will need humans
This was my biggest lesson. Unless you are an already gigantic company with a lot of data or are working on existing, publicly available data sets, you will need a lot of humans. And by humans I mean users, who generate the specialized data set that your algorithms can then work on.
HealthTap is a great example. Over the past 7 years they have built a wildly successful platform with 107 thousand doctors, giving them access to huge amounts of proprietary data. And yes, they launched Dr. AI, their AI-powered personal physician, late last year.
Tesla's autopilot program is another good example of how you can use your existing user base to generate the data you need to get to the next stage. And self-driving AI is not the end goal; it is but the means to get to the stage where you don't need to own a Tesla to take advantage of the technology. With true self-driving capabilities, Tesla can get into the transportation sharing business, a vertical where Uber reigns supreme today. Uber is also working on self-driving technology, albeit with a different approach, one that is in serious jeopardy.
And we haven't even mentioned the human workforce teaching the algorithms. After all, even if you have lots of training data, you still need to supervise the learning process to fine-tune the model. This is what x.ai is doing, and it is what you have to be prepared for as well if you go down the AI route, be it with users, your own employees, or crowdsourced labor from the likes of Upwork and Crowdflower. Yes, the necessity of human supervision is bound to go down as your models progress, but depending on the field you choose, that can take years. And the lower your error rates are, the harder it is to improve, which, again depending on your field, can make or break your business.
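One common shape for this human supervision is a confidence-threshold fallback: the model answers what it is sure about, and everything else goes to a person, whose correction becomes new training data. A minimal sketch, with `predict_with_confidence` and `human_label` as purely illustrative stand-ins for a real model and a real annotator:

```python
CONFIDENCE_THRESHOLD = 0.8

def predict_with_confidence(text):
    # Stand-in for a model returning (label, confidence).
    if "cancel" in text:
        return ("cancellation", 0.95)
    return ("other", 0.4)

def human_label(text):
    # Stand-in for a human annotator (an employee or a crowd worker).
    return "shipping"

training_queue = []  # corrected examples, used later to fine-tune the model

def handle(text):
    label, confidence = predict_with_confidence(text)
    if confidence < CONFIDENCE_THRESHOLD:
        label = human_label(text)             # fall back to a human
        training_queue.append((text, label))  # and learn from the correction
    return label
```

As the model improves, fewer messages fall under the threshold, which is exactly the gradual reduction in human supervision described above.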
AI in Q1 2017 is just past the peak of inflated expectations. With Facebook cutting its ambitions, Alexa marching ahead, and Apple reportedly lagging behind on AI, this year will be quite interesting for the hybrid approach we are working on.
So how do these lessons factor into what I'm doing? Are we doing proper AI? Should we? And how do humans come into the picture?
We here at Bicycle AI use AI to solve a different kind of problem. Customer service is a very important aspect of any company, and with 2.6 million people working in this position in the US alone (and a lot more offshore), it is one of the most labor-intensive. Customer service is also still slow: departments measure their turnaround times in hours. And if you are a smaller company, scaling your CS team to meet fluctuating demand is next to impossible.
We provide our clients with a virtual customer service rep that solves Level 1 cases for them. Our virtual rep is powered by a combination of AI and human supervision. We work 24/7 and have blazing-fast response times. This way we deal with the majority of cases, so our clients can focus on high-value customers and complicated cases.
We do this by ingesting previous customer service conversations, knowledge bases, product docs, and any other resources available. Our proprietary NLU engine understands incoming messages, and based on this combined data set our AI provides suggested responses. Our own experienced customer service agents approve, personalize, and send each response. With each interaction, our machine learning algorithms learn to give even more relevant answers in the future.
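The suggest-approve-learn loop described above can be sketched roughly as follows. The keyword lookup here is a toy stand-in for the real retrieval (the actual NLU engine is proprietary), and `agent_review` is an illustrative placeholder for the human agent in the loop:

```python
# Toy "knowledge base" built from ingested docs and past conversations.
knowledge_base = {
    "pricing": "Our plans start at $49/month.",
    "cancel": "You can cancel anytime from your account settings.",
}

def suggest_response(message):
    """Suggest an answer from the knowledge base (toy keyword retrieval)."""
    for keyword, answer in knowledge_base.items():
        if keyword in message.lower():
            return answer
    return None

def agent_review(message, suggestion):
    # Stand-in for the human agent who approves, personalizes, and sends
    # each response; their edits become training signal for the model.
    return suggestion if suggestion else "Let me check that for you."

message = "How do I cancel my subscription?"
reply = agent_review(message, suggest_response(message))
```

The design choice worth noting is that the human sits between the suggestion and the customer: the AI never answers unsupervised, and every approval or edit is another labeled example.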