A 3-step guide to assessing any business use case of AI
If anyone asks me what I learnt by starting my career at a venture capital firm, my first reply would be the unique pleasure of being a generalist in a world that expects recent college graduates to specialize.
On deeper thought, it is one of the few industries that gives you a licence to evaluate ideas and trends. I refer to it as ‘opportunity spotting’: a perch from which to look at what the future holds. When every entrepreneur you meet and every report you study talks about AI, you tend to keep close tabs on it.
When I moved to a venture-backed AI computing startup (Series C), it was this skill of opportunity spotting that I took along. Performing deep dives on multiple industries, identifying use cases for AI adoption and figuring out market entry points was an extension of what I had learnt in VC. I lost no time in applying the same principles and mental frameworks.
But I couldn’t have been more wrong.
At Columbia, I took quantitative courses centered around ML to get a feel for the implementation side of adopting AI solutions. It was my first exposure to Python, a language The Economist has called the world’s most popular coding language and the lingua franca of most people working in AI and data science.
In the past 12 months Americans have searched for Python on Google more often than for Kim Kardashian, a reality-TV star.
A commonly used phrase (cliche?) by managers is to take a ‘30,000 ft. view’ towards building a business. As a young professional interested in technology businesses, I was so caught up in trying to hone that skill that I didn’t realize that my interest area required a different approach.
However, getting hands-on with the execution side of ML through coursework, a couple of MOOCs, client-delivery projects and a lot of reading has led me to conclude that the traditional way of evaluating opportunities in the AI space is incomplete at best and rudimentary at worst.
Here’s the 3-step framework that I now use.
Understand All Things Data
“Without data, you’re just another person with an opinion.”
W. Edwards Deming has been described as a national folk hero of Japan for reviving its industry after World War II. Plenty of adjectives have been attached to his name; going by the quote above, we must add ‘prescient’ to the list.
A coursework project involving logistic regression and decision trees took a team of three more than 20 hours to gather publicly available data across years, figure out what we could actually do with the dataset we had collected, understand which factors could help us predict the output and decide on the combination of features we would use.
Building the model and drawing conclusions probably took around 20% of that time.
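As a toy illustration of that split, here is a minimal sketch with invented data and scikit-learn (not our actual coursework or dataset): most of the lines deal with the data, and only a few with the models.

```python
# Toy version of the workflow: most lines handle data, few handle modeling.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Stand-in for publicly collected data: gaps, a duplicate row, mixed quality.
raw = pd.DataFrame({
    "income": [42000, None, 58000, 31000, 75000, 75000, 52000, None, 61000, 39000],
    "age":    [34, 29, None, 45, 52, 52, 38, 41, None, 27],
    "bought": [0, 0, 1, 0, 1, 1, 1, 0, 1, 0],
})

# The "80% of the time" part: deduplicate, impute gaps, choose usable features.
clean = raw.drop_duplicates()
clean = clean.fillna(clean.mean(numeric_only=True))
X, y = clean[["income", "age"]], clean["bought"]

# The "20% of the time" part: fit and score two simple models.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (LogisticRegression(), DecisionTreeClassifier(max_depth=3)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))
```

Even in this ten-row toy, the cleaning decisions (what counts as a duplicate, how to fill gaps) shape the result more than the choice between the two models.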
Over the last quarter of 2018, our project team has consulted with a global market-intelligence firm and a NASDAQ-listed ad-tech company. We are using deep learning and data analytics to solve business process challenges.
For an automatic keyword-extraction project spanning multiple types of articles, our ideal approach would have been to draw on the deep key-phrase extraction research being done by pioneers in the NLP world. But when we received data from our client, we realized the annotated data would not suffice to train a traditional deep learning model.
We are now back at the drawing board, researching statistical methods and manual annotation to work through it.
To really understand what it takes to build AI/ML-powered solutions, you need to work with your first dataset. Whether for a course project or a deployment-ready solution, most AI tasks require practitioners to train algorithms on vast amounts of data.
Collecting, cleaning and organizing that data takes disproportionately more time than working on the algorithms themselves. Many industry professionals now argue that it is actually datasets, not algorithms, that drive breakthroughs.
Looking at solutions implemented in the industry from a competitive-analysis perspective, without thinking through which datasets gave a certain company its competitive advantage, which partnerships opened up access, and what the underlying approach likely was, may prove useless when your team works on putting a roadmap in place.
If you are on the business side, any tangible recommendation slide should have a section detailing the organization’s approach to obtaining and handling data.
Track the State of the Art
As a recovering VC industry professional, I still love the concept of a business moat (and the word keeps cropping up in my blog too!). When I had to research AI adoption across industries, my first move was to understand how companies across the world were using AI today.
But you can’t simply see what is most in vogue today and put your eggs in that basket; you need an inkling of what tomorrow brings. To account for this, we tracked sub-areas attracting growing VC investment. In hindsight, though, this was not a complete solution. It was my traditional ‘opportunity spotting’ tendency kicking into full force.
Once you take a peek at the implementation side, you realize the dizzying pace of research and the wide set of solutions that exist to solve business needs. Take the case of image recognition, for example.
Cloud-hosted AI/ML platforms such as Google Vision, Amazon Rekognition and Clarifai have made face recognition, object detection, text detection and logo/landmark detection easier. Then there are libraries like OpenCV that bundle a host of algorithms for the same tasks. These plug-and-play platforms have largely reduced the need for firms to hire vision engineers for low-hanging use cases.
But globally, computer vision remains a hot space, with billions of dollars of venture money flowing in; the Chinese startup SenseTime alone has raised $1.2 billion. Different business use cases require algorithms with varying levels of sophistication, and those varying levels of sophistication motivate companies to push the boundaries of what was previously possible.
There is no single number that can capture the size of the opportunity, just as there is no single one-algorithm-fits-all approach.
There’s another point here. Annual business review meetings revolve around a 3–5 year strategic plan. But several factors have contributed to an explosion of AI research, and we see the definition of state of the art changing in intervals as short as a year.
Sample this: let us look at the underlying infrastructure to gauge the ‘velocity’ of progress. In a year and a half, the time required to train a network to classify images from the famous ImageNet corpus has fallen from one hour to about 4 minutes, a roughly 15x speedup. The AI Index report attributes this to both algorithmic innovation and better infrastructure.
Now, imagine the cascading impact on other areas across training and inference at the cloud or at the edge due to these advancements.
Side note: I recommend reading the AI Index report (2018). It is authored by professionals and subject-matter experts from OpenAI and a host of universities led by Stanford, and it is an authoritative summary of key developments.
So, the idea of a business moat is not long-term in nature but a temporary advantage, for the times they are a-changin’.
From a strategy perspective, it is important to think of ways to sustain that advantage, and this usually comes from progress on the quantitative side (combining hardware-software engineering, data science and so on).
This brings me to the final step.
Talk to a Quant Colleague
“Content without method leads to fantasy; method without content to empty sophistry.”
I was interviewing for a Product Internship role (Machine Learning focus) at a major real-estate tech company (B2C) when I had the opportunity to ask my interviewer a question towards the end.
“How do you incorporate/evaluate product ideas your data science team comes up with in your roadmap?”
“Oh! That tends to happen a lot. We usually…………… .”
It is natural to assume that product vision is set by the business team. In industries like semiconductors, where the product roadmap is set in stone, this might be true. But when it comes to AI, this logic is flawed.
In fast-changing fields, the quantitative folks are at the forefront of innovation. At a 30,000 ft. view (can’t resist!), many of the current advancements in cutting-edge research around deep learning and vision were initiated by individual researchers’ untiring efforts.
Jeff Dean at Google, John Giannandrea at Apple, Ilya Sutskever at OpenAI and Yann LeCun at Facebook have all been part of multiple business units commercializing products worth billions of dollars, and none of them came up through business school.
The line between engineering and business professionals blurs in any AI-related application. The researchers at the AI computing firm had unique insights into what differentiated a particular algorithm for a particular use case. At the executive level, it was the Software Head who took a keen interest in understanding whether there was a primary use-case/product fit.
This point of seeking out advice from quantitative team members was drilled into me when working on projects at Columbia.
A call with a data scientist from the market-intelligence firm gave us more insight than three meetings with the business analyst responsible for project delivery. We had been focusing on the accuracy of the classification instead of improving recall. That is a ratio change when you are on the business side, but a whole change in approach for the data scientist on our team.
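To make that distinction concrete, here is a toy sketch (invented labels, not our project data) of how a model can look excellent on accuracy while failing completely on recall:

```python
# Invented labels: 1 positive case out of 10, and a model that always says "no".
y_true = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Accuracy: fraction of all predictions that are correct.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall: fraction of actual positives the model caught.
true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
recall = true_pos / sum(y_true)

print(accuracy)  # 0.9 -- looks great on a slide
print(recall)    # 0.0 -- the one case that mattered was missed
```

On imbalanced data, which metric you optimize determines how the model is built, not just how it is reported.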
A 30,000 ft. view (one last time!) usually looks at business strategy from the top. Much like the train-test considerations (bias/variance) in modeling, my experience has been that unless you involve your data scientists and engineers in strategy sessions, there is every chance you will be on the receiving end of unwelcome surprises during the implementation phase.
That’s probably why most companies are locked in a war for AI talent. The opportunity cost of not hiring is a real business threat.
“We have had difficulty filling jobs for a number of years. It does slow things down.”
As I delve deeper into topics around AI/ML that would help me better communicate with engineering/data science teams, I see a lot of ads related to picking up AI/ML skills on my social media feeds.
They mostly follow a template: a statistic on AI adoption by businesses, followed by a course that will help you pick up the concepts. The duration varies from 6 days to 6 months, usually depending on one of two target audiences: business professionals for the former and college students for the latter.
I am not sure a 6-day course has enough depth to cover even the basic difference between classification and clustering algorithms, or to explain what separates statistics from ML from AI.
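That basic difference fits in a few lines. A minimal sketch with synthetic data and scikit-learn (illustrative, not taken from any course): classification learns from labels, clustering finds groups without them.

```python
# Classification is supervised: it trains on labels. Clustering is
# unsupervised: it groups points without ever seeing the labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=100, centers=2, random_state=0)  # synthetic data

clf = LogisticRegression().fit(X, y)                         # uses the labels y
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # never sees y

print(clf.predict(X[:5]))  # predicted class labels
print(km.labels_[:5])      # discovered cluster ids (numbering is arbitrary)
```

The two can agree on well-separated data like this, but only the classifier’s output means anything in terms of the original labels; the clusterer’s group numbers are arbitrary.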
However, if you are a business professional fascinated by this space, I think there is a better way to understand the true implications of it. One that will aid you better in building strategy.
- Find a dataset in the area you work in and, after thinking through what you want to achieve, use one of the zillion tutorials available online to build a rudimentary model.
- Now check the state of the art in that particular area and think about how your solution could maintain a sustained lead over your competitors’. Think ‘velocity’ over ‘speed’.
- And in case you are stuck or need to validate your thoughts, just invite a data scientist or AI engineer in your company for a coffee meeting. :)
I am currently pursuing a Master’s in Management Science at Columbia University, New York. My prior work experience includes a generalist role reporting to the CEO of a venture backed AI Hardware startup, a stint in VC and a failed attempt at entrepreneurship!
Please feel free to drop a message to uday(dot)marepalli[gmail].