Cracking the AI Paradox

BCG GAMMA editor
GAMMA — Part of BCG X
3 min read · Jul 24, 2018

By Philipp A. Gerbert, Senior Partner & Managing Director at BCG

As a manager, you have been bombarded over the past few years by reports describing the wonderful things artificial intelligence can do for you. Reassuringly, your talented data scientists and engineers have probably already started to pilot a range of AI applications in your company.

You still need to get your data organized, hire more of that scarce talent, assemble cross-functional agile teams, and perhaps overcome your employees’ fears of Frankenstein processes. But none of this looks harder than previous technology roll-outs, and the initial business benefits seem alluring.

After several beautiful pilots, however, you may realize that something is wrong. Things become, well, messy. If this sounds familiar, you are not alone. You are experiencing the

AI Paradox: It is deceptively easy to launch AI pilots with great results. At the same time, it is fiendishly hard to move toward AI@scale.

In a recent publication we studied the nature of the beast facing an increasing number of companies and what it takes to overcome it (The Big Leap Toward AI at Scale).

Not every manager will like the consequences we are outlining:

First, when moving toward a world of humans and machines working together, IT architecture becomes as important as the way people are organized, so you can no longer simply delegate the structure to the techies. Many years ago we learned that people do not fall neatly into isolated organizational boxes, and we began to organize work around processes. Similarly, AI-enhanced IT infrastructures can no longer be segmented into neat modules but need to be managed as end-to-end processes. Google researchers published a (cryptically named) warning about this challenge as early as 2015 (Hidden Technical Debt in Machine Learning Systems), and it has now reached mainstream business. Companies are now struggling to assemble AI platforms to help them manage such workflows.

You cannot even outsource the problem to tech vendors. When their algorithms need your data to learn from, even time-honored vendor relationships become convoluted (The Build-or-Buy Dilemma in AI).

Second, AI places new demands on how people are organized. As a general principle, AI requires learning to be centralized while actions stay decentralized, and the functions and units where AI is introduced need to adapt to this structure. That is not the end of centralization, however: HQ will need to install central data governance, because even if data is ‘democratized’, that only increases the need for proper governance. Likewise, cybersecurity will need to be pooled and strengthened. If the targets of (possibly also AI-based) cyber-attacks are autonomously acting AI systems, you are exposed to entirely new risks.

Third, the most challenging area remains, you guessed it, people themselves. When you have finally filled the ranks of those (highly paid) data scientists, you realize that the machine learning core is just a tiny fraction of the overall AI system. You also need to build what amounts to a little army of system and data engineers to keep it running. The arguably bigger issue, however, is the concerns of employees affected by AI. Even if you believe a future ‘superintelligence’ is humbug, you must address the fast pace of change that AI is introducing to job profiles today, including in traditional safe havens like marketing or finance (AI and the ‘Augmentation’ Fallacy).

For better or worse, you cannot procrastinate. AI@scale can indeed foster a decisive competitive advantage, as Chinese enterprises are determined to demonstrate (stay tuned for our upcoming article in MIT SMR on this).

All the best!

Philipp A. Gerbert

BCG GAMMA is a global entity of BCG dedicated to analytics, data science, and artificial intelligence.