Why Is It So Difficult to Build and Scale AI?

Challenges to becoming a first-class AI organisation

TurinTech AI
Optimise Code with TurinTech AI
11 min read · Jan 15, 2021


Introduction

AI is a game changer, but it will take time, scale and expertise to unlock its full potential and ROI.

Companies strategically scaling AI generate 5X the ROI of companies unable to scale. 86% of executives believe they won’t achieve their growth objectives unless they can scale AI. - Accenture

However, not every company understands the challenges of creating a first-class, scalable AI organisation. Today, it’s mainly the tech giants (Google, Amazon, etc.) that are able to scale AI and reap the benefits. Whilst some large corporations are starting to see results from embedding AI into their business applications, most of them struggle to scale. Meanwhile, there is a large group of non-tech companies that are just starting to run AI pilots and find even launching their first AI project difficult.

In this article, we will discuss:

  1. What is scalable AI and why is it important?
  2. Why is it difficult to build and scale AI?
  3. How TurinTech makes AI scalable

You can also download the white paper here.

1. Why Is Scalable AI Important?

McKinsey estimates that AI will add $13 trillion to the global economy in the next decade. The full value of AI can only materialise when firms can offset their upfront costs of developing AI with substantial business gains from its widescale deployment. Indeed, three-quarters of organisations with large ROI have scaled AI across business units. However, most companies are struggling to scale AI. McKinsey’s survey shows that most companies have run only pilot projects or applied AI in just a single business process.

Organisations that successfully scale AI become empowered to make sense of their data, and to provide the right analysis and right predictions needed for business transformation to achieve competitive advantage. As Accenture discovered in a recent survey, successful AI scalers report significant and diverse benefits, from customer satisfaction and workforce productivity to asset utilisation. There was an average lift of 32% on Enterprise Value/Revenue Ratio, Price/Earnings Ratio, and Price/Sales Ratio.

What is scalable AI?

Truly scalable AI can be built by both tech and business users and can run anywhere at high speed and low latency.

  1. Built by Tech and Business Experts.
    Smart AI that can be built by both tech and business experts to optimise their business processes and create business value.
  2. Run Anywhere without Compromise.
    Efficient AI that can run anywhere, from cloud to different types of devices without compromising performance and without the pain of integration.
  3. At High Speed and Low Latency.
    Efficient AI that runs at high speed to enable real-time business decision making.

2. Why Is It Difficult to Build and Scale AI?

As the coronavirus pandemic has catapulted businesses into the digital space, AI will become a clear differentiator and even more crucial for growth. Gartner predicts that companies will have an average of 35 AI projects in place by 2022. However, becoming a first-class AI company is much easier in theory than in practice.

2.1 Why Is It Difficult to Build AI?

Building AI with traditional methods is time-consuming, labour-intensive, costly and, ultimately, has no guarantee of success.

2.1.1 Talent

Building AI requires significant domain knowledge which necessitates years of training.

AI experts are scarce and expensive. Companies with deep pockets pay more than $1 million per year for world-leading AI experts. With the talent pool increasing only slowly and inelastically, most businesses just cannot afford to build up the full team of AI experts they really need.

The solution lies in more automation.

TurinTech automates the end-to-end data science life cycle, empowering everyone in an existing team to build expert-level AI almost instantly. This not only avoids the tremendous cost of hiring additional experts, but also enables innovation and transformation in business areas that typically did not have the budget, resources or experience to adopt AI.

The Economist: AI specialists are scarce and command luxuriant salaries; only the tech giants and the hedge funds can afford to employ them.

2.1.2 Time

On average, it takes around two months for a team of data scientists to build a machine learning pipeline. In addition, most companies spend more than a month deploying an ML model into production. By the time the model is online, market conditions may have changed and the model may be out of date, which might put the business at risk of losses.

TurinTech has focused on speeding up the ML pipeline as much as possible: typically, users can generate accurate models 30% faster than those built manually by data scientists. For example, a hedge fund accelerated their trading model development time from 6 months to only 2 months. In particular, distributed parallel computing can radically reduce AI project time by parallelising hundreds of model training runs. Moreover, model deployment must be seamless, ideally “one click”. Code-free and flexible deployment options support customers’ different IT infrastructures, both on-premise and in the cloud, significantly reducing the time spent on integration.
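
To make the parallel-training idea concrete, here is a minimal sketch (a generic illustration, not TurinTech’s implementation) that trains and cross-validates several candidate models concurrently with scikit-learn and joblib, then keeps the best performer; the candidate configurations and dataset are illustrative assumptions.

```python
# Illustrative sketch: parallel training of candidate models (not TurinTech's pipeline).
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hypothetical candidate configurations to evaluate in parallel.
candidates = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

def evaluate(model):
    """Cross-validate one candidate and return it with its mean accuracy."""
    return model, cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

# Train and evaluate all candidates concurrently, one per worker process.
results = Parallel(n_jobs=-1)(delayed(evaluate)(m) for m in candidates)
best_model, best_score = max(results, key=lambda r: r[1])
print(f"Best candidate: {best_model.__class__.__name__} ({best_score:.3f})")
```

With hundreds of candidates instead of three, the same pattern spreads training across all available cores or machines, which is where the bulk of the speed-up comes from.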

2.1.3 Data

Data is the crucial asset for AI. However, most organisations have their information residing in different departments and systems. It is difficult to know what internal data they have, let alone collect, share and capture value from these proprietary datasets.

Most of the time, a real-world dataset is not cleaned and ready for building AI solutions. Data scientists spend 51% of their time collecting, labelling, cleaning and organising data. This slow and inefficient process drains organisational resources and inhibits the ability to gain business insights quickly.

Moreover, data is ever changing. Business expectations and regulatory requirements can also change rapidly. As a result, the performance of models can quickly become degraded, even obsolete. To maintain performance, models need to be continuously optimised. This means businesses may need to go through the lifecycle of data pre-processing, model training and deployment again and again. This is not only time consuming, but also costly.

With TurinTech, different teams across the organisation can share data and collaborate on one unified platform. Powered by TurinTech’s advanced data pre-processing and feature engineering capabilities, users can transform their enterprise data into machine-learning-ready data instantly. In addition, with automated data visualisation, users can uncover hidden business insights in even the largest datasets. When it comes to data drift and business changes, TurinTech proactively detects obsolete models and continuously optimises them on the latest data for optimal performance.
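
As a generic illustration of how data drift can be detected (the article does not describe TurinTech’s detection logic), one common approach is to compare the live distribution of a feature against its training-time distribution, for example with the Population Stability Index (PSI), and to trigger re-optimisation when a threshold is exceeded; the data and threshold below are illustrative.

```python
# Generic drift-monitoring illustration using the Population Stability Index (PSI).
import numpy as np

def psi(reference, current, bins=10):
    """PSI between the training-time and live distributions of one feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)   # bins from training data
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0) and division by zero
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, size=10_000)
live_feature = rng.normal(0.4, 1.2, size=10_000)  # the live data has shifted

score = psi(training_feature, live_feature)
# 0.25 is a common rule-of-thumb threshold for significant drift.
if score > 0.25:
    print(f"PSI={score:.2f}: drift detected, retrain/re-optimise the model")
else:
    print(f"PSI={score:.2f}: distribution looks stable")
```

In production this kind of check would run per feature (and on model outputs) on a schedule, feeding into the retraining lifecycle described above.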

2.1.4 Explainability

AI has been widely adopted by a broad range of industries, including some critical and sensitive domains such as medical diagnoses and credit decisions. The ethical issue concerning automatic decision-making by algorithms has been widely discussed and the need to build trust around algorithmic decisions and predictions is extremely important. However, the more complicated the algorithms used in creating the machine learning pipeline, the more difficult it is to understand and explain how and why the model made the decision it did. To fully trust AI, users need the transparency to understand how these decisions are being made and the capability to debug the models when they fail in real-world applications. The greater the trust, the quicker and more widely a new model can be used in a real-world business context.

TurinTech automates the whole ML process with full transparency, allowing businesses to build explainable AI. It shows the model building process via visualisation and explains why a model makes a particular prediction using accepted interpretability techniques. Different levels of explainability enable different users to support important business decisions and to satisfy regulators’ audit requirements with confidence.
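
The article does not name specific interpretability techniques; one widely accepted example is permutation feature importance, which shows which inputs a model actually relies on. Below is a minimal sketch with scikit-learn, using an illustrative public dataset and model.

```python
# Illustrative example of one accepted interpretability technique:
# permutation feature importance with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much validation accuracy drops:
# the bigger the drop, the more the model depends on that feature.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
ranking = result.importances_mean.argsort()[::-1]
for idx in ranking[:5]:
    print(f"{X.columns[idx]:<25} importance={result.importances_mean[idx]:.3f}")
```

Features whose shuffling causes the largest drop in accuracy are the ones the model depends on most, which is the kind of evidence business users and auditors can inspect.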

2.2 Why Is It Difficult to Scale AI?

According to Accenture, 75% of business leaders feel that they will be out of business in five years if they cannot figure out how to scale AI.

Building AI is difficult; building AI at scale is even more difficult. Ever-increasing data volumes and diverse deployment environments dramatically increase the technical complexity. As a result, companies need infrastructural resources such as memory, computational power and storage that can scale easily without breaking the bank.

To do this, model efficiency is critical. Inefficient models run slowly, consume huge amounts of expensive computing resources and reduce device operating life. For instance, training a single model may cost around $250K in AWS cloud resources alone. Moreover, you may need to train hundreds of models before you get a production-ready model.

“If an enterprise wants to apply this model to the data they collect going forward, and improve the model as more data is collected, you need a way to scale your access to data and computing resources without breaking the bank.”

2.2.1 Ever-growing Data Volume

The data we generate is ever growing. By 2025, IDC (a global market intelligence firm) estimates that worldwide data will grow to 175 zettabytes, a 61% compound annual growth rate. Meanwhile, companies are deploying more models to understand and capture value from tremendous amounts of data. When dealing with a 100X increase in data volumes or models, the technical complexity drastically increases.

With TurinTech’s microservices architecture, businesses can simultaneously create and optimise hundreds of models on massive amounts of data, scaling AI faster and with less friction.

2.2.2 Intensive Computing

Running AI to generate real-time predictions requires heterogeneous computing. When your AI application engages with millions of customers, each of these interactions triggers inferences that must be computed simultaneously and virtually instantly.

In fact, training and running AI at scale requires a huge amount of expensive computation power. According to Facebook’s head of AI, training a large model once can cost ‘millions of dollars’ in electricity consumption. Even if you are a billion-dollar company, this sort of expense is hard to justify.

Massive electricity consumption also generates a huge amount of CO2. For instance, training and running a single large model can emit almost five times as much carbon dioxide as the average car does over its lifetime.

2.2.3 Performance Trade-offs on Different AI Devices

AI is moving beyond the cloud to the edge. We are unlocking a world of intelligent Internet of Things (IoT), where smart, connected devices can take intuitive actions based on awareness of the situation. Statista forecasts that there will be 75 billion connected devices by 2025. In the IoT era, businesses will need to roll out their AI models to different end devices, whether a smartphone, car, drone, machine or something else.

For one single use case, businesses may need to scale AI models to hundreds of thousands of different devices. Models need to be retrained and redeployed to comply with the hardware constraints of each device.

However, AI accuracy is often tied to hardware capability. For example, the power efficiency of computing is particularly critical both for on-device AI operating life and for enterprise data centres. Businesses struggle to make AI models smaller (requiring less computational power) without sacrificing accuracy.

Furthermore, AI is usually built by developers who have limited understanding of end-device operations. They do not necessarily have the visibility or expertise to make effective trade-off decisions at the code level to optimise the model’s performance in production.


TurinTech’s code optimisation feature enables businesses to optimise a large number of models at source code level against multiple objectives, for example delivering higher performance while reducing power consumption. This allows businesses to scale efficient AI to various end devices without compromising on accuracy or other business metrics.
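
As a toy illustration of the multi-objective idea (a generic sketch, not TurinTech’s evolutionary, source-code-level optimiser), the snippet below scores several candidate models on two competing objectives, validation accuracy and serialised model size, and keeps only the Pareto-optimal candidates; the candidates and objectives are illustrative assumptions.

```python
# Toy multi-objective model selection: keep candidates that are Pareto-optimal
# on (accuracy, model size). Generic sketch, not TurinTech's optimiser.
import pickle
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hypothetical candidates trading accuracy against footprint.
candidates = [RandomForestClassifier(n_estimators=n, max_depth=d, random_state=0)
              for n in (10, 50, 200) for d in (4, 8, None)]

scored = []
for model in candidates:
    model.fit(X_train, y_train)
    accuracy = model.score(X_val, y_val)          # objective 1: maximise
    size_kb = len(pickle.dumps(model)) / 1024     # objective 2: minimise
    scored.append((model, accuracy, size_kb))

def pareto_front(points):
    """Keep models not dominated on both accuracy (higher) and size (lower)."""
    front = []
    for m, acc, size in points:
        dominated = any(a >= acc and s <= size and (a > acc or s < size)
                        for _, a, s in points)
        if not dominated:
            front.append((m, acc, size))
    return front

for model, acc, size in pareto_front(scored):
    print(f"{model.n_estimators:>4} trees, depth={model.max_depth}: "
          f"acc={acc:.3f}, size={size:.0f} KB")
```

Each model on the resulting front represents a different accuracy-versus-footprint trade-off, from which a deployment target (cloud, smartphone, embedded device) can pick the variant that fits its constraints.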

2.2.4 Organisation

Accenture: 92% of Strategic AI Scalers leverage multi-disciplinary teams.

AI is not a plug-and-play technology with immediate returns. Apart from technology changes, scaling AI across business units requires fundamental organisational shifts. Most business problems are broad and multifaceted, and they need to be framed appropriately as AI problems. This cannot be done in a siloed work culture, but rather through interdisciplinary collaboration, where business leaders, operational people and technical experts come together to ensure broad organisational priorities are being tackled.

By avoiding the tremendous costs of AI experts, computational resources and more, TurinTech enables businesses to invest significant resources in reshaping business processes and organisational culture. Meanwhile, TurinTech’s code-free and collaboration features underpin team productivity, enabling organisations to prioritise complex business problems and frame them as AI problems. In addition, by empowering businesses to build and optimise AI at high speed, companies can prototype rapidly and evolve towards more agile and adaptable organisations.

McKinsey: Changing company culture is the key, and often the biggest challenge, to scaling artificial intelligence across your organisation.

3. TurinTech: Making AI Scalable

We envision a world where every company can build and scale AI to gain a competitive edge.

Powered by our award-winning proprietary research in evolutionary optimisation, TurinTech enables companies to build smart and efficient AI with wide applicability across different organisations, users, contexts and geographical locations.

  1. Built by Tech and Business Experts.
    From data scientists to software engineers and business analysts, people with different levels of technical skill can generate accurate AI models instantly. With a high degree of explainability, people at all levels can trust AI as a decision-making partner to take optimal actions.
  2. Run Anywhere without Compromise.
    TurinTech’s multi-objective optimisation enables businesses to iterate their AI models on demand based on their specific criteria. Businesses can tackle difficult trade-offs between accuracy and performance, rolling out AI models to various clouds and devices at scale.
  3. At High Speed and Low Latency.
    TurinTech accelerates AI models to run at high speed and deliver the desired accuracy for real-time business decision making. By optimising models at source code level, businesses can minimise the computational power, storage and other infrastructure resources needed to scale AI.

About TurinTech:

TurinTech is a leader in Evolutionary AI Optimisation. We are a research-driven deep tech company, founded in 2018 and based in London.

We envision an intelligent and efficient business world powered by scalable AI. We automate multi-objective optimisation to help businesses scale AI efficiently: build AI quicker, run AI faster, deploy AI greener.

TurinTech builds upon over 10 years’ research in optimisation. We are professors, data scientists and engineers from prestigious universities across the globe who are actively collaborating with world-leading academic institutions to create breakthroughs.

Learn more about scaling AI at https://turintech.ai/
Follow us on LinkedIn, Medium, Twitter 😊

References:

  1. https://econsultsolutions.com/esi-thoughtlab/roi-ai/
  2. https://www.accenture.com/gb-en/insights/artificial-intelligence/ai-investments
  3. https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy
  4. https://www.accenture.com/gb-en/insights/artificial-intelligence/ai-investments
  5. https://www.gartner.com/en/newsroom/press-releases/2019-07-15-gartner-survey-reveals-leading-organizations-expect-t
  6. https://www.economist.com/technology-quarterly/2020/06/11/businesses-are-finding-ai-hard-to-adopt
  7. https://algorithmia.com/state-of-ml
  8. https://visit.figure-eight.com/rs/416-ZBE-142/images/CrowdFlower_DataScienceReport.pdf
  9. https://medium.com/syncedreview/the-staggering-cost-of-training-sota-ai-models-e329e80fa82
  10. https://www.seagate.com/gb/en/our-story/data-age-2025/
  11. https://www.21stcentech.com/working-ai-big-environmental-footprint/
  12. https://www.economist.com/technology-quarterly/2020/06/11/the-cost-of-training-machines-is-becoming-a-problem
  13. https://towardsdatascience.com/how-leading-companies-scale-ai-4626189faed2
  14. https://www.economist.com/technology-quarterly/2020/06/11/the-cost-of-training-machines-is-becoming-a-problem
  15. https://www.leidos.com/insights/why-ai-so-difficult-scale
  16. https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/
  17. https://ai-forum.com/opinion/why-commercial-artificial-intelligence-products-do-not-scale/
  18. https://venturebeat.com/2020/02/22/why-ai-companies-dont-always-scale-like-traditional-software-startups/
  19. https://qualetics.com/leadership-blog/enterprise-ai-why-organizations-fail-in-scaling-ai-and-how-to-do-it-right/
  20. https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/digital-blog/what-it-really-takes-to-scale-artificial-intelligence
  21. https://www.accenture.com/us-en/blogs/intelligent-functions/scaling-ai-how-to-make-it-work-for-your-company
  22. https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html
  23. https://www.ingedata.net/blog/the-cost-of-ai-why-labelling-data-in-house-is-a-bad-move

Originally published at https://turintech.ai.
