Why AI needs a bit more Vroom

Alan Dalley
Another Integration Blog
3 min read · Aug 28, 2024

In fact, a lot more Vroommmmm!

Show me the Power

One aspect of Artificial Intelligence that is very rarely spoken about is the amount of computing power it requires to perform its processing and deliver useful business outcomes (see my previous article on AI and useful business outcomes!).

So why does AI require all of this substantial power? Well, there are several reasons, but it's primarily due to the nature of the algorithms, the volume of data, and the complexity of the tasks involved. Let's look at some of the key factors in play here.

As I have spoken about before, AI requires very large datasets on which to base its outcomes. AI models, especially those based on deep learning, often require training on vast amounts of data to achieve high accuracy. Processing and analysing these large datasets requires significant computational resources.

Of course, if you have a large dataset then you also need considerable storage capacity to hold it and, in addition, this storage needs to be fast and robust to ensure efficient data retrieval and processing.
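To make that concrete, here is a minimal Python sketch of streaming a dataset that is too large for memory, batch by batch, from disk. The file name and sizes are purely illustrative assumptions; the point is that the faster the storage, the faster each training batch arrives:

```python
import numpy as np

# Hypothetical file: a large feature matrix too big to load into RAM at once.
# np.memmap reads pages from disk on demand, so storage speed directly
# affects how quickly each training batch can be assembled.
N_ROWS, N_FEATURES, BATCH = 10_000_000, 512, 1024

data = np.memmap("features.dat", dtype=np.float32,
                 mode="r", shape=(N_ROWS, N_FEATURES))

def batches():
    for start in range(0, N_ROWS, BATCH):
        yield np.asarray(data[start:start + BATCH])  # copy one batch into RAM

for batch in batches():
    pass  # feed each batch into the training step here
```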

Deep learning models, such as neural networks, involve numerous parameters and layers that require extensive calculations. Training these models involves complex mathematical computations, such as matrix multiplications, which are computationally intensive. In addition, AI model training is an iterative process involving numerous training cycles to optimise the model's performance, and each of these cycles requires substantial computation.
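A quick back-of-envelope calculation shows why this adds up. All of the figures below are illustrative assumptions rather than a real model, but the arithmetic is the point:

```python
# Rough arithmetic: cost of one dense layer, multiplied out over training.
batch, d_in, d_out = 1024, 4096, 4096        # assumed layer and batch sizes
flops_per_layer = 2 * batch * d_in * d_out   # one matrix multiply (forward pass)
layers, steps = 48, 100_000                  # assumed model depth and training steps

# The backward pass costs roughly twice the forward pass, hence the factor of 3.
total_flops = flops_per_layer * layers * steps * 3
print(f"{total_flops:.2e} FLOPs")            # ~5e17 for these assumed numbers
```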

AI models often work with high-dimensional data, which involves processing a large number of features. This increases the computational complexity, as the models need to learn and extract meaningful patterns from this high-dimensional space.
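A small experiment makes the effect of dimensionality visible. Here the number of samples stays fixed while the feature count grows, and the cost of a single matrix multiply grows with it (sizes are illustrative):

```python
import time
import numpy as np

# Same number of samples each time; only the feature count d changes.
for d in (64, 512, 2048):
    x = np.random.rand(5_000, d).astype(np.float32)   # 5,000 samples, d features
    w = np.random.rand(d, d).astype(np.float32)       # a weight matrix to learn
    t0 = time.perf_counter()
    _ = x @ w                                         # cost grows with d squared
    print(f"d={d:5d}: {time.perf_counter() - t0:.3f}s")
```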

Now if we look at applications like autonomous driving, real-time recommendation systems, and interactive AI (e.g. chatbots), all of these require low-latency processing, which in turn necessitates powerful computing resources to ensure quick response times. Imagine driving down a road waiting for the AI to decide whether you need to brake or not: a pretty scary experience I would imagine!
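One way to picture this is as a latency budget check. The 50 ms budget and the toy infer function below are assumed stand-ins, but the principle holds for any real-time AI system:

```python
import time

LATENCY_BUDGET_MS = 50.0  # assumed budget for a real-time decision

def infer(features):
    # Stand-in for a real model's forward pass.
    return sum(f * 0.5 for f in features) > 0

features = [0.1] * 10_000
t0 = time.perf_counter()
decision = infer(features)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"{elapsed_ms:.2f} ms (budget {LATENCY_BUDGET_MS} ms)")
if elapsed_ms > LATENCY_BUDGET_MS:
    print("Too slow: we need faster hardware or a smaller model")
```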

State-of-the-art AI models, such as transformers used in natural language processing (NLP) or generative models, are highly complex and require significant computational power for both training and inference. Finding the optimal hyperparameters for AI models often involves running many different configurations and evaluating their performance, which can be computationally demanding.
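To see why tuning is so expensive, consider a simple grid search. Even the small, made-up search space below implies 27 complete training runs, each one as costly as the training we have just discussed:

```python
from itertools import product

# A hypothetical search space: every combination must be trained and evaluated.
learning_rates = [1e-4, 3e-4, 1e-3]
batch_sizes    = [32, 64, 128]
layer_counts   = [4, 8, 12]

configs = list(product(learning_rates, batch_sizes, layer_counts))
print(f"{len(configs)} full training runs needed")  # 27 here; often far more

def train_and_score(lr, batch, layers):
    ...  # placeholder: each call would be a complete, expensive training run

# best = max(configs, key=lambda c: train_and_score(*c))
```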

Some forms of AI processing require the use of Graphics Processing Units (GPUs) and the lesser-known Tensor Processing Units (TPUs). Anyone who follows the development of PCs will have seen over recent months that some of the large PC companies, such as Apple, have been preparing their hardware for AI processing, in particular by introducing increasing GPU capability. These specialised hardware components are designed to handle the parallel nature of AI computations but still require significant power.
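If you use a framework such as PyTorch (assumed to be installed here), taking advantage of a GPU when one is present takes only a line or two, which is exactly why vendors are racing to put that capability into their hardware:

```python
import torch

# Pick the best available accelerator; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)
y = x @ w  # on a GPU, thousands of cores execute this multiply in parallel
print(f"Ran on: {y.device}")
```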

Hand in hand with the requirement for additional computing power comes the requirement for additional energy usage. The hardware used for AI, particularly the GPUs and TPUs I have just highlighted, is power-intensive, contributing to the overall high energy consumption of AI computing environments. Of course, where there is power and energy usage you also need to maintain an optimal operating temperature for these high-performance computing resources, which again adds to the energy required to power the cooling solution.
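Some rough numbers, all of them assumed purely for illustration, show how quickly the energy bill mounts up:

```python
# Back-of-envelope energy estimate with assumed, illustrative figures.
gpus          = 8        # one training server
watts_per_gpu = 700      # a high-end data centre GPU under load
hours         = 24 * 14  # a two-week training run
pue           = 1.5      # assumed data centre overhead (cooling, power delivery)

kwh = gpus * watts_per_gpu * hours * pue / 1000
print(f"~{kwh:,.0f} kWh")  # ~2,822 kWh for this single, modest run
```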

In order to handle large-scale AI tasks, computations are often distributed across multiple machines or clusters. This obviously requires robust infrastructure and orchestration, which in turn further adds to the computational demand required to support AI. For this reason, Cloud infrastructure is preferred by some organisations, as it can provide the flexibility to scale up and down when experimenting with various AI models and architectures that require repeated training on large datasets. However, we have also seen recently that some large organisations are repatriating their systems and data onto specialised data centres where they can provide additional security and configuration.
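As a final sketch, this is roughly what distributing training across several GPUs looks like with PyTorch's DistributedDataParallel. It is a minimal, assumed example (launched via torchrun) rather than a production setup, and all of the orchestration burden described above sits underneath these few lines:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumed launch: torchrun --nproc_per_node=<gpus> train.py
dist.init_process_group(backend="nccl")           # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])        # set by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in for a real model
model = DDP(model, device_ids=[local_rank])       # keeps gradients in sync

# ... the usual training loop goes here; DDP handles the communication ...
dist.destroy_process_group()
```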

In summary, the need for high computing power in AI arises from the large volumes of data, complex models, iterative and parallel processing requirements, and the demand for real-time performance. As AI continues to advance, optimising computational efficiency and developing more powerful and energy-efficient hardware will be crucial in managing these demands.


Alan Dalley
Another Integration Blog

MuleSoft Ambassador. I have a lifetime of IT experience with a passion for API led Integration, Data, Data Quality and Agile ways of working.