Red AI => Green AI

What is it?

Jaideep Ray
Better ML
2 min read · May 26, 2022


We are still in the early days of AI. The dominant objective is to find the architecture that gives the highest accuracy on a given task, essentially "buying" stronger results by paying a high computational cost. The authors of the paper Green AI call this trend Red AI. It is driven largely by the desire to beat and raise state-of-the-art performance.

This has two major consequences:

  1. AI research is getting more and more expensive, with computational needs rising exponentially. This is clearly not sustainable.
  2. We miss out on research into simpler models, efficient practices, and reproducible results.

Making efficiency a common evaluation criterion alongside accuracy would balance this trend out. Treating efficiency as a first-class metric while designing ML models leads to Green AI: sustainable, environmentally friendly AI research.


What should your evaluation metrics look like?

  1. Model performance: Is the model capable of doing the task?
  2. Model latency/throughput: How many inferences can the model perform per second on your inference setup?
  3. Training cost ($): hardware cost/hr x time to train. Training often runs on a heterogeneous setup; for example, embedding tables are stored on machines with large memory and bandwidth while compute happens on machines with GPUs. Training cost should always be taken in context: training BERT can be expensive, but the result is fine-tuned into various lightweight models suited to different domains.
  4. Inference cost ($): cost per 1,000 inferences. This depends on the hardware/setup you choose to meet your latency targets. Predictor setups are often complex, involving custom hardware (a mix of CPUs and GPUs) and RPC calls (distributed inference). Larger models need larger machines.
  5. Floating-point operations (FLOPs)/inference: the total number of floating-point operations required to produce one inference result. This is a good metric to compute offline, since it doesn't depend on the hardware setup. Moreover, it directly indicates savings from optimization.
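The cost metrics above are simple arithmetic once you fix your assumptions. A minimal sketch, using made-up hardware prices and layer sizes (the $3/hr rate, the 512→256→10 MLP, and the 500 inferences/sec throughput are all hypothetical, not measurements):

```python
def dense_flops(in_dim: int, out_dim: int) -> int:
    """FLOPs for one dense layer: one multiply + one add per weight."""
    return 2 * in_dim * out_dim

def training_cost(hourly_rate_usd: float, hours: float) -> float:
    """Training cost ($) = hardware cost/hr x time to train."""
    return hourly_rate_usd * hours

def inference_cost_per_1000(hourly_rate_usd: float, inferences_per_sec: float) -> float:
    """Cost ($) to serve 1,000 inferences on the given hardware."""
    seconds_for_1000 = 1000 / inferences_per_sec
    return hourly_rate_usd / 3600 * seconds_for_1000

# FLOPs/inference for a 512 -> 256 -> 10 MLP (biases and activations ignored)
flops = dense_flops(512, 256) + dense_flops(256, 10)
print(flops)                                  # 267264

print(training_cost(3.0, 48))                 # 48h on a $3/hr GPU box -> $144
print(inference_cost_per_1000(3.0, 500))      # ~ $0.0017 per 1,000 inferences
```

Note how the FLOP count needs no hardware assumptions at all, which is exactly why it is convenient to track offline.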

Shift from Red => Green in your model development process

  • Congrats, you now have a full picture of model performance and cost. Use efficiency as a measure to trade off model A against model B.
  • Invest in efficient architectures and optimization techniques like pruning, quantization, and fine-tuning.
  • Auto-tuning often comes to the rescue when you have two different metrics to optimize (for example, trading off accuracy vs. inference cost). Investing in auto-perf tuning for model development has recurring benefits. We will cover this in more detail in a separate post.
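The model A vs. model B trade-off can be made mechanical: fix an accuracy floor and a serving budget, then pick the cheapest candidate that satisfies both. A minimal sketch with hypothetical candidates and numbers:

```python
# Hypothetical candidates: a big accurate model vs. a smaller cheaper one.
candidates = [
    {"name": "model_A", "accuracy": 0.91, "cost_per_1k_usd": 0.020},
    {"name": "model_B", "accuracy": 0.89, "cost_per_1k_usd": 0.004},
]

def pick_model(models, min_accuracy, budget_per_1k):
    """Return the cheapest model meeting the accuracy floor and cost budget,
    or None if no candidate is feasible."""
    feasible = [
        m for m in models
        if m["accuracy"] >= min_accuracy and m["cost_per_1k_usd"] <= budget_per_1k
    ]
    if not feasible:
        return None
    return min(feasible, key=lambda m: m["cost_per_1k_usd"])

best = pick_model(candidates, min_accuracy=0.88, budget_per_1k=0.01)
print(best["name"])  # model_B: meets the floor at a fifth of the cost
```

The point is not the two-line selection logic but the habit: once efficiency is a recorded metric, "which model ships" becomes a reproducible decision instead of a judgment call.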

References:

  1. Green AI, Schwartz et al.
