Why M1 Pro could replace Colab: M1 Pro vs Tesla K80 (Colab) and P100 (Kaggle)

Nikita Kiselov
Nov 17, 2021

Since Apple introduced the M1 chip in 2020, I have been interested in using it for ML training. Back then, though, it was only about inference with the Neural Engine.

However, after the announcement of the new M1 Pro/Max silicon and the release of PluggableDevice support for TensorFlow, it became possible to train models on the GPU. Comparisons of the maxed-out M1 Max with top-end GPUs are already available. But having an M1 Pro (16-core GPU, 16 GB RAM) in a 14-inch MacBook Pro, I wondered how far it could replace a regular GPU instance on Colab or Kaggle 🤔.
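In case it helps, a minimal check that TensorFlow actually sees the Metal GPU might look like the sketch below. It assumes the `tensorflow-macos` and `tensorflow-metal` packages are installed via pip; the exact install route is my assumption, not something the original setup spells out.

```python
# Quick sanity check that the Metal PluggableDevice registered the M1 Pro GPU.
# Assumes: pip install tensorflow-macos tensorflow-metal
import tensorflow as tf

# With the Metal plugin in place, the GPU shows up as a regular TF device.
print(tf.config.list_physical_devices("GPU"))
# Expected (roughly): [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```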

Part 1: Dummy test

For a quick comparison, I used the TensorFlow transfer-learning example on Cat vs Dog detection. DenseNet201 with unfrozen parameters served as the base model.
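Roughly, the setup looks like the sketch below: DenseNet201 as an unfrozen base topped with a binary classification head. The input size, optimizer, and learning rate are my assumptions for illustration, not necessarily what the original TensorFlow example uses.

```python
# Minimal sketch of the transfer-learning setup: unfrozen DenseNet201 base,
# binary Cat vs Dog head. Hyperparameters here are assumed for illustration.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = True  # unfrozen base, so the GPU does the heavy lifting

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cat vs dog
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)
# Keras prints the ms/step figures compared below.
```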

The results were the following:

  • Colab (Tesla K80): ~880 ms/step
  • M1 Pro (Metal): ~480 ms/step
  • Kaggle (Tesla P100): ~245 ms/step

Not bad, considering the power consumption of the M1 Pro and the fact that I never heard the fans during the test. Still, such a comparison didn’t seem rigorous enough to me.

Part 2: Running a better benchmark
