Deep Learning on M1 Mac Mini
A review of Apple Silicon performance while coding neural networks with PyTorch
Curious about coding for artificial intelligence on Apple Silicon with PyTorch? In this article, I lay out the results of building a language model with an M1 Mac Mini (early 2021), a MacBook Pro (late 2018), and Google Colab Pro.
High-Level Results
In my experiments, the M1 Mac Mini with 16GB of unified memory (“M1”) holds its own against the 2018 MacBook Pro (“MBP”) and the Google Colab Pro environment (“Colab”) on general-purpose work, but not on deep learning. For language modeling with a deep neural network, the M1 is slightly slower than the MBP and much slower than a Colab GPU. For less exotic tasks, however, the M1 is faster, whether cycling through loops or performing I/O such as reading and writing large amounts of data.
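To make the "loops and I/O" comparison concrete, here is a minimal sketch of the kind of microbenchmark that can produce those numbers. This is an illustrative example, not the exact code used in the article: the function names, loop count, and file size are my own assumptions.

```python
import os
import tempfile
import time


def time_loop(n=10_000_000):
    """Time a pure-Python loop: a rough proxy for general CPU throughput."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i
    return time.perf_counter() - start


def time_io(size_mb=64):
    """Time writing then reading a large file: a rough proxy for disk I/O."""
    data = os.urandom(size_mb * 1024 * 1024)  # size is an arbitrary choice
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(data)
        with open(path, "rb") as f:
            f.read()
        return time.perf_counter() - start
    finally:
        os.remove(path)


if __name__ == "__main__":
    print(f"loop: {time_loop():.2f}s  io: {time_io():.2f}s")
```

Running the same script on each machine gives a like-for-like comparison, since it avoids any GPU or framework-specific code paths.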