Deep Learning on M1 Mac Mini

Justin Chae
Published in Mac O’Clock

7 min read · Apr 19, 2021


A review of Apple Silicon performance while coding neural networks with PyTorch

M1 Mac Mini 2021 — Photo from the Author.

Curious about coding for artificial intelligence on Apple Silicon with PyTorch? In this article, I lay out the results of building a language model with an M1 Mac Mini (early 2021), a MacBook Pro (late 2018), and Google Colab Pro.
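The article does not reproduce the exact model I benchmarked, but a minimal sketch of the kind of workload involved, a small LSTM language model in PyTorch, looks something like this (the class name, vocabulary size, and dimensions here are illustrative, not the actual benchmark configuration):

```python
import torch
import torch.nn as nn

class TinyLanguageModel(nn.Module):
    """A small LSTM-based language model, enough to exercise the hardware."""

    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)   # (batch, seq, embed_dim)
        out, _ = self.lstm(x)    # (batch, seq, hidden_dim)
        return self.head(out)    # (batch, seq, vocab_size) logits

model = TinyLanguageModel()
batch = torch.randint(0, 1000, (8, 32))  # 8 sequences of 32 token ids
logits = model(batch)
print(logits.shape)  # torch.Size([8, 32, 1000])
```

Training a model like this for many epochs is where the hardware differences below show up.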

High-Level Results

According to my experiments, for general computing tasks the M1 Mac Mini with 16GB of unified memory (“M1”) is as fast as or slightly faster than the 2018 MacBook Pro (“MBP”) and the Google Colab Pro environment (“Colab”). For language modeling with a deep neural network, however, the M1 is slightly slower than the MBP and much slower than a Colab GPU. For less exotic tasks, such as cycling through loops and I/O like reading and writing large amounts of data, the M1 is the faster machine.
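These are not my exact benchmark scripts, but a minimal sketch of the kind of loop and I/O timing comparison behind that claim, using only the standard library, could look like this:

```python
import os
import tempfile
import time

def time_loop(n=1_000_000):
    """Time a plain Python loop, the 'less exotic' CPU work."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i
    return time.perf_counter() - start

def time_io(num_bytes=50_000_000):
    """Time writing and then reading a large blob to disk."""
    payload = b"x" * num_bytes
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
    with open(path, "rb") as f:
        data = f.read()
    elapsed = time.perf_counter() - start
    os.remove(path)
    assert len(data) == num_bytes
    return elapsed

print(f"loop: {time_loop():.3f}s  io: {time_io():.3f}s")
```

Running the same script on each machine and comparing wall-clock times is the rough shape of the comparison.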

The Bottom Line (updated May 2021): With Python 3.9 and PyTorch, Apple Silicon is not a suitable alternative to GPU-enabled environments for deep learning. Instead, the M1 is a pretty good computer for…
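The reason is worth spelling out: at the time of writing, PyTorch had no GPU backend for Apple Silicon, so the standard device-selection idiom silently falls back to the CPU on the M1, while on Colab Pro it picks up the CUDA GPU. A minimal sketch:

```python
import torch

# On the M1 (no CUDA, and no Apple GPU backend in PyTorch at the time),
# this resolves to "cpu"; on a Colab Pro GPU runtime it resolves to "cuda".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(4, 10, device=device)
y = model(x)
print(device, y.shape)  # e.g. "cpu torch.Size([4, 2])" on the M1
```

Everything downstream of that one line, every forward and backward pass, runs on whichever device was found, which is why the same training script is dramatically faster on Colab.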
