Google Colab Pro Vs MacBook Pro M1 Max 24 Core — PyTorch
Comparing PyTorch performance and ease of use for ML tasks
After my first article, ‘Google Colab Pro Vs MacBook Pro M1 Max 24 Core’, where I covered the specs and compared TensorFlow training and inference speeds, I wanted to do the same for PyTorch.
The new MacBook Pros raised hopes, especially in the machine learning community, that neural networks, even with fairly large amounts of data, could now be trained locally on a laptop.
As soon as they were released, I was curious about their performance on ML-specific tasks. As a long-time user of Google Colab Pro, I have been happy with the GPUs it offers, so I was particularly interested in how those cloud GPUs would compare to the M1 Max chip.
Most aspiring and professional Data Scientists and ML Engineers nowadays are used to relying on cloud providers to train ML models, and especially Deep Learning models, because most laptops and even desktop PCs are simply not powerful enough for these tasks.
Here you can find the Jupyter Notebooks where I have implemented and tested the code.
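Since the benchmark runs on two different backends, CUDA on Colab Pro and Apple's MPS backend on the M1 Max, the notebooks have to pick the right device at runtime. Below is a minimal sketch of how such a device selection could look in PyTorch (assuming PyTorch 1.12 or later for MPS support); it is illustrative rather than the exact code from the notebooks.

```python
import torch

def get_device() -> torch.device:
    """Pick the fastest available backend: CUDA (Colab GPU), MPS (Apple Silicon), or CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # MPS is PyTorch's backend for Apple Silicon GPUs (requires PyTorch >= 1.12)
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = get_device()
print(f"Using device: {device}")

# Models and tensors then only need to be moved to that device, e.g.:
x = torch.randn(8, 3, 224, 224, device=device)
```

With this in place, the same training and inference code can run unchanged on both machines, which keeps the comparison fair.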