Solving the “RuntimeError: CUDA out of memory” error
**Record scratch, freeze frame, cue ‘The Who’ intro**
Now you might be wondering how I got here, and why this blog post has such a bland, un-enticing, non-clickbait-y title. Why is this not an ultimate guide to something something basic? I’ll tell you. This string is exactly what I type into you.com (what, are you still using Google?) when I’m completely devoid of hope, praying to an imaginary deity whose existence I question, just to make my deep learning models run on the GPU. You can dread it, you can run from it, but this CUDA error arrives all the same.
So here’s hoping that your prayers are answered when you find this post. 🤞 Right off the bat, you’ll want to try these recommendations, in increasing order of code changes:
- Reduce the `batch_size` (see the sketch after this list)
- Lower the Precision
- Do what the error says
- Clear cache
- Modify the Model/Training
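To give the first two (and the cache-clearing one) a concrete shape, here is a minimal sketch of a PyTorch training loop that uses a small `batch_size` and mixed precision via `torch.cuda.amp`, with a cache clear at the end. The dataset, model, and optimizer below are toy placeholders I made up for illustration, not anything from this post, so swap in your own:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-ins -- replace with your own dataset and model
dataset = TensorDataset(torch.randn(512, 3, 64, 64), torch.randint(0, 10, (512,)))
loader = DataLoader(dataset, batch_size=8)  # a smaller batch_size is the quickest memory saving

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device.type == "cuda"))

for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    # autocast runs the forward pass in float16 where it is safe to do so,
    # roughly halving the memory taken up by activations
    with torch.cuda.amp.autocast(enabled=(device.type == "cuda")):
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()

torch.cuda.empty_cache()  # hand cached-but-unused memory blocks back to CUDA
```

Shrinking the batch and halving the precision of activations are usually the cheapest wins, which is why they sit at the top of the list; the remaining options get their own sections below.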
Out of these options, the one that is easiest and most likely to work for you, if you’re using a…