Solving the “RuntimeError: CUDA out of memory” error

Nitin Kishore
7 min read · Nov 2, 2022

**Freeze frame, scratch that record and cue — ‘The Who’ intro**

Now you might be wondering how I got here, and why this blog post has such a bland, un-enticing, non-clickbait-y title. Why is this not an ultimate guide to something-something basics? I’ll tell you. This string is exactly what I type into you.com (what, are you still using Google?) when I’m completely devoid of hope, praying to an imaginary deity whose existence I question, just to make my deep learning models run on a GPU. You can dread it, you can run from it, but this CUDA issue still arrives.

🎃

So here’s to hoping that your prayer will be answered when you find this post.🤞 Right off the bat, you’ll need to try these recommendations, listed in increasing order of code changes:

  1. Reduce the `batch_size`
  2. Lower the Precision
  3. Do what the error says
  4. Clear cache
  5. Modify the Model/Training
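The first item on the list can even be automated. Here is a minimal sketch (not the author’s code) of a helper that halves the batch size whenever a training step raises a CUDA out-of-memory error; `train_step` is a hypothetical callable you supply that runs one step at a given batch size:

```python
def find_workable_batch_size(train_step, batch_size, min_batch_size=1):
    """Halve batch_size until train_step stops raising a CUDA OOM error.

    train_step: hypothetical callable you provide; runs one training
    step at the given batch size and raises RuntimeError on OOM.
    """
    while batch_size >= min_batch_size:
        try:
            train_step(batch_size)
            return batch_size  # this batch size fits in GPU memory
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():
                raise  # unrelated error: don't swallow it
            batch_size //= 2  # OOM: try again with half the batch
    raise RuntimeError("OOM even at the minimum batch size")
```

Halving (rather than decrementing) finds a workable size in a handful of retries; once it works, you can fine-tune upward by hand.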

Out of these options the one with the most ease and likelihood to work for you, if you’re using a…
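Items 3 and 4 in the list above usually come down to a couple of lines. The full OOM message from PyTorch typically suggests setting the `PYTORCH_CUDA_ALLOC_CONF` environment variable (e.g. `max_split_size_mb`) to reduce fragmentation, and PyTorch exposes `torch.cuda.empty_cache()` to release cached, unused blocks back to the driver. A hedged sketch, assuming a reasonably recent PyTorch; the 128 MB split size is an illustrative value, not a recommendation:

```python
import os

# Must be set before torch allocates any CUDA memory; the value
# below is an assumption -- tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import gc
import torch

def clear_gpu_cache():
    """Drop dangling Python references, then release cached CUDA blocks."""
    gc.collect()  # free tensors that are only kept alive by stale references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached, unused memory to the driver

clear_gpu_cache()
```

Note that `empty_cache()` does not free memory held by live tensors, so deleting stale references (or letting them go out of scope) first is what actually makes memory reclaimable.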

