How to limit GPU Memory in TensorFlow 2.0 (and 1.x)

Jun-young Cha
2 min readFeb 17, 2020



If you hit this error while training your deep learning model:

could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED

There’s a good chance your GPU is running out of memory.

When your GPU runs out of memory..!

Wanna limit your GPU memory (VRAM) usage in TensorFlow 2.0?

You can find a detailed explanation of using GPUs in TF2.0 in its official documentation. In this article, I will show you some code snippets from the docs that you can use right away.

First option:

Use the code below. It sets set_memory_growth to True, so TensorFlow allocates GPU memory only as it is needed, instead of grabbing all of it up front.

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)
  • Currently, the ‘memory growth’ option should be the same for all GPUs.
  • You should set the ‘memory growth’ option before initializing GPUs.
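As a quick sanity check (a sketch, assuming TF 2.x — not from the original article), you can read the flag back with tf.config.experimental.get_memory_growth:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # get_memory_growth returns the flag previously set for this device
    print(gpu.name, tf.config.experimental.get_memory_growth(gpu))
```

On a machine without a GPU, the list is simply empty and the loop does nothing.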

Second option:

This code limits your 1st GPU’s memory usage to 1024 MB. Just change the index into gpus and the memory_limit value as you want.

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        print(e)
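To confirm the virtual device configuration took effect, you can compare physical and logical GPUs (a sketch along the lines of the official docs, not from the original article):

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
# After the configuration above, the first physical GPU is exposed
# as a logical GPU capped at the given memory_limit.
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
```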

… But, if you’re using TensorFlow 1.x, try this:

First option:

The code below corresponds to TF2.0’s first option.

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

Second option:

The code below corresponds to TF2.0’s second option, but it limits memory as a fraction of total GPU memory rather than as an absolute value.

# change the memory fraction as you want
import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
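If you are on TensorFlow 2.x but need to run this TF1-style session code (for example, legacy scripts), the same calls are available under tf.compat.v1 — a sketch under that assumption, not from the original article:

```python
import tensorflow as tf

# TF1-style session configuration via TF2's compat module
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.3)
config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)
sess = tf.compat.v1.Session(config=config)
```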

That’s it!

Any comments would be greatly appreciated. Thanks :-)
