Boost your Keras model performance with XLA
XLA JIT (just-in-time compilation) is a powerful tool for optimizing TensorFlow performance by fusing multiple operations into a small number of compiled kernels.
As long as you’re running TF with XLA built in, you can take advantage of it with only a few lines of code. Note that the TF images from Docker Hub and PyPI don’t have XLA built in (yet).
Here is some example code for using XLA with Keras.
First, create a ConfigProto and specify the optimization level. Level 1 is enough for most use cases.
config = tf.ConfigProto()
jit_level = tf.OptimizerOptions.ON_1
config.graph_options.optimizer_options.global_jit_level = jit_level
Then, attach this config to the Keras default session. Note that since Keras has only one associated session, this will overwrite any session configuration you set before.
sess = tf.Session(config=config)
tf.keras.backend.set_session(sess)
That’s it. Pretty easy, eh?
Please comment below to tell me how much of a performance gain you saw after enabling XLA on CPU or GPU. If you didn’t see any performance difference, your TF build probably doesn’t have XLA enabled. Here are some instructions on how to build TF with XLA using the interactive UI or a Dockerfile (which basically uses environment variables to avoid the interaction).