Announcing Apache MXNet 1.2.0

Today the Apache MXNet community is announcing the 1.2.0 release of the Apache MXNet deep learning framework. This release makes MXNet faster and easier to use, with ONNX import, MKL-DNN support, mixed precision training, and many more features. Check the full release notes here.

MXNet is faster

MKL-DNN integration: MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, and Softmax, as well as common operators such as sum and concat. This integration allows NDArray to hold data in MKL-DNN layouts, reducing data layout conversion overhead and extracting maximum performance from MKL-DNN. More details are available in this blog post.
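To give a feel for what an MKL-DNN layout change involves, here is a conceptual NumPy sketch (not MXNet's internal implementation): the blocked nChw8c format shown is one of the layouts MKL-DNN uses so that vector instructions can process groups of channels at once, and keeping tensors in such a layout between operators avoids repeated conversions.

```python
import numpy as np

# Plain NCHW tensor: batch, channels, height, width
n, c, h, w = 2, 16, 4, 4
x = np.arange(n * c * h * w, dtype=np.float32).reshape(n, c, h, w)

# Re-block to nChw8c: channels are split into groups of 8,
# with the group-of-8 axis innermost for vectorized access.
blocked = x.reshape(n, c // 8, 8, h, w).transpose(0, 1, 3, 4, 2)
print(blocked.shape)  # (2, 2, 4, 4, 8)

# Converting back to NCHW recovers the original data; holding data
# in the blocked layout across operators avoids paying this cost
# at every operator boundary.
restored = blocked.transpose(0, 1, 4, 2, 3).reshape(n, c, h, w)
assert np.array_equal(restored, x)
```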

Number of images processed per second with different models / batch sizes.

Enhanced FP16 support: MXNet now supports distributed mixed precision training with FP16. A master copy of the weights can be kept in float32 via the multi_precision mode of the optimizers, and float16 operations on x86 CPUs are 8 times faster using the F16C instruction set. To learn more, watch the video below!
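The reason for keeping a float32 master copy of the weights can be sketched in plain NumPy (a conceptual illustration of the multi_precision idea, not MXNet code): small gradient updates fall below float16's resolution near 1.0 and are silently lost unless they are accumulated in float32.

```python
import numpy as np

weight = np.float16(1.0)
update = np.float16(1e-4)   # a small SGD step: lr * grad

# Pure float16 update: 1.0 - 1e-4 rounds back to 1.0, because the
# float16 spacing near 1.0 is about 9.8e-4.
fp16_only = np.float16(weight - update)
print(fp16_only)            # 1.0 -- the update vanished

# multi_precision idea: accumulate in a float32 master copy, and
# cast down to float16 only for the forward/backward pass.
master = np.float32(weight) - np.float32(update)
print(master)               # ~0.9999 -- the update is preserved
```

A single step still rounds to 1.0 when cast back to float16, but repeated small steps accumulate correctly in the master copy instead of being discarded one by one.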

MXNet has better model quantization

Support for Model Quantization with Calibration: MXNet now supports model quantization with calibration, borrowing the idea from Nvidia’s TensorRT. The focus of this work is to keep the inference accuracy loss of quantized models (ConvNets for now) as close as possible to that of the corresponding FP32 models. Please see the example on how to quantize an FP32 model with or without calibration. Quantization support is currently still experimental.
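The core quantize-with-calibration idea can be sketched in NumPy. This is a simplified illustration: the TensorRT-style approach picks clipping thresholds by minimizing KL divergence over calibration data, whereas this sketch simply uses the maximum absolute value seen during "calibration".

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.normal(0.0, 1.0, size=10_000).astype(np.float32)

# "Calibration": choose a clipping threshold from sample data.
threshold = np.abs(activations).max()
scale = 127.0 / threshold

# Quantize FP32 -> INT8, then dequantize back for comparison.
q = np.clip(np.round(activations * scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) / scale

# With a max-abs threshold nothing is clipped, so the error is
# bounded by half a quantization step.
max_error = np.abs(activations - dequantized).max()
assert max_error <= 0.5 / scale + 1e-7
```

A tighter threshold (as KL-based calibration finds) clips rare outliers but shrinks the quantization step for the bulk of the values, which is what keeps accuracy close to the FP32 model.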

MXNet is easier to use

New Scala inference APIs: This release includes new Scala inference APIs which offer easy-to-use, idiomatic and thread-safe high level APIs for performing predictions in Scala with deep learning models trained with MXNet.

Improved exception handling support for operators: MXNet now percolates backend C++ exceptions to the different language front-ends and prevents crashes when exceptions are thrown during operator execution.

MXNet provides easy interoperability

Import ONNX models into MXNet: There is a new ONNX module in MXNet which offers an easy-to-use API to import ONNX models into MXNet. See a more detailed write-up here and check out the tutorials on how you could use the mxnet.contrib.onnx API to perform super-resolution, image classification, or fine-tuning with ONNX models. See below the results of the ONNX fine-tuning tutorial:

The Resnet-18 ONNX model, trained on ImageNet 1k, fails to classify the images correctly.
The Resnet-18 ONNX model, fine-tuned on the smaller Caltech101 dataset, classifies the images correctly.

Getting started with MXNet

Getting started with MXNet is simple. To learn more about the Gluon interface and deep learning, you can follow our 60-minute crash course and then work through this comprehensive set of tutorials, which cover everything from an introduction to deep learning to implementing cutting-edge neural network models. You can find more practitioner guides here. Have fun with MXNet 1.2.0!
