See Machine Learning Models with Unprecedented Detail: Getting Started with the Viewer

Our short video tutorial shows you how to use the free Zetane Viewer to analyze the inner workings and metrics of popular neural network models.

Jason Behrmann, PhD
Zetane
7 min read · Feb 5, 2021


The complexity of machine learning models has both professionals and the public concerned that these systems operate as mysterious black boxes. At Zetane Systems, we aim to bring greater transparency to black-box algorithms with our free software platform, the Zetane Viewer. The Viewer displays pre-trained machine learning models using intuitive visuals, letting you access and inspect the metrics for each component of an artificial neural network, including all tensors for each node. Here we provide a video tour of the Viewer and explain how you can use rich visuals of machine learning models to better plan debugging and optimization tasks. We suggest that you download the Viewer before starting the tutorial so you can follow along.

Below the video, we provide a transcript with links that make it easy to navigate to the online resources mentioned throughout the tutorial. After the tutorial, we recommend that you check out these case studies, which complement this discussion with an in-depth explanation of the visual techniques used to optimize and debug the U-Net model mentioned in the tutorial.

Tutorial video

For more tutorials like this, be sure to subscribe to our YouTube channel.

Transcript

Hello, everyone. So, welcome to the Zetane Viewer.

We’ve released the Zetane Viewer to the community so that everyone can finally open the AI “black box”. With it, you can open the ML models that we’ve created, which are files with the .ZTN extension, and you can also open ONNX models. By the end of this tutorial, you’ll be able to inspect the architecture of artificial neural networks and view all the internal tensors of your model. The benefit of fully opening an ML model is that you gain valuable insights into how the input data is transformed as it passes through the model. You can also use these insights to debug and optimize without guesswork.
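If you’d like to poke at the same information programmatically, here is a minimal sketch using the open-source onnx Python package (a separate tool, not part of the Zetane Viewer itself); the filename is a placeholder for any ONNX file you have on disk:

```python
import onnx

# Load any ONNX file, e.g. one exported from your framework or the Model Zoo.
model = onnx.load("model.onnx")  # placeholder path
onnx.checker.check_model(model)  # sanity-check the graph structure

# Walk the graph: each node is one operator in the architecture.
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```

This prints the same operator-by-operator architecture that the Viewer renders visually.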

First we’ll look at the user interface. Here you have a few tabs: “Data”, “View” and “Snapshots”, along with a few utility buttons; in particular, “Help”, access to the Gallery, and access to our home page, where you can join the mailing list. In the Data section, you can load a .ZTN file or an ONNX model. This button is used to send an input to the ONNX model you just loaded, and this one clears the universe, which is the workspace here where all the models live. In the View tab, you have different options, including showing the axes, hiding and showing the floor, and resetting the camera. I’ll come back to Snapshots later. In the meantime, if you don’t have the same version as me, you can always go to the “Docs” here. This is the official documentation, which is updated frequently; the website is docs.zetane.com. You’ll find all the information about the Viewer there, as well as information about the Zetane Engine, the “Pro” version. The big difference with the Zetane Engine is that you can connect your own Python code to the Viewer.

A good section to review before you start is “Controls”, under the Zetane Viewer tab. The controls depend on whether you’re using a mouse or a trackpad. The main actions are to pan up, down and sideways, to zoom and rotate, and to select. You can rotate objects within the 3D space because the Viewer is itself a 3D engine. Now we’ll open a Snapshot and see what a model looks like.

So we can go to the tab here called Snapshots and choose one of the models; in this case, the Keras MNIST model. You have the input to the model (in this case, there’s only one input) and you have the model itself: the architecture and all its operators.

Each of those nodes has different buttons on it. You have the attributes of that operation, and you can also access different things such as the tensors of the model, seen in different ways. When you press on this one, you access the output tensor, which is the same as this one but seen directly on the node. As the input to that node, we receive something like this, which is just the digit four. As the output, we have the feature maps, which give us this tensor here. This is a three-dimensional tensor, and you can look at its shape here. We have different views for it: we can look at it in 2D as projected in 3D, and we can also look at the values themselves in the tensor; if you just want the values, you can click here. So there are different views for each of the tensors. There is one view directly on the node, which shows the feature maps, and then here you have the weights (those are the filters), the input to the node, and the output. Each of these gives you a view of that tensor.
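To relate what the Viewer shows to the raw numbers, here is a small sketch, again with the open-source onnx package, that prints the shape and value range of each stored tensor (the learned filters and biases); “mnist.onnx” is an assumed filename:

```python
import onnx
from onnx import numpy_helper

model = onnx.load("mnist.onnx")  # assumed filename for an MNIST classifier

# Initializers hold the learned tensors: convolution filters, biases, etc.
for init in model.graph.initializer:
    arr = numpy_helper.to_array(init)  # convert protobuf tensor to a NumPy array
    print(f"{init.name}: shape={arr.shape}, min={arr.min():.4f}, max={arr.max():.4f}")
```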

Now we’ll go into the Gallery to download a Zetane model and load it in the Viewer. We can go here in the Gallery, and it’ll open your browser so you can access the Gallery. There are many models to choose from; we’ll download the one at the end here, the U-Net.

Now that you’ve downloaded the model, we’ll load it. We can go here, press on this button, and open the downloaded model, which is a .ZTN file. It takes a few seconds to bring into the Viewer because it’s not such a small file. Now we have a few images here, which were captured at a certain moment in time as images were passing through the model; that is, we took a Snapshot. These are the original image, the predicted mask and the ground truth.

You can inspect everything about it, and you can see that it’s a U-Net; the architecture is visible here. We can go to any node and look at the tensors themselves: view their 3D representation, look at the shape, and see the distribution of the values. If you want to see a tensor as feature maps, you would go here, and we can see what the model is looking at in this part of the network.
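The Viewer computes this value distribution for you; for readers who want to reproduce it by hand on a tensor they have extracted themselves, a quick NumPy histogram does the job. The array below is only a stand-in for a real feature map:

```python
import numpy as np

# Stand-in for a feature map extracted from a model (e.g. via onnxruntime).
feature_map = np.random.randn(1, 64, 128, 128).astype(np.float32)

values = feature_map.ravel()
counts, edges = np.histogram(values, bins=20)
print("shape:", feature_map.shape, "range:", (float(values.min()), float(values.max())))
# Crude text rendering of the distribution, one row per bin.
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"[{lo:7.3f}, {hi:7.3f}) {'#' * int(50 * count / counts.max())}")
```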

I’d like to bring your attention to a certain part of the U-Net model, towards the end. If we look at the ReLU layer here, we can see that it is doing the segmentation of the lungs, but it’s also looking intensely at the corner there. We can look at the range of the tensor, and we see the range is quite large; it tells us that the model is really attending to that corner. It turns out that in the corner there’s a tag, a patient tag, that we had assumed the model would learn to ignore with more training. Removing the tag in the corner actually helped the model perform a better segmentation and increased the Dice score by 1%, for a model that was already at the state of the art. So this was a nice discovery on how to improve the model.
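For reference, the Dice score mentioned here measures the overlap between the predicted mask and the ground-truth mask. A minimal NumPy implementation looks like this:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient for binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy example: two slightly different 2x2 masks.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice_score(pred, truth))  # 2*1 / (2+1) ≈ 0.667
```

A score of 1.0 means the predicted and ground-truth masks overlap perfectly, so even a 1% gain matters at the state of the art.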

You can also load pre-trained models from the ONNX Model Zoo using this button here. On the ONNX website, which is here, you can read about the ONNX standard and the partners that are part of that consortium, and you can go to the Model Zoo. This allows you to download a pre-trained model and load it directly in Zetane. After it’s loaded, you can use this button to give it an input. You can also give a synthetic input to your ONNX model. This populates all the internal tensors, mostly the outputs of the nodes, and lets you see how the model reacts to the data that’s been sent to the network.
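Outside the Viewer, the same synthetic-input trick is easy to reproduce with the open-source onnxruntime package; this sketch feeds random data into a downloaded Model Zoo file (the filename is a placeholder, and a float32 input is assumed):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("downloaded_model.onnx")  # placeholder path

meta = sess.get_inputs()[0]
# Model Zoo graphs often have symbolic batch dims; substitute 1 for anything dynamic.
shape = [d if isinstance(d, int) else 1 for d in meta.shape]
x = np.random.rand(*shape).astype(np.float32)  # the synthetic input

outputs = sess.run(None, {meta.name: x})  # None = return all graph outputs
print([o.shape for o in outputs])
```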

Now, if you’re interested in sending your own inputs and your own models directly from your code, you can upgrade to the Pro version, which comes with a free 30-day trial. From there, you’ll be able to send all your models and data, including other things such as images, meshes, and so on, directly to the Zetane workspace so that you can debug and optimize your models.

To learn more about the Zetane Python API, you can go to docs.zetane.com and access the documentation there, where you’ll find all the details about the API, “Hello world!” examples, and so on. We hope this tool will be useful to you.
Thank you.

See more presentations and tutorials from Zetane.
