First, make sure you have flashed the latest JetPack 4.3 image to your Jetson Nano's SD card.
# Run the Docker container
docker run --runtime nvidia --network host --privileged -it docker.io/zcw607/trt_ssd_r32.3.1:0.1.0
# Then run this command to benchmark the inference speed.
You will then see results similar to this.
In this tutorial, I will show you how to build a deep learning model to find defects on a surface, a popular application in many industrial inspection scenarios.
The latest PyTorch 1.3 release came with a next-generation, ground-up rewrite of its previous object detection framework, now called Detectron2. This tutorial will help you get started with the framework by training an instance segmentation model on your custom COCO dataset. If you want to know how to create COCO datasets, please read my previous post — How to create custom COCO data set for instance segmentation.
For a quick start, we will do our experiment in a Colab Notebook so you don’t need to worry about setting up the development environment on your own machine before…
Let’s say you have a GPU virtual instance in the cloud, or a headless physical machine. There are several options, such as remote desktop or Jupyter Notebook, that can give you a desktop-like development experience; however, the VS Code Remote Development extension can be more flexible than Jupyter Notebook and more responsive than remote desktop. I will show you, step by step, how to set it up on Windows.
First, let’s make sure SSH is set up on your server. Most likely, your online server instance will have the OpenSSH server preconfigured; the command below can check whether it…
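The Remote - SSH workflow in VS Code reads its connection targets from your OpenSSH client config file (on Windows, typically `C:\Users\<you>\.ssh\config`). As a minimal sketch (the host alias, address, user name, and key path below are placeholders, not values from this post), an entry might look like:

```
# Example ~/.ssh/config entry (all values are placeholders)
Host gpu-box
    HostName 203.0.113.10    # your server's public IP or DNS name
    User ubuntu              # your login user on the server
    IdentityFile ~/.ssh/id_rsa
```

With an entry like this in place, the "Remote-SSH: Connect to Host…" command in VS Code will list `gpu-box` as a target.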
I wrote “How to run Keras model on Jetson Nano” a while back, where the model ran on the host OS. In this tutorial, I will show you how to start fresh and get the model running on a Jetson Nano inside an Nvidia Docker container.
You might wonder: why bother with Docker on a Jetson Nano? I came up with several reasons.
1. It’s much easier to reproduce results with a Docker container than by installing all the dependencies/libraries yourself. …
This tutorial will demonstrate how you can reduce the size of your Keras model by a factor of five with TensorFlow model optimization, which can be particularly important for deployment in resource-constrained environments.
From the official TensorFlow model optimization documentation: “Weight pruning means eliminating unnecessary values in weight tensors. We set the neural network parameters’ values to zero to remove what we estimate are unnecessary connections between the layers of a neural network. This is done during the training process to allow the neural network to adapt to the changes.”
Here is a breakdown of how you can adopt this technique.
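The criterion behind this technique, magnitude-based pruning, can be sketched in plain NumPy. This is only an illustration of the idea: the real toolkit prunes gradually during training via its Keras wrappers, and the weights and the `magnitude_prune` helper below are made up for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a weight tensor.

    Connections whose weights are closest to zero are assumed to be the
    least important, so they are the ones removed.
    """
    k = int(sparsity * weights.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Made-up example weights; prune half of them.
w = np.array([0.9, -0.05, 0.4, 0.01, -0.7, 0.02])
pruned = magnitude_prune(w, 0.5)
# The three smallest-magnitude weights (-0.05, 0.01, 0.02) are now zero.
```

In the actual TensorFlow Model Optimization toolkit, the analogous entry point is `tfmot.sparsity.keras.prune_low_magnitude`, which wraps a Keras layer or model and ramps the sparsity up over training steps rather than pruning in one shot.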