Jetson AGX Orin: The Hot New Thing In Edge Computing & AI
Among the slew of developments that came out of Nvidia’s GPU Technology Conference (GTC) were a pair of announcements aimed at accelerating AI development at the edge and enabling autonomous mobile robots (AMRs). The general focus of the 2022 Spring GTC, beyond better and more powerful hardware, can be summed up by a quote from the keynote:
“AI has fundamentally changed what software can make, and how you make software. The next wave of AI is robotics. Digital robotics, avatars, and physical robotics.”
Firstly, NVIDIA revealed the latest in its Jetson line of supercomputers for edge AI: Jetson AGX Orin. With eight times the processing power of Jetson AGX Xavier in the same form factor, it is by far the most powerful GPU-powered device designed for AI at the edge and in embedded devices. NVIDIA designed Orin to be an “energy-efficient AI supercomputer” meant for use in robotics, autonomous machines, and medical devices, as well as edge AI applications that may seem impossible at the moment.
Equipped with an Ampere-class Nvidia GPU, a Cortex-A78AE CPU, and up to 32 GB of RAM, Jetson AGX Orin is capable of delivering 275 trillion operations per second (TOPS) on INT8 workloads, which is more than an 8x boost compared to the previous top-end device, the Jetson AGX Xavier.
It is pin- and software-compatible with its predecessor, the Xavier module, so existing customers who have products built around the AI processor can simply drop the new module into their existing solutions. And because it is built on the NVIDIA Ampere architecture GPU, it comes with next-generation deep learning and vision accelerators, giving it the ability to run multiple AI applications.
Nvidia also announced the release of Isaac Nova Orin, a reference platform for developing AMRs trained with the company’s AI technology. The platform combines two of the new Jetson AGX Orin modules, giving it 550 TOPS of compute capacity, along with additional hardware, software, and simulation capabilities that enable developers to create AMRs that work in specific locations. Isaac Nova Orin will also be outfitted with a plethora of sensors, including regular cameras, radar, lidar, and ultrasonic sensors, to detect physical objects in the real world.
Another key element is Isaac Sim on Omniverse, which enables developers to use virtual 3D building blocks to simulate complex warehouse environments. Developers can then train and validate a virtual version of the AMR as it navigates that environment.
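To give a sense of what that simulation workflow looks like in practice, below is a minimal sketch of a standalone Isaac Sim Python script that loads an environment asset and steps the simulation headlessly. The asset path, prim path, and step count are illustrative assumptions, not the exact Nova Orin warehouse setup.

```python
# Minimal Isaac Sim standalone sketch (illustrative; run with Isaac Sim's bundled Python).
from omni.isaac.kit import SimulationApp

# The simulator must be started before importing the other omni.isaac modules.
simulation_app = SimulationApp({"headless": True})

from omni.isaac.core import World
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage

world = World()

# Hypothetical warehouse asset path; substitute whichever environment USD you use.
assets_root = get_assets_root_path()
if assets_root is None:
    raise RuntimeError("Could not locate the Isaac Sim assets root")

add_reference_to_stage(
    usd_path=assets_root + "/Isaac/Environments/Simple_Warehouse/warehouse.usd",
    prim_path="/World/Warehouse",
)

world.reset()
for _ in range(500):           # run a short rollout to validate the scene
    world.step(render=False)   # add robot control and sensor logic here

simulation_app.close()
```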
The opportunity for AMRs is significant across many industries, including warehousing, logistics, manufacturing, healthcare, retail, and hospitality. Nvidia expects the market for AMRs to grow from under $8 billion in 2021 to more than $46 billion by 2030.
“The old method of designing the AMR compute and sensor stack from the ground up is too costly in time and effort,” says Nvidia Senior Product Marketing Manager Gerard Andrews in an Nvidia blog post. “Tapping into an existing platform allows manufacturers to focus on building the right software stack for the right robot application.”
“Until a year or two ago, very few companies could build these AI products, because creating an AI model has actually been very difficult,” said Deepu Talla, Nvidia’s vice president of embedded and edge computing. “We’ve heard it takes months if not a year-plus in some cases, and then it’s…a continuous iterative process. You’re not done ever with the AI model.”
However, Nvidia has been able to reduce that time considerably by doing three things, Talla said.
The first is including pre-trained models for both computer vision and conversational AI. The second is the ability to generate synthetic data on the new Omniverse platform. Lastly, transfer learning gives Nvidia customers the ability to take those pre-trained models and customize them to their exact specifications by training with “both physical real data and synthetic data,” he said.
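To illustrate the transfer-learning piece, here is a generic PyTorch sketch of fine-tuning a pre-trained vision model on a small custom dataset. This is a sketch of the general idea, not NVIDIA’s specific tooling; the dataset path and class count are placeholders.

```python
# Generic transfer-learning sketch in PyTorch; dataset path and class count are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # hypothetical number of classes in your custom dataset

# Start from an ImageNet pre-trained backbone and replace its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the backbone so only the new head is trained (one common fine-tuning recipe).
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "data/train" is a placeholder ImageFolder layout: one sub-folder per class.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```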
The developer kit for Jetson AGX Orin is available at a starting price of $1,999. Now, this hefty price tag might discourage you from learning and experimenting with NVIDIA’s Jetson lineup, but that doesn’t need to be the case.
Check out our Jetson Computer Vision Course!
Everything we cover in it is compatible with all Jetson modules, so you can build your knowledge base on the more affordable Jetson Nano and switch to the more powerful Orin later on. We take you through everything, from setting up the Jetson Nano and installing deep learning libraries to training and deploying computer vision models. And that’s not all: we also cover model optimization using TensorRT and working with multiple video streams using NVIDIA’s DeepStream.
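For a taste of that TensorRT optimization step, here is a minimal sketch of building an engine from an ONNX model with the TensorRT Python API (TensorRT 8.x style; the file names are placeholders, and on a Jetson the bundled trtexec command-line tool can do the same job).

```python
# Minimal ONNX -> TensorRT engine build sketch (TensorRT 8.x Python API; paths are placeholders).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse a previously exported ONNX model (e.g. produced with torch.onnx.export).
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse model.onnx")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # use FP16 where the hardware supports it

# Serialize and save the optimized engine for deployment (e.g. in a DeepStream pipeline).
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```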