The Beauty of Nvidia TX2 at the Edge: takeaways from GTC 2020

Dasha Korotkykh
Published in Hivecell
Oct 23, 2020 · 3 min read

Every enterprise data solution on the market is trying to figure out one thing: how to handle volumes of data that just keep getting larger. The ratio of business-relevant data to raw data is 1:400, or at least it was at the beginning of 2020. With pandemic-enforced digitalization, that ratio has likely shifted even more dramatically.

Solving these issues at their root requires a shift in the technology objective:

Processing the raw data as close to the source as possible.

Pushing only the business-relevant data to the cloud.

This is the core concept of edge computing. It has taken root in the technical community over the last two years, going from an idea to a necessity. But actually solving it is tricky: software and services are fighting security and latency issues, while hardware devices are dealing with storage and bandwidth limitations.

Earlier this month, our CEO and co-founder Jeffrey Ricker spoke at GTC 2020 to showcase how Hivecell is helping solve this problem in the oil and gas industry. We built the Hivecell platform precisely to fuse hardware and software into a complete solution for this data build-up.

The Hivecell contains:

● NVIDIA TX2 64-bit ARMv8

● 6 CPU cores, 2.4GHz

● 256 GPU CUDA cores

● 8GB RAM LPDDR4

● 500GB SSD

● 3x1G Ethernet

● Wifi IEEE 802.11a/b/g/n/ac

● Size 220x175x65 mm

● Weight 1.36 kg (3.0 lbs)

● Power 15W (max 25W)

The practical benefits introduced by the Hivecell platform:

It addresses delays, security, and compliance regulations, because the data doesn't have to be moved to the cloud.

It doesn't add to bandwidth, storage, or electricity expenses, since all of the processing happens right where the data is created.

It is fault-tolerant, preventing any damage from outages.

But hardware alone, however efficient, doesn't solve the edge. The software side of the Hivecell platform handles remote provisioning across any number of locations, stream processing, and container orchestration. We ran three pilot projects with the NVIDIA TX2 to see the practical value of this implementation. Let the numbers speak for themselves.

#1
Hazelcast streaming

Performance results

● Low parameter: 3.00 million messages/second

● High parameter: 6.79 million messages/second
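As an illustration of how a messages-per-second figure like this is measured, here is a minimal Python sketch. It is not Hazelcast's actual API; `process` is a hypothetical stand-in for whatever per-message work the pipeline does.

```python
import time

def messages_per_second(process, messages):
    """Time a per-message processing function over a batch and
    return throughput in messages per second."""
    start = time.perf_counter()
    for msg in messages:
        process(msg)
    elapsed = time.perf_counter() - start
    return len(messages) / elapsed

# Example: a no-op processor over a synthetic batch.
rate = messages_per_second(lambda msg: None, list(range(100_000)))
```

In the pilot, the work per message is done by the Hazelcast pipeline itself; the low/high parameter runs above correspond to different pipeline configurations.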

#2
Single node Kafka streaming with Machine Learning

Performance results

● video feed from 4 cameras at 12 fps

● 9 frames per second per watt
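The efficiency figure combines the total frame rate across cameras with the device's power draw. A small sketch of the metric (the 5.3 W value below is a hypothetical draw chosen for illustration, not a measured number from the pilot):

```python
def frames_per_second_per_watt(cameras, fps_per_camera, avg_power_watts):
    """Efficiency metric: total frames processed per second
    divided by average power draw in watts."""
    total_fps = cameras * fps_per_camera
    return total_fps / avg_power_watts

# 4 cameras at 12 fps is 48 frames/s in total; at a hypothetical
# 5.3 W draw that works out to roughly 9 frames per second per watt.
efficiency = frames_per_second_per_watt(4, 12, 5.3)
```

Note that a draw of a few watts sits comfortably inside the Hivecell's 15 W power budget.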

#3
Multi-node Kafka streaming

Performance results

● Consistent throughput: 20–30 Mb/s (compare to AWS MSK at 50 Mb/s)

Average latency

● Kubernetes: 19.2 ms

● Swarm: 10.5 ms

● Docker-compose: 13.4 ms

● Bare metal: 10.2 ms

(Compare to AWS MSK at 2.1 ms)
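Average latency figures like these are typically computed from paired send and receive timestamps on each message. A minimal sketch of that calculation (a hypothetical helper, not part of the benchmark code):

```python
def average_latency_ms(send_times, receive_times):
    """Mean end-to-end latency in milliseconds, from paired
    send/receive timestamps given in seconds."""
    deltas = [(recv - sent) * 1000.0
              for sent, recv in zip(send_times, receive_times)]
    return sum(deltas) / len(deltas)

# Two messages, each received 10.2 ms after it was sent.
avg = average_latency_ms([0.0, 1.0], [0.0102, 1.0102])
```

Feeding it the per-message timestamps from each deployment (Kubernetes, Swarm, docker-compose, bare metal) yields averages like those listed above.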

What are the findings?

Utilizing the NVIDIA TX2 at the edge allows processing stream-based analytics at up to 6.79 million messages per second. You can run Kafka as a single node, or run Kafka and Kubernetes as a high-availability cluster. This solution is easily sustainable across 100+ remote locations. The power of the platform is the option to economically process raw data at the true edge and push only the business-relevant data to the cloud.

To watch the full GTC 2020 presentation, visit our YouTube page.
