Is the performance of USB 2.0 different from 10/100 Ethernet in Embedded Systems?

Jaime Dantas
Reverse Engineering
5 min read · Jan 1, 2020

The Internet of Things (IoT) is a huge deal nowadays, and embedded systems are gaining popularity day after day. With so many communication protocols out there, picking the right one on the first try is a challenging task. Even though USB 2.0 and 10/100 Ethernet ports are somewhat dated by now, they're still present in the majority of IoT devices. This article presents an in-depth analysis of the performance of these two standards on the BeagleBone Black development board.

BeagleBone

The BeagleBone Black Development Board is a low-cost, community-supported development platform for developers and engineers who want to run code in combination with hardware features in an embedded device. That being said, this board can also be used as a small Linux computer for running applications that require high processing power (compared with microcontrollers).

BeagleBone Black

Hardware:

  • Processor AM335x 1GHz ARM® Cortex-A8
  • 512MB DDR3 RAM
  • 4GB 8-bit eMMC on-board flash storage
  • USB host, Ethernet, and HDMI connection
  • Debian, Android or Ubuntu OS
  • 2x 46 pin headers

USB 2.0

The Universal Serial Bus (USB) 2.0 is an industry standard that establishes specifications for cables, connectors, and protocols. The 2.0 version of this standard can reach speeds up to 480 Mbit/s (60 MB/s).

The current this version can supply is also limited: a standard USB 2.0 port provides up to 500 mA. Another limitation of USB is cable length. While some industrial protocols can run over cables up to 100 m long, the maximum length of a USB 2.0 cable is 5 m.

10/100 Ethernet

100BASE-TX is the protocol known as Fast Ethernet due to its high speed at the time of its release. It runs over two wire pairs inside a Category 5 or better cable. Unlike USB cables, Ethernet cables can be up to 100 m long. Speed is another advantage of this standard, since it can reach up to 100 Mbit/s of throughput in each direction (full duplex).

Analysis

As stated at the beginning of this post, the goal here is to compare these two standards using a real-world application with embedded systems.

Let's create a TCP socket and use the BeagleBone Black together with your computer to measure the throughput of each of these protocols. The buffer size is pivotal when dealing with data transfer, so let's also vary the buffer size and analyse its impact on the results.

Before we start, it’s important that you know how to connect to your BeagleBone board through ssh. There are several tutorials available on the Internet to guide you on this process.

TCP/IP Socket

Stream sockets are also called TCP sockets. A stream socket transmits data reliably, in order, and with out-of-band capabilities. If you want to learn more about this transport protocol, don't hesitate to check out its RFC.

The code used in this post is available on my GitHub.

We need to specify the address of both the client (BeagleBone) and our server (Computer) as shown below:

#define SERVER_IP   "192.168.20.101"
#define SERVER_PORT 1010
#define BUFFER      10    /* buffer size in bytes */
#define N_AMOSTRAS  1000  /* number of samples ("amostras" is Portuguese for samples) */

Inside the loop, we'll read any incoming message that arrives and reply to it immediately. Note that the number of samples also matters, since we need to send hundreds of messages to get a reliable test.

For timing our socket, we’ll use the gettimeofday function as shown below:

gettimeofday(&tempo_inicial, NULL);

After the processing is finished, we'll compute the elapsed time. Multiplying the difference in tv_sec by 1,000,000 converts the seconds part to microseconds; the tv_usec field then has to be added on top of that:

tms = (tempo_final.tv_sec - tempo_inicial.tv_sec)*1000000 + (tempo_final.tv_usec - tempo_inicial.tv_usec);

When running our socket, we need to save the result for creating a graph later on. To do so, we’ll save the output of our program in a separate text file. We’ll end up with several txt files since we’re using different buffer sizes.

The server side of our TCP socket will be almost the same as the client one. The only difference will be the order: the server will receive and then resend the message.

Results

In our tests, we used an Ethernet patch cable and a certified USB 2.0 cable.

When we send an echo message through ping, we notice that USB communication is about 25% faster than Ethernet: we got an average time of 0.479 ms for USB and 0.642 ms for Ethernet.

The table below shows the results we got for each buffer size tested.

BeagleBone Black Socket results

When we plot the obtained results, we come up with this chart:

Communication time for TCP Socket

As we can see, the biggest difference in time between these two protocols was seen with a buffer size of 10 bytes, whereas at 2 KB we almost didn't notice a difference. This is because, for small messages, the fixed per-message overhead dominates the total time; as the buffer size grows, the time spent transferring the payload itself dwarfs that overhead.

However, we got a somewhat intriguing result for a buffer size of 40 bytes, since Ethernet surpassed USB in this case. This may have been caused by errors during the test or some other external event.

Overall, USB 2.0 was around twice as fast as 10/100 Ethernet. So, if your application uses only a small buffer size, the data shows that USB is the way to go. In contrast, if the messages you're sending are quite big (i.e. > 2 KB), it hardly matters whether you opt for Ethernet or USB, since both have similar performance.

That's all folks! I hope this post was helpful to you somehow. If you have any questions, reach me by email.

www.jaimedantas.com
