5G over here, 5G over there. Okay, let’s look at 5G from the standpoint of robotics, cloud and artificial intelligence.
The 5G 📡
It’s important to keep in mind that 5G won’t simply transform robotics, cloud and artificial intelligence overnight: many of the applications and technologies needed to get there are still embryonic, in development, or on the drawing board.
Rather, 5G should be viewed as the beginning of a new era that fully enables robotics and many other applications for the first time.
In addition, mobile robots are still a long way from being a mature technology, and it will likely take years before they are massively deployed in applications ranging from manufacturing and production to agriculture, search and rescue operations, and many others.
5G will require enormous levels of innovation in every aspect of the network, from the development of millimeter-wave communications systems to software-defined and virtual network architectures, and new wireless access methods that make it possible for many robots to operate in a small area without interfering with each other. Looming above it all is latency, which researchers must find a way to reduce to virtual insignificance.
We’re not going to waste time.
Let’s talk directly about bandwidth, in MB/s.
5G comes in at around 175 MB/s in download and 93 MB/s in upload.
I watched various YouTube videos of tests done in real-world conditions and environments with smartphones.
Note that these are early numbers; according to various sources (which do nothing to substantiate the claim), the technology could theoretically reach 1.25 GB/s. (I personally don’t believe it.)
- In Paris, with an iPhone 11 Pro on the best French carrier (Orange), I get 25 MB/s in download and 12 Mbps in upload on 4G.
Ok let’s jump into robotics now.
The robots! 🤖
Here I’m talking about mobile robots that interact with their environment using information from their sensors, and so on. I’m not talking about your food processor.
For example:
Quite a cute robot, right?
As we can see, we have few robots of this type around us today. They struggle to find a place in our societies because their interaction with users is not very compelling.
This can be explained by the limits of their intelligence: the functionalities and tasks they can respond to. Interactions are not user-friendly. Today they mostly go through tablets, where the interface is limited and demands extra effort from the user, who has to take control of a sometimes clunky interface and describe their intention within it. Robots that do offer voice interaction suffer from long delays that exceed the rhythm of human conversation (600 ms to 1200 ms). That quietly frustrates the user, who, as a result, tends to interact with the robot less and less.
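To see why those delays add up, here is a back-of-envelope latency budget for a cloud voice interaction. Every component figure below is an illustrative assumption of mine, not a measurement; only the 600–1200 ms conversational window comes from the text.

```python
# Back-of-envelope latency budget for a cloud-based voice interaction.
# Every component number below is an illustrative assumption.
window_ms = (600, 1200)  # comfortable human turn-taking window (from the text)

components_ms = {
    "audio capture + endpointing": 300,
    "network round trips (4G)": 250,
    "speech recognition": 400,
    "intent parsing": 100,
    "speech synthesis": 300,
}

total_ms = sum(components_ms.values())  # 1350 ms
print(f"total: {total_ms} ms, exceeds the {window_ms[1]} ms window: {total_ms > window_ms[1]}")
```

With these assumed figures the total lands above the window, which is exactly the kind of lag that makes users give up on voice interaction; trimming the network share of the budget is where 5G is supposed to help.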
A robot’s knowledge and actions are limited by its data and its understanding of the environment. It cannot respond to everything you want, and will sometimes only answer to signal its incomprehension or its inability to act.
To improve on the points above, artificial intelligence comes to the rescue. Well, in reality it doesn’t really exist, but that’s another matter; I recommend looking into Luc Julia’s work.
The artificial intelligence 🧠
What we call artificial intelligence today gives us an understanding of a given, finite and limited context or environment that is more advanced than any other algorithm currently available. It is an expert on one subject, one task, in a more or less precise field and environment, but that’s it.
For example, there are artificial intelligences that:
- understand speech in a given language
- recognize a limited number of objects in an image
- interpret a sentence written in a given language
So you’re always limited to a context: “a limited number”, “a given language”, and so on. If we want to perform these different tasks at the same time, as a human brain does, we need a separate neural network for each of them. Given the number and variety of tasks a human brain can handle, that adds up to a lot of artificial intelligences to fit into one robot.
Let’s take YOLO as an example! You know YOLO.
Ah, that beautiful deep-learning algorithm you may have seen on TV, in your favourite series or in company brochures.
In fact, we like YOLO because it currently offers the best ratio of performance to object classification. We also like YOLO because we like the visual results; they’re quite impressive, to be honest with you.
A good performance/classification ratio is exactly what roboticists like! Timing is a big deal in robotics; I won’t go into detail here, but we need performance and reliability. The thing is, YOLO’s “brain” can recognize 80 objects out of the box. You can retrain it to classify a few more or fewer, but you won’t get to thousands. Well… there is YOLO9000, which classifies 9000 objects, but with much lower accuracy.
So YOLO’s brain, you know, goes from 42 MB (Tiny YOLO) to 237 MB (YOLO v3). That’s at least 10x less than the Netflix episode you’re watching in Full HD.
In fact, the real challenge with artificial intelligence is training it. That’s where the heavy computing time goes. Once the “brain” is created with the right neurons, you actually end up with a small file.
You need DATA to train these artificial intelligences, because that’s what they learn from. In our case, we are talking about artificial intelligence for object recognition in images, so we need… pictures. A lot of pictures. For others it’s voice recordings, for others text, and so on.
So basically, if we were to do the math:
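As a rough order-of-magnitude sketch: the class count comes from YOLO, but the images-per-class and image-size figures below are assumptions of mine for illustration, not actual dataset statistics.

```python
# Order-of-magnitude estimate of the training data behind an object
# detector. The class count comes from YOLO; the rest are assumptions.
classes = 80              # YOLO's out-of-the-box class count
images_per_class = 1500   # assumed; COCO-scale datasets are in this range
avg_image_mb = 0.5        # assumed average size of one training image

dataset_images = classes * images_per_class        # 120,000 images
dataset_gb = dataset_images * avg_image_mb / 1024  # ~58.6 GB

print(f"~{dataset_images:,} images, ~{dataset_gb:.0f} GB to collect, label and crunch")
```

Tens of gigabytes of labeled data and the GPU-hours to digest them: that is the expensive part, even though the resulting “brain” fits in a couple hundred megabytes.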
Alright buds, now that we know how to cook an AI, we need to choose the best tools, don’t we? We need heavy compute time and a lot of data.
The cloud ☁️
Of course we’re going to talk about the cloud! This is the perfect environment to have these prerequisites at very interesting costs!
So if you don’t know what the Cloud is, don’t worry, look at this.
https://www.youtube.com/watch?v=dH0yz-Osy54 (I don’t want to run an ad for this particular cloud provider, but they made the best video).
So we have 24/7 access to titanic computing power and unlimited storage through the internet.
But you know the internet is sometimes slow. For a robot moving around, a connection that drops or slows down, just like your phone sometimes picks up badly, could hurt its performance.
That’s why, today, we still prefer to run the robot’s computations on board, on high-performance computers (which also consume a lot of electricity, by the way). Computations are fast and robust, even if they come with performance and computational limits. (Remember the recipe from before?)
In addition, network performance is hard to predict in the free and open environment that is our world. We are not talking about a laboratory or an industrial room where performance has been measured and the transmitter and receiver optimized for that particular, unchanging environment.
With 5G, we could blow several of today’s boundaries:
- Network reliability
- Latency
- Bandwidth
From then on, we would have a strong, fast link between our robot and serious computing and storage capacity (a very interesting prospect on the artificial intelligence side).
We could therefore use these mentionned technologies, 5G & Cloud to :
- Send the information received by the robot’s sensors to the cloud, to be processed there by fast and reliable artificial intelligences. Given the computing power available, this information could even be run through several different AIs in order to obtain comparable results.
This is already done with voice assistants such as Siri, Google Now or Amazon Alexa. But as I told you, timing is a big deal in robotics. Unpredictable network performance could lead to a laggy AI, and thus to risks for the robot and its surroundings. So this approach suits AIs that are not safety-critical, where the results could be really interesting.
- Download newly trained AI “brains” on the fly.
Let’s come back to our beloved YOLO. Its neural network weighs, remember, between 42 MB and 237 MB. I made this little diagram just for you.
So roughly speaking, over 4G we would need between 2 and 10 seconds to receive a new neural network from the cloud.
With 5G, on the other hand, we are, to simplify, around a second, or even well below. After fetching this AI from the cloud we still have to load it and get it running, but that doesn’t take much longer and it can be optimized.
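A quick sanity check of that arithmetic, assuming the model sizes and throughput figures quoted earlier in this post (42–237 MB weights, 25 MB/s on 4G, 175 MB/s on 5G):

```python
# Transfer-time arithmetic using the figures quoted in the text:
# model sizes in MB, link rates in MB/s.
def transfer_seconds(size_mb: float, rate_mb_s: float) -> float:
    """Seconds needed to download size_mb megabytes at rate_mb_s MB/s."""
    return size_mb / rate_mb_s

for label, rate_mb_s in [("4G (25 MB/s)", 25), ("5G (175 MB/s)", 175)]:
    tiny = transfer_seconds(42, rate_mb_s)    # Tiny YOLO weights
    full = transfer_seconds(237, rate_mb_s)   # YOLO v3 weights
    print(f"{label}: {tiny:.1f} s (Tiny YOLO) to {full:.1f} s (YOLO v3)")
```

On 4G that works out to roughly 1.7–9.5 seconds per brain; on 5G, roughly 0.2–1.4 seconds, ignoring protocol overhead and real-world throughput variation.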
With very fast access to neural networks stored in the cloud, those networks could be transferred on the fly to different robots depending on the context, the environment and the tasks at hand.
We could buy neural networks, just like we would buy smartphone apps.
Understand this or that language, translate this or that language, recognize this or that object: the possibilities are endless. Many artificial intelligence offerings already exist on a number of cloud platforms.
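Here is a minimal sketch of that “app store for brains” idea: pick a pretrained network by context and estimate its download time. The catalog, model names and sizes below are all hypothetical, invented for illustration.

```python
# A sketch of the "app store for brains" idea: pick a pretrained network
# by context and estimate its download time over 5G.
# The catalog, model names and sizes are all hypothetical.
CATALOG = {
    "kitchen":   {"model": "detector-kitchen-v1", "size_mb": 45},
    "warehouse": {"model": "detector-pallets-v2", "size_mb": 60},
    "street":    {"model": "detector-traffic-v3", "size_mb": 120},
}

def pick_model(context: str, rate_mb_s: float = 175.0):
    """Return the model name for a context and its download time in seconds."""
    entry = CATALOG[context]
    return entry["model"], entry["size_mb"] / rate_mb_s

model, eta = pick_model("warehouse")
print(f"fetching {model}, ~{eta:.2f} s over 5G")
```

At sub-second download times, swapping brains as the robot rolls from one room to the next starts to look like an ordinary software operation rather than a deployment event.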
Moreover, with the bandwidth offered by 5G, the robot could very well send data (sounds, images, etc.) recorded in its environment (during an initialization or learning step) to the cloud to retrain its neural network. This would give you a neural network best adapted to your context and environment. Apple, Google and Amazon are already doing this with their respective voice assistants. Please, let’s not get into data confidentiality…
Basically, 5G will let us send and receive data on our robot very quickly, which will expand its field of application tenfold. In addition, some of the algorithmic complexity would be shifted to optimized servers. Nowadays, the variety of computers and boards inside a robot adds complexity at every level (design, maintenance, etc.) and extra costs. It also restricts new features that would require a new board here or there, more compatibility, and so on.
I see you coming: there’s gotta be something wrong somewhere…
Yes: by using a robotic cloud, you shift part of your robot’s operation to remote computers, making your robot more accessible from afar. You’ve increased your attack surface considerably.
We have plenty of protocols and tools to protect the robot, but its computer system used to be entirely onboard. All computation, decisions, control and so on lived inside the robot and were rarely accessible from the outside (with small exceptions): a limited, hard-to-reach attack surface.
But with the robotics cloud, your robot is more connected. Its traffic goes through multiple internet servers across the globe, any of which may have vulnerabilities.
Then again… we already have connected homes with cameras, sensors, voice assistants… Many of them are produced in China with the strict minimum in terms of security and privacy. In the end, these are the same sensors, in a more or less humanoid case.
“The attack surface per device is actually shrinking,” said Robert van Spyk, senior offensive hardware security researcher at Nvidia. “It’s getting smaller for Android devices, in particular. But the digital footprint — the entire ecosystem attack surface — is expanding. It’s going to be very hard to address that with something that works at a system level, not just specific devices.”
Put simply, a chip vendor can make a difference at the chip level. But as the number of devices connected together continues to balloon, the entire ecosystem must be in sync.
“The problem is that we have 1 trillion devices that are not symmetrical,”
said Chowdary Yanamadala, senior director of security marketing at ARM.
ARM is a large semiconductor and software design company, not to say the biggest. You can be sure that every robot contains an ARM processor.
“There are lots of rich nodes, constrained nodes and mainstream nodes. With different deployment schemes and structures, there are different attack surfaces and attack vectors that we need to worry about. So how do we protect these devices? There is no one silver bullet. But we can make sure that security is addressed in a methodical manner, through a framework that can handle the appropriate threats and attack vectors that are pertinent to a particular deployment. While that framework might change, depending on the deployment, the need for a framework to address this in a methodical, systematic manner is essential. There are gaps, and we are trying to fill them. From there you can build on top of it and apply the necessary protection mechanisms, depending on the deployment.”
To address these cyber-security issues, I’d like to take the example of CloudMinds, a company working on cloud robotics. Security is their raison d’être. They offer an end-to-end system consisting of an XaaS cloud, a blockchain-based VBN network and a robot-control mobile device. The network is separate from public networks and therefore shielded from attacks coming over the public internet. It’s a genuinely innovative approach that uses recent technologies, such as blockchain, to counter cyber-security risks.
In a nutshell, 5G could make robots smart enough to earn a place in our societies.