Why the Cloud and Fog Can Improve Robotics

Marcelina
Shade Robotics

--

A Brief Overview of the Cloud and Fog

Cloud computing allows people to use a provider’s larger servers to perform high-intensity processing. A company may want to run some high-powered AI algorithms but may not have the equipment to do so. Cloud computing makes this possible through off-site servers. It is a way of extending computing capabilities without investing in new hardware.

Fog computing is an extension of cloud computing. The fog is the layer between edge devices and the cloud. Where the edge is the hardware on the device itself and the cloud consists of large data centers in centralized locations, fog nodes are the devices that lie in between: more computation than the edge, less latency than the cloud. As a result, deploying fog computing can absorb some of the pressure on a cloud data center. In theory, any device with spare resources can act as a fog node and contribute resources to fog computing. Fog nodes sit in closer proximity to the edge devices from which they receive data, making fog computing superior to cloud computing in terms of response speed. Additionally, they are able to analyze and filter the data they receive and forward only the relevant data to the cloud, further limiting traffic congestion.
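To make that filtering role concrete, here is a minimal sketch of a fog node that analyzes a batch of readings locally and forwards only a digest plus anomalies to the cloud. Every name here (SensorReading, forward_to_cloud, the temperature threshold) is invented for illustration, not a real API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    robot_id: str
    temperature_c: float

def forward_to_cloud(payload: dict) -> None:
    # Stand-in for a real uplink, e.g. an HTTPS POST to a data center.
    print(f"-> cloud: {payload}")

def fog_node_process(batch, alert_threshold=80.0):
    """Analyze a batch locally; forward only a digest plus anomalies."""
    anomalies = [r for r in batch if r.temperature_c > alert_threshold]
    digest = {
        "count": len(batch),
        "mean_temperature_c": round(mean(r.temperature_c for r in batch), 2),
        "anomalies": [(r.robot_id, r.temperature_c) for r in anomalies],
    }
    # Raw readings stay at the fog layer; only the digest crosses the WAN.
    forward_to_cloud(digest)

fog_node_process([
    SensorReading("amr-01", 41.2),
    SensorReading("amr-02", 39.8),
    SensorReading("amr-03", 92.5),  # anomaly: forwarded explicitly
])
```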

Cloud Computing vs Fog Computing vs Edge Computing

Bringing the Cloud and Fog to Robotics

Cloud and Fog Robotics were introduced as a way for robots to enhance their capabilities by taking advantage of the processing power of large servers and computers: robotics without the hardware limitations. Often in robotics, either the equipment is too expensive or complex programming and processing is out of reach simply due to computational constraints such as insufficient RAM or GPU power. With cloud and fog robotics, integrating computationally intensive algorithms becomes straightforward (with trade-offs, as we will see later). By offloading much of a robot’s data processing to off-site (or near-site) servers and running only control-critical tasks locally (such as safety mechanisms and collision detection), computationally intense robotics tasks can be solved more efficiently while infrastructure costs are reduced, leaving us with more adaptable, scalable, interoperable, and reliable robots. Let’s dive into some of the immense advantages that robotics gains from applying the cloud and fog.
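The local/offload split described above can be as simple as a dispatch rule. The sketch below is a hedged illustration, assuming a hypothetical task naming scheme and a stand-in offload() call rather than any particular RPC framework.

```python
# Tasks the robot must always run on board, no matter what.
LOCAL_CRITICAL = {"collision_detection", "emergency_stop", "motor_control"}

def run_locally(task: str, data: bytes) -> str:
    return f"local result for {task}"

def offload(task: str, data: bytes) -> str:
    # Stand-in for an RPC to a cloud or fog server (e.g. gRPC or HTTP).
    return f"remote result for {task}"

def dispatch(task: str, data: bytes) -> str:
    # Control-critical work never leaves the robot; everything else may.
    if task in LOCAL_CRITICAL:
        return run_locally(task, data)
    return offload(task, data)

print(dispatch("collision_detection", b"lidar-scan"))   # stays on board
print(dispatch("object_recognition", b"camera-frame"))  # goes off-site
```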

Adaptability

Smart devices like robots benefit greatly from central data processing because it allows them to easily access colossal collections of information (think a hive mind) and powerful computing resources. This makes for more flexible robots that can adjust to various scenarios at any time. Such adaptability is crucial to keeping robots robust.

Let’s talk through a quick example: imagine you have a new autonomous mobile robot that is being rolled out across all your warehouses. Each robot is assigned to an associated warehouse, learning the mapping, the locations of items, and the item classifications specific to that warehouse. Now imagine a new robot coming in, a robot moving to another warehouse, or even a new item entering inventory. Without a central source of truth, these AMRs have to relearn everything from scratch. Introducing a hive mind leads to more adaptable robots and faster ROI, which is especially valuable for robots-as-a-service applications and companies.

Having data storage capable of holding more than a robot could hold onboard unlocks the potential for a robot to learn about and adapt to more environments, tasks, and potential collaborators (other robots). A self-driving car, for example, might have to travel in areas or terrains different from the ones in which it was trained, so having data about new environments that it can pull from the cloud makes for a more adaptable system, all without having to store additional data onboard. Different AVs can even learn from each other’s travel experiences, exchanging data about environments they have already encountered via the cloud. This capability is extra useful when working with a large fleet of robots that should all be able to perform the same operations, because the cloud makes it easy and efficient to reuse training data when expanding the fleet.
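As a toy illustration of that data sharing, the sketch below uses an in-memory dict as a stand-in for a real cloud database; every name here is hypothetical.

```python
cloud_store = {}  # region -> shared environment data

def upload_experience(region: str, data: dict) -> None:
    """A robot publishes what it learned about a region."""
    cloud_store.setdefault(region, {}).update(data)

def fetch_environment(region: str):
    """Another robot pulls that data instead of re-learning from scratch."""
    return cloud_store.get(region)

# Vehicle A drives a new neighborhood and shares what it observed.
upload_experience("district-7", {"speed_bumps": 4, "gravel_road": True})

# Vehicle B, which has never been there, adapts using A's data.
print(fetch_environment("district-7"))
```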

Even more than the cloud, fog robotics leverages this adaptability because it allows robot tasks to be carried out through the use of devices that are not part of the robot. Fog computing does not depend on the availability of any single connected edge device, which means the fog must be able to assign tasks to the best-performing devices at any given moment. Within the fog architecture, tasks can easily be switched to devices that are working correctly in order to continue task execution and avoid total failure in cases where a part, or even an entire robot, malfunctions.
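One way to picture that reassignment: rank the currently healthy nodes and route the task to the best one, skipping any that have failed. The node names, the random health proxy, and the task string below are all assumptions for the sketch.

```python
import random

nodes = {"fog-a": True, "fog-b": True, "fog-c": True}  # node -> healthy?

def health_score(node: str) -> float:
    # In practice: measured latency, load, battery level; here, a random proxy.
    return random.random() if nodes[node] else -1.0

def execute_with_failover(task: str) -> str:
    # Try nodes from best to worst, skipping any that have failed.
    for node in sorted(nodes, key=health_score, reverse=True):
        if nodes[node]:
            return f"{task} ran on {node}"
    raise RuntimeError("no healthy fog node available")

nodes["fog-a"] = False  # a node (or a robot acting as one) malfunctions
print(execute_with_failover("grasp_planning"))  # execution continues elsewhere
```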

Scalability

Scalable technologies and solutions are always favorable, and this is especially true in robotics as automation becomes mainstream across most industries. Companies like Amazon utilize robotic fleets to navigate and manage their large warehouses, and these fleets must be able to increase performance as they inevitably grow in size. One of the best things the cloud provides is an elastic infrastructure, which allows for automatic deployments and thus ease of scalability. With a central cloud network, where the same data is available to every robot in the fleet, expanding the fleet is as simple as connecting additional robots to the network, which instantly gives them the same data and capabilities as the rest of the fleet. Typically, scaling up a system with many robots means adding more servers, but in the cloud this happens automatically. Hundreds of robots can be deployed without any change to the setup, and workflows can be transmitted to all of them over the cloud in minimal time.
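A small sketch of that “connect and inherit” scaling model, assuming a hypothetical registry dict in place of a real elastic cloud service:

```python
fleet_config = {"map_version": 12, "item_catalog": ["bin", "pallet", "tote"]}
fleet = {}  # robot_id -> the shared state it sees

def join_fleet(robot_id: str) -> None:
    # No per-robot provisioning: joining is just subscribing to shared state.
    fleet[robot_id] = fleet_config

for i in range(1, 4):  # scale out by simply connecting more robots
    join_fleet(f"amr-{i:02d}")

print(len(fleet), "robots, all on map version", fleet["amr-01"]["map_version"])
```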

The fog ecosystem embraces scalability as well: the number of connected devices is huge and can easily vary, allowing for flexibility in addition to scale. Fog robotics thus allows a robotic system to be scaled up easily, since there is far more resource availability among the collection of devices in the fog, which can be integrated into the system to expand it, whether in physical size or in computational power and efficiency.

Cheaper on Hardware

Developing a robot can be expensive, and the more advanced the robot is computationally, the more hardware is required on board to process large amounts of data and perform complex computations. That places a huge burden on roboticists, who are forced to build bulky robots using extremely expensive hardware such as processors, sensors, and actuators. Production costs for complex humanoid robots can quickly climb into the millions. Often, robotics projects are even forced to make performance tradeoffs due to the lack of affordable hardware.

The cloud offers roboticists the ability to offload a lot of computation to other servers. For example, data can be pushed to the cloud to build a map of a robot’s environment, which can then be transmitted back down to the robot for local navigation. The same map can be transmitted to other robots using the cloud. Robots that take advantage of this can be produced with less hardware, making the overall product much cheaper and easier to manage. The costs can instead be allotted to additional robots, research, and other useful robotic endeavors. Additionally, when robots offload computation, carry fewer sensors, or use less powerful processing units, they save a lot of energy. This is a huge advantage, as it allows them to retain their battery life and remain autonomous.
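The map-offloading workflow described above might look like the following sketch, where build_map() is a placeholder for a heavy mapping pipeline (for example, cloud-side SLAM) rather than any real library call.

```python
def build_map(scans):
    # Placeholder for computation too heavy for the robot's own hardware,
    # e.g. a full SLAM pipeline running on off-site servers.
    return {"cells": len(scans), "resolution_m": 0.05}

def cloud_mapping_service(scans):
    shared_map = build_map(scans)  # runs in the cloud
    return shared_map              # transmitted back down to the robot

scans_from_robot_1 = [[0.4, 1.2, 3.1], [0.5, 1.1, 3.0]]
warehouse_map = cloud_mapping_service(scans_from_robot_1)

# Other robots navigate with the same map without ever computing it.
robot_2_map = warehouse_map
print(robot_2_map)
```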

Another solution to hardware constraints is provided by the fog: real-time inference offload. When robots are unable to carry all the hardware needed to perform everything onboard, being connected to the fog lets them use hardware from surrounding devices. For example, if a robot is not equipped with all the required sensors, it can tap into the sensors of a nearby fog node to complete a task. This type of interaction is natural for something like a robotic fleet, in which robots utilize each other’s data anyway; in this way, each robot in the fleet also serves as a fog node.
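A minimal sketch of that sensor borrowing, with invented node and sensor names:

```python
node_sensors = {
    "amr-01": {"lidar"},
    "amr-02": {"lidar", "thermal_camera"},
}

def read_sensor(requester: str, sensor: str):
    # Use your own sensor if you have it...
    if sensor in node_sensors.get(requester, set()):
        return f"{requester} read its own {sensor}"
    # ...otherwise borrow one from a nearby fog node.
    for node, sensors in node_sensors.items():
        if node != requester and sensor in sensors:
            return f"{requester} borrowed {sensor} from {node}"
    return None  # no node in range has this sensor

print(read_sensor("amr-01", "thermal_camera"))
```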

Interoperability

In the robotics world, communication can occur between robots of the same type as well as between different robots. Maintaining this line of connectivity is essential for collaborative robots, which need to share state and information with others. When robots cooperate, the result is more specialized robots that can complete more complex tasks, because each one constantly receives multiple streams of information that empower the tools carrying out those tasks. Yet putting this into practice is easier said than done. With different devices having their own messaging systems and protocols, standardization becomes more and more important.

In the cloud and the fog, every device in the network can interact with any other device in the network, allowing many types of cooperation. This connects back to the idea of a hive mind shared by numerous robots, all having access to the same information. A central brain of this kind, which promotes a universal operation schema, is what organizes robot fleets and can even enable communication and cooperation between robots from different companies. Imagine a world where self-driving trucks can communicate and dock autonomously with warehouses.
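To illustrate the standardization point: if every vendor’s robot publishes the same message envelope, any device on the network can read it. The schema name and fields below are invented for the example, not an existing standard.

```python
import json

def make_status_message(robot_id: str, vendor: str, state: str, x: float, y: float) -> str:
    """Serialize a status update into a shared, vendor-neutral envelope."""
    return json.dumps({
        "schema": "fleet.status/v1",  # the common contract all parties agree on
        "robot_id": robot_id,
        "vendor": vendor,
        "state": state,
        "position": {"x": x, "y": y},
    })

# Two robots from different companies, one message format.
msg_a = make_status_message("truck-12", "AcmeAV", "docking", 4.0, 9.5)
msg_b = make_status_message("arm-03", "RoboCorp", "idle", 0.0, 0.0)

for msg in (msg_a, msg_b):
    print(json.loads(msg)["state"])  # any consumer can parse either message
```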

Interoperability also enables existing robots to integrate into new spaces and platforms and thus heavily contributes to the growth and progress of the robotics industry. In this way, it is an essential step towards a future robotics environment where the full potential of robots is realized.

Watch robots using the cloud to share information and collaborate to complete tasks

There Cannot Be Light Without Darkness…

Unfortunately, the cloud and fog robotics vision is still early and, as is common across all innovation, these promising advantages are accompanied by an array of challenges. Although cloud and fog computing certainly take robotics to new heights, we’re not working with flawless technologies here, so it is crucial to consider how factors such as latency, security, connectivity, and feasibility fit into the picture.

Latency

Perhaps the largest issue we face when incorporating the cloud and fog into robotics is latency. Due to the rapid increase in smart devices, and consequently in data and traffic, cloud performance degrades as more devices fetch more data to and from the cloud, causing resource bottlenecks at the data center and in turn creating network congestion, latency, lower efficiency, and a significant decrease in reliability. This is bad news for cloud robotics, since many robotics applications, such as robot navigation, have strict requirements on processing latency. High latency prevents robotic systems from working in near real time, which is essential for reasons like safety and fluidity. For example, the sensors in a Tesla car constantly monitor its surrounding environment. If they detect an obstacle that the car must move around or stop in front of, the sensor data must be processed instantly so the car can recognize the obstacle before it comes into contact with it. If detection is delayed by even a fraction of a second because of latency from the cloud connection, there could be major issues and human safety concerns.
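One common way to handle this tension is to bound the wait: ask the remote model, but never block past a hard real-time budget, falling back to an on-board detector. The sketch below simulates this with a sleep in place of real network latency; the 50 ms budget and all function names are assumptions.

```python
import concurrent.futures
import time

DEADLINE_S = 0.05  # e.g. a 50 ms perception budget

def cloud_detect(frame: bytes) -> str:
    time.sleep(0.2)  # simulate network and queueing latency
    return "obstacle: pedestrian (cloud model)"

def local_detect(frame: bytes) -> str:
    return "obstacle: unknown object (on-board model)"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def detect(frame: bytes) -> str:
    future = pool.submit(cloud_detect, frame)
    try:
        # Use the cloud answer only if it arrives within the budget.
        return future.result(timeout=DEADLINE_S)
    except concurrent.futures.TimeoutError:
        # Too slow: discard the pending result and decide on board.
        return local_detect(frame)

print(detect(b"camera-frame"))  # prints the on-board fallback
```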

Beyond network congestion, simply sending data across current cellular networks is slow and inconsistent by design. Though we look forward to the promises of 5G and its ability to mitigate latency, there is still critical infrastructure to be built before it scales across multiple locations, including rural environments.

Fog robotics arose from the increasing demand for cloud services and the fact that cloud robotics applications require low latency in order to be truly useful. Fog helps mitigate some of that latency. Since fog nodes are connection points between an edge device and the cloud, the fog allows us to store and process data closer to the robot, which minimizes traffic to the cloud and reduces latency while still offloading computation from the robot. Fog computing promises a real-time network that the cloud lacks.

In the end, more work needs to be done to optimize the allocation of operations according to their levels of importance (for example, human-safety-critical operations must be done on board, whereas speech synthesis and high-level mission planning can be done off-board).
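One simple way to encode such an allocation policy is to give each operation a latency tolerance and derive its placement tier from it. The tiers, thresholds, and operation names below are assumptions for illustration, not a standard.

```python
PLACEMENT_BY_TOLERANCE_MS = [
    (10, "onboard"),           # safety-critical: must run locally
    (100, "fog"),              # near-real-time: a nearby fog node
    (float("inf"), "cloud"),   # anything slower can go to the data center
]

operations = {                 # operation -> tolerable delay in ms
    "emergency_stop": 5,
    "local_navigation": 50,
    "speech_synthesis": 500,
    "mission_planning": 5000,
}

def place(op: str) -> str:
    tolerance = operations[op]
    for limit, tier in PLACEMENT_BY_TOLERANCE_MS:
        if tolerance <= limit:
            return tier
    return "cloud"

for op in operations:
    print(f"{op:17s} -> {place(op)}")
```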

Security

Another issue that stems from increased data and traffic to the cloud is security. Robots become vulnerable to privacy breaches, process hacks, and ransomware attacks when data is stored and operations are performed remotely in the cloud. Hackers can use remote execution to gain access to robots connected to the cloud and alter robotic services in ways that change robot behavior harmfully.

This is where having a large fleet of robots all connected to and controlled by one cloud network can be extremely risky (a single point of failure). If a hacker gains access to the network, they can mount dangerous attacks on a massive scale. This, however, can be mitigated with VPN connections.

An even better solution is found in the distributed architecture of fog robotics, where being distributed by nature creates additional layers of security: network security protocols can be applied between network nodes, and entire systems are safeguarded from cloud to device. Fog nodes can monitor the security status of nearby devices to quickly detect and isolate threats, so if one node is compromised, the rest of the network can still be safe. In addition, since computation is separated across various nodes, no node or point in the network has a complete view of an ongoing operation, following least-privilege and zero-knowledge security practices.

Connectivity

Imagine, in the future, having a life-saving operation performed on you by a surgical robotic arm. Now imagine the trouble that could ensue if the robot suddenly loses its connection to the cloud. It’s simply not an option.

Both the cloud and the fog are subject to connectivity issues that can arise from various factors such as network congestion, power failures, and security breaches. These network connectivity issues make cloud and fog robotics not fully reliable, as robots can become not only slow but completely inoperable at any point. Although systems may have backup protocols onboard for when such failures occur, such protocols are not possible for all systems or for certain computations and operations. The fog, however, has the advantage here in that fog nodes can form resource-sharing clusters as a fallback, so robots can keep sharing resources and performing tasks even if the network goes down. This idea follows the principles of swarm robotics, in which groups of robots operate without relying on any external infrastructure or form of centralized control.
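A sketch of that fallback, where nearby fog nodes (including the robots themselves) pool what spare capacity they have when the uplink drops. Everything here, from the peer names to the capacity units, is a simplified stand-in.

```python
cloud_reachable = False  # e.g. detected via failed heartbeats

peers = {  # nearby fog nodes and their spare compute (arbitrary units)
    "amr-01": 2,
    "amr-02": 1,
    "charging-dock": 4,
}

def run_task(task: str, cost: int) -> str:
    if cloud_reachable:
        return f"{task} offloaded to the cloud"
    # Degraded mode: pick a peer with enough spare capacity, swarm-style.
    for peer, capacity in sorted(peers.items(), key=lambda kv: -kv[1]):
        if capacity >= cost:
            peers[peer] -= cost
            return f"{task} ran on peer {peer} (cluster fallback)"
    return f"{task} deferred until connectivity returns"

print(run_task("path_replanning", 2))
print(run_task("model_retraining", 10))
```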

Thus this element of fault tolerance needs to be considered, and more code needs to be written so that measured latency and connectivity are used to determine the best allocation of algorithms and preserve robotic performance.

On the bright side, the growing 5G network offers major connectivity and latency improvements, hopefully making these challenges less of a worry for cloud and fog robotics in the future.

Development

One of the more overarching things we must consider when it comes to cloud and fog robotics is how feasible the development of such technologies is. Local development with the Robot Operating System (ROS) is already its own feat. Integrating critical algorithms such as YOLOv5 or motion planners is still far from plug-and-play. ROS 2 seeks to make improvements via virtualization and real-time support, but the ecosystem is still limited, with few connections to the fog and cloud. Though it is widely used by roboticists, ROS, besides being fairly difficult to start out with, tries to be universal and comes with many features you don’t really need. This leaves developers modifying ROS for their specific needs, substituting certain pieces with other libraries or writing their own libraries completely from scratch. The result is a complex and lengthy development process, tons of variation among common robot operations such as path and motion planning and navigation, and no solid universal standard for robots. Currently, the technical complexity and the lack of standardization between hardware and software components in robotics are creating big disconnects that limit innovation and drive many away. Without easy software integrations to create state-of-the-art robots, or with only limited access to state-of-the-art algorithms, how can we assume that every robot will be easily connected to the fog and cloud?

Takeaways

Cloud and fog robotics are no longer future phenomena; they’re here. However, there are certain hurdles that we must get over before they can be widely implemented without worry. Many of the challenges that exist within the cloud robotics space can be mitigated with fog robotics but even fog is not a perfect solution. In many cases it is subject to similar vulnerabilities, just on a smaller scale.

Despite the challenges that hinder present capabilities, there’s no arguing that the cloud and the fog are game changers for robotics and will take the industry far. The current state of technology, data, and automation simply demands that cloud and fog robotics become the norm.
