In my previous column — Factories of the Future (IIoT + AIoT) — we discussed the ongoing evolution of the Industrial IoT (IIoT) and the Artificial Intelligence of Things (AIoT). As part of this, we considered how we are starting to see the deployment of more sophisticated sensor solutions (e.g., audio, vibration, and video) coupled with artificial intelligence (AI) and machine learning (ML) technologies. As we noted:
It isn’t difficult to imagine a not-so-distant future in which machines in factories are equipped with AI/ML systems and appropriate sensors, and these systems are trained by their human companions to have similar levels of expertise. Even when no humans are present, a slight change in vibration, a small alteration in sound, a tiny waft of smoke may cause the automated systems to leap into action and take whatever steps are necessary to keep the factory running or to shut things down gracefully before serious problems occur.
However, while all of this is tremendously exciting, I personally believe that the real game changer is going to be when artificial intelligence is combined with augmented reality (AR). This amalgamation will dramatically transform the way we interact with our systems, the world, and each other (see also 5G and AR Meet 50,000 Fans at Super Bowl 2025). In the case of industry, this will involve an IIoT + AIoT + AR troika.
Unfortunately, AR is largely misunderstood. Most people think of AR in an “Arnold Schwarzenegger as the Terminator” incarnation, in which scrolling textual information is presented as an overlay on the real world. In reality, AR is much more than this; in addition to textual information, AR can involve graphics, sounds, sensations (via haptic interfaces), and even scents.
It’s also important to note that AR is just one piece of the puzzle. To complement the way in which AR adds information to the scene, there’s also diminished reality (DR), in which information is reduced (diminished) or removed (deleted) from the scene (see also What the FAQ are VR, MR, AR, DR, AV, and HR?). The combination of AR and DR is gathered together under the umbrella of mixed/mediated reality (MR). If two or more people wearing MR headsets were having a conversation in a noisy environment, for example, their headsets could fade down the background noise and boost each other’s voices. Alternatively, if technicians were working on a large, complex machine, their headsets might present any non-essential views in black-and-white, leaving only the portion of the machine being considered in color.
Let’s consider how an MR headset augmented with AI might be used in an industrial environment. While looking at a piece of equipment, the wearer could be presented with a wealth of information, such as the current speeds and feeds and the temperatures at various parts of the machine (each presented in the context of its acceptable range). Meanwhile, the headset’s AI could be reviewing the historical data associated with the machine to look for any emerging patterns or trends. The AI could also be listening to the sounds being made by the machine and combining this with vibration data being gathered by sensors on the machine. If something appeared to be amiss, the AI could fade back most of the sounds and boost any unusual “squeaks” to alert the wearer to a potential problem. Meanwhile, the AI could be communicating with the factory and the cloud to see if anyone else had reported similar occurrences on other machines of this type.
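To give a flavor of what the headset’s AI might be doing under the hood, here is a deliberately minimal sketch of that kind of sensor fusion: compare current audio and vibration readings against the machine’s historical baseline and flag anything that deviates sharply. The feature names, units, z-score approach, and threshold are all illustrative assumptions on my part, not a description of any real industrial system.

```python
# Hypothetical sketch: flag a machine as anomalous if its current audio
# level or vibration reading sits far outside its historical baseline.
from statistics import mean, stdev

def anomaly_score(history: list, reading: float) -> float:
    """How many standard deviations `reading` sits from the
    historical mean (a simple z-score)."""
    return abs(reading - mean(history)) / stdev(history)

def is_anomalous(audio_hist, vib_hist, audio_now, vib_now, threshold=3.0):
    """Flag the machine if EITHER channel deviates beyond `threshold`
    standard deviations -- e.g., an unusual squeak or a new shake."""
    return (anomaly_score(audio_hist, audio_now) > threshold or
            anomaly_score(vib_hist, vib_now) > threshold)

# Illustrative historical baseline: steady ~55 dB hum, ~0.2 mm/s vibration.
audio_hist = [54.8, 55.1, 55.0, 54.9, 55.2, 55.0, 54.7, 55.3]
vib_hist = [0.20, 0.21, 0.19, 0.20, 0.22, 0.20, 0.19, 0.21]

print(is_anomalous(audio_hist, vib_hist, 55.0, 0.20))  # normal: False
print(is_anomalous(audio_hist, vib_hist, 62.0, 0.85))  # squeak + shake: True
```

A production system would of course use far richer features (spectral signatures, trained models, fleet-wide data from the cloud), but the core idea — fuse multiple sensor channels and compare them against learned norms — is the same.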
It’s entirely possible that a whole factory could be brought to life using an MR headset. In the case of fluids, the AI could access the status of various valves and control systems, and the headset’s wearer could be presented with a simulated view of fluids flowing through the pipes. In the case of power, live wires could be presented with a slight glow. With regard to data cables, such as Industrial Ethernet, graphical representations of packets of data could be presented showing which cables were currently active, or — perhaps more importantly — inactive.
Imagine the possibilities with regard to training someone on a new machine. Initially, the technician could be guided by the AI. At some stage, a human expert at a remote location could be called into play. That expert could tap into the technician’s headset to see a first-person view and to provide detailed instructions. Later, when it came to reassembling the machine, the AI could guide the technician through the required steps, including which machine screws go where, and even remind the technician where any missing parts are to be found (“They’re behind that can of Coca-Cola you were drinking out of a minute ago”).
To be honest, we’ve really only touched on some of the possibilities here. Most people don’t have any idea just how transformative the combination of these technologies is going to be, but I personally think this is going to change the world as we know it.