Human Machine Interfaces and Command Over The Digital Environment

Navneet Vishwanathan
b8125-fall2023
5 min read · Dec 6, 2023

As computing technology advances and automation rises as a dominant current in augmenting capability, technologists and investors must pay particular attention to how humans interact with technology. This is all the more important in the context of military operations and national security, where the volume of information and the speed of multi-domain operations require operators to think quickly and process information with ease. Investment in Human Machine Interfaces (HMI) is necessary to support decision-making in an era of modern computing. In a defense context, these tools matter chiefly because they augment and support decision-making systems and processes, enhance command and control, produce more user-intuitive systems, and support the integration of increasingly disparate data into warfighting systems.

From an investor perspective, then, it's important to be clear about what we mean when we talk about human machine interfaces, and then to define the emergent areas and trends that will shape opportunity in HMI.

To answer the first question: based on the Department of Defense's definition of the space, we can divide the category into two major subcomponents, each with its own clear lane: Augmented & Virtual Reality and Human-Machine Teaming.

First unpacking AR/VR (or XR as an umbrella term), the interface between human and machine can largely be defined by the extent to which digital environments complement physical ones. At the lowest level of interaction is augmented reality, where translucent devices superimpose digital components onto physical spaces. One step further is the little-used term mixed reality, which layers in user interaction between the virtual and physical environments. And finally, at the most digital level is full virtual reality, where the user is transported from the physical to the virtual in an immersive experience.
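The XR spectrum described above is an ordering by degree of immersion, which can be sketched as a simple ordered enum. This is an illustrative model only; the class and member names are my own, not drawn from any standard.

```python
from enum import IntEnum

class XRLevel(IntEnum):
    """Illustrative ordering of the XR spectrum by degree of immersion.
    Lower values keep the user anchored in the physical world."""
    AUGMENTED_REALITY = 1  # digital elements superimposed on physical space
    MIXED_REALITY = 2      # user interaction spans virtual and physical layers
    VIRTUAL_REALITY = 3    # fully immersive digital environment

# Because the levels form an ordered spectrum, immersion can be compared directly:
assert XRLevel.AUGMENTED_REALITY < XRLevel.MIXED_REALITY < XRLevel.VIRTUAL_REALITY
```

Modeling the spectrum as an ordered type makes the "extent to which digital environments complement physical ones" a comparable quantity rather than three unrelated labels.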

Several trends and undercurrents in the space can be unpacked to build an investor’s perspective on AR/VR as well as subdivide this area into more digestible applications:

On the VR front, adoption is largely in its infancy, with limited transformative use cases seen in the market so far. Overall, venture investment into AR/VR technologies has been on a downward trend since 2017, leaving the space sensitive to investment risk. The promise of AR/VR as a value-add capability is still there, though the timing may not be right yet. What this may tell us is that investment in the enabling capabilities and pick-and-shovel opportunities in AR/VR could be an interesting prospect. Google Glass was long derided for coming to market too soon, but it's not unfounded to expect innovation in hardware form factors to support AR/VR technology. Savvy investors should see opportunity in better displays, better wearables, and better enabling tech that lower the activation energy required to supercharge AR/VR adoption.

One other small segment to observe here is the AR/VR simulation and content space. The use case is fairly clear: advanced simulation and training capabilities leading to potentially better training outcomes. The spectrum of capability varies widely, from traditional content platforms and metaverse-style simulations to advanced neuromorphic AI and brain-pattern-mimicking simulations. The viability and outcomes in this area are still to be determined, with few clear competitive moats and a dependence on hardware that currently lags. However, with breakthroughs in enabling systems, the value proposition could be enhanced into a potential watershed moment.

On the AR side of the house, adoption of mixed reality tools and their integration into existing platforms may be a more immediate opportunity. We've already seen early inklings of this with heads-up displays in cars, but innovation in embedded systems could prove a critical capability for increasing decision-making effectiveness and information dissemination.

Like XR, Human-Machine Teaming spans a spectrum of capabilities based on the balance of control between the human and the machine. At the far end, with the greatest human control, are “Evaluator” systems, which help assess problems while leaving decisions to humans. One step beyond are “Illuminator” systems which, like Evaluators, leave the decision to humans but provide insight rather than merely compiling information. “Recommender” systems take these insights a step further to support the automation of routine decisions. “Decider” systems go further still, making the decision based on their assessment but leaving the human to implement it. And at the other pole, “Automator” systems fully automate processes with limited human interaction. Across these modes of human-machine teaming, opportunity for better human-machine interaction exists in several critical industries:
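The five teaming modes above form another ordered spectrum, this time by machine autonomy. A minimal sketch, assuming the ordering given in the text; the class, names, and the `machine_decides` helper are illustrative, not part of any DoD framework:

```python
from enum import IntEnum

class TeamingMode(IntEnum):
    """Human-machine teaming modes, ordered from greatest human
    control (1) to greatest machine autonomy (5)."""
    EVALUATOR = 1    # machine assesses the problem; human decides
    ILLUMINATOR = 2  # machine surfaces insight; human decides
    RECOMMENDER = 3  # machine proposes routine decisions to the human
    DECIDER = 4      # machine decides; human implements
    AUTOMATOR = 5    # machine decides and acts, with limited human interaction

def machine_decides(mode: TeamingMode) -> bool:
    """In the text's taxonomy, the decision itself shifts to the
    machine only at the Decider level and beyond."""
    return mode >= TeamingMode.DECIDER
```

Encoding the spectrum this way makes the key design question explicit: at which level of the ordering does decision authority cross from human to machine.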

First, looking at advanced manufacturing: industrial IoT applications, advances in sensor technology, and other digital manufacturing trends are key enablers of better human-machine teaming. To create more reliable and autonomous human-machine teams, systems require strong, reliable data to open their aperture across two dimensions: the scope of work a machine can perform and the quality of that work. With better IoT and better sensors, human-machine systems will have better data across the entire decision-making chain, thereby understanding the impacts and consequences of decisions and enabling smarter HMT. Applications of such human-machine teams can be seen in factory settings, where better robotics will enable a transition to Industry 4.0, or in military settings, where human-machine teams can automate repetitive tasks.

A second point of interaction with the critical technology areas is AI and autonomous systems. Autonomous systems and human-machine teams largely exist on the same spectrum, varying by level of automation. Here, innovation in artificial intelligence can advance applications in semi-autonomous systems and human-machine teams. On AI and human-machine teaming, developments such as large language models, image models, and neuromorphic AI can radically expand the scope of work that machines can do. An AI that can behave and think in human patterns can not only make better decisions but can make more human, more interpretable decisions, enabling more effective teaming and extrapolation of insight.

Each of the areas highlighted, across both the extended reality and human-machine teaming flavors, represents an interesting opportunity to support operators in making better decisions. Half the battle with advanced computing technology is developing a better engine, system, or processing capacity to make better decisions. The other half is developing an effective interface through which the human operator can comprehend, manage, and process the computer's output. The relationship between human and machine is itself one that will be defined through technological progress, and investors in this space are wise to get ahead of the trends.

For a thoughtful technologist, operator, or investor, opportunity exists in this space, but it must be grounded in the broader context of adoption and integration into the decision-making loop. For military operators, the term of art is the “kill chain.” Humans remain highly relevant to this process as decision makers within the loop. Any effort to augment human decision-making within the kill chain must consider the human-machine teams that bridge our relationship between the real and the digital.

Passionate about exploring dual-use and defense tech venture and innovation. MBA @ColumbiaBiz. Twtr: @_nav_v, In: @nvishwanathan