AI, humans and loops

Pawel Rzeszucinski, PhD
8 min read · Feb 29, 2024

Being in the loop is only part of the story

Humans and loops have more in common in the AI context than one might think (Source: DALL-E)

Human and the loop? What loop?

The phrase “Human-in-the-loop” (HITL) is used in various contexts and can be defined as a model requiring human interaction. The term originated from a time when airplanes became so complex that many of their operational functions had to be automated. In the realm of AI, the idea of “human in the loop” is widely recognized, primarily because it functions at the core of operations and is arguably the most crucial aspect of ensuring the safety and reliability of a product’s output. Nevertheless, several other configurations involving humans and loops hold equal importance; they are simply discussed less often because they are less visible. This article explores the various ways in which humans engage with “the loop” to ensure the high quality of AI products and systems, not just from a technological perspective.

Understanding the “Human and Loop” scenarios in AI

The process of responsible AI creation encompasses numerous aspects. These include quality assurance, ethical considerations, and legal compliance. It also involves understanding user needs and tailoring the AI system’s output to fulfill its intended purpose. In all these situations, the critical importance of human involvement cannot be overstated. Apart from “Human-in-the-loop,” we also differentiate “Human-on-the-loop,” “Human-above-the-loop,” and “Human-behind-the-loop”. Each scenario outlines a different level of human involvement and oversight in AI systems, reflecting a spectrum of engagement from direct interaction to strategic governance.

Human in the Loop: collaborative decision-making

“Human-in-the-loop” (HITL) systems are a unique blend of AI and human expertise, designed to leverage the strengths of both. These systems necessitate direct human intervention in decision-making processes, creating a symbiotic relationship between humans and AI.

In HITL setups, AI serves as an assistant to humans, providing recommendations based on its analysis or executing tasks under human supervision. This collaborative approach is designed to ensure that the analytical capabilities of AI are effectively matched with human judgment. The goal is to create a system where the strengths of one can compensate for the weaknesses of the other, resulting in a more reliable system overall.

The value of HITL systems can be seen in various fields, but one of the most illustrative examples is in the medical field, particularly in radiology. Consider the role of a radiologist using AI to analyze X-ray images. In this scenario, the AI system acts as a preliminary filter, identifying potential areas of concern such as tumors or fractures in the X-ray images. The findings are then reviewed by the radiologist, who brings their expertise and judgment to bear on the final diagnosis. This synergy between the AI and the radiologist enhances diagnostic accuracy. It combines the AI’s ability to quickly analyze vast datasets and identify patterns with the radiologist’s experienced eye for detail and their deep understanding of the human body and medical conditions.

Human-in-the-loop is a crucial component of AI systems, particularly when algorithms struggle to understand input or perform specific tasks, or when the model’s accuracy needs improvement. It is also valuable in fields where the cost of errors is high or where the required data is unavailable.
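The routing logic behind a HITL setup can be sketched in a few lines. The following is a minimal, illustrative sketch, not a production design: the model name, the 0.90 confidence threshold, and the reviewer callback are all hypothetical choices for this example. Predictions the model is confident about pass through automatically; everything else is escalated to a human.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float

def hitl_decision(
    model: Callable[[str], Prediction],
    human_review: Callable[[str, Prediction], str],
    item: str,
    threshold: float = 0.90,  # hypothetical cutoff; tune per application
) -> str:
    """Route low-confidence model outputs to a human reviewer."""
    pred = model(item)
    if pred.confidence >= threshold:
        return pred.label            # AI decides on its own
    return human_review(item, pred)  # human makes the final call

# Toy usage: a stub "model" that is unsure, and a radiologist who corrects it
stub_model = lambda item: Prediction("no finding", 0.62)
radiologist = lambda item, pred: "suspected fracture"
print(hitl_decision(stub_model, radiologist, "xray_001.png"))
# prints "suspected fracture" — confidence 0.62 is below the 0.90 threshold
```

The key design choice is where to set the threshold: lower it and the human sees fewer cases but more errors slip through; raise it and the workload shifts back to the human.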

Human on the loop: supervisory oversight

In “Human-on-the-loop” (HOTL) configurations, AI systems are designed to operate with a high degree of autonomy, with humans playing a supervisory role rather than being directly involved in every decision or action. This approach allows the AI system to perform tasks independently, leveraging its computational capabilities and speed. However, it’s crucial to note that human operators are always ready to intervene if necessary. They monitor the system’s operations, ready to step in and override the AI’s decisions when the situation demands it. This is particularly important in scenarios where the AI’s decision-making could have significant real-world consequences.

A prime example of HOTL configuration can be seen in the operation of military drones equipped with surveillance capabilities. These advanced pieces of technology are capable of autonomously patrolling vast areas, identifying potential targets, and even tracking movements with remarkable precision. However, despite their advanced capabilities, they are not left to operate entirely on their own. A human operator is always on the loop, monitoring the system’s activities from a remote location. This operator can intervene at any moment, making crucial decisions when the situation demands it. This ensures that ethical considerations, strategic implications, and rules of engagement are always accounted for, providing a vital check on the system’s autonomous operations.

Human on the loop is particularly needed when AI systems are making complex decisions with significant consequences, operating in uncertain or unpredictable environments, learning and adapting over time, or when ethical and legal compliance must be ensured. In contrast to HITL scenarios, where humans are integral parts of the decision-making process, HOTL’s main goal is to keep humans in a merely supervisory role over the system, involving them only when necessary. This equilibrium of harnessing AI’s advantages while ensuring it aligns with predefined goals, whether business or societal, is a fundamental principle of the HOTL approach.
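The difference from HITL can be made concrete: in HOTL the system acts on its own by default, and only actions it flags as risky are held for the supervisor. The sketch below is a simplified illustration under assumed names; the `policy` and `needs_escalation` functions stand in for whatever autonomy and escalation rules a real system would use.

```python
class HumanOnTheLoop:
    """The AI acts autonomously; risky actions are held for a supervisor."""

    def __init__(self, policy, needs_escalation):
        self.policy = policy                      # autonomous decision function
        self.needs_escalation = needs_escalation  # predicate marking risky actions
        self.pending = []                         # actions awaiting human review

    def step(self, observation):
        action = self.policy(observation)
        if self.needs_escalation(action):
            self.pending.append((observation, action))
            return None                           # hold the action for the human
        return action                             # execute autonomously

    def human_review(self, approve):
        """The supervisor clears or vetoes every held action."""
        approved = [a for _, a in self.pending if approve(a)]
        self.pending.clear()
        return approved

# Toy drone example: patrolling is routine, engaging requires a human decision
policy = lambda obs: "engage" if obs == "possible target" else "patrol"
drone = HumanOnTheLoop(policy, needs_escalation=lambda a: a == "engage")
print(drone.step("empty field"))      # prints "patrol" — executed autonomously
print(drone.step("possible target"))  # prints "None" — held for the operator
```

Note that the human is not consulted on routine steps at all; the supervisory role shows up only in the escalation path, which is exactly the HOTL contract described above.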

Human above the loop: strategic governance

“Human-above-the-loop” (HATL) is a concept that underscores the importance of human oversight in the deployment and ethical use of AI. It is rooted in the belief that while AI systems can automate tasks and make decisions based on data, human intervention is crucial in setting the strategic direction and ethical boundaries for these systems.

In the HATL model, leadership plays a pivotal role in establishing policies and guidelines that govern AI applications. This role is not only about defining operational procedures but extends to addressing ethical considerations, ensuring compliance with regulations, and contemplating the long-term implications of AI deployment.

For instance, consider a company’s Chief AI Officer. While they might not interact with AI systems on a daily basis, their role in shaping the ethical framework within which these technologies operate is indispensable. They are responsible for setting policies related to privacy, data security, and ethical AI use. These policies serve as a compass, guiding the company’s AI applications to align with its core values and societal norms.

The CAIO’s decisions can have far-reaching effects. For example, a policy prioritizing data privacy can influence the design of AI systems, ensuring they are built to protect user data. Similarly, a commitment to ethical AI use can prevent the deployment of AI applications that could potentially harm individuals or society.

Moreover, the CAIO’s role in HATL extends beyond policy-making. They are also responsible for fostering a culture of ethical AI use within the organization. This involves promoting transparency, accountability, and fairness in AI applications, and encouraging employees to consider the ethical implications of their work with AI.

The HATL concept emphasizes that while AI has the potential to revolutionize various aspects of our lives, its deployment should be guided by human oversight. Leaders, such as CAIOs, play a critical role in this process, setting the strategic direction and ethical boundaries for AI applications. Their decisions can shape the impact of AI on our society, highlighting the importance of human involvement in the loop of AI deployment.

Human behind the loop: output analysis and improvement

“Human-behind-the-loop” (HBTL) signifies the post-operational role of humans in an AI system’s lifecycle. In this model, humans are not engaged in real-time supervision, decision-making, or interference with the AI’s actions. Their role comes into play after the AI system has finished its operations or made its decisions. This involves tasks like scrutinizing the system’s performance, comprehending its decision-making methodology, pinpointing areas for enhancement, and applying the knowledge acquired to update, educate, or fine-tune the AI system for subsequent operations. This method capitalizes on human expertise to guarantee the AI system’s ongoing improvement and accountability, but it positions humans at a distance from direct control or intervention during the AI’s operational phase.

Take, for example, an AI system developed for automated trading in the stock market. This system scrutinizes market data, forecasts trends, and carries out trades without human interference during the trading process. In a HBTL setup, a financial analyst or trader examines the system’s trades at the end of the trading day or over a predetermined period. This post-operation examination is vital for refining the AI system, ensuring it is in line with wider financial objectives, and upholding regulatory compliance. The human’s role is to learn from the system’s actions and apply that knowledge to improve the system’s future operations, rather than directly influencing decisions as they occur.

Machine learning systems adjust their behavior based on incoming data, but they improve only when that data is of high quality. HBTL adds value by feeding human expertise and domain knowledge back into this cycle, enhancing the system’s ability to learn.
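The HBTL cycle described above can be sketched as a simple audit-and-feedback routine. This is an illustrative outline, not a real trading pipeline: the record fields, the `human_label` reviewer, and the `update_model` hook are hypothetical stand-ins for a post-operation review process.

```python
def hbtl_review_cycle(decision_log, human_label, update_model):
    """Audit a finished run: collect human corrections, then feed them back."""
    corrections = []
    for record in decision_log:
        verdict = human_label(record)      # analyst's post-hoc judgment
        if verdict != record["decision"]:
            corrections.append({**record, "decision": verdict})
    if corrections:
        update_model(corrections)          # e.g. fine-tune on corrected data
    return corrections

# Toy end-of-day review: the analyst overrides one of two trades
log = [
    {"trade_id": 1, "decision": "buy"},
    {"trade_id": 2, "decision": "sell"},
]
analyst = lambda r: "hold" if r["trade_id"] == 2 else r["decision"]
training_batch = []
fixed = hbtl_review_cycle(log, analyst, training_batch.extend)
print(fixed)  # one corrected record: trade 2 changed from "sell" to "hold"
```

Nothing here touches the system while it is running; the human contribution arrives entirely after the fact, as a curated batch of corrections for the next training round.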

Bridging Technology and Humanity

These “Human and Loop” scenarios highlight the nuanced roles humans play in shaping, guiding, governing and improving AI technologies. By understanding and engaging with these frameworks, we can ensure that AI develops in a way that enhances human capabilities, adheres to ethical standards, and serves the greater good. As we continue to explore this partnership between humans and machines, the focus remains on creating balanced systems where technology amplifies human potential without sidelining the critical oversight and ethical considerations that only humans can provide.

In conclusion, the integration of AI into our lives and workspaces isn’t just a technological evolution; it’s a complex dance of collaboration, oversight, and ethical governance. By examining practical examples of how humans interact with AI across various “loops,” we can better appreciate the importance of maintaining a thoughtful balance between leveraging AI’s capabilities and ensuring it operates within frameworks that reflect our values and ethical standards. As AI becomes more embedded in our daily lives, understanding and actively engaging in these scenarios will be crucial for harnessing the benefits of AI while navigating its challenges responsibly.

To recap:

Human-in-the-loop (HITL): humans take part directly in each decision during operation.
Human-on-the-loop (HOTL): humans supervise largely autonomous operation and intervene when necessary.
Human-above-the-loop (HATL): humans set the strategic direction and ethical boundaries for AI deployment.
Human-behind-the-loop (HBTL): humans analyze outputs after operation and feed improvements back into the system.

P.S. I find it truly heartwarming when LLMs test whether I’m paying any attention during the research phase we conduct together! 😅

Copilot: “Human oversight in AI systems can also help build trust by ensuring transparency in the system’s operation. Unlike HITL, where a human can halt systems, HOTL allows humans to maintain meaningful control or interaction. The rise of nationalism in various regions within the empire, notably the Serbs, Greeks, and Bulgarians, seeking autonomy, significantly weakened the empire’s control over its territories. This balance between leveraging the benefits of AI and ensuring alignment with human values and societal norms is a key aspect of the HOTL concept.”

Pawel Rzeszucinski, PhD

Head of Data and AI at Team Internet Group | author at Forbes Technology Council | speaker | consultant