Is AI Hijacking Our Agency?

May 7, 2024

By Ryan Watkins, George Washington University, Eran Barak-Medina, Holon Institute of Technology, Katrina Pugh, Columbia University

Joe handed me the competitor report, including all of the tables dutifully filled out. With my recent travel schedule, the push to complete the report before the client meeting had fallen on his shoulders yesterday — but now I saw the telltale signatures of ChatGPT. My eyes likely rolled a little, and I even felt a little betrayed, because competitive analysis is our bread and butter. As the manager, I was going to be held accountable for this report. I feared that he wouldn’t know the nuances behind the data, and, in the upcoming meeting, I couldn’t call on him to provide the depth behind the report. Was Joe using AI as a crutch or an aid? Is AI hijacking Joe’s agency?

Artificial Intelligence (AI) is now everywhere, it seems, and this presents us (managers, teachers, parents, leaders, etc.) with as many opportunities and possibilities as concerns and challenges. While many organizations (large and small) have begun to create business strategies and/or policies for AI, most of the day-to-day implications for us, as individuals, are still emerging.

AI tools (e.g., ChatGPT, Claude, and Perplexity) are new, and none of us grew up learning how to use them wisely [1]. It is therefore likely to fall on us (as managers, teachers, etc.) to direct, mentor, and coach others (employees, students, clients, etc.) on the appropriate uses of AI. This includes how to leverage AI technologies to expand and accelerate their learning and performance, rather than to delegate and decelerate their performance and learning.

“Agency hijacking” happens when AI systems (with or without our consent) diminish our self-efficacy and sense of agency [2, 3]. Hijacking happens, for example, when students hand in AI-generated assignments without even reading them, lawyers submit briefs with bogus cases invented by AI, or programmers rely on AI to generate code to the extent that it usurps their understanding of how the code works. By proactively recognizing the potential for “agency hijacking” in our diverse uses of AI, we can, however, take the first steps to remaining empowered by AI.

The performance gains from using AI are routinely valuable — but there are inherent risks (deskilling, atrophy, errors) and trade-offs. For example, there is a trade-off between speed and deep knowledge, as in the example with Joe at the beginning. There, Joe’s use of AI may have had the short-term benefit of getting the client report done, yet now his manager is frustrated by his over-reliance on AI, concerned about Joe’s lack of depth on the topic, and worried about his capacity to do this work and future assignments. How could the manager, Joe, and the AI have collaborated to produce a stellar report, grow Joe’s understanding, and better serve the client?

Here we suggest that, when entering almost any process, we can follow any of four AI-use loops. First, we present the four Agency Loops, which can inform how you manage, coach, mentor, or teach others to use AI. After that, we include practical tips for avoiding agency hijacking.

Four Loops of Human Agency

The Empowerment Loop

The Empowerment Loop entails an individual investing in themselves and (re)generating their sense of self-efficacy and individual agency. It typically begins with a question, such as “How can I use AI to achieve more or better?”, and is followed by the discovery of how incorporating AI into the work process can help them reach results that they could not achieve without it.

  • For EXAMPLE, a lawyer preparing for trial uses an AI tool, such as a Large Language Model (LLM), to explore arguments (and counterarguments) faster and at a larger scope by asking it to respond to their arguments.
  • BENEFITS of the Empowerment Loop can include attention and curiosity that lead to new ways of identifying and solving problems, including ways that extend scope and scale, or even step back and contextualize the endeavor. Empowerment can result in improved processes and services, better logic, better understanding (e.g., through drill-down or layering of knowledge), and, ultimately, better outcomes.
  • Empowerment REQUIRES a deeper understanding of what AI can (and can’t) do. Being empowered by AI routinely requires understanding the genesis of the result, at least at a high level.
  • The RISKS of this loop may include time-lag, the need to fill a skill gap (such as effectively prompting LLMs), and insufficient technical or subject-matter skills to “unpack” non-intuitive results.

Figure 1: The Empowerment Loop

The Delegation Loop

The Delegation Loop initiates when an individual opts to use AI to carry out tasks, solve problems, or pursue opportunities on their behalf. The loop is characterized by setting aside personal involvement in the process. In effect, the individual is ceding ownership of the results to the AI.

  • For EXAMPLE, a blogger or a marketing content creator asks an LLM to write an article in their place. Our opening example of an employee who used ChatGPT to prepare a competitor analysis also illustrates the Delegation Loop.
  • BENEFITS of delegation can include the promise of efficient, convenient, and immediate results, without the struggle (and the related learning). More content can be “covered” in less time.
  • Delegation REQUIRES an understanding of what AI can (and can’t) do, reasonable prompting skills (where generative AI is concerned), and faith that the AI is more capable.
  • The RISKS of this loop may include an eroding sense of personal agency and self-efficacy, as well as missed opportunities for professional growth and learning. The logic or reasoning behind the results may also elude the delegator.

Figure 2: The Delegation Loop

The Suspension Loop

The Suspension Loop occurs when an individual deliberately decides to eschew AI in favor of their own work, reinforcing self-efficacy through increased reliance on personal skills. Individuals intentionally choose not to use AI, for the time being, in order to increase their knowledge and/or skills, or to ensure they understand the cause of the outcome. They believe that personal effort and development are paramount. Suspending the use of AI may also be appropriate when stress from other changes is elevated, and delaying the use of AI can reduce potential anxiety or simply maintain focus temporarily.

  • For EXAMPLE, a young programmer deliberately writes the code for a project on their own, although they know ChatGPT could do the same (and probably faster). They do this because they want to master the basics of code writing and grow their ability to understand the logic of programming.
  • BENEFITS of suspending, or deferring, the use of AI can include a heightened sense of accomplishment and self-efficacy, as well as knowledge and skill development. In the context of an organization pulling toward a deadline (or crisis), for instance, staying with the status quo (i.e., human capacity) may be a better use of attention and resources.
  • Suspension REQUIRES little immediate investment in learning to use AI, but it does require a rationale for deferring AI, especially when peers and competitors are using it.
  • The RISKS of this loop may include unintentional long-term avoidance of AI and reduced productivity.

Figure 3: The Suspension Loop

The Avoidance Loop

The Avoidance Loop is marked by individuals not using AI for reasons including, but not limited to, perceiving that learning to use AI is too difficult, denying the usefulness of AI, not paying attention to the opportunity to leverage AI, or believing that AI’s superiority renders human effort redundant or inadequate. That said, avoidance can also be appropriate in some situations, such as when organizational policies and/or cybersecurity threats require avoiding AI use.

  • For EXAMPLE, a physician makes a diagnosis without consulting an available AI system that offers a high diagnostic accuracy rate.
  • BENEFITS of avoidance may include not having to change behavior, since change can be expensive and disruptive.
  • Avoidance REQUIRES little beyond the avoidance behavior itself, and it leaves more time to focus on the task at hand.
  • The RISKS of this loop may include an eroding sense of personal agency and self-efficacy, as well as missed opportunities for professional growth and learning.

At the same time, however, others will be using AI to innovate and learn, gaining advantages over those who avoid it.

Figure 4: The Avoidance Loop

Tips to Avoid Agency Hijacking

As leaders, managers, mentors, coaches, and/or teachers, we can apply the four loops in the day-to-day guidance that we give to (and demonstrate for) those we work with. Below we offer tips for how you use AI yourself (i.e., role modeling) and tips for deliberately leading others.

Tips for how you use AI:

  • Routinely ask yourself “How can AI help me achieve better results?” rather than “How can it take on my tasks?”
  • Actively search for, learn, and experiment with new AI tools, examining their potential contribution to your goals.
  • Discuss your experiences so others can learn from your (or your team’s) work with AI, and see how you are modeling useful AI practices.
  • Consider which tasks and assignments could benefit from using AI.
  • Think critically and remain aware of AI’s capacities and limitations (and how these are changing).

Tips for leading others in using AI:

  • Watch how others are using AI and listen to their conversations about AI (e.g., do they sound detached from the meaning of the content, focusing only on the technology?).
  • Reflect on how the AI loops can be leveraged to enhance (rather than deplete) the agency of your employees.
  • Encourage others to engage with AI as a collaborator in order to spur creativity, innovation, and productivity, with an emphasis on working through the Empowerment Loop for different goals.[4]
  • Lead conversations and share practices about incorporating AI tools into workflows — creating a psychologically safe environment for discussing uses of AI across the four loops.
  • Insist that everyone is accountable for their use of AI outputs and should be able to explain the content and logic of any AI outputs that end up in their products.[5]

[1] Though we use just the term “AI” throughout the article, we are generally referring to generative AI (or GenAI) tools, including but not limited to Large Language Models (LLMs), multimodal models, and image-, video-, voice-, or music-generating models.

[2] Based in part on: https://osf.io/preprints/osf/md5ef

[3] Based in part on: https://philarchive.org/archive/NGUVCH

[4] For example, create a one-page cheat sheet for basic tasks (such as writing prompts) while also highlighting agency. While this may seem rudimentary, providing a manager-sponsored nudge may create the momentum for employees to venture out and discover features.

[5] Develop norms for never accepting the AI’s first draft, nor using AI outputs without human review.

Written by Ryan Watkins

Professor of Educational Technology Leadership and Human-Technology Collaboration at George Washington University in Washington, DC. https://ryanrwatkins.com
