Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1

Are we using AI to automate instead of augment?

--

Many are treating generative AI like a digital butler — a tool to handle the menial tasks that clutter our workdays. But in our pursuit of efficiency, we may be overlooking the technology’s deeper, transformative potential — and quietly slipping into a future where our professional value is diminished, not amplified.

A recent PCWorld article, “9 menial tasks ChatGPT can handle for you in seconds, saving hours,” perfectly captures the current zeitgeist. It promotes offloading chores like summarizing meetings, drafting polite emails, and proofreading documents. This mindset is practical, instantly useful, and, on the surface, completely logical. But it’s also dangerously shortsighted.

This “Instrumentalist” stance treats AI as an executor of clear, predefined tasks. The goal? Simple: do more, faster. The human becomes a manager, delegating work to a tireless assistant. We hand over the job; it hands back a finished product. It’s a tidy, transactional relationship — focused purely on productivity.

Menial / Production-Oriented

  • “Rewrite this email to sound more professional.”
    [Executes tone-shift with no feedback loop]
  • “Summarize this meeting transcript in 5 bullet points.”
    [Flattens to extraction]
  • “Generate a job description for a junior marketing manager.”
    [Fills in template based on title]
  • “Proofread this text and fix any grammar mistakes.”
    [Performs task quietly with no trace of process]
  • “Give me 10 ideas for blog post topics.”
    [Breadth over depth]

But there’s another, more powerful way to work with AI — a “Reflective Generative” stance. Here, AI isn’t a butler we command, but a thought partner we collaborate with. A mirror to help clarify our own thinking. The focus shifts from task completion to sensemaking. From speed… to evolution.

Beyond the to-do list: Unlocking micro and macro work

The real promise of AI lies not just in automating visible tasks, but in amplifying the invisible work that underpins meaningful knowledge creation. This work lives in two layers:

Micro Work: The Cognitive Glue. These are the constant, low-level adjustments that sap mental energy and interrupt creative flow — reformatting a list, adjusting the tone of a sentence, turning disjointed notes into a clear outline, or just finding the right name for a new idea. Offloading this kind of “cognitive friction” to an AI doesn’t just save seconds; it preserves the momentum you need for deeper thinking.

  • “Can you reformat these rough notes into a clean, MECE outline?”
    Supports flow by clarifying messy structure
  • “This sentence feels off — what are three clearer phrasings that keep the nuance?”
    Focuses on subtle meaning retention, not just clarity
  • “What would be a compelling title that captures both precision and symbolic resonance?”
    Naming as an act of synthesis, not labeling
  • “Make this list of features feel emotionally motivating to a user, not just descriptive.”
    Invites tonal shaping for resonance
  • “Convert these ideas into a set of prompt templates I could reuse later.”
    Bridges creative friction into systematization

Macro Work: The Strategic Scaffolding. This is the thinking about the thinking — the meta-level work that drives real innovation. It includes building conceptual frameworks, questioning hidden assumptions, structuring complex arguments, or ensuring coherence across messy, abstract ideas. In this context, AI becomes less a tool and more a sparring partner — someone (or something) to help you explore the architecture of your thoughts, challenge your framing, and notice the patterns you might’ve missed.

  • “Can you help me identify the deeper pattern behind these three product features?”
    Moves from parts → frame
  • “I think I’m jumping between arguments here — what’s a better order to present this logic?”
    Co-designing the architecture of clarity
  • “Here are some assumptions in my draft — can you challenge them or offer alternatives?”
    Structured reflection through dialogic feedback
  • “Given this metaphor about AI as a forge, how could I weave it throughout the piece symbolically?”
    Emergent symbolic coherence, not decorative metaphor
  • “I’m trying to name the tension between speed and reflection — what are 3 framing contrasts I could use?”
    Crafting conceptual scaffolds

Ask AI to write an email, and you’re using it as an instrument. Ask it to help design a communication strategy — one that compares different approaches, interrogates assumptions, and adapts to context — and you’re using it as a partner.

A tale of two futures: The factory vs. the prosthetic

This distinction between stances isn’t theoretical; it signals a genuine fork in the future of work — and in how much agency we preserve along the way.

The instrumentalist model treats AI like a factory. Its goal is to optimize output. In this view, the human role was always provisional — the “last mile” of cognition required to format the memo or write the summary because the software couldn’t. Now, AI can close that gap — quickly, quietly, and at scale. This isn’t just task automation; it’s the final act in a century-long dream to make knowledge work behave like assembly-line labor — predictable, measurable, and, eventually, disposable.

Automating the Executor Self

  • “Do this task for me.”
    Low agency, clear output
  • “Write a paragraph that explains X.”
    No sense of participation
  • “Polish this text.”
    Focuses only on surface
  • “Schedule these meetings.”
    Treats the system as external executor

And in this future, AI isn’t creating inhumane work. It’s completing a process that already devalued the dignity of labor long before the algorithms showed up. The jobs at risk? They’re not just being replaced — they’re being unmasked as roles already drained of autonomy, creativity, and real growth potential.

The reflective generative approach sees something else entirely. It imagines AI as a cognitive prosthetic — not for outsourcing thought, but for extending it. But this shift demands more from the user — not just commands, but presence, curiosity, and the willingness to engage in a recursive, sometimes messy dialogue. The real value lies not only in what’s produced, but in what’s revealed — the insights, clarity, and capability we build through the process itself.

Training the Reflective Self

  • “I wrote this — but I’m not sure it captures what I mean. What do you see as the underlying tension here?”
    Self-awareness in development
  • “Can you help me articulate what I was trying to say, but more sharply?”
    AI as interpreter of intention
  • “What’s missing in this logic chain? I want to know what I haven’t thought of.”
    Invites epistemic humility
  • “Based on this rough sketch, can we evolve a framework together?”
    Embeds collaboration into framing itself

Are we automating when we should be augmenting?

It may seem naive to expect a widespread shift away from the efficiency-at-all-costs model that defines modern capitalism. The incentive to treat AI as a factory for replacing labor is immense. The path of least resistance is already well-paved: automate the tasks, reduce the headcount, scale the system.

But this path rests on a deeper, more disquieting question: Are we automating when we should be augmenting?

When we train AI to handle the menial, we are often training it to replicate our executor self — the part of us that performs with precision, speed, and conformity. This was the version of the human that industrial systems were already designed around: reliable, interchangeable, and ultimately, disposable.

But there’s another self that lives in the work — one that questions, frames, and constructs meaning. The reflective self. The self that makes the work worth doing and still matters after the task is done. If we only automate for the executor, we risk hollowing out our relationship with work entirely, optimizing humans out of their own growth.

Yet, a subversive possibility is hiding in plain sight. What if workers began using these tools of production to augment and liberate themselves?

Even in a role defined by menial tasks, an employee can begin to shift their stance. By using AI to ask why a process works the way it does, to model better workflows, or to identify hidden patterns in feedback — what begins as simple automation can become self-amplification. The act of using AI well — of asking better questions and shaping ambiguity into structure — is itself a form of retraining.

Not just of skill, but of self.

The AI, handed to the worker as a tool to make them more efficient, can become a mirror that helps them see their own potential — and a scaffold to build a new kind of value.

The choice, then, is not only about tools or outputs. It’s about which self we want to invest in.

And which one we’re willing to let disappear.

--


Written by stefan klocek

Firestarter. Designer in applied AI. Passionate maker, writer, thinker, painter, speaker, sculptor, father. Lead AI UX at HubSpot.
