Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1

Navigating Contradictions: A Manifesto for Product Teams in an Era of Change

9 min read · Aug 20, 2025


We are living in an era of profound contradictions. As product designers, managers, and user researchers, we are being pulled between opposing forces, each with the power to define the future of our work.


On one side: political instability, the erosion of rights, climate change, and an economic climate that makes career shifts feel risky. On the other: breakthroughs in AI and automation that promise to democratize creation, scale solutions, and connect us across the globe.

The tension is real. And if you feel unsettled, you’re not imagining it — you’re living in the middle of a global structural shift.

The Contradiction at the Heart of Technology

The same AI systems that can help diagnose disease or provide adaptive learning for children are also the ones used to spread misinformation, entrench inequality, and concentrate economic power.

According to the World Economic Forum’s 2023 Future of Jobs Report, 44% of workers’ core skills will change within five years due to AI and automation. McKinsey predicts that 30% of work tasks in our fields could be automated by 2030, but the adoption curve of generative AI suggests that impact will arrive sooner. A 2024 Stanford Human-Centered AI study found that 60% of organizations deploying generative tools had already redefined job scopes within 12 months.

For product teams, this isn’t a theoretical discussion. The contradictions show up in sprint planning, roadmap prioritization, hiring freezes, and shifting role expectations.

Underestimating the Coming Shift

I believe the impact is going to be deeper and faster than the numbers suggest.

Already, generative products are compressing demand for execution-level work in design, writing, and research. The roles that survive and thrive will be concentrated at two key moments in the product lifecycle:

  1. The Genesis of Ideas — framing the right problems, setting strategic vision, and aligning stakeholders.
  2. Quality Assurance & Governance — ensuring outputs meet ethical, usability, and business standards.

This aligns with findings from the MIT Sloan Management Review (2024), which observed that in early AI-adopting companies, the highest value came not from replacing human roles but from augmenting high-level human judgment with machine-generated options.

If your professional identity is tied primarily to execution, this is the time to start building strengths in strategy, systems thinking, and ethical oversight.

Lessons from Industrialization and Scale

Think back to global food industrialization. Scaling production was meant to end hunger. It worked — until unintended consequences surfaced: loss of biodiversity, homogenized diets, fragile supply chains, and a rapid rise in autoimmune diseases.

Now, a counter-movement toward local, sustainable, and diverse food systems is taking root. Slowly, ethics and the environment are beginning to win out over economics.

AI may follow the same arc. Today, we’re scaling outputs and standardizing processes faster than ever, but in doing so, we risk homogenizing creativity and eliminating the diversity that makes us all special.

Like food, the future of technology will require a return to local nuance, diversity, and human oversight.

Why This Hurts So Much Right Now

Right now, we are constantly navigating trade-offs that echo these contradictions:

  • Speed vs. Thoughtfulness — AI can generate 50 designs in minutes, but are they solving the right problem?
  • Efficiency vs. Humanity — Automation can reduce costs, but can it preserve trust and empathy?
  • Global Reach vs. Local Relevance — Scaling worldwide is easy; respecting cultural nuance is hard.

For many of us, these tensions are compounded by organizational realities. Some employers are genuinely supportive in helping teams adapt. Others are not. Poor leadership, disregard for ethical considerations, and a lack of psychological safety make the change even harder. And given the economic climate, changing roles may not be an easy option right now.

For now, economics is winning.

If you feel frustrated, unsafe, or devalued — your concerns are valid. This change is fast, sometimes ruthless, and it’s normal to feel like you’re being pushed faster than you can adjust.

My Own Turning Point

I’ve been there too. A few years ago, I felt that same uncertainty — watching AI accelerate while wondering if my work was about to be swept away.

That’s when I started The Design of AI podcast with Brittany Hobbs. Our goal wasn’t to add to the hype; it was to sit down with the people building the tools, shaping the policies, and deciding the future of work. Sometimes I walked away aligned with their vision; other times, I disagreed entirely. But every conversation helped me understand the assumptions and incentives driving our industry.

At the same time, I was consulting directly inside large and medium tech enterprises. I saw the best and worst: brilliant teams reimagining what was possible, and techno-optimists removing the humanity from everything.

By combining what I learned through the podcast with what I experienced firsthand inside these companies, I began to see patterns. I learned that influence comes not only from critique but from optimism — using hope as a tool to inspire new paradigms of creativity and to focus product teams on the kind of customer impact that AI can make possible when guided by the right values.

Moving Beyond Fear and Critique

Critique is important. It calls out harm. But it’s not enough. The technology will evolve whether we engage or not.

We need to shift from passive critique to active influence — using every channel available to shape how these tools are used, implemented, and governed. As the saying goes, “If you’re not at the table, you’re on the menu.” Your presence, perspective, and participation matter — because if we aren’t directly involved, someone else will decide the future for us.

I’m reminded of a friend of mine who spent his entire life weighed down by the stress of work, never able to breathe joy into his day-to-day. It took until now, as he lies on his deathbed with cancer in four places, for him to realize that he could have reclaimed his life from anxiety long ago. His story is a painful reminder that if we wait until it’s too late, the chance to lead with intention may pass us by.

A Roadmap for Becoming Agents of Change

If you want to move from feeling bulldozed by change to actively shaping it, here’s where to start:

  1. Educate Relentlessly — Change can feel overwhelming when it’s forced upon us, but one of the best ways to protect ourselves is by equipping ourselves with the right knowledge. Treat AI literacy as a core professional skill: explore the tools in depth, understand both their strengths and their limitations, and practice explaining what you learn so others can benefit too.
  2. Embed Ethics into Everyday Work — Push ethical questions upstream, into the earliest phases of product development. Don’t wait for a crisis to start discussing consequences. This also means knowing the ethics of your employer and how they align (or don’t) with your own values, so you can anticipate where conflicts might arise.
  3. Challenge False Assumptions — The industry is still struggling to monetize AI, and we are all still discovering the true value of this technology. When you hear “AI will make this faster and better,” ask: Faster and better for whom? and At what cost?
  4. Redefine Value Beyond Execution — Lean into uniquely human contributions: framing ambiguous problems, synthesizing context, building trust. This is where systems thinking matters most — connecting the dots between people, processes, and outcomes to create lasting impact.
  5. Create Micro-Coalitions — You don’t need to change an entire organization at once. Find two or three like-minded colleagues and start influencing together.
  6. Prototype Governance, Not Just Products — Draft lightweight decision-making frameworks that include checks for bias, accessibility, and ethical trade-offs (see the sketch after this list).
  7. Seek Dissonance — Talk to people you don’t agree with. It builds resilience in your thinking and sharpens your ability to influence.
  8. Model the Culture You Want to See — Treat trust, safety, and human impact as part of your product requirements.
  9. Use Your Platform — Whether it’s a team stand-up, a design critique, or a public talk, use it to bring your values into the conversation.
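
To make step 6 concrete, here is a minimal, hypothetical sketch in Python of what "prototyping governance" could look like: a pre-launch review checklist encoded as data, so it can be versioned and iterated on like any other product artifact. The feature name, check questions, and owner roles are illustrative assumptions, not a standard.

from dataclasses import dataclass, field

@dataclass
class Check:
    question: str          # what reviewers must answer before launch
    owner: str             # role accountable for the answer
    passed: bool = False   # flipped during the review
    notes: str = ""        # evidence or follow-up actions

@dataclass
class LaunchReview:
    feature: str
    checks: list[Check] = field(default_factory=list)

    def unresolved(self) -> list[Check]:
        # Checks that still block launch
        return [c for c in self.checks if not c.passed]

review = LaunchReview(
    feature="AI-assisted onboarding flow",  # hypothetical example feature
    checks=[
        Check("Have outputs been tested against biased and edge-case inputs?", owner="Research"),
        Check("Does the flow meet accessibility (WCAG) criteria?", owner="Design"),
        Check("Are the ethical trade-offs documented and approved?", owner="Product"),
    ],
)

for check in review.unresolved():
    print(f"[BLOCKED] {check.owner}: {check.question}")

The code matters less than the habit: once the checklist lives alongside the product, checks for bias, accessibility, and ethical trade-offs become part of the definition of done rather than a post-crisis scramble.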

Why The AI Trap Matters Now

My work on The Design of AI was a way to understand and navigate the contradictions we all face. But I’ve come to see that understanding isn’t enough.

That’s why the podcast will soon relaunch as The AI Trap — to focus directly on the false assumptions AI can create about how we work, what product teams need, and how decisions should be made. These traps are subtle, and they often go unnoticed until they’re deeply embedded in processes and cultures.

You’re Not Alone in Feeling This Way

These contradictions can feel like a landslide — changes coming from every direction, faster than you can process. That’s not weakness. That’s what living through a structural transformation feels like.

You don’t need a podcast to lead change. You just need to claim the space you have and use it with intent.

Our concerns are valid. Some employers are terrible. The economy makes mobility harder. But within your control, there’s always room to steer toward alignment with your values and to stand with others who want to do the same.

This is not a solitary journey. The more we connect, share strategies, and push collectively, the more likely we are to shape the future into something we can stand behind.

If you’ve read this far, you’re already part of the community that can make that happen.

Comment on this article if you want to connect with others navigating these contradictions. Or, if you’d rather talk privately, message me directly and we can walk through your questions together confidentially.

You are not alone.

Recommended voices helping product teams prepare for — and shape — the AI revolution

These leaders are actively exploring how to use AI constructively while helping product teams, designers, and researchers adapt to this new era. Follow them to stay informed, inspired, and equipped to influence the future.

  • Ovetta Sampson — Ovetta refuses to let AI drift into being just another engineering exercise. She champions designers as the crucial voices in shaping human-centered AI, reminding us that creativity and empathy are not “nice-to-haves” but the heart of responsible innovation. LinkedIn
  • Andrew Ng — Forget the noise: Andrew focuses on practical skills. He is the rare voice making AI feel less like a mystical black box and more like a toolkit any product team can put to use. If you want to do something with AI rather than just talk about it, he is your guide. LinkedIn / deeplearning.ai
  • Sarah Gold — Sarah pushes hard against the idea that data and trust are afterthoughts. She forces product leaders to ask: are we building something people actually want to trust? Her work shows that responsible design is not a constraint, it is a competitive advantage. LinkedIn / Projects by IF
  • Luiza Jarovsky, PhD — Luiza is one of the sharpest voices on AI governance and digital regulation. She helps product leaders cut through legal jargon to see how emerging rules and ethical frameworks will shape the future of design, research, and product strategy. LinkedIn
  • Fei-Fei Li — When others hype AI as a magic trick, Fei-Fei anchors the conversation in humanity. Her leadership at Stanford HAI is unapologetically bold: AI must serve people, not replace them. She gives us both the research and the moral framework to lead. LinkedIn
  • Dr. Rumman Chowdhury — Rumman does not just critique; she builds. From Twitter’s Responsible AI team to Humane Intelligence, she is relentless about giving teams real frameworks to hold AI accountable. If you want to stop hand-wringing and start acting, Rumman shows the way. LinkedIn | Twitter
  • John Maeda — John has never accepted the boundaries between art, business, and technology. He is a constant provocation to think differently, to lead creatively, and to stop settling for narrow definitions of innovation. Every post feels like a jolt of perspective. LinkedIn
  • Erika Hall — Erika is the antidote to hype. She insists on clear thinking, good questions, and real evidence. In an AI era where shiny demos drown out substance, her work reminds product teams that rigor and research are not slowing us down, they are how we avoid building garbage. LinkedIn
  • Stuart Winter-Tear — A seasoned AI and product leader, Stuart connects product strategy to real value. He warns against FOMO, and teaches how to build AI products grounded in real-world needs. LinkedIn

Who else should we be following?

This is just the start. There are many more voices helping us understand and shape AI. Comment with the leaders you follow, so we can all build a stronger network of product thinkers preparing for this revolution.

Published in Bootcamp

Written by Arpy Dragffy

Customer Experience & Service Design | Head of Strategy at http://PH1.ca
