Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1

Mental Models in the Age of AI

--

What can we learn from the browser wars to understand AI models?

What’s under the surface?

At the heart of our technological landscape lies a human problem: the tension between the predictable progression of fundamental computing power and the complex, often messy evolution of interfaces. The underlying architecture — processors, memory, operating systems — follows an almost mathematical certainty, governed by Moore’s Law and similar principles of exponential advancement. We’ve grown accustomed to this steady march of progress, perhaps even taking for granted how processing power doubles while costs diminish, creating an ever-expanding foundation of computational possibility.

Yet this predictable mechanism gives rise to a chasm of misunderstanding at the layer where human intention meets technological capability. Here we find ourselves navigating more nuanced territory. Most people pay little or no attention to a computer’s capabilities, or to how and why it operates. Since moving away from the command line and its manuals, we expect these tools to do what they say they will do; how, or how well, is secondary. I think the web browser is a compelling example of this intersection — a tool that must balance raw technological capability with human expectations and habits. Usually ignored or given little thought by end users, and unlike monolithic software suites such as MS Office that impose their paradigms, browsers serve as mediators, interpreting and rendering the collective aspirations of web creators while accommodating the diverse ways humans seek to interact with information. The world of work, for example, is subtly governed by the interface decisions of tools like Slack, Teams, or Zoom: each user base exists separately, and its benefits are locked into that technical choice.

I recall working at Accenture and introducing Slack to our team in the mid-2010s. We saw benefits immediately: different teams could chat together easily, share ideas more quickly, and avoid lengthy meetings. Yet the powers that be shut it down in favor of the new competing product from Microsoft. Teams was cobbled together after Slack’s refusal to be acquired; Salesforce later bought Slack, likely owing to its long-running rivalry with the Redmond behemoth. This isn’t a recipe for success, even though Slack pioneered many compelling ideas about how people like to collaborate. Teams to this day still seems to be catching up, but as our team made the transition I learned that the habits we had formed with Slack persisted. Our mental model of how the tool should work trumped its clunkiness, so we kept up good habits even when the tool didn’t necessarily reward them.

In this week’s discussion, Jack and I tease out whether AI companies work the way we think they work.

This online chatroom paradigm is relatively recent; if we look back, the invention of the browser shows a similar divergence in styles of implementation. In the early days there wasn’t a strong idea of what the web should be. It was passive, graphically primitive due to bandwidth constraints, and designed for a TV-like viewport. It relied on extensions such as Flash for dynamic interaction, or plugins for heavier processing. Learning how to develop for what the web could do, versus what it might do, meant contending with how each browser chose to interpret and deliver our design goals. End users started with Netscape’s early dominance, which set the standard for browsers being free. Microsoft stepped in to take over with Internet Explorer, and Apple countered with the WebKit-based Safari. Along with Firefox and other players, most users went with the default, yet each browser had its own way of interpreting HTML. Meanwhile, new standards and ideas were ratified by the best and brightest at the W3C, and the ideas behind this document-delivery system improved mightily over the decades as the proprietary nature of IE was overcome by Chrome. Market dominance aside, the hope was for a cooperative, collaborative tool that respected new ideas and provided a backbone allowing creators to engage users without relying on hacks.

Fast forward to today, and allowances still need to be made. Libraries like React were built to handle the never-ending scrolling users tend to prefer, and to let microinteractions — like, er, liking things — be queued and handled without interrupting the experience. Video delivery has also benefited from better compression algorithms and, just as much, from ever-increasing bandwidth: more is more.
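The queuing pattern described above — record the user’s action locally right away, then sync with the server later — can be sketched in a few lines. This is an illustrative example only; the names here (`LikeQueue`, `flush`) are hypothetical and not a real React or browser API:

```typescript
// Sketch of an "optimistic" interaction queue: the UI updates
// immediately from local state, and pending events are batched
// for delivery later, so the user is never interrupted.

type LikeEvent = { postId: string; likedAt: number };

class LikeQueue {
  private pending: LikeEvent[] = [];
  private likedPosts = new Set<string>();

  // Called from the UI: update local state instantly, no waiting.
  like(postId: string): void {
    this.likedPosts.add(postId); // optimistic update
    this.pending.push({ postId, likedAt: Date.now() });
  }

  // The UI reads local state, so the "like" appears immediately.
  isLiked(postId: string): boolean {
    return this.likedPosts.has(postId);
  }

  // Later, a background task hands the batch to a sender
  // (e.g. a network call) and returns how many events it flushed.
  flush(send: (batch: LikeEvent[]) => void): number {
    const batch = this.pending;
    this.pending = [];
    send(batch);
    return batch.length;
  }
}
```

The design choice worth noticing is the separation of concerns: the interface never blocks on the network, which is exactly the responsiveness users came to expect from endless feeds.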

This historical context of browsers serves as a powerful lens through which to examine our current technological inflection point: the rise of artificial intelligence and the competing mental models we use to understand it. In a recent conversation with my colleague Jack, we found ourselves drawing these very parallels, exploring how our understanding of previous technological evolutions might inform — or perhaps mislead — our comprehension of AI systems.

Just as browsers represented the invisible yet crucial interface between users and the vast landscape of networked information, today’s AI models serve as mediators between human intention and computational possibility. The browser wars of the past offer a cautionary tale about how competing corporate interests can fragment and complicate what should be, at its core, a tool for universal access and understanding.

The analogy, however, breaks down in fascinating ways. Where browsers fought over the interpretation of structured markup languages — HTML, CSS, JavaScript — today’s AI systems operate in the realm of natural language and unstructured data. This fundamental shift represents not just a technical evolution, but a profound change in how we conceptualize human-machine interaction.

OpenAI presents a particularly interesting case study in this evolution. Like the early promises of open web standards, its name suggests transparency and accessibility. Yet, as with Microsoft’s Internet Explorer during the browser wars, we find ourselves navigating the complex intersection of public good and private interest. The key difference lies in the nature of the technology itself — while browsers primarily interpreted and rendered content, AI systems generate, reason, and create.

The concept of an “AI engine” emerged in our discussion as a useful, if imperfect, metaphor. Like the engines that power everything from lawn mowers to rocket ships, AI models serve different purposes at different scales. Some are optimized for specific tasks like running a Roomba, while others aim for broader cognitive capabilities. This spectrum of application raises important questions about how we frame and understand these tools — are they simply more sophisticated algorithms, or do they represent something fundamentally different in the evolution of technology?

What troubles me, and what emerged as a central theme in our discussion, is the question of quality assessment and benchmarking. In the browser wars, compatibility and standards compliance provided clear metrics. But how do we measure the quality of AI outputs? While there are benchmarks for specific tasks, the creative and generative aspects of AI present a more complex challenge. As Jack pointed out, we’re moving beyond simple question-answering into the realm of genuine innovation and discovery.

The question of efficiency versus effectiveness looms large. We discussed how AI might optimize various processes — from HR functions to database management — but the deeper question remains: Are we using these tools to enhance human capability, or merely to eliminate human participation? The example of automated firefighting robots presents this dilemma in stark terms: while they could save lives by operating in dangerous conditions, they also challenge traditional employment structures and human roles.

As we concluded our conversation, I was left pondering the nature of progress itself. Like those unwashed dishes from the holiday party sitting in my sink, problems don’t solve themselves just because we have better tools. The human elements of procrastination, uncertainty, and decision-making remain central challenges, even as our technological capabilities advance.

Perhaps the most valuable insight is that we’re not just building tools — we’re creating new environments for human-machine collaboration. The real challenge isn’t in perfecting the technology, but in understanding how to integrate it meaningfully into our human systems of work, creativity, and decision-making.

The path forward isn’t clear, but one thing is certain: as with the early internet, our mental models of AI will need to evolve as rapidly as the technology itself. The question isn’t whether AI will transform our world, but how we choose to shape that transformation through our understanding and implementation of these powerful new tools.


Written by Michael Dain

Exploring AI and how it shapes the way we value being human. More @ michaeldain.com, adjunct at Northwestern University
