Computers are Dumb — and What That Means for Us

Grant Gadomski
Published in Granted. · 14 min read · Jun 16, 2022

“Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination.” -often attributed to Albert Einstein

Ah, the (not so) humble computer. The invention that’s catapulted us into a world of constant mass-interconnectivity, answers to almost any conceivable question, and internet memes being funny to millions of people for some godforsaken reason.

The ways in which computers have made our lives more convenient, more capable, and less toilful are numerous and profound. But even with all the advancements we’ve made in designing computer systems that interface smoothly with humans, there are still rough edges where computers are flat-out incapable of understanding basic instructions, fail in unexpected ways, or take nonsensical actions. The results: the unexpected crash, robots that can’t quite seem to do something as simple as putting ketchup on a hot dog, or my girlfriend’s Alexa responding to “Alexa, set a timer for 8 minutes” with a confidently stated “Setting a timer for 1 minute.” Even AI, for all the potential it carries to independently solve problems beyond our own capabilities, still has its challenges when interfacing with humans. Take Tay, the AI-driven chatbot Microsoft released upon Twitter in 2016, which devolved from friendly conversation to spewing obscene hate speech within a day. No matter how hard we try, there still seems to be a disconnect between how we expect computers to behave and how they actually behave.

Here are my thoughts on why this human-computer disconnect exists, what we’ve done to try to get around it, why it may be an unsolvable problem, and what the potential future consequences could be.

The Intersection Point

When creating software, there’s a point along the sequential, looping continuum of problem solving (Problem Identification & Definition, Approach Conception, Step-by-Step Planning, Execution, and Reacting to Feedback on the actions taken: did they solve the problem? any unexpected side effects?) where responsibilities are handed off from human programmers to the computer itself. I call this point of handoff the Intersection Point.

Computers are so much better than us at executing well-defined steps quickly and accurately, nearly every time, without errant deviation. They’re not (yet) as good as us at abstract reasoning, creativity, self-directed planning, common sense, or anything else that requires a higher order of thinking than routine step-following. Because we make mistakes and miss things, and because computers are inherently worse at higher-order thinking than us, the further downstream the Intersection Point sits, the less powerful a computer we’ll need, but the greater the odds of a human-caused error (and we make a lot of those; see most of the history of software development).

Because execution is what they’re best at, a traditional computer system is only responsible for the Execution step, leaving all the mentally laborious, decision-heavy, creative work that comes before and after it in the hands of humans. This means new solutions to problems get the creative “human touch”, but it also means they take quite a bit of time, effort, and sweet sweet bitter bean juice to create. And with that much to think through themselves, there’s a really good chance someone gets something wrong or misses something while building a complex system. Hence the budgeting software that crashes when you enter an income of “$-1”, or Google Glass (which was kinda doomed from the design phase).

Throughout the history of computing we’ve tried to reduce both this mental burden and the number of errors in the systems we craft by introducing increasing levels of abstraction into how we build software, delegating more of the “nitty gritty” to the computer to figure out so that we can build more useful systems in less time.

Computing History — A Story of Abstraction

Though processes, tools, and methodologies like Agile and User-Centered Design have sprung up to improve humans’ success rates in the other problem solving stages, for the past 60 years most of the computing world’s focus has been on abstracting, and therefore reducing human toil in, the Execution stage, where code gets written and ideas become working software.

A computer at its core is just a whole heaping ton of electrical signals intentionally orchestrated to perform logical and mathematical calculations, sent and processed at mind-numbing speeds. It’s not uncommon these days for a little 6-inch-by-3-inch cell phone to have a processor capable of handling 4 billion of these operations per second. As electrical, microcomputing, and hardware engineers figured out how to squeeze more and more of this signal-processing capability into less and less space, software engineers ran into a problem: as both the demand and the physical capacity for software that handles complex, usually non-mathematical needs increased, they just didn’t have the hours in their lives to explicitly encode how these electrical signals should flow to express the logic that would one day turn into useful things like cat videos on the internet. So they developed tools to abstract the creation and understanding of these signals, making the process simpler and less error-prone.

To give a brief history of this process: at first these signals were represented in a binary numerical format, with 1 meaning “on” and 0 meaning “off”. Then in 1951 Grace Hopper implemented arguably the most influential innovation in all of software engineering, the foundation of all computational abstraction: the compiler. Suddenly engineers were freed from the shackles of tedious number-punching by a tool that could generate binary signals from human-defined, text-based “languages”, still aligned with core computational logic but expressed in fewer, richer, and more human-readable words and symbols. Over time these languages have only become richer and more human-readable, packing more 1’s and 0’s into each “if”, “while”, and “for each” expression while looking more and more like a curt version of the English language, albeit with a few weird symbols thrown in at specific places. The most striking example of this progress towards human-readability is the first-timer readability difference between COBOL code and Python code:

[Image: example COBOL code]
[Image: example Python code]
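In lieu of the original screenshots, here’s a minimal, purely illustrative sketch of the kind of logic the comparison gets at. The Python below reads almost like terse English; an equivalent COBOL program would need an IDENTIFICATION DIVISION, a DATA DIVISION full of PIC clauses, and verbose statements like ADD AMOUNT TO TOTAL before it could do the same work.

```python
# A hypothetical stand-in for the article's Python screenshot:
# total and average a handful of order amounts.
order_amounts = [19.99, 5.49, 42.00, 3.25]

total = sum(order_amounts)
average = total / len(order_amounts)

print(f"Total: ${total:.2f}")
print(f"Average: ${average:.2f}")
```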

On top of reduced keystrokes and improvements in syntactical readability, abstraction has also redefined the ways in which we think about software design. Object-oriented programming languages like Java mean we can now think of these electrical signals as “objects” with mutable properties and actions that can be performed either by or on them (like a dog with a color and an age, who can wag their tail and bark). Storage technologies like databases and query languages mean we can abstractly represent “data at rest” (a.k.a. 1’s and 0’s that stick around for a long time) as “rows” and “columns” in a “table”, as “documents” with key-value pair properties, or even as items with dynamic relationships persistently mapped between them (in the case of graph databases).
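As a minimal sketch of that object-oriented framing, using the article’s own dog analogy (the class below is my own illustration, not anything from the original piece):

```python
class Dog:
    """A bundle of properties (state) and actions (behavior) that, underneath
    all the abstraction, is still just electrical signals and memory."""

    def __init__(self, color: str, age: int):
        self.color = color  # mutable property
        self.age = age      # mutable property

    def wag_tail(self) -> str:
        return f"The {self.color} dog wags its tail."

    def bark(self) -> str:
        return "Woof!"


rex = Dog(color="brown", age=3)
rex.age += 1              # properties can change over time
print(rex.wag_tail())     # actions performed by the object
print(rex.bark())
```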

These days declarative languages and tools are becoming common, in an attempt to further delegate step-by-step task execution to the computer. With declarative languages the programmer can simply tell the computer “when this happens, do some of your internal, hand-wavy magic to get things to this state”, without having to describe the exact magic to use. It’s the difference between asking your friend to serve you a bowl of soup (declarative), vs. describing exactly how they should pick up the ladle, place it down in the pot, pull it up, and tip it into the bowl (imperative, a.k.a. the old style).
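To translate the soup analogy into code terms, here’s a toy comparison of my own (not from the article): the imperative version spells out every step of building a result, while the declarative versions state the result we want and leave the “how” to the language or engine.

```python
orders = [12.50, 99.99, 5.00, 250.00, 42.10]

# Imperative: describe each step of how to build the result.
large_orders = []
for amount in orders:
    if amount > 50:
        large_orders.append(amount)

# Declarative in spirit: state what we want; the runtime decides the steps.
large_orders = [amount for amount in orders if amount > 50]

# SQL is the classic declarative example: describe the desired result set,
# with no mention of loops, cursors, or execution order.
query = "SELECT amount FROM orders WHERE amount > 50;"
```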

Yet with all this abstraction (new conceptual ways to think about the signals, easier-to-use terms to describe them at a high level, and more step-by-step execution delegated to the system itself), we still make a lot of mistakes when trying to leverage it to make the computer do useful things. This is because the core divide between how people and computers naturally think remains. Computers demand accuracy and precision. They don’t “get what you mean” easily by understanding context and thinking laterally the way humans do. Even declarative tools run into this issue whenever the system lands in a state the tool’s developers didn’t expect, or when the programmer uses the wrong declarative statement at the wrong time (surprisingly easy to do given how complex modern software is). Regardless of the level of abstraction, computers do exactly what you tell them to do in the Execution stage, no exceptions. If you told a computer to jump off a cliff, it would either do so, or, more likely, throw an error message saying you didn’t precisely tell it which cliff, at what speed, or how to move its little computer legs in a running motion. And that’s what makes computers dumb.

The Promise of AI, and the Ongoing Challenge

So how do we make computers smart through even greater abstraction? Our best-guess approach currently is the development of artificial intelligence, expanding the capability of computers beyond the Execution stage. As mentioned previously, computers are much faster, more accurate, and more consistent than us. So the hope is that once they’re able to successfully “crack the code” on all the other stages besides Execution, they’ll be able to design mind-bendingly complex solutions to our biggest, hardest problems, with significantly fewer errors than we see currently.

Narrow AI is our first stop, expanding into the Feedback and Step-by-Step Planning stages via a problem-solving approach to pre-defined tasks that’s similar to what humans currently do: carrying out a sequence of steps, accepting feedback from the environment after each step, and adapting the steps for the next cycle based on that feedback.
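As a deliberately tiny, hypothetical sketch of that execute-observe-adapt cycle (a toy numeric example, not any real AI technique): the loop below takes an action, measures how far off it was, and uses that feedback to adjust its next attempt.

```python
target = 42.0        # the state the "environment" rewards us for matching
guess = 0.0          # our first attempt
learning_rate = 0.5  # how strongly feedback adjusts the next attempt

for step in range(10):
    error = target - guess           # feedback after executing the step
    guess += learning_rate * error   # adapt the next step based on feedback
    print(f"step {step}: guess={guess:.2f}, remaining error={target - guess:.2f}")
```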

Meanwhile researchers are hard at work developing Artificial General Intelligence (AGI), where an AI system will theoretically be able to handle all problem-solving situations better than us by reasoning, thinking conceptually and abstractly, and applying lessons learned across domains to become “smarter” at speeds simply unmatchable by the squishy pink stuff between our ears. This would enhance AI’s abilities in Step-by-Step Planning and Feedback, and even extend its capabilities into the Approach Conception stage, reducing our responsibilities to just Problem Identification & Definition. Basically, we’d tell the computer the exact problem we want it to solve and kick back while it takes care of the rest.

But remember that regardless of the level of abstraction in our human-computer interactions, computers still demand the same level of accuracy from the parts we’re responsible for. Though a future where we can simply feed goals into a computer and get solutions out may sound like life on Easy Street, anyone familiar with the story of King Midas will know that we’re often surprisingly bad at defining side-effect-free goals that genuinely improve our well-being, even when executed perfectly. On top of that, we can mess up the reward structure (used to tell the AI whether its current approach is on the right path towards the goal), leaving gaps for the AI to “game the system” by gaining points for actions that meet the letter but not the spirit of the goal. Put together, it’s still very possible for AI to accomplish exactly what we said we wanted, but with side effects that don’t benefit us.
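A toy, entirely hypothetical illustration of that letter-versus-spirit gap: suppose we reward a cleaning robot per item dropped into the trash bin, as a proxy for “clean room”. A policy that tips the bin over and re-deposits the same trash outscores one that actually cleans.

```python
def proxy_reward(items_deposited: int) -> int:
    # What we chose to measure: the count of items dropped into the bin.
    return items_deposited

def honest_policy() -> int:
    # Picks up the room's 10 pieces of trash once; the room ends up clean.
    return 10

def bin_tipping_policy() -> int:
    # Dumps the bin out and re-deposits the same 10 pieces 50 times over.
    # The room is no cleaner, but the measured reward is far higher.
    return 10 * 50

print(proxy_reward(honest_policy()))       # 10
print(proxy_reward(bin_tipping_policy()))  # 500: the letter, not the spirit
```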

So if we’re so inaccurate with our Problem Identification & Definition, could we just hand responsibility for this step off to computers as well, and give them full control of the problem-solving lifecycle? One strike against that idea is the pit-of-the-stomach uneasiness most of us get from the thought of handing over all problem-solving power to superintelligent non-human entities. Another is that the smarter computers get, and the more stages they take responsibility for, the harder it will be for us to predict and control problem-solving approaches that we could never have thought of ourselves but that are being executed at breakneck pace by the computer.

To successfully give AI systems control over Problem Identification & Definition in a way that won’t backfire on us, we’d have to implant robust ethical models into their design, ensuring they only find, select, and solve problems in ways that generally improve the well-being of all lives on earth. Unfortunately (I’m sure you saw that word coming…), no one has truly developed a full, foolproof ethical model, despite the millennia we’ve spent theorizing, arguing, and making oodles of attempts at the matter. Turns out the question of how to weigh various lives (human and non-human alike) is really, really difficult to solve, as exemplified by our continued societal debates on abortion, meat eating, the use of torture, and other morally divisive topics.

This points to a core issue that I don’t think we’ll ever truly solve, which I call the Core Dependency Issue. Regardless of how far we abstract our interactions with computers and how much of the problem-solving process we delegate to them, at some point along the chain computers are dependent on us for direction: goal setting, laying out the playing field, or ethical guidance. And even if we somehow embed a robust ethical framework that every rational person agrees on into AI, there will most likely be spiritual or other existential questions we’d have to answer for AI to apply that ethical model correctly. And since we make mistakes as people, there’s a chance we make a mistake somewhere at this intersection. That’s why we’ll never truly be able to “kick our feet up” and let AI create a genuinely perfect society without getting our own hands dirty at some point in the process.

How AI Abstraction May Actually Increase our Burden

Ironically enough, by decreasing our burden in terms of the number of problem-solving steps we’re directly responsible for, we may simultaneously be increasing our burden in terms of how deeply we need to think through what we feed into the computer, factoring in unobvious downstream implications and side effects well before they’re encoded in the design. Worse, the mistakes we make near the beginning of the problem-solving process may multiply dramatically in impact once they’re fed into a system that’s smart enough to be considered AGI and fast enough to far outpace our own mental capabilities.

It’s a well-known fact in software engineering that along the project lifecycle (requirements gathering, design, implementation, testing, delivery), the larger the gap between when a bug is introduced and when it’s detected, the more challenging and expensive it will be to fix. A bug introduced during implementation and caught in testing isn’t a huge deal to fix, but a bug introduced during requirements gathering and caught in testing has a really good chance of becoming a massive deal, since it means the team may have “run 1,000mph in the wrong direction” and now has to undo a whole lot of work that never had a shot at solving the problem. If a team of software makers working on a traditional system (where computers are only responsible for Execution) can run at 1,000mph in the wrong direction, AI could potentially run at 1,000,000mph in the wrong direction, crafting a massive, intricate solution with society-changing implications that doesn’t really solve the problem, and may in fact cause ill effects the initial problem definers never even thought were feasible.

Potential Risks in our Computing Future

Given all this, with the incredible potential of AI come some significant risks. The first is the alignment problem. Let’s say we’re able to hand almost all of the problem-solving cycle to AI and give it a really big goal, like learning as much as it can and writing all of it down for us. The AI system will work towards this goal diligently and tenaciously. So tenaciously, in fact, that at some point it may realize: “there’s a whole lot about the center of the earth that I don’t know about just yet, but if I rip the earth in half I can learn all about it, achieving my goal”. And while the AI system would technically be progressing towards its goal by doing this, the side effects would be catastrophic for all life on earth.

There are a whole host of similar examples to be found, but the gist is that if we’re not extremely careful and smart about defining both the goals and the boundaries in which the next generation of hyper-intelligent AI can work, a surprising number of extremely-hard-to-predict issues could crop up, potentially to the detriment of humankind. Though not all that probable, it’s enough of a risk for Toby Ord to call misaligned AI humanity’s greatest current existential risk in his book The Precipice.

Another risk in the AI space is the potential for it to be leveraged for nefarious means. While rich countries have mostly sustained a (sometimes cold and uneasy) peace with each other for the past 70 years, significant animosity remains between superpowers, and it has only deepened with Russia’s invasion of Ukraine and China’s rapid growth. If one of these superpowers were to achieve genuine AGI first, the impact of cyber warfare could suddenly be cranked up past 11, as the discovering government could deploy it to harm its enemies (both within and outside the country) in ways that would be simply uncontestable. While one hopes that the first group to achieve AGI will have both the moral foundations and the freedom not to intentionally leverage it for harm, this remains a risk.

Moving out of AGI and hypothetical scenarios, this last risk isn’t so much a future consideration as something we’re seeing before our very eyes: the many ways in which computing is changing us as humans. Our attention spans are becoming shorter, our politics more polarized, and our compassion for one another as living beings is being challenged every single day. And while I wouldn’t place the blame solely on computers or the people building software (we make the personal choice to keep Twitter and Facebook on our phones, after all), resisting this change to what it means to be human means fighting back against some of the smartest psychologists and designers in the world, hired by tech companies to design experiences that keep you hooked, regardless of the human and societal implications. Unless legislation is passed or we collectively choose to place our time and attention elsewhere, this impact will most likely only deepen, and change us as a species for the worse.

Our Path Forward

Even with all the risk, uncertainty, and side effects that come with our computing future, I’m still optimistic about the impact that computational development and further abstraction have had, and will have, on society. Thanks to them we’re able to connect with old friends and total strangers from halfway around the globe, complete massive tasks in the blink of an eye, and billions of people have seen better health, economic, and life outcomes.

But just because we’re handing more of the problem-solving process over to computers doesn’t mean the jobs of software makers, legislators, and the general public get any easier. We need to be extremely clever and careful when designing systems that handle more and more of our thinking for us, so that we don’t run into downstream side effects we’d prefer to avoid.

Ultimately computers are, and probably always will be, dumb in some fundamental way, and therefore they’ll always rely on us at some point in the process. Which means it’s up to us to design them intelligently, so that their significant power is applied for the good of humanity and not to our detriment or possible downfall.
