Artificial Experience

Artificial intelligence is education’s mad cow disease

steve wright
Educate.
10 min read · Mar 17, 2024


My teaching credential is in English, and English was the context of my first 10 years of teaching. However, for the last 8 years I have taught Computer Science, which has provided a reductive but useful perspective on learning. Computer Science is a discipline defined by problem-solving. Humans tend to think of computers as unknowable complexities of digital magic, but they aren't. Computers are just logic machines. A computer is just a box of very many, very fast switches. A computer can only understand 1s and 0s, the two positions of a switch: On or Off, True or False. Computers are only complicated. Humans are complex.
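You can see the switches for yourself in any programming language. Here is a quick Python illustration (my own toy example, not anything specific to a particular machine):

```python
# Every value in a computer reduces to switch positions: 1s and 0s.
n = 42
print(bin(n))                 # '0b101010': the six switches behind the number 42
print(True and False)         # logic combines switch positions: prints False
print(int(True), int(False))  # True and False are just 1 and 0
```

Everything else a computer does, no matter how impressive, is built out of these on/off positions combined by logic.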

Complicated is the domain of machines. Complications are solved by formulas: the volume of your home, the distance between you and Mars, your financial health. Humans are not particularly good with complications. Computers make solving complicated problems possible, easy even. Humans think this is great.

Complexity is the ooze of life. Should I buy this beer if I can’t afford rent? Should I buy this yacht just because I can? Should I work harder? Do I deserve to be happy? Am I content? What is love? Why is that so beautiful?

When humans try to solve problems we have to account for the complex, chimeric, non-binary nature of being alive. We can't escape the gray areas. Regardless of the childish political extremes we sustain, we live in the gray area between True and False. Computers, however, provide a reductive environment with unambiguous rules, where logic can be simulated, inputs can be tested, and effect can be definitively determined from cause. This makes computers particularly interesting learning tools: a way for students to explore and create where the feedback is reliable and predictable.

Students often personify their computer saying things like, “It keeps getting it wrong.” And I respond, “You just haven’t yet made it do what you want it to do.” A computer is more accurately compared to a hammer than to a brain. It is a tool designed to drive a nail more than it is an autonomous entity designed to exist independently. A computer does not have intelligence, or at least it didn’t until humans changed the definition of intelligence to include what computers can do. Humans have intelligence. Humans author, build, own, design, manipulate, control, deploy, and benefit from computers. Whatever artificial intelligences we claim to create are always only a set of instructions written by humans that manipulate complicated but not complex machines, machines created by humans to generate deterministic outputs that humans desire from inputs humans provide.

I sent ChatGPT, a computer program aspirationally referred to as Artificial Intelligence, a question about the future of high school education. ChatGPT, an Artificial Intelligence, returned a response describing education as Artificial Experience. ChatGPT, an Artificial Intelligence, calculated that education will evolve into Artificial Experience.

“Virtual and augmented reality technology will also become more prevalent, allowing students to experience immersive learning environments that bring lessons to life.” — ChatGPT

ChatGPT is ingenious code written by inventive and insightful humans who became inventive and insightful without the help of ChatGPT. ChatGPT is ingenious code that scours massive stores of text in search of patterns: groups of words that appear frequently in the context of other groups of words. For example, "gear" often appears in the context of "machine" and "joy" often appears in the context of "emotion". ChatGPT then uses those patterns as a sort of template to generate responses that resonate with those patterns. Again, computers are not magic; they are just powerful. They are made by humans to serve humans by dealing with complications. To understand what ChatGPT does as intelligent requires us to first reduce intelligence into something that machines can do. Computers have not evolved to approximate human intelligence. Humans have redefined intelligence to include computers.
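The pattern idea can be sketched, crudely, in a few lines of Python. This is a toy bigram counter I wrote for illustration, nothing like ChatGPT's actual scale or architecture, but the same family of trick: count which words follow which, then generate by echoing those counts.

```python
import random
from collections import defaultdict

# A tiny corpus. Real systems use massive stores of text; the principle is the same.
corpus = "the gear turns the machine and the machine brings joy".split()

# Record, for each word, every word that was observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate text by repeatedly sampling a word that followed the current one.
random.seed(0)
word = "the"
output = [word]
for _ in range(5):
    word = random.choice(follows.get(word, corpus))  # fall back to any word at a dead end
    output.append(word)
print(" ".join(output))
```

There is no understanding anywhere in this loop, only counting and sampling. Scaling the counting up by many orders of magnitude does not change what kind of thing it is.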

"A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain. It creates an adaptive system that computers use to learn from their mistakes and improve continuously. Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy." — Amazon Web Services marketing explainer
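The "learn from their mistakes" loop in that description can be shown in miniature. This toy Python sketch (one weight, my own invention, not anything from AWS) nudges a single number after each error until it captures the rule y = 2x:

```python
# A toy "learn from mistakes" loop: one weight, nudged after each error,
# slowly converging on the rule y = 2x. Real neural networks do this with
# millions of weights, but the mechanism is the same arithmetic.
data = [(1, 2), (2, 4), (3, 6)]  # examples of the rule y = 2x
w = 0.0                          # the single "neuron's" weight
lr = 0.05                        # learning rate: how big each nudge is
for _ in range(200):
    for x, y in data:
        error = w * x - y        # how wrong the current weight is
        w -= lr * error * x      # nudge the weight against the error
print(round(w, 3))               # prints 2.0
```

Nothing here resembles insight. It is repeated arithmetic that reduces a number called "error", which is exactly why calling it learning required redefining the word.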

We began with an attempt to understand the human brain using computer simulation, but the moment we realized that the calculation speed of a computer could turn these simulations into products, we chose to market machines as independently intelligent; we chose to forget that machines are a limited but useful analog for the human brain and instead to sell these remarkable calculators as emergent super-humans.

We are eager to ask “Is this computer a human?” but reluctant to ask, “What does it mean to be human?”

Artificial Intelligence is not an evolution of humanity, it is an invention of humanity. Artificial intelligence was intended, designed, built, owned, and sold by humans. These humans are now asking us to give up on understanding the complex condition of being human in favor of living a simplified analogy for life in a reductive digital simulation. We are being asked to ignore the spaces between the extremes and view reality in the radically simplified terms that machines can understand. Yes or no, true or false, right or wrong. Those who own the Artificial Intelligence machines are asking us to live reductive binary lives so that we can be algorithmically manipulated and economically exploited for their benefit.

We are being asked to teach children to contort themselves into the shape of machines because only then can machines be human and once machines are human those who own and control the machines own and control the future. It’s a very old story.

ChatGPT, in a very literal sense, is meaningless, without meaning. It is at best a Stochastic Parrot, a random text generator that responds to input with text that resonates with patterns it found in its massive database of text generated over time by a planet full of Internet users. Artificial Intelligence can only look backwards through troves of Internet content. Artificial Intelligence is to meaning as a hammer is to a building.

Meaning is what the human mind does when it assimilates layers of experiences: layers of dirt in an exposed cliff left behind by millennia; rings in a Redwood tree telling the story of abundance and drought; wallpaper and paint and wallpaper and paint and wallpaper and paint recording the stories of lives lived in a home. Humans can see life in the interstitial spaces in time. We can feel life, even when we try to choose not to. We are haunted by living. A machine can only record that an event occurred. A machine can record a near infinite number of events; dots on a timeline, separate, distinct, meaningless as a whole. Meaning is accreted by humans.

This does not mean that Artificial Intelligence is useless. The patterns that machines can uncover have been used to find unknown sources of cancer and will hopefully be used in the future to untangle massively complex problems human minds need help to see clearly so that we can build understanding, so that we can develop insight, so that we can make meaning. However, there are reasons we have not deployed Artificial Intelligence to, for example, limit the negative externalities of Capitalism and those reasons are the same as why we are currently, disingenuously, deploying Artificial Intelligence to maximize profit. Artificial Intelligence is owned and its purpose is intended by its owners.

I took these many lines of text to denigrate Artificial Intelligence as a way to introduce Artificial Experience as the latest version of non-liberatory education, like what John Dewey derided as Traditional Education and Paulo Freire ridiculed as Banking Education. At its very best, Artificial Experience is an immersive but limited imitation of life, a blue pill, an Education roofie. This is antithetical to experiencing real life; this is the opposite of experience as education, the opposite of experience in education. ChatGPT's response to "What is the future of education?" is not an imagining of a generative future. It is a projection of patterns from the past.

The world's first K12 computer science educator was Seymour Papert. Probably this isn't true. I am sure a teacher somewhere had a hobby, a passion, that they shared with their students before Seymour shared his. However, in 1970, before the Internet, before cell phones, before personal computers, Seymour Papert and Marvin Minsky started the MIT Artificial Intelligence Lab. There is a rich confluence of thought between artificial intelligence and theories of learning, and it was Seymour's goal to use computers to understand the nature of learning and intelligence. As computers became more powerful, simulations of learning became more nuanced, to the point where these simulations were providing valuable data that humans could use to build new insights into how learning happens. However, these positive developments were not exploited to help humanity. We got distracted.

Speaking of his work at the MIT Artificial Intelligence lab, Papert said:

“We started with a big ‘cosmic question’: Can we make a machine to rival human intelligence? Can we make a machine so we can understand intelligence in general? But AI [Artificial Intelligence] was a victim of its own worldly success. People discovered you could make computer programs so robots could assemble cars. Robots could do accounting!” — Seymour Papert, 2002

Seymour lamented the loss of the big idea, giving up on the understanding of what intelligence is for the smaller but profitable idea of process automation. What this meant for education is that when computers were brought into the classroom they were used to automate teaching, to hammer students as nails as opposed to facilitating experience, inquiry and problem solving as Papert dreamed it would be.

“Computer scientists weren’t supposed to bring computers into classrooms. They were supposed to bring computer science to children in classrooms.” — Seymour Papert

Beginning around 2010 there was a big push to include computer science in education. Nonprofits worked to expose young people to the creative adrenaline of hackathons, where young people would come up with an idea for a tool that could positively impact their world and then stay up all night with a group of peers to see if they could build it. Corporations, experiencing difficulties finding employees, lent a hand, and a new Advanced Placement course was created: AP Computer Science Principles (CSP). CSP is a unique course. It has no prerequisites. Lots of free, high-quality curriculum was made available when universities like UC Berkeley, Harvard, and Carnegie Mellon and corporations like Facebook, Google, and Amazon assigned time, energy, and money to creating it. I have been teaching this course since it began in the 2016/17 school year.

Initially, CSP had a unique and valuable focus on student choice and reflection. CSP always included a traditional Advanced Placement test, where students sit in a room and fill in bubbles with a number 2 pencil, but CSP was also one of very few AP tests that included "projects". There were two of them. The first was a reflection on the societal impacts of technology innovation, and the second was a computer program that students wrote over several days, which had to include specific algorithms and required student reflection on how their work met those requirements. The societal impacts project was eliminated by the third year of the class. The second project still exists, but because students were using ChatGPT to do their reflection, the reflection is now a timed writing exercise that happens during the bubble-in part of the test. I'm unclear on the politics of these changes, but they are consistent with Paulo Freire's description of Banking Education, where specific facts are valued and the regurgitation of those facts by students is accepted as sufficient evidence of learning worthy of reward.

As the initial energy around teaching kids to write code fades from hype to hopefully something lasting and valuable, there is a new push for things-that-kids-need-to-learn — Artificial Intelligence. Tech startup companies are building narcotic tools and pedantic curriculum designed specifically to “save time” by, in my opinion, eliminating learning. Write your lesson plan in a fraction of the time. Analyze a chapter and finish your essay in minutes, not days.

Wall-E, the scariest dystopia ever

There is a cost to allowing machines to do our thinking for us. In this case, the cost is the loss of intuition and insight. When we work through a problem ourselves we don’t just come up with an answer, we build something like cognitive reflexes. I grew up in Sacramento, CA. I spent every summer in the swimming pool. I swam competitively, I worked as a lifeguard and I still have a deeply embedded muscle memory that takes over when I’m in the water. I don’t have to think about how to swim in the same way I don’t have to think about how to walk. This is true only because I learned to swim. Exploring an idea many times from many angles creates cognitive reflexes like muscle memory. We use these cognitive reflexes as intuition that helps us spiral deeper, helps us know which thread to pull, which idea to follow. (Jerome Bruner)

As we build intuition in one domain it helps us to see echoes of patterns in new domains, echoes of expertise built through experience. These patterns are maps to an instinctual source of knowledge, insight across domains (Robin Hogarth, Educating Intuition). This is really abstract, but that is exactly the point. The ability to think abstractly, to see patterns across disparate domains, is a difficult cognitive skill that is hard won by doing our own, human, exploration. When we explore and get stuck in intellectual cul-de-sacs we are required to take serendipitous leaps of intuition, to build understandings and create potential for insights into knowing and understanding life.

I am sure there is a deeper analogy here, a pattern I can't quite see. Something about what it actually means to be human, something about embracing the imperfect, the inefficient, something about beauty in noise, listening through the scratches on the album in an effort to be with the musician, saying hello to a stranger on the bus, loving Cheez Whiz, learning from pain, swimming for as long as I can along the bottom of a lake, cleaning a toilet that isn't mine, wondering why I'm sad, if it is OK to be happy, living my life out loud, feeling lonely and awkward and wondering if it is OK to be me.
