Bits and Behavior
A Commodore 64 BASIC program.

Programming evolves, privilege reigns

The first computer program I wrote was in 4th grade in 1989. My teacher, Mr. Strictland, had set up a row of twelve computers in the hallway, and my class of twenty-four students lined up, walked single file out of our classroom, and sat down in pairs in front of the computers. I’m pretty sure they were Commodore 64s: I remember chunky beige keyboards, an ugly colored CRT, and big, blocky, all-caps letters. Every computer had the same thing on the screen: a big black rectangle on the left with a blinking white rectangle, and another empty black rectangle on the right.

I sat down with a friend, and next to the keyboard was a piece of paper with some instructions. It told us to use the keyboard to enter the text on the paper worksheet, exactly as it was listed. We pecked one symbol at a time, verifying our transcription as we went. It was an incomprehensibly boring task: P U T 1 2 , 6 <ENTER>, P U T 1 3 , 6 <ENTER>. Every keystroke was reflected on the screen, until after about 15 minutes we had a huge column of text: the same word, “PUT”, over and over, each with a different pair of numbers. I wanted to make sure we’d entered everything correctly, so I read out each line and had my partner check it against the worksheet.

The last instruction on the worksheet said to press a special key (I think it was a function key) and then see what appeared. We pressed it and a white duck appeared in the black rectangle. A duck! We’d thought we were doing some strange math problem, but we’d been drawing a duck the entire time! This shocking revelation quickly spread throughout the class, with everyone behind on transcription racing ahead to make their own duck, and with those of us who had our duck tinkering with the numbers we’d entered, slowly transforming our ducks into incoherent pixel art.
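In hindsight, the worksheet amounted to a tiny interpreter: each line lit a single pixel at a coordinate, and the picture only emerged once all the instructions ran. Here is a minimal sketch of that idea in Python rather than Commodore BASIC; the PUT command name, the grid size, and the coordinates are my loose reconstruction, not the actual program:

```python
# A loose, hypothetical reconstruction of the worksheet exercise:
# each "PUT x,y" line lights one cell in a character grid.
WIDTH, HEIGHT = 20, 10

def run(program: str) -> str:
    """Interpret PUT x,y lines into an ASCII 'screen'."""
    screen = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
    for line in program.strip().splitlines():
        cmd, args = line.split(maxsplit=1)
        if cmd.upper() == "PUT":
            x, y = (int(n) for n in args.split(","))
            screen[y][x] = "#"  # light the pixel at (x, y)
    return "\n".join("".join(row) for row in screen)

# A few of the (invented) coordinates, entered one line at a time.
program = """
PUT 12,6
PUT 13,6
PUT 12,5
"""
print(run(program))
```

The point of the exercise, of course, was the delayed reveal: nothing about any single PUT line suggests a duck.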

That first encounter with code was fun, but hardly a robust learning activity. What stuck, though, was a simple idea: if I gave carefully written instructions to a computer, it could execute them later. I brought that powerful idea with me to every new encounter with code. The first time I saw a TI-82 graphing calculator program, I had some sense that the instructions determined what the calculator printed to the screen. The first time I wrote a BASIC program in DOS, I understood that the statements I wrote determined the behavior of the games and animations I made. The first time I wrote a Pascal program in Turbo Pascal, I understood that the programming language I was using could encode a variety of behaviors, even invisible ones, as in the complex geometric calculations I was computing to enable my 3D interactive art. And the first time I wrote a Java program in college, I understood that the virtual machine that ran it was just a more elaborate version of that duck drawing I’d made nine years earlier. When I started learning web development in grad school, a browser wasn’t hard to grok: it was just another platform for putting pixels.

While the idea was the same, other aspects of programming were not. Running that Commodore program didn’t involve anything more than a floppy disk and a keyboard. Programming those pictures couldn’t have been simpler. But my graphing calculator had more than just put-pixel and change-color commands; it had hundreds of commands and a whole manual. The QBasic IDE in DOS, while pre-installed, had dozens of menus and countless mystery features with no documentation. And Turbo Pascal had hundreds of menus full of cryptic concepts, which I only decoded after finding some used technical books at Powell’s technical bookstore in downtown Portland. By the time I reached college, running a basic Java program required installers, IDEs, configuration files, JavaDocs, and a whole desktop computer. Web development in the early 2000s required libraries, frameworks, APIs, standards documentation, and multiple browsers, but that didn’t seem like much more than Java.

Because my formative learning happened amidst this escalating complexity, I almost didn’t notice how much harder everything was to learn. I was learning new technologies as people were creating them, so busy moving forward that I didn’t realize I was just barely outrunning a wave. It wasn’t until grad school, when I started studying how people learn to code, that I realized just how complex programming had become. I worked for years in research to invent tools and languages that might mitigate some of that complexity. But eventually it felt futile: no matter how helpful IDEs were or how useful debugging tools became, trying to understand the ever-rising complexity felt like turning around and picking a futile fight with that wave. Millions of developers were hard at work ensuring it would be bigger, faster, and more powerful than ever, crushing anyone who wasn’t surfing atop it.

Several years after starting as a professor, I gave up trying to stay ahead. Everything seemed hopelessly complex. There was no way I could learn fast enough to make anything meaningful with modern programming platforms. I grieved this loss, accepting that I would mostly have to make things vicariously through my students. There just wasn’t time to learn enough to make it viable. And there was no sign that the simplicity of 1980s languages and platforms would ever return.

When I took leave to co-found AnswerDash, I had to catch up. I had to quickly move past my ancient knowledge of PHP scripts and jQuery to more modern front-end frameworks like React and backend platforms like Express, Flask, and Amazon Web Services. My time away from programming regularly had revealed more than a wave: programming platforms were a tsunami, with millions of packages, new frameworks every three months, a thousand services, API keys, billing accounts, and push notifications to my smartphone in case it all stopped working. Somehow, in twenty years, the world had gone from a keyboard and a text box to an intricate, throbbing, incomprehensible ecosystem of black boxes, build scripts, transpilers, and function calls.

While all of this was happening, some sought to recreate the simplicity of the 1980s, manifesting Papert and Resnick’s mantra of low floors, wide walls, and high ceilings. Hundreds of little languages like Alice, Scratch, AgentSheets, and Processing sprang up, offering simpler sandboxes amidst the layers upon layers of source code cruft. Programming language designers did the same, creating languages like Rust and Racket that tried to combine the power of modern platforms with the simplicity of earlier languages. These made it possible for some learning to happen, even as the complexity underneath these systems leaked out as cryptic error messages, and authentic platforms like smartphone apps begged for learners’ attention. But even these environments couldn’t resist complexity. The need to interface with a complex world led to new features, extensions, integrations, and more, like little mini waves of complexity recreating the history they were trying to escape.

When I think about the past 30 years of change in programming, I wonder what the seeming inevitability of complexity means for who codes. As the “floors” and “ceilings” get ever higher, and the walls ever wider, who do we exclude? Or put another way, who could possibly be included, other than people like myself, who have spent decades living through this rising complexity, with some sense of how it all works?

I’m afraid the answer is those with privilege. The privilege to have time to learn the ever-increasing complexities of platforms. The privilege to attend schools and colleges with teachers who can teach them. The privilege of internet access, with its helpful YouTube videos that give some chance of grasping the latest API. The privilege of knowing English, to access the documentation that seems ever more essential to building even the simplest of things. The privilege of stability, which frees youth to even think about playing with code, as opposed to being consumed by the endless weight and responsibility of poverty. The privilege of having parents and family with programming knowledge that they can pass on, both directly and through encouragement.

This doesn’t leave many people. And I’m afraid that as we make ever more complex systems, this group will shrink ever smaller. And that the only institutions capable of resisting this rising entropy (the public schools that offer some hope of equal access, and the programming system designers that create the platforms we have to use) have never had less reason to talk to each other.

I like programming. I like making programming tools. I like public education. I like diversity, I like equity, I like inclusion. And, since that day in 4th grade, I like seeing code as something to play with. Maybe I’ll spend my 2022–23 sabbatical jumping back into the wave, to create something that is not only simpler than anything we’ve created, but more radically inclusive than people imagined programming could be.




This is the blog for the Code & Cognition lab, directed by professor Amy J. Ko, Ph.D. at the University of Washington. Here we reflect on our individual and collective struggle to understand computing and harness it for justice. See our work at

Amy J. Ko

Professor of programming + learning + design + justice at the University of Washington Information School. Trans; she/her. #BlackLivesMatter.
