Skill Development in the Age of Generative AI

Generative AI tools can be powerful assistants in skill development. To get the most out of them, we have to learn how we learn.

Philip Grabenhorst
6 min read · Apr 16, 2023

“You only have to practice on the days that you eat.” — Shinichi Suzuki

I’m now coming out of the longest dry spell in my musical life. For almost a year, I haven’t performed or taught, nor been even vaguely attentive to my instrument. I rationalize it by saying that I’ve been able to deconstruct my technique and come back with more maturity and less tension in my playing. But it’s been rough. Nothing makes you appreciate the ephemeral nature of skilled performance like music. (Thankfully, it comes back fast. Try looking up Heifetz’s retirement routine.) Likewise, I can’t think of a discipline where daily practice has so profound an impact on your ability to do your job.

An Apple a Day

In any discipline, people might tell us to “do a little bit every day” … as beginners. When I was teaching, I certainly told my students that. (At the time, I was practicing every day too; 12-year-olds have a sixth sense for hypocrisy.) Duolingo tracks our language-learning streaks to coach us into an “every day” routine. When we first start out coding, most people will say the same: “do a little bit every day.” There are plenty of programs out there that are designed around this model, such as Codecademy. Reputedly, Benjamin Franklin practiced writing every day. One of the more ludicrous variations I’ve come across is this one, by James Altucher, who generates ten random ideas every day. Let’s be honest, that’s a weird skill. I admire the dedication. But what about people who aren’t beginners?

Most musicians I’ve spoken with talk about “maintenance” practice. You don’t have to do as much as you do when you’re growing your skills, but you still have to do something. No violinist would come to a performance without running through their piece the day before, and the day before that… And while we might imagine less physical disciplines to be exempt, I suspect that isn’t the case. I’ve talked a lot with software engineers who have moved away from the keyboard to other areas of the industry, such as managerial or marketing work, and they say the same thing: when they come back to the keyboard, they have a period of “warming up” where they feel sluggish and have a difficult time of it. This is separate from learning new skills (which we should always be doing); it’s the reactivation of a skill we’ve already learned. We need daily practice both to grow and to maintain our skills.

What is all of this work actually doing? That depends on the efficiency and content of our practice: whether we’re providing sufficient challenges for ourselves, for instance. It also takes a long time to pay off. Generally, though, this daily practice helps us to “chunk” certain memories together, freeing up our working memory for more complex tasks. In the book The ABCs of How We Learn, the authors explain the power of this symbolic memory manipulation, stating that “people can begin to work at the level of the chunk, which frees up working memory for considering other relations and alternatives.” The example they give is of a chef, but we can imagine alternatives in any discipline. A violinist can reason at the level of the phrase, instead of worrying about individual notes. A language learner frees themself from the worries of grammar and focuses on the ideas they want to express. A computer programmer quickly stops thinking about the differences between for and while loops and looks at the larger architecture. Whatever the discipline, we create these chunks when we build skills, and we maintain our ability to quickly surface them when we maintain our skills.
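To make the programming example concrete, here is a minimal sketch (in Python, chosen only for illustration) of one task written at three levels of abstraction. Once the looping pattern itself becomes a chunk, attention shifts from the mechanics of iteration to the intent:

```python
# The same task, summing the squares of the even numbers in a list,
# expressed at three levels of abstraction.

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Level 1: explicit while loop. Every mechanical detail is visible.
total = 0
i = 0
while i < len(numbers):
    if numbers[i] % 2 == 0:
        total += numbers[i] ** 2
    i += 1

# Level 2: a for loop chunks away the index bookkeeping.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n ** 2

# Level 3: the whole pattern is a single chunk; we simply state intent.
total = sum(n ** 2 for n in numbers if n % 2 == 0)
```

All three compute the same value; what changes is how much of our working memory the mechanics consume.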

Degeneration

What if we stopped? Back when GPT-3 was first released, Katelyn Donnelly asked this question in a piece entitled Avoiding the Curse of Deskilling. Conceptually, it says the same thing as other pieces addressing the eschatology of knowledge work in the age of AI: there are things we trust humans to do that we do not trust machines to do. Given how deeply hardwired we are to trust people and things that are similar to us, this may never change. Josh Comeau recently wrote about this topic, focusing on new engineers who are getting into the space. More on the nose, Lyndon Cerejo, writing about UX, asserted that empathy is our most important skill, no matter how advanced our tools become.

But Donnelly’s treatment of the topic hints at some fundamental problems of automation and skill development. She cites the example of an airline crash. The pilots had let their daily practice lapse, to the point that they only really “practiced” for about four hours in a year. When the automated system could no longer be trusted and they were asked to take over, their performance faltered. Even I spent more time than that on my violin last year, and you wouldn’t catch me dead on a performance stage.

When we talk about AI tools and their most potent contributions, we’re usually talking about generative AI tools. We can, right now, use GPT derivatives to generate huge quantities of code. The idea is that most of this code is repetitive anyway, and we’re alleviating the “drudgery” of having to work through it ourselves.

But what if we allow too much of the drudgery to be taken out of our work? What might happen to our work as software engineers if we allow ourselves to turn into the “4-hour-per-year” pilot? GitHub Copilot and other generative tools are aiming to make this possible, but is it even desirable? If we are expected to make informed decisions at the level of architectural chunks, how can we do so if we ignore the abstracted, fundamental pieces from which those chunks are built?

We learn by generating. This is the central insight of the Constructivist school of thought in Learning Theory, which we talk about a lot at my day job. If we automate the process of generation, then we are not learning. This isn’t a moralistic appraisal: generative AI isn’t bad. It just doesn’t make us more skilled or help us to maintain the skills we’ve already developed.

If our goal is to be good problem solvers and to reason effectively about complex systems, then we have to do a lot of that on a daily basis. I think that GPT and other generative tools can be incredibly empowering here. In his article, Josh Comeau recommends an adversarial approach. We might use LLMs to sharpen our skills by prompting them to produce several alternative solutions to a problem for us to analyze. We’re already seeing this use case in traditional educational settings. What if our generative tools took the form of a red team or a chaos monkey, probing and poking holes in our work? Approaches like these shorten the feedback cycles necessary for learning but don’t take us out of the loop.
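As one sketch of what that might look like in practice, assuming the OpenAI Python SDK (the model name, prompts, and problem statement below are placeholders of my own, not anything Comeau prescribes):

```python
# A minimal sketch of the "alternatives" workflow: ask a model for several
# distinct solutions, then do the comparison and critique ourselves.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompts here are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PROBLEM = "Write a function that merges two sorted lists into one sorted list."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Produce three distinct solutions to the problem, "
                    "each with different trade-offs. Do not say which is best."},
        {"role": "user", "content": PROBLEM},
    ],
)

# The learning happens here: we read, critique, and rank the alternatives
# ourselves instead of pasting one in.
print(response.choices[0].message.content)
```

The important design choice is that the model’s output is raw material for analysis, not a finished answer; the judgment stays with us.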

I’m intensely curious to see what these tools wind up looking like. Perhaps our Knowledge Management Systems can play a role, where we can use generative tools to automatically apply only the “chunks” of code that we fully understand. We could even use them to devise schedules for skill maintenance, where novel challenges are generated using structures that we’ve already internalized. Exciting, isn’t it?
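To gesture at that last idea, here is a toy sketch, with the caveat that the chunk inventory, the doubling-interval rule, and the prompt format are all invented for illustration: a spaced-repetition-style scheduler that asks a generative tool for novel challenges built only from structures we’ve already internalized.

```python
# A toy skill-maintenance scheduler. The chunk inventory, the practice
# intervals, and the prompt format are all invented for illustration.
from datetime import date, timedelta

# chunk name -> (date last practiced, current interval in days)
chunks = {
    "binary search": (date(2023, 4, 1), 4),
    "recursion over trees": (date(2023, 4, 10), 2),
    "SQL joins": (date(2023, 3, 20), 8),
}

def due_today(today):
    """Return the chunks whose practice interval has elapsed."""
    return [name for name, (last, interval) in chunks.items()
            if last + timedelta(days=interval) <= today]

def maintenance_prompt(chunk):
    """Build a prompt asking a generative tool for a novel challenge
    that exercises a structure we have already internalized."""
    return ("Generate a small, novel programming exercise whose solution "
            "requires " + chunk + ". Provide only the exercise, not the solution.")

for chunk in due_today(date.today()):
    print(maintenance_prompt(chunk))
```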

The Real Goal

To reach our personal, professional, and societal goals, we have to learn how we learn. Even if we get to the point where we trust our machines to craft entire systems of systems without human intervention, we’ll still have to spend our time doing something. Therefore, the problem here is just as much one of individual aspirations as it is of human/machine competition. If we decide that we want to be a certain kind of person (or to help solve a certain type of problem), deliberate, daily practice is one of the most important tools for getting there. Generative AI will either help or harm us as we pursue our goals. At the end of the day, though, the choice is ours.
