Myth: Experience Can’t Be Taught

How I concluded that hiring developers based on ‘experience’ is just plain lazy, and my exploration of an alternative approach.

Hywel Carver
Skiller Whale
8 min read · Apr 12, 2021


Here’s a situation I keep hitting: I write a job advert for a developer, and there’s a box for how many years of experience I require. Can I really specify the gap I’m filling with a number of years?

The answer, I have concluded, is no. This article is about why ‘experience’ is a shitty way to assess skills, and how being a CTO forced me to think about it differently and come up with an alternative.

What we mean when we ‘hire for experience’

First, let’s be clear. No one is good at their job just because they’ve done it for a certain length of time. When we hire for experience, we’re using it as a proxy for our expectations of the skills they will bring. Specifically:

  1. Improved quality of individual contribution — code that is better, faster, and cheaper to maintain, meaning:
    - Less tech debt
    - Faster shipping
    - Fewer bugs
  2. Improvement in the rest of the team, who can learn better coding practice from the more experienced developer.
  3. Avoiding pitfalls and bear-traps. Someone who’s experienced a problem before will help us avoid that problem in future.

We use experience, then, as a proxy for skills (e.g. faster coding) and awareness (e.g. bear traps). We assume that if you’ve been working with a technology for X number of years, you’ll be Y amount good and will have encountered Z number of traps.

That’s because we assume that, over time, you’ll likely be exposed to a range of diverse challenges in different situations, which you’ll need to figure out how to solve — thus learning from experience.

But experience ≠ skills

Experience is not repeatable. Sometimes experience creates skills that are repeatable, but not reliably, and not necessarily in a way that could apply to a different context, which may require a different solution.

Solving a problem does not uniquely define a skill. One person encountering Problem X may solve it by pasting a fix they found on Stack Overflow (without actually understanding it); another may read the documentation and then come up with a few solutions, before selecting the best one for their context.

Hiring for experience also sets an expectation that time (and the ‘organic’, haphazard, error-driven learning that comes with it) is the way to progress in your career.

By seeing experience as a proxy for skills, we are conflating the experience of encountering a problem with the capability to arrive at appropriate solutions for other (possibly similar) problems. To put it bluntly: just because you addressed a problem, doesn’t mean you did it well, or that you’d be able to solve a similar problem in a different environment.

We’ve all encountered people with the same amount of experience but very different qualities and capabilities. Time is, at best, a poor filter for applicants. But it’s an awful measure of skills, and reflecting on past me, I think I only used it because it was easy. I’m now convinced that hiring based on years of experience is about as daft as hiring for a ‘10x developer’.

I’m certainly not the first person to say this. In fact, a few years ago Forbes addressed the experience fallacy in tech. Hiring for potential or ‘smarts’ has long been touted as a harder-but-better option, and it has been adopted by companies like Trello, Stack Overflow, Glitch and HASH. Despite this, most job advert sites require this field in order to advertise your job to candidates, so many of us are still forced to use it 🤦‍♀️.

Measuring potential is great if you’ve got plenty of time. But if you’re a time-poor start-up CTO (which I have been for much of my career!), you need something in between measuring potential and measuring years of experience: you need to be able to measure existing skills, and know how to quickly develop them.

The solution to this problem was what eventually convinced me to drop everything and found Skiller Whale.

Dev learning is like riding a bike

Understanding how skills are acquired will help to release you from the prison of ‘X years’ experience’. Bloom’s Taxonomy (a pyramid of learning outcomes that runs from remembering at the base, through understanding, applying, analysing and evaluating, up to creating at the top) explains some phenomena you’ll have seen before.

  • Ever watched a video, but then got stuck when you tried to replicate what you were shown?
  • Or told your team how you want them to work, only to watch them immediately revert to type?
  • Or hacked on a side-project and suddenly grokked something you never had before?

All Bloom.

The top of this pyramid reflects the kinds of learning most of us would consider “wisdom” or “skill”. But if years of experience don’t reliably push you to the top of the pyramid, what does?

Rather usefully, the ICAP framework explains how to get learning results higher up on Bloom’s Taxonomy. It argues that the level of engagement in learning is directly correlated with how much the learning sinks in, which has implications for which styles of learning produce the deepest understanding. The ICAP framework defines four categories of learning (from ‘best’ to ‘worst’) as follows:

ICAP = Interactive > Constructive > Active > Passive

A summary of the ICAP framework:

  • Interactive: dialoguing with someone else and building on their contributions (e.g. debating a design, pairing on a problem)
  • Constructive: generating output that goes beyond the material presented (e.g. self-explaining a concept, writing new code)
  • Active: manipulating the material without adding to it (e.g. taking verbatim notes, copying code from a video)
  • Passive: receiving information without doing anything with it (e.g. watching a lecture)

The most common formal learning used by developers (watching video lectures) and the most common informal solo learning (reading Stack Overflow / Googling) are primarily passive, so according to Bloom, you’ll recall and possibly understand the topic. However, you’d be very unlikely to apply it or create something new with it. So, in real terms, pretty useless.

To get learners to the top of the pyramid, where they not only grok a topic but can analyse it and apply it to new contexts, learning needs to:

  1. Introduce a problem
  2. Provide the knowledge to fix it
  3. Allow the learner to apply the knowledge in a new context
  4. Give the learner a safe, interactive environment to get stuck and ask questions
  5. Never ever provide cheat codes

The ICAP framework — in particular the emphasis on how engaged a learner is with what they’re learning — is relatively intuitive: Think about your dullest class at school. Do you remember what was taught? Do you still understand it? Could you apply it today? Now think about when you first sat on a bike. Do you remember and understand what you did? Could you apply it today? If you were cycling, would you be self-aware enough to improve your technique? Of course you would!

Can you actually shortcut experience?

Hypothesis

If I could intensively expose a mid-level dev to a series of well-selected problems, with the tools and support to arrive at solutions, then I could get them to ‘senior’ level (skill-wise) significantly faster than they would naturally get there. For this experiment to be interesting, I don’t mean a saving of months, I mean a saving of years.

Method

In order to be that efficient, I would need to know what the ‘missing’ skills were. This meant getting a clear picture of where the person was — rather like a Training Needs Analysis, but on an individual basis. I broke a technology down into 50–100 individual topics, and asked questions on each one to probe how far developers understood, and could apply, the discrete concepts and features of a programming language. Question types ranged from self-reported comfort levels to code exercises testing understanding and decision-making. Where possible, I asked people to write code, but it would be a bit unreasonable to arrive at question 43 and be asked to draw a UML diagram with 50 classes demonstrating a modular application architecture. “How confident are you at architecting large, modular application structures?” is an imperfect, but more realistic, question in that situation.

Once each person in a team had taken the assessment, I could visualise their abilities in every area using a heatmap with a cell for each individual topic (darker blue = higher score).
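To make that concrete, here’s a minimal sketch of how such a heatmap could be produced. The developers, topics, and scores below are all invented for illustration, and matplotlib is just one way to render it; this isn’t the actual assessment tooling.

```python
# Illustrative sketch: per-topic skill scores rendered as a heatmap.
# All names and numbers here are invented, not real assessment data.
import matplotlib.pyplot as plt
import numpy as np

topics = ["Closures", "Promises", "Generators", "Modules", "Testing"]
developers = ["Dev A", "Dev B", "Dev C"]

# Assessment scores in [0, 1]: one row per developer, one column per topic.
scores = np.array([
    [0.9, 0.7, 0.3, 0.8, 0.5],
    [0.4, 0.6, 0.2, 0.5, 0.9],
    [0.7, 0.9, 0.6, 0.6, 0.4],
])

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="Blues", vmin=0, vmax=1)  # darker blue = higher score
ax.set_xticks(range(len(topics)), labels=topics, rotation=45, ha="right")
ax.set_yticks(range(len(developers)), labels=developers)
fig.colorbar(im, label="Assessment score")
plt.tight_layout()
plt.show()
```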

As a CTO, a big part of my job was helping my board and exec team to understand what I was doing, so it was important that these results were also meaningful to non-technical people. I did this by expressing the individual topics in terms of 8 broader areas of competence.
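Again as an illustrative sketch only (the topic-to-area mapping and scores are made up, and a real taxonomy has 50–100 topics), the roll-up is essentially a grouped average:

```python
# Illustrative sketch: roll fine-grained topic scores up into broader
# competence areas by averaging. The mapping and scores are invented.
from statistics import mean

topic_to_area = {
    "Closures": "Language fundamentals",
    "Generators": "Language fundamentals",
    "Promises": "Asynchronous code",
    "Modules": "Application architecture",
    "Testing": "Quality practices",
}

# One developer's per-topic assessment scores (made up).
dev_scores = {
    "Closures": 0.9,
    "Promises": 0.7,
    "Generators": 0.3,
    "Modules": 0.8,
    "Testing": 0.5,
}

area_scores: dict[str, list[float]] = {}
for topic, score in dev_scores.items():
    area_scores.setdefault(topic_to_area[topic], []).append(score)

for area, topic_scores in sorted(area_scores.items()):
    print(f"{area}: {mean(topic_scores):.2f}")
```

A simple average per area is enough to give non-technical stakeholders a readable summary without losing the underlying topic-level detail.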

Now that I knew which skills to address, I needed to test whether I could actually fast-track that skill acquisition in a way that mirrors high-quality experience.

Method part 2: training (but in a good way)

When I hear the word ‘training’, the word “effective” is far from my mind. I wanted to create a new type of training that was about pushing beyond recall to change what someone is capable of (getting right to the top of Bloom’s Taxonomy).

And, informed by one of the most painful struggles any CTO trying to develop their team faces, I wanted the training to have close to zero productivity impact. Sending the team off for a week just isn’t realistic for most companies, but 1 hour every 2 weeks is.

Results

I figured the best way to evaluate whether my experiment worked was to ask the manager of a team to rate a person’s skills in a given technology as ‘junior’, ‘mid-level’, or ‘senior’ before and after training. In the case of the company that agreed to be my guinea pig, Plandek, the team went from ‘junior’ in React.js to ‘senior’ after twelve 1-hour sessions over 8 months. That feels like a huge improvement on the 3-years-to-move-up-a-level norm.

I am now focused on proving this with different teams to see how broadly applicable the results are.

Get involved

If you already thought experience was a poor predictor of performance, or if you’ve been convinced by my argument, or if you’re just curious enough to give it a go… I’d love your team to be part of the experiment. Book some time to chat to me about it here.


Hywel Carver

Co-Founder & CEO of Skiller Whale; published curriculum author and keen funk saxophonist.