When it comes to making money, software developers are an odd bunch. Developers produce code, and code is in high demand. At a glance, it seems obvious that this work pays for itself.
But when a developer produces code of their own choosing, they quickly find that while their code-writing skills have market value, their output (the code they write) has none.
Given parallel conversations these days about open source, funding protocols, and paying creators, I thought it’d be fun to break down some of the behavioral quirks that make developer markets so challenging (yet fascinating!) to work with.
A rough glance at the history of software suggests there are three ways that developers make money:
1) Sell output (code) — popular in the 1980s/early 1990s
2) Sell skills (time) — popular from the late 1990s to today
3) Sell access (reputation) — popular in the future (?)
In the early days, it was assumed developers could make money by selling software itself. As with other digital goods, this proved to be difficult. As the software industry grew, however, developers found an easy landing in the job market, selling their time to employers. In the future, as we move towards a reputation-based economy, I’m guessing that developers will find ways to make money based on who they are, rather than what they produce.
At first glance, developers look like the perfect customers. They have a lot of money to spend (both theirs and their employers’), they use a lot of tools in their work, and they recommend new ones to each other all the time.
Yet despite these characteristics, software is notoriously hard to monetize, for two reasons: 1) the tendency of code to be shared freely, and 2) developers’ desire and ability to create alternatives.
Like other digital goods, code has zero marginal cost, meaning that once it’s been created, it can be replicated and consumed for free. Initially, developers used licenses to put up artificial barriers around consumption: a relic from our physical world. But as with music, journalism, or any other kind of digital content, this approach hasn’t been very successful. Information wants to be freely shared, despite our best efforts to make it behave otherwise.
The other option is to charge for something that developers don’t want to deal with themselves. Even this is challenging, because developers are both the creators and consumers of their own products, which makes selling to developers a bit like selling firewood to a lumberjack.
Let’s say I need to buy a couch. If I don’t like any of the options I find, then unless I know how to make my own furniture, I’m going to shrug my shoulders and buy one anyway.
But if I’m a developer who can’t find any library I particularly like, I might decide to write my own version (and even enjoy it!). With the vast amount of code that’s publicly available today, it’s easy enough for me to make a copy (fork) and try my hand. And if I’m successful, it’s also likely I’ll share my code with the rest of the world, whether to show off what I’ve done, or because I get joy out of someone else using my solution.
However, this is mostly true of code that developers want to tinker with. The less fun the code is to write, the less likely it is that free alternatives will appear.
So if you sell something that developers really don’t want to deal with, it’s more likely you can charge for it. For example, hosting, security, reliability, support, and “money code” are popular areas where software makes money. In effect, it’s charging for convenience, rather than the code itself.
But not all (I’d venture to say “most”) code has this advantage, and I imagine those opportunities are shrinking, not growing, over time, as we automate away more and more of the hard bits.
If developers can’t sell their code, they can sell their skills (measured as time), whether by freelancing or joining a company full-time.
Employment and reputation are at odds with each other, however, because getting hired means writing code that your employer wants you to write, rather than your own. Some companies address this problem with 20% time, hack weeks, or other flexible arrangements (it’s difficult to think of any other job market that enjoys these benefits!), but in the end, it’s a perk, not a core part of the job.
Selling time doesn’t allow a developer to make money producing their own code. It merely funds their lifestyle so they can write other code in their spare time.
Freelancing is no better. Your hours and choice of projects might be more flexible, but you’re still paid to produce something for a client rather than for yourself.
There’s another problem with selling time. If code is your output, and code is basically knowledge, then over time we need to write less of it. A lot of the code that gets written is already interchangeable between developers, and increasingly, machines can produce it too. In the long run, it’s possible that we won’t need to hire developers to do some of this work as often, or at all.
If that sounds ridiculous, consider how much the cost of launching a software product has dropped today, compared to 10 or 15 years ago. It’s not just that infrastructure costs have gotten cheaper, but that fewer developers are needed to perform the same tasks. (Edit: Dan Luu compares developer salaries to those of lawyers, noting that the latter bifurcated over time — in part because “lawyers have gotten hit hard by automation”, and speculates on future developer earnings.)
So what’s a developer to do?
Firstly, the above options will probably always work for some developers, some of the time. Someone will always make money off of selling software, or making it more convenient, or working for a company, or freelancing. I don’t mean to suggest that any of these options will ever go away completely.
But for developers writing code in public, every option above is a workaround, not a solution. At the same time, the perceived value of this public code seems to be rising, as more and more companies rely on it. So it seems inevitable that new opportunities will arise.
Much of the chatter centers on funding the projects themselves, but I wonder whether we’re looking at the wrong end of the problem. Maybe it’s not the projects, but the people behind the projects, that carry the most value.
So long as we focus on code, we’ll keep coming back to the problem of substitutes. Code has many substitutes. Getting paid to write code has many substitutes (whether human or machine). But a developer’s reputation has fewer substitutes.
By reputation, I mean something you can provide that nobody else can. A few examples I can think of:
- Charge for prioritization: If you manage an open source project that I use, then I might pay you to prioritize my bug report, answer my email, or hold 1:1 office hours with me. I think developers shy away from this because it seems wrong somehow, but the definition of open source says nothing about how software gets produced, only how it is distributed.
- Charge for knowledge: Much like a musician appearing in concert 😉 I’ve seen some open source developers host workshops (ex. Workshop.me) or teach online courses (ex. Egghead.io).
- Charge for brand: Patreon is an obvious example. Also, some companies retain employees partly because of their ties to certain open source communities. I wonder whether this might become more popular over time: almost more like a brand sponsorship than employment. A similar example might be the string of targeted hires that Microsoft recently made, which undoubtedly boosts their brand.
These examples might look trivial today, but what sets them apart from the other options is that the funding engine is the developer’s reputation, not the code they write.
In the early 1990s, it seemed inconceivable that people wouldn’t pay for software. Today, it might seem inconceivable that developers might eventually not sell their time. Given that the world at large is tilting towards the individual, I’m curious to see how those trends impact developers, and whether they might find new ways to capture value in the market.
I write about money, governance, and the internet. Subscribe here to get future posts by email.