A mental model for choosing technology
I’m writing to hopefully help save you some anxiety when making your next big technology decision. Choosing technology is one of the hardest things an enterprising young software engineer has to do. From the moment you start learning, you’re flooded with decisions — Linux, Mac, or Windows? Web apps or native? Machine learning or good old-fashioned CRUD? Java, C++, Rust, or Python?
Without experience in each of the options, making the right choice feels impossible. This is something that I’ve encountered again and again, and I feel like it’s a major reason why many bright young people might shy away from software development as a career path.
So, this post intends to highlight a few strategies I use for choosing technology and reducing the risk and anxiety that come with making not-fully-educated decisions.
Understanding the options
The very first thing needed to make a good decision about technology is a baseline understanding of the different choices. For some things, like operating systems, this is fairly easy. Others might be more challenging, either because the tools are difficult to compare, or because it’s hard to express exactly what you’re looking for.
Places like Reddit, StackOverflow, and Hacker News are extremely valuable for this type of search. A thread with well-informed people listing things they’ve used in the past is a good starting spot. Asking a community on Slack, Discord, or Gitter is another way to get a list of tools or technologies that might solve your problem.
Once you’ve got the list of things you’re comparing, then you just pick the one with the most GitHub stars. Mission accomplished!
If only it were that easy…
There are endless ways to compare tools and technologies. Ergonomics, efficiency, popularity, community, rate of change, size, price, language, backing organizations, etc, ad infinitum. Picking the right metrics to grade tools on can be as hard as picking the tools themselves! Still, there are plenty of things you can do to make the right decision.
Try them all out
The first strategy I use is also the most obvious. When deciding between tools, especially tools that you may use for months or years down the line, take them all out for a test drive. This may not be feasible for a multitude of reasons — too many things to try, deadline pressure, expense, etc. Still, it’s the best way to get familiar with how things work.
A few tips when trying things out — keep the scope small, and make apples-to-apples comparisons across the tools. Keep notes. Place a high priority on how well supported you feel by the documentation and the community — these are critical to smooth adoption of a new tool. Ask a lot of questions in forums and chat communities. Focus on the things that you will be doing repeatedly, and don’t put as much weight on the one-time setup, even if it’s a bit onerous.
In an ideal world we’d have all the time and money we need to make a well-researched decision after trying each of the N different options for a particular technology. Over here in the real world, we’ve got to choose the language and framework for our startup Nflunzr (influencer-powered flu research) by Friday and live with that decision for the next three years.
Ask some questions
I start with these questions when I need to make a decision without trying a tool out.
How high-quality is each tool?
Evaluating the quality of tools is tough. Naturally, a tool will tell you a lot about what it thinks it’s great at, and not much about what it’s not good at. Paid tools in particular tend not to make clear the contexts in which they shine, which can make them challenging to compare.
I mentioned a few high-signal indicators above, but there are plenty of others that are useful for judging quality. I use the following guidelines, roughly ranked by importance:
- Suitability
- Legibility
- Stability
- Adoption
- Community
- Optimization
Suitability is how well the tool solves my problem. This is almost always the most important criterion when choosing a tool. If a variant of your problem is plainly solved in the README, in an example in the documentation, or by a method of the API, the tool is suitable for your task. If your use case is buried in the issue tracker or not addressed at all, the tool likely isn’t a good choice.
A good way to understand the suitability of tools is to look for a repository showcasing things that have been built with that tool.
Legibility is how easy it is for me to understand what a tool is doing. This might come from robust documentation, clean and well-documented code, or an exceptionally intuitive interface. If I can’t understand what a tool is doing, the chances of me using it correctly are much lower.
This is extremely important when choosing an unfamiliar tool. There are a few cases where choosing a less legible but more popular tool (looking at you, Git) is a good choice, but generally the quality of documentation and interface correlates strongly with success using the tool.
Stability is how frequently the tool changes and how often it breaks. Stable tools are well-tested and tend to have small, incremental changes. Unstable tools have frequent, large changes and may have limited or no testing. I will always favor an established, stable tool over a new, unstable tool.
I assess stability by checking for test suites and a structured release process, and by watching the issue tracker. Community is also an important source of stability feedback — breaking releases generate a lot of noise.
Adoption is how widely-used the tool is. It can be tough to get a good sense of this — even high numbers of downloads or GitHub stars can be misleading. I find the best adoption metrics are the number of dependent repositories, the number and size of organizations using the tool, and the amount of third-party information (StackOverflow questions, blog posts, active subreddits) about it.
Even though a widely-adopted tool may not be the best choice, it tends to be less risky — if companies are invested in its functioning, it will tend to be more stable, and more people using the tool leads to a better community.
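If you want to go beyond eyeballing stars, a few of these signals are easy to pull programmatically. Here is a minimal sketch against GitHub’s public REST API; the repositories in the shortlist are placeholders, and the dependent-repositories count isn’t exposed by this endpoint, so treat it as a starting point rather than the full picture.

```python
# Rough sketch: pull a few adoption/activity signals from GitHub's public REST API.
# The shortlist below is a placeholder -- swap in the tools you're comparing.
# Note: the "Used by" (dependent repositories) count isn't available from this
# endpoint; check the repository page for that.
import json
import urllib.request

CANDIDATES = ["pallets/flask", "django/django"]  # hypothetical shortlist

def repo_stats(full_name):
    """Fetch basic popularity and activity numbers for an owner/repo string."""
    url = f"https://api.github.com/repos/{full_name}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return {
        "repo": full_name,
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "last_push": data["pushed_at"],
    }

if __name__ == "__main__":
    for name in CANDIDATES:
        print(repo_stats(name))
```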
Community represents how much human capital a tool has. This could be a well-curated issue tracker, a large user-base, an ample plugin ecosystem, or an active chat channel. Tools with strong community speed up development because of the amount of second-hand knowledge that is available. It can be challenging or impossible to get assistance with tools that have a weak community.
Additionally, community reflects the values that the software represents. Choosing tools with healthy, inclusive communities is a way to express your values in the software world.
Optimization revolves around the size and speed of the tool. Size of the tool is generally fairly easy to assess — bigger tools have more code, more dependencies, and a larger artifact size. Speed is much harder to assess — generally, to get an accurate read you need to actually use it in your project and measure the results. Still, you can use benchmarks and community knowledge to understand how it might perform under your workload. All things being equal, I prefer a smaller, faster, more extensible tool to a larger, slower, more complete tool.
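When speed really matters, a quick micro-benchmark over a workload shaped like yours tells you more than published numbers. Here is a minimal sketch using Python’s standard timeit module; json and pickle stand in for whichever candidates you’re actually weighing, and the payload is invented purely for illustration.

```python
# Minimal micro-benchmark sketch using only the standard library. json and pickle
# stand in for whichever candidate tools you're actually comparing; the payload
# is a made-up stand-in for your real workload.
import json
import pickle
import timeit

payload = {"users": [{"id": i, "name": f"user-{i}", "active": i % 2 == 0}
                     for i in range(1_000)]}

candidates = {
    "json": lambda: json.dumps(payload),
    "pickle": lambda: pickle.dumps(payload),
}

for name, fn in candidates.items():
    # Time each candidate over the same workload so the comparison stays apples-to-apples.
    seconds = timeit.timeit(fn, number=200)
    print(f"{name:>8}: {seconds:.3f}s for 200 runs")
```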
I rank optimization as the least important of the things I look for. It’s context dependent, but I lean on the mantra of “make it work, make it good, make it fast”, and the other points are more important for getting to a functioning and maintainable solution.
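When the trade-offs get muddy, it can help to turn the ranking above into a rough weighted score. The sketch below is exactly that, a sketch: the weights mirror my rough ordering of the criteria, and the per-tool scores are invented placeholders you would fill in from your own research.

```python
# Back-of-the-envelope scoring sketch for the criteria above. The weights reflect
# the rough ranking (suitability matters most, optimization least); the per-tool
# scores (1-5) are made up for illustration.
WEIGHTS = {
    "suitability": 5,
    "legibility": 4,
    "stability": 3,
    "adoption": 3,
    "community": 2,
    "optimization": 1,
}

# Hypothetical shortlist, scored 1-5 on each criterion after some research.
tools = {
    "tool_a": {"suitability": 5, "legibility": 3, "stability": 4,
               "adoption": 4, "community": 3, "optimization": 2},
    "tool_b": {"suitability": 3, "legibility": 5, "stability": 3,
               "adoption": 5, "community": 4, "optimization": 5},
}

def weighted_score(scores):
    """Sum of criterion scores weighted by how much I care about each criterion."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in sorted(tools.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores)}")
```

The number that comes out won’t make the decision for you, but writing the scores down forces you to be explicit about why one tool is ahead of another.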
How important is it that I get this right?
If the cost of failure is low, then I can rely on some low-signal indicators such as popularity or GitHub stars to make my initial decision. If I make the wrong choice, I should be able to fairly easily pivot later. This would apply to choosing a library that I only call a few times, for example.
If the cost of failure is high, typically because of some form of lock-in down the line, I need to rely on high-signal indicators to reduce risk. The best signal is personal experience, and the second best one is other people’s personal experience. I try to make this type of choice by asking good questions of the right people.
If this is in a work setting and will influence the product years down the line, it’s well worth paying for the right person’s opinion.
One high-signal indicator that isn’t necessarily covered by tool quality, especially for people trying to decide among foundational technologies to learn (e.g. a first programming language or framework), is the existence of jobs hiring for that skill. Learning in-demand technology is a good way to make sure you’re employable and focused on the tools that will advance your career.
How much do I want to learn?
I ask myself this one a lot when starting personal projects, but it’s valid in work contexts as well. For some projects, choosing an unfamiliar tool as a means to learn a new language, framework, or technology is a great idea. Typically those projects should not have a well-defined timeline or end state. For this type of project, choosing a hyped new technology, interesting programming language, or unorthodox approach is perfectly appropriate.
For projects where learning is secondary to the primary goal (money, eyeballs), it’s better to choose something you’re familiar with. Even if you’re leveraging a new library or framework, using a language or platform that you’ve worked with in the past is almost always a better idea than pinning the success of an important project on your ability to both evaluate and learn a new technology.
The only case where this doesn’t hold is if the new technology is so monumentally better than your current experience that you will really be handicapping the project if you don’t branch out. In my experience, this never applies to programming languages, frameworks, cloud platforms, or code editors. The only places I’ve seen this truly apply are data management (the difference between a relational database and any other data store is enormous), and some APIs.
Making the call
If you’ve come this far, you’ve asked yourself a few questions about the nature of your project and the relative strengths and weaknesses of the technologies you’ve identified. Now you’ve got to make your choice. You still might be struggling to feel confident about your analysis, or it might just be inconclusive.
Just pick one.
Seriously, if you can’t make a decision, flip a coin or ask your friend to choose the one with the coolest sounding name.
If tools are so similar that it’s really hard to identify the strongest choice, chances are they’re mostly interchangeable in practice. Take the leap and try one out. Just picking one and running with it will build your personal experience and also, more importantly, get you familiar with the problem domain of the tools so you have a better feeling for the important differences between them.
Even if your first choice is an expensive disaster, you should be able to make the right call the second time around.
At the end of the day, there is rarely a “perfect” answer. If you’ve done your homework, thoroughly analyzed the playing field, and matched tools to your requirements and their relative strengths, there isn’t much more you can do.
Building software involves taking calculated risks and choosing a technology stack from a sea of mostly-similar options. Bias towards action, build your knowledge base, and pretty soon you’ll have a good sense for exactly which tools to use in your area of expertise.
Hopefully this was helpful, and I’d love to learn what your techniques for choosing technology are in the comments!
If you’re a startup or small company struggling to work out the kinks in your tech stack, I’d love to help. Drop me a line at info@lagiers.studio.