You’ve got a great product idea, so now what?

Christopher Hazlett
Published in Code and Conveyors
11 min read · Jul 13, 2024

Software ideas are easy. Execution is hard. Getting started can be even harder.

That’s not really the most controversial of hot takes. It’s obvious to anyone who’s ever put their cursor at the top of their IDE or their product brief, paused for a moment, let out a great big sigh and considered whether or not it’ll be worth it. But, if you’ve read to this sentence, I’m going to guess that starting with a blank page is the only thing that wakes you up in the morning.

Over the last two decades, I've built a lot of software: some as a solo engineer, some for clients, some that failed to launch, and some within larger companies that had already established standards. Most, however, began with nothing but an empty IDE and UI scribbles in my notebook. So how do you get started when faced with an empty line and a blinking cursor?

What follows is not a best-technology-to-pick list; it is the series of questions I ask to make my life as an engineer and product owner as simple as possible. I will outline my selections to provide context. Fair warning: I am a distributed systems engineer who loves to work in Go, though I think these questions transfer to any development environment. So here goes…

On Infrastructure and Architecture

I always start with production. An old team lead who used to work for me once said, "I deploy to production first, before I write any real code." He would deploy an empty service into each environment and then start the project. He's now also a CTO, and an incredibly smart dude, so take his advice, just like I did more than a decade ago.

So now I ask this at the start of every project: How will this application run in production? Which leads me down the following rabbit hole:

Will it be a series of services or a monolith?

I've built monoliths, worked as an engineer in monoliths, and I've led teams that managed monoliths. In the beginning, it seems so easy. Everything just works. Your mind can wrap around all the code. There are rainbows and puppies everywhere.

Fast forward two years, and your tests take 45 minutes to run and everyone is unhappy, including you. And unfortunately, barely a single engineer can wrap their head around how the damn thing even works. So I like a service-oriented architecture. I don't like a microservice architecture. I like services to be exactly as big as their domain requires them to be. Don't confuse this with a Monorepo…more on Monorepos later.

What databases will I need to back it?

Another old colleague of mine taught me the phrase Y.A.G.N.I. (you ain't gonna need it), so I start with Postgres, which will suffice for almost all uses at the beginning of a project. When I need to improve read times and data has really started to grow, I'll introduce caching in Redis or search in Elasticsearch or Vertex. What I've learned over the years is that most database issues are design problems, especially early on. So start with something easy and well supported, and then go crazy once your problem requires going crazy.
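As a sketch of what "introduce caching later" looks like, here is the cache-aside pattern in Go. The in-memory maps below are hypothetical stand-ins for Postgres and Redis; a real service would swap them for actual clients without changing the shape of the read path.

```go
package main

import "fmt"

// primary stands in for the system of record (e.g. Postgres);
// cache stands in for the read-through layer (e.g. Redis).
// Both are hypothetical in-memory stand-ins for this sketch.
var (
	primary = map[string]string{"user:1": "Ada"}
	cache   = map[string]string{}
)

// getUser tries the cache first and falls back to the primary
// store, populating the cache on a miss. The second return value
// reports whether the read was served from cache.
func getUser(id string) (string, bool) {
	if v, ok := cache[id]; ok {
		return v, true // cache hit
	}
	v, ok := primary[id]
	if ok {
		cache[id] = v // populate cache on miss
	}
	return v, false
}

func main() {
	name, hit := getUser("user:1")
	fmt.Println(name, hit) // first read misses the cache
	name, hit = getUser("user:1")
	fmt.Println(name, hit) // second read is served from cache
}
```

Because the caller only sees `getUser`, the cache can be added (or removed) the day the data actually demands it, which is the Y.A.G.N.I. point.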

Will it be event-driven or real-time?

This is always an easy answer for me. For the last 15 years, I’ve been building event-driven software. So all services I design or manage are built that way. When you design software with events in mind, you solve important problems like idempotency and race conditions as a matter of daily development. When you need real-time data access, it’s much easier to achieve by layering on REST or gRPC calls into the endpoints or models that resolve your events.
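To make the idempotency point concrete, here is a minimal sketch of the habit event-driven design forces on you: handlers track which event IDs they have already seen, so redelivery (which brokers like Pub/Sub and Kafka allow) doesn't double-apply effects. The event shape, handler, and in-memory seen-set are hypothetical; a real service would back the dedupe set with Redis or a database table.

```go
package main

import "fmt"

// event is a hypothetical payload; ID is the dedupe key.
type event struct {
	ID     string
	Amount int
}

// handler applies each event at most once by recording seen IDs.
type handler struct {
	seen  map[string]bool
	total int
}

func (h *handler) Handle(e event) {
	if h.seen[e.ID] {
		return // duplicate delivery: safely ignored
	}
	h.seen[e.ID] = true
	h.total += e.Amount
}

func main() {
	h := &handler{seen: map[string]bool{}}
	e := event{ID: "evt-1", Amount: 10}
	h.Handle(e)
	h.Handle(e) // broker redelivered the same event
	fmt.Println(h.total) // applied once despite two deliveries
}
```

Once every handler is written this way, redelivery and retries stop being scary, which is exactly what makes layering real-time REST or gRPC reads on top straightforward.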

In the past, I've used Kafka and RabbitMQ. At MasonHub, because keeping costs down was important, I built a Golang queue and stream-processing library on top of Redis. It was fantastic: it had 2ms latency, processed hundreds of millions of events, and I loved working on it. But maintaining something that close to the metal proves difficult over time, especially when it's not your core business. Most recently, I've been using Pub/Sub on GCP. It's been a breeze to leverage and only required some plumbing code to make it genuinely easy to use during development.

How will I monitor it when it’s running?

This question seems like a weird one when you're first building a system, but retrofitting monitoring right when you're about to deploy is a recipe for disaster. You can use Datadog or other expensive systems, but when you're starting out, the most important thing is to get logging right and ensure it plays well with your chosen platform. In Go, the zap library (zapcore) has proven really useful, but Go 1.21's standard-library slog package makes it mostly obsolete. Whatever logger you use, in whatever language, make sure it emits structured logs (i.e., named fields) and that your log parser (on AWS, GCP, Heroku, etc.) parses them to your satisfaction. Then decide which things you most need to know. For me, they almost always fall into the following categories:

  1. Run-time — How long did a request/function take to run? How long did a query take to run? This makes it easy to build dashboards on almost any platform. If you don’t make it a priority during the early days of development, I guarantee you’ll spend days trying to figure out why your site is slowing down and your customers will be unhappy with you.
  2. Errors — What errored and what line of code? During development, that level of logging is essential, and once you launch, your life is hell without it.
  3. Queue/Event Depth and Latency — Just because it's an event-driven system doesn't mean it works well. The depth of any given data stream or queue and their pickup latency will tell you the story of your system. Logging it will let you graph it. Some platforms do this out of the box…just make sure you understand it.

These questions and their answers force me to think about the structure of the software, the platform it can/should run on, and the tools available on each.

For instance, when we started MasonHub, I knew it would be a series of services across multiple repos (more on repo structure later, because my ideas on that have evolved), and I wanted it to be easy, containerized, and give me ready access to Postgres, Redis, etc., so I went with Heroku, which essentially just wraps AWS. It was dead simple, I could add all the monitoring tools I needed, access was simple to manage and control, and it got us off the ground fast. Over time, though, costs started to become an issue and I longed for easier access to a greater number of tools.

So when I began working on a new project in February, I decided to go with GCP. The development of Cloud Run over the last 5 years solved a lot of the concerns I had in 2018 about infrastructure bloat and maintenance costs. Because almost everything I've built for the last 15 years is a mix of event-driven and real-time, I also needed something easier to maintain than our homegrown queue and streaming library at MasonHub. GCP's Pub/Sub solved those needs exceptionally well, without some of the hassle and cost issues we had at Heroku. And while 5 or 6 years ago monitoring on GCP made my head hurt, it is so much better now (in my opinion).

As for AWS…anyone who knows me knows that I consider AWS’s user experience to be hostile, so if I can avoid it, I do.

I have never had the pleasure or displeasure of using Azure so I have no opinion on the service.

On the Software Development Lifecycle

Nothing will make you more unhappy 6 months into a project than 150,000 lines of software that are difficult to deploy. So after deciding on the production platform, I ask the most important question of all: How do I get from an idea to production in minutes?

A lot of the answers to the following questions depend on the language you choose to work in and the tools available. I've professionally developed in C# (so, so long ago that it's not really relevant anymore), Ruby, Scala, and Go, with the requisite amount of Javascript thrown in to be effective, yet constantly frustrated. For the last 6 years, I have primarily and lovingly worked in Go, so all of my choices relate to that ecosystem, but they are fairly good rules to live by in most languages, though I suspect your mileage will vary.

How do I run this software locally so that I have confidence that what I do on my laptop will work in production?

I have used Buffalo (a now-defunct Rails-like framework for Go) and I have used Rails, and I have to admit the hot reloading of code while you are editing can be nice. However, I find that going straight from software that's spun up like this to a remote environment can be unreliable. So I take a two-pronged approach when developing my Go services locally: I use Docker Compose to stand up the services and networking I'm not actively working on (something I'll describe in a future article) and run a reloader like Air for those active efforts.

Once I'm done with a bit of work, I'll do a local deploy via Docker Compose and ensure it's working. That local deployment step will pick up weird issues that would be hard to detect in a remote environment. I actually prefer the added step of deploying locally, even if it adds a bit of a delay to my day. I like to have confidence.

How should the code be organized to simplify my development life?

I guess this is a hotly debated topic, but after years of spinning up repo after repo in a micro-service environment, I am a convert to the greatness that is the Monorepo (not to be confused with a monolith).

Dependency management between services is so much easier, and I can see what a change in one service or library does to the rest of my ecosystem. If you're on the fence, I highly recommend going this route for a multi-service Go system, even one with a UI. While there are certainly some complications to solve, and the sheer size can make the learning curve a bit steeper, I think it's the right way. In a future write-up, I'll lay out how I set up the latest Monorepo using Makefiles, Docker Compose, and GitHub Actions. I spent a week working with Bazel, but that tool nearly broke me. The documentation is, in a word, impenetrable without dedicating one's life to it. Done properly, a Monorepo can ensure you're not breaking the whole stack of services you're building just by looking at all the red lines in your IDE.

How do I change my database structure reliably and safely?

There are a lot of tools out there for every language and framework, so I won't walk through each one; I'm pretty sure I don't know them all anyway. I've been using Migrate for Go, and it works well in each environment. I also keep Postgres test and development instances running in Docker so I can exercise both my deployment and test environment changes. It allows me to rebuild the DB without much fanfare.
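The versioned, run-once idea behind tools like Migrate can be sketched in a few lines of Go. Everything here is a hypothetical stand-in: the migration struct for real migration files, the in-memory applied set for a schema_migrations table, and the printed SQL for actual DDL.

```go
package main

import (
	"fmt"
	"sort"
)

// migration is a hypothetical stand-in for a numbered migration file.
type migration struct {
	version int
	apply   func()
}

// runPending applies migrations in version order, skipping any
// already recorded in the applied set, and returns the versions it
// ran. Running it twice is a no-op, which is what makes deploys safe.
func runPending(applied map[int]bool, migs []migration) []int {
	sort.Slice(migs, func(i, j int) bool { return migs[i].version < migs[j].version })
	var ran []int
	for _, m := range migs {
		if applied[m.version] {
			continue // applied in an earlier deploy
		}
		m.apply()
		applied[m.version] = true
		ran = append(ran, m.version)
	}
	return ran
}

func main() {
	applied := map[int]bool{1: true} // version 1 ran last deploy
	migs := []migration{
		{2, func() { fmt.Println("ALTER TABLE users ADD COLUMN email text") }},
		{1, func() { fmt.Println("CREATE TABLE users (id serial)") }},
	}
	fmt.Println("ran:", runPending(applied, migs))
}
```

The point of the sketch is the contract, not the plumbing: migrations are ordered, tracked, and idempotent, and nothing about the application's run-time is involved.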

Whatever library or framework you use, there are a couple of rules I follow when implementing a schema management library.

  1. Decouple schema migration from the application run-time — We had a migration tool that ran when the application started up, and it caused weird ghost-in-the-machine issues until we decoupled it.
  2. Keep data migrations out of schema migrations — More on this in the next section, but a long, long time ago, I used a migration engine to upload common data into the system (config-type data) and it became difficult to maintain. Worse, it made the code smell and created inadvertent interdependencies between the data and the application. Much sadness.

How do I work with real data in tests and during development?

Every single engineering job I've had has had that moment where you can't build new things or debug an issue because you can't recreate the data locally. Trust me when I say that solving this early (even if it's just solving 80% of the problem initially) is worth the time. Spend the week building a reliable way to destroy and rebuild your development and test databases and a way to fill them with good (and bad) data, and then stay on top of it when the DB structure changes. This is different from figuring out DB structure migrations. Don't rely on your DB migrations to insert or update data; it's hopelessly difficult to manage, and your future self will be mad at your current self.
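A minimal sketch of the destroy-and-rebuild habit, with fixtures (including deliberately bad data) kept separate from schema migrations. The user type and in-memory map are hypothetical stand-ins for real fixture files and a Postgres instance.

```go
package main

import "fmt"

// user is a hypothetical fixture shape.
type user struct {
	ID    string
	Email string
}

// fixtures hold both good and bad data, so validation and error
// paths can be exercised locally. These live in code, not in
// schema migrations.
var fixtures = []user{
	{ID: "u1", Email: "ada@example.com"},
	{ID: "u2", Email: "not-an-email"}, // bad data on purpose
}

// reseed wipes the store and refills it from fixtures, returning
// the number of rows loaded. Run it whenever the DB structure
// changes so local data never drifts out of date.
func reseed(db map[string]user) int {
	for k := range db {
		delete(db, k) // destroy...
	}
	for _, u := range fixtures {
		db[u.ID] = u // ...and rebuild
	}
	return len(db)
}

func main() {
	db := map[string]user{"stale": {}}
	fmt.Println("seeded", reseed(db), "users")
}
```

The same shape works against a real database: a truncate pass, then a fixture-loading pass, wired into a single command you can run without thinking.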

On Testing

So many people have written about the importance of testing in the past. You know it’s important. I don’t need to tell you that. I also don’t care if anyone does TDD or BDD or some other type of Driven Development. As an engineer and product owner, I want to answer the question: how will I make changes to the system with confidence and speed? To me, that has always meant the following:

  • Keep Test Coverage High — At least 85% across the whole system you're working on. For crucial pieces of the system, I'll push that up to 95%. This seems weird to push early on in any build, but I find that when I want to go fast, I need a lot of tests. And great coverage means I can build something and move on to the next thing confidently. If I leave tests for later or shortchange testing, I'm going to break what I just did and waste time.
  • To Mock or Not to Mock — I like unit tests that use a lot of mocking. It forces me to develop software with dependency injection at the forefront and keeps my functions easier to digest. I also like integration and database tests, but not too many, because then automation gets all slow and I get surly. My recommendation is to find the right balance for your work. I don't know what that balance is, but I'll typically write unit tests to cover all scenarios and integration tests that write to the DB on the happy path. Kind of an 80/20 breakdown, if you will: 80% unit, 20% integration.
  • Automate, Automate, Automate — Get your pipeline functioning even if the first test that runs reads only "assert.True(true)". Make sure it works locally, make sure it works on your CI/CD pipeline, and make sure a failure blocks deployment to the right environment.
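The mocking-and-dependency-injection habit above can be sketched like this in Go; UserStore, mockStore, and Greeting are hypothetical names for illustration, not from any real codebase.

```go
package main

import "fmt"

// UserStore is the injected dependency: in production it would be
// backed by Postgres, in unit tests by the in-memory mock below.
type UserStore interface {
	Email(id string) (string, error)
}

// mockStore is a hand-rolled mock, enough for fast unit tests.
type mockStore struct{ emails map[string]string }

func (m mockStore) Email(id string) (string, error) {
	e, ok := m.emails[id]
	if !ok {
		return "", fmt.Errorf("user %s not found", id)
	}
	return e, nil
}

// Greeting is the unit under test: it only sees the interface, so
// every scenario (found, missing) is coverable without a database.
func Greeting(s UserStore, id string) string {
	e, err := s.Email(id)
	if err != nil {
		return "hello, stranger"
	}
	return "hello, " + e
}

func main() {
	s := mockStore{emails: map[string]string{"u1": "chris@example.com"}}
	fmt.Println(Greeting(s, "u1"))
	fmt.Println(Greeting(s, "nobody"))
}
```

Designing around small interfaces like this is what keeps the 80% unit slice fast; the 20% integration slice then swaps in the real store on the happy path.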

Notice that none of the blank-page issues have to do with the actual product. That's on purpose. I don't know if the software you're building solves an important problem or is a viable business, so for the sake of this article, I assumed it is an amazing idea and there will be many happy returns. You'll answer the hard questions about how to make your product great every day.

I promise you, though, that if you spend the initial time asking the right questions about your software’s building blocks, you’ll be freed to think about the product instead of how to get it into the hands of customers and instead of wondering if it even works. Even more importantly, when you launch, you’ll continue moving quickly even when you add team members.

Happy Building.


Chris is the CTO at OptechGroup, a boutique consultancy that helps clients achieve their operational and technical goals.