The Speed vs. Scalability Dilemma, or: How I learned to stop worrying and love CI

Adam Daw
Published in CoVenture
4 min read · Mar 15, 2018

It’s inevitable that at some stage in the growth of your development team — whether in the nascent steps of a company focused on building software, or later on in an organization with smaller internal IT or engineering initiatives — the topic of speed vs. scalability will arise.

The positions typically pitted against each other run along the lines of “It’s just a prototype, we won’t be keeping this code-base” versus “I don’t want to crash the first time we get more than 1,000 simultaneous users.” Both statements come from an entirely reasonable place: the need to deliver and iterate quickly can be crucial for early-stage startups, and quality of service is one of the factors that will determine whether your offering remains a mere curiosity or becomes an essential part of your users’ day-to-day.

At some point, you must make the decision: do we push a given component now, understanding and accepting the technical debt we’re incurring, or do we delay the release to triple-check, solidify the code-base, and make sure every part is scalable from the get-go? On a team of more than one developer, you’re going to have more than one opinion on this. The approach taken can be mandated or reached by consensus, but regardless of how you decide, you will make compromises on speed or scalability.

You have to expect these compromises. In a game of limited resources, the allocation is never ideal. What I want to talk about today isn’t which of these sides is more or less correct, or why a particular decision is right at a given stage. Instead, I’d like to discuss tools at our disposal for mitigating the downsides of either approach: applications and processes we can implement to speed up our scaling or make our rapid iteration more stable. These aren’t necessarily new concepts, nor are they controversial, but it seems we sometimes forget why we adopt them. So for those to whom this is old hat, consider the following an homage to some familiar friends. If any of these approaches or concepts are new to you, I encourage you to explore them further and share your findings as you implement them with your team.

Continuous Integration

The concept of Continuous Integration (CI) was first introduced in the early ’90s by Grady Booch at Rational Software. Over the past quarter-century, it seems to have evolved into an inscrutable, monolithic topic that everyone knows they need, but no one is quite sure how to implement. Put simply, the concept is that “we should probably not all be working on overlapping and potentially conflicting pieces of code without regularly making sure they will work together.” Today, this is usually exemplified by regularly merging multiple git branches into a master or staging branch, resolving conflicts, and running tests.

Continuous Integration is a practice, not a specific application, though many tools exist to automate this process. Many of them are available for free to open source or initial prototyping projects, and a quick search for “CI tools” will present you with some options.
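As an illustration, the gate a CI server applies can be sketched in a few lines of Python. The `run_pipeline` helper below is a name invented for this post, not from any particular CI tool; it simply runs named build steps in order and fails the build at the first error:

```python
def run_pipeline(steps):
    """Run (name, callable) build steps in order.

    Stops at the first failure, the way a CI server marks a build
    red as soon as a lint, test, or build step errors out.
    """
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            return False, f"step '{name}' failed: {exc}"
    return True, "all steps passed"

# A passing build:
ok, msg = run_pipeline([("lint", lambda: None), ("test", lambda: None)])

# A failing one: the pipeline stops at the broken test step.
def broken_tests():
    raise AssertionError("2 tests failed")

failed, reason = run_pipeline([("lint", lambda: None), ("test", broken_tests)])
```

On a real CI server, each step would shell out to your linter, test runner, or build tool; the point is the same — failing fast keeps conflicting changes from quietly reaching the shared branch.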

Testing & Code Coverage

One of the steps you can include in your automation is testing. Testing in this context typically takes the form of unit or integration testing for the various components of your application. The difference between those two approaches, and how or where to use each, is not the topic of today’s post; each could fill more than one post on its own. Suffice it to say that through a continuous integration implementation, you can simplify the process of testing the overall code-base, as well as maintain or guarantee a certain level of coverage and accountability for your developers. By automating this process, you can approach the situation more objectively, and have a degree of confidence that any given component of the application has been tested with some rigor.
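To make that concrete, here is a minimal sketch of what such unit tests can look like. The `apply_discount` helper is hypothetical, invented purely to have something to test, and the tests are written in the plain-function style a runner such as pytest would collect:

```python
def apply_discount(price, pct):
    """Hypothetical pricing helper, used only for illustration."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(19.99, 0) == 19.99

def test_invalid_percentage_is_rejected():
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass  # expected: out-of-range percentages must be refused
    else:
        raise AssertionError("expected a ValueError")
```

A CI pipeline would run these on every merge, and a coverage tool (coverage.py, for example) would report which lines of the code-base the suite actually exercises — which is what lets you maintain a coverage floor rather than argue about it.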

Automated Deployments

Errors will happen — bugs will never completely go away. We can, however, make deployment, and recovery from an issue, as fast as possible by including them in our overall automated process. We can automate the propagation of various branches to different locations, allowing features to be tested in an environment that resembles our production systems as closely as possible. Moving beyond simple testing, this can surface issues that would be obvious to a human but difficult to describe to an automated suite.

We can also use this process to quickly re-deploy when our systems fail, spinning up new servers, replicating databases and alerting the right people when our applications start acting out of character.
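A toy sketch of that watchdog logic is below. The names (`watch`, `on_alert`, `on_redeploy`) are invented for this example and stand in for whatever your monitoring and orchestration stack actually provides:

```python
def watch(health_results, on_alert, on_redeploy, max_failures=3):
    """Scan a sequence of health-check results; after `max_failures`
    consecutive failures, fire an alert and trigger a redeploy."""
    consecutive = 0
    for healthy in health_results:
        if healthy:
            consecutive = 0
            continue
        consecutive += 1
        if consecutive >= max_failures:
            on_alert(f"service unhealthy after {consecutive} failed checks")
            on_redeploy()
            consecutive = 0

# Drive it with a scripted run: three failures in a row trigger recovery.
events = []
watch([True, False, False, False, True],
      on_alert=events.append,
      on_redeploy=lambda: events.append("redeploy"))
```

In production, the results would come from polling a live endpoint, the alert would page the right people, and the redeploy would call into your orchestration layer to spin up fresh servers or replicas — but the shape of the loop is the same.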

Conclusion

These tools and steps allow us a certain degree of comfort that we can build well and quickly without sacrificing the overall rigor of our process. By automating the approach, we lighten the load on the team while making sure that we’re protecting our users from the inevitable conflict or oversight. We can be anywhere on the Speed/Scalability spectrum, and the automation of these tools can be beneficial. Furthermore, implementing them early on in the growth of our application and engineering team can prevent significant headaches in the long term.


Adam Daw
CoVenture

Head of Engineering @ CoVenture | Owner @ Bespoke Informatics | Ottawa-SF-NYC. Ottawa [SFDC Developer Group|Lisk Community] Organizer