How to future-proof a Tech project?
In the daily life of a developer, there are three different kinds of projects:
- A prototype: The goal is to challenge an existing “idea” (product, UX, dev…) but we don’t aim to send it to production at all.
- A “one shot app”: This could be a really specific/hidden feature of your product or, let’s say, a “temporary” project like a creative website made for the release of a new product.
- An app you’re going to be working on for a long time, like the main website of your company.
I’m going to focus on the last situation, where you really need to think about the long-term scalability of every area of your project. It’s going to evolve, for sure, and you already know it.
Being aware of your company’s Product & UX strategy is really important. Will you need to turn your website into an e-commerce platform soon? Depending on these evolutions, you might already need to plan some tasks on the development side to anticipate them, and design your architecture differently.
Anyway, you get it, you need to be really well informed about the next milestones of your product to adapt your work accordingly.
If I take front-end development as an example, you have to manage your dependencies with npm (or yarn). Don’t even think about manually importing libraries into your project like in the old days, or you will be cursed for about 5 generations.
It is important to be able to get a quick overview of the dependencies of the whole project (through the package.json). I also highly recommend setting fixed versions for your dependency packages. This way, you can be sure you share the same environment as all of your coworkers.
It is not completely true with npm though, since the transitive dependencies aren’t always fixed… A lockfile (package-lock.json or yarn.lock) helps here, since it pins the whole dependency tree.
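Concretely, pinning means writing exact versions in the package.json rather than ranges. A hypothetical excerpt (the package names and versions are just examples):

```json
{
  "dependencies": {
    "react": "16.8.6",
    "react-dom": "16.8.6"
  },
  "devDependencies": {
    "jest": "24.8.0"
  }
}
```

Compare with a range like "^16.8.6", which lets npm install any newer 16.x.y release and can silently give two coworkers two different environments.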
On many projects, it’s common to have global dependencies installed on your computer which allow it to run: background scripts, applications, Go, Node… I highly recommend fixing these versions as well. First of all, reduce the number of global installations as much as possible, and, for example, test the versions of the libraries you are using in the C.I. You could also use containers to work locally (that way you have the same global dependencies everywhere).
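As a sketch of such a C.I. check, here is a small Node script that compares the local Node version to the one the project expects. The expected version is hard-coded for the example; in a real project it would come from a .nvmrc file or the "engines" field of the package.json:

```javascript
// check-node-version.js — fail fast when the local Node version
// does not match the one the project expects.
const EXPECTED = '18.17.0'; // hypothetical pinned version

function checkNodeVersion(actual, expected) {
  // process.version looks like "v18.17.0": strip the leading "v".
  const current = String(actual).replace(/^v/, '');
  return current === expected;
}

if (!checkNodeVersion(process.version, EXPECTED)) {
  console.error(`Expected Node ${EXPECTED}, got ${process.version}`);
  // In a real C.I. step you would process.exit(1) here.
}

module.exports = { checkNodeVersion };
```

Run it as the first step of the C.I. pipeline (or of the pre-commit hook) so a wrong environment is caught before anything else.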
It can seem really trivial to do, but it actually has a lot of benefits:
- It will save you hours of debugging caused by a wrong package version.
- You need to manually update the versions of your dependencies, which has the added benefit of forcing you to read the changelogs.
You know you have done a good job if it is easy for everyone to install the project from scratch.
Standards are really important when you work with people on the same codebase. You all need to format your code the same way in order to make it clear for everyone. You have two ways to do this on the front-end side:
- You define the codebase rules yourself and enforce them with a linter (like Airbnb did). You can also extend the config created by Airbnb and add your own rules.
- You don’t want to bother writing your own linter config, and you don’t have any time to spend arguing with your team about a specific rule: use Prettier or Standard.
Any choice is good, as long as you are using one.
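As a sketch of the first option, an .eslintrc extending Airbnb’s shared config with one overridden rule might look like this (the overridden rule is only an example, not a recommendation):

```json
{
  "extends": "airbnb",
  "rules": {
    "react/jsx-filename-extension": ["error", { "extensions": [".js", ".jsx"] }]
  }
}
```

This requires the eslint-config-airbnb package and its peer dependencies to be installed (with fixed versions, of course).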
Creating some scaffolders will allow you, again, to standardize your codebase and to save time on the repetitive tasks you do. It’s also a good practice I encourage, since it helps you bootstrap a feature really quickly, and it helps you switch to another implementation really easily through codemods.
Jérôme Smadja and I already wrote about the genesis of the front-end design system at BlaBlaCar (How to build a Design System? and Design System for the web). As you can see, we’ve made some scaffolders to create the components with their folder, CSS files, unit tests, accessibility warnings etc.
Another example would be our pre-commit hook. At BlaBlaCar, we have some scripts testing the Node version of the environment and the format of the package.json, to be sure all dependencies are fixed (see point no.2, Maintainability).
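The package.json part of such a hook can be sketched as a small check that flags any dependency declared with a range instead of an exact version (this is a simplified illustration, not our actual script):

```javascript
// check-fixed-versions.js — flag dependencies that use version ranges
// (^, ~, >, *, "latest"…) instead of an exact, fixed version.
function findUnfixedDependencies(pkg) {
  const all = { ...(pkg.dependencies || {}), ...(pkg.devDependencies || {}) };
  // An exact version looks like "16.8.6" or "1.0.0-beta.2".
  const exact = /^\d+\.\d+\.\d+(-[\w.]+)?$/;
  return Object.entries(all)
    .filter(([, version]) => !exact.test(version))
    .map(([name]) => name);
}

module.exports = { findUnfixedDependencies };
```

The pre-commit hook reads the package.json, calls this function, and rejects the commit when the returned list is not empty.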
Consistent workflow → Automation → No human mistakes
I’m not talking about code architecture here, but mainly about team interactions. If there is something to implement in a microservice, you shouldn’t hack it into the API because “it’s faster”.
Sometimes, it can be quicker to implement something where it is not supposed to be, because “the other team has more bandwidth”. Hell no. In order to keep your project consistent and scalable, you need to keep the responsibilities where they belong. Otherwise you will definitely pay for it sooner or later.
There is one particular thing I like in the BlaBlaCar front-end team: no one really owns a feature or even a Jira ticket. Everyone is able to work on everything, and I think this is a good thing. There is no single point of failure, and it leads to a more consistent codebase with more relevant code reviews. However, to work like this, you need really good specifications of your codebase. I am not talking about feature documentation hidden somewhere in the limbo of a wiki, but about:
- How does the API work? (SDK, API documentation…)
- How to monitor the website? (resources like Sentry, New Relic…)
- How to release? (Release scripts and process…)
Comments can sometimes act as specifications in the code. To be honest, we don’t have that many: we only comment when something is really tricky to understand by itself (necessary context, or function definitions), usually when someone asks for it in the P.R.
However, I consider unit tests really important, in terms of code health and for the specifications as well. They are easy to read, and make it easy to understand what a piece of code is supposed to do. If you are working in TDD, you can even write these specifications with the Product Owner from the beginning. If not, writing the tests will usually challenge your code anyway and make you realise the implementation is not good enough… And more importantly, the specifications stay alive along with the project. I sometimes consult the code coverage. I don’t really believe code coverage alone proves the sanity of a codebase, but I still think it is a good starting point to make sure every critical part is tested and documented.
The end-to-end tests are interesting for the same purpose as well: they translate the “wiki product specs” into specs that live with the features.
It would be a shame to do such nice work and fail so close to the end. You need confidence in what you’re going to push to production.
- Having a good C.I.
- Dependencies version checked
- Easy to rollback
- Easy to push to production (a “push to prod” button is my holy grail)
When you are pushing a feature to production, involve the product team beforehand: they check the branch on preproduction to validate it and make sure everyone is aligned on what goes to production.
Something really important to take into consideration when you are doing a progressive release: you need a rollout plan. What percentage of users will see this feature? Do we start with a specific country? Are the translations for this country ready? Once you start rolling out your feature, I suggest you (and/or your Product Owner) work really closely with the data analysts to monitor the rollout.
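The percentage part of a rollout plan is often implemented by bucketing users deterministically, so a member keeps seeing the same variant across sessions while the percentage ramps up. A minimal sketch (the hash function and API are illustrative, not a production feature-flag system):

```javascript
// rollout.js — deterministic percentage rollout sketch.
function hashToBucket(userId) {
  // Tiny djb2-style string hash: good enough for an example,
  // stable for a given user id.
  let hash = 5381;
  for (const char of String(userId)) {
    hash = (hash * 33 + char.charCodeAt(0)) >>> 0;
  }
  return hash % 100; // bucket in [0, 99]
}

function isInRollout(userId, percentage) {
  // A user is in the rollout when their bucket falls under the
  // current percentage; raising the percentage only adds users,
  // it never flips someone back to the old variant.
  return hashToBucket(userId) < percentage;
}

module.exports = { hashToBucket, isInRollout };
```

Starting with a specific country is then just an extra condition in front of the bucket check.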
It’s really important to keep track, somehow, of the latest changes. Depending on the project, you might need to update a changelog (updates + dates), or it could be a hook script posting to Slack to warn everyone. Automating that part is a good idea!
Last but not least, I wanted to talk about the definition of done. You need to know what the end of your task is. Is it when you merge your ticket? When it’s validated by QA? When it goes to production? When the progressive rollout is done? When the legacy code is removed? As long as your tasks aren’t done, you should keep a ticket or something reminding you (and your team) that you need to finish them.
I don’t understand how people can push code to production without being sure it’s working properly, both in terms of technical and product efficiency. To me, it’s mandatory to add some monitoring on the key points, and to track user behaviour to know how to evolve. If the metrics aren’t good enough, you will have all the information at hand to evolve. You need to be able to compare what you are doing, so the sooner you add your listeners, the better.
Let me give you an example: the first release of the new front-end architecture of BlaBlaCar. We moved from an old Symfony2 / Twig / jQuery stack to a brand new server-side-rendered ReactJS app. We decided to test it on the login page, so the first thing we did was check the tracking on the “old” page and update it. Then we released the new page, but with the same HTML/CSS, so the same UX/UI. Why did we do that? Simple: if we had changed too many things at once, we wouldn’t have been able to validate the architecture; the metrics could have changed for too many reasons.
Now, we have standardized the tracking system, and we have for each flow:
- A start event of the flow.
- An attempt of completion: the member reached the end of the flow and is trying to perform an API call to validate their action.
- Followed by a success or a fail event.
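These three standardized events can be sketched as a tiny flow tracker. The event names and the tracker API are illustrative, not our actual tracking SDK:

```javascript
// flow-tracking.js — the standardized flow events as a sketch.
// `send` is whatever actually ships the event (an analytics SDK,
// an HTTP call…); it is injected so the tracker stays testable.
function createFlowTracker(flowName, send) {
  return {
    // The member enters the flow.
    start() { send({ flow: flowName, event: 'start' }); },
    // The member reached the end of the flow and is trying to
    // perform the API call that validates their action.
    attempt() { send({ flow: flowName, event: 'attempt' }); },
    // The attempt is followed by either a success or a fail event.
    success() { send({ flow: flowName, event: 'success' }); },
    fail(reason) { send({ flow: flowName, event: 'fail', reason }); },
  };
}

module.exports = { createFlowTracker };
```

With this shape, the funnel (start → attempt → success/fail) is identical for every flow, which is what makes the dashboards comparable from one page to the next.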
By releasing new pages, we instantly get interesting feedback, which leads us to update and improve some features. It’s highly valuable!
You might find all of this obvious: advising people to be organized and well prepared. Yet it’s really rare to find teams where you can set up a good work environment like this from the beginning, and maintain it over time. It’s your responsibility as a developer to fight for this level of requirement, for the sake of the project.