I am a huge advocate of not just software testing, but TDD (Test Driven Development). To me, good testing is not only a requirement before shipping code to production; adopting TDD is actually a boost to productivity.
I will not go into too much detail about the merits of TDD here, as many articles already cover this topic. TDD at first feels like it is slowing you down; after all, your users don’t care about your nice test suite, they want features! Well, I am of the opinion that it is better to ship working software slowly than buggy software quickly. The idea of rapid iteration, moving fast and breaking things, and other deadline-driven development philosophies has landed us in the quagmire we find ourselves in today: security breaches, privacy leaks, hacked systems, broken apps, and buggy software in general. This article does an excellent job of waking the reader up to our current problems, and of showing how formal mathematical techniques like TLA+ can be used to mitigate, or in some cases eliminate, the risk.
Fun fact: Margaret Hamilton, a computer science legend, led the team that wrote the Apollo onboard flight software, whose rigorous error detection and recovery design helped save the 1969 Apollo 11 moon landing. She later developed the Universal Systems Language from those lessons, specifically to prevent catastrophic errors.
Ok, not all of us are writing mission-critical NASA software. Some of us are trying to build the next Tinder, I get it. But as developers, we need to take our craft more seriously. Tests should not be optional for any production code. At the bare minimum, the core business logic / API should be fully tested. It’s not the end of the world if there is a UI bug, and tech debt is a real thing we all constantly grapple with. But when it comes to user data, we need to move slow and tread carefully.
TDD should actually have been called “Test Driven Design” in my opinion. Because, when done correctly, the test suite is the side effect. The real utility of this approach is that it forces the developer to design the abstractions up front, before writing any code. The developer has to think:
- “What am I trying to accomplish with my code?”
- “What should the API / interface look like?”
- “What tests do I need to write, that if passing, prove I accomplished #1?”
Being forced to answer these questions early is crucial, and it will save you so much time in the long run. All software evolves: business needs change, and you need to be able to adapt by adding or removing features. How can you do this without being sure that your system still behaves correctly? Without a full test suite, you cannot. You will not be able to refactor or optimize your code, and over time the project will become unmaintainable. I have personally seen this on many projects. Don’t do this.
Remember, it is totally ok to hack for a few hours as you explore a new domain, API, or library to gain some context. And if you are just building a prototype for a hackathon, testing may be a waste of time (though even under time constraints it could actually save you time). But once you are ready to write production code, you must first write the tests. In fact, I go so far as to say the tests are actually the valuable part of the project. They are the proof that your system does what it says, and the living documentation for new developers getting up to speed with your codebase. The implementation code that satisfies that test suite is just a detail that can be swapped out or refactored at any time in the future (and surely will be).
Ok, enough preaching, we agree tests are good. But not all tests are equally valuable. Integration tests are the most valuable: you want to test the full working system, with as little mocking as possible (ideally none). End-to-end tests are the second most valuable, as they help protect against bad user experiences caused by a buggy UI. Unit tests are the least valuable in my opinion. Testing pure logic is useful while writing the implementation code, as it helps keep the code clean. But if you find yourself testing something that has no side effects, that is a good time to ask whether it should just be an open source npm module, so you can hide the abstraction from your business code (and provide value to the community). This also helps you create good interfaces by thinking about how someone else would use this bundle of logic in a different app (a perfect example of this is the GraphQL authorizer utility I created).
While writing SLC, I focused almost exclusively on building integration tests for all the GraphQL queries and mutations, as that is the actual value of the app and involves user data. In the interest of time, I did basically no React testing and little end-to-end testing. I set up code coverage reporting so that over time, I can work to increase the coverage. Feel free to help me! :)
Even when working alone on a project, it is important to have a good Continuous Integration pipeline. This will help you avoid regressions, and avoid searching your git history to figure out when something stopped working and why (we have all been there, it’s no fun).
AWS has two great services, CodeBuild and CodeDeploy. The latter is more complex and overkill for a small app like SLC. CodeBuild basically just listens for git webhooks and runs a set of commands as defined in the buildspec.yml. We run all our test suites in the AWS staging environment first, and the build only runs if the hook was triggered from a pull request branch or from a push to the staging or master branches. If it was a push to the master branch, and all the tests pass, we deploy to the production environment. Simple.
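A buildspec.yml implementing that flow might look roughly like the sketch below. This is an illustration, not SLC's actual config: the npm script names are assumptions, though `CODEBUILD_WEBHOOK_HEAD_REF` is a real environment variable CodeBuild exposes for webhook-triggered builds.

```yaml
# Sketch of a CodeBuild buildspec along the lines described above.
# Script names (test, deploy:production) are hypothetical.
version: 0.2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npm test   # run the full test suite against staging first
      - |
        # Deploy to production only on a push to master with green tests.
        if [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/master" ]; then
          npm run deploy:production
        fi
```

Because the deploy step sits after the test step, a failing test fails the build and the production deploy never runs.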
If we accidentally deploy a bug, we have to roll back the service manually. In the future, hopefully there will be a serverless plugin to help with this. The odds that we have deployed a bug go down as we add more tests, of course. All of our tests use the jest framework. Again, the fewer things to learn, the better. The React ecosystem has pretty much standardized around jest (with good reason, it’s great), so we use it to test our Node APIs as well.
Now on to Part 6.