Creating an SDK from scratch
A tale of despair and overcoming
It all started at Garage Beer, one of the many craft beer places we coincidentally have less than a ten-minute walk from our office in Barcelona, with a discussion some of us from the Integrations squad were having about the future of the team in terms of work and autonomy.
A couple of weeks later, we got together to discuss the details of the project, namely its implementation and security (which is very important considering that on the client side there is no way to hide the code being executed).
Since we were starting from scratch, we had the chance to apply the rules and practices we considered the best at the time. We decided to follow Behaviour Driven Development as a guideline, and for each module and feature completed we would make sure to have something tangible and deliverable, with proper documentation and working tests. This implied some additional setup work in the beginning, as explained further below, but it eventually paid off when we realized it takes just a couple of minutes to add a new feature with the confidence that everything still works just as before.
- The latest version of the language should be used, at the time ES2015 (including some of the main features such as classes, arrow functions, promises and so on), taking advantage of Babel to transpile the code and ensure backward compatibility.
- All the code should be split into modules, each corresponding to a set of features (for example, the Asset¹ module encapsulates all the functionality regarding the manipulation of assets).
- Each piece of module functionality should be fully described in a ticket assigned to a single developer, making sure tickets are independent and can be worked on simultaneously.
- Each ticket will result in a pull request reviewed by several developers: from the Integrations squad, with experience in our other SDKs, and from the frontend team, with experience in the language being used.
- A ticket would only be considered finished once it had the code implementation, documentation for every function and class, working tests for all the SDK methods, and samples showing how to use all the features implemented in that ticket.
This whole process helps ensure the SDK meets all the quality requirements to be released.
The decision to make this project a public SDK and an open source initiative was made way before the start of its development. At Bynder we greatly value open sourcing projects and sharing knowledge, so we thought it would be interesting to take some action in that direction and share some of our own work with the outside world, benefiting from the eventual interaction with the community.
Given that the whole purpose of this SDK is to make asynchronous calls to Bynder’s API, we decided to implement an APICall class that receives all the request data as well as the no less important authentication tokens. Each request needs to be authenticated using the OAuth 1.0a protocol and, as part of the specification, all of them need to include the authentication data in the header. Besides that, given the nature of some requests, they require a specific way of encrypting the information to be passed. Based on all of this, several methods were implemented to fulfill the need for a solid yet simple solution for making the API calls. Here we have the class diagram displaying the properties needed to create the class and the implemented methods:
This is an example of how it would be implemented in one of the SDK methods:
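The original code sample is not reproduced here, but a minimal, self-contained sketch of the idea might look as follows. All names (APICall, Bynder, getMediaList, the endpoint path) are illustrative assumptions, not the actual SDK implementation, and the HTTP request and OAuth signing are stubbed out:

```javascript
// Hypothetical sketch: an APICall-style helper plus an SDK method that
// delegates to it. The real class would sign each request with OAuth 1.0a
// and perform an actual HTTP call; here that part is stubbed.
class APICall {
  constructor(baseURL, consumerToken, accessToken) {
    this.baseURL = baseURL;
    this.consumerToken = consumerToken;
    this.accessToken = accessToken;
  }

  // Builds the absolute URL for a relative endpoint
  createURL(endpoint) {
    return `${this.baseURL}${endpoint}`;
  }

  // Stub: resolves with a description of the request instead of sending it
  send(method, endpoint, params = {}) {
    return Promise.resolve({ method, url: this.createURL(endpoint), params });
  }
}

class Bynder {
  constructor({ baseURL, consumerToken, accessToken }) {
    this.api = new APICall(baseURL, consumerToken, accessToken);
  }

  // Example SDK method: retrieves a list of media assets as a Promise
  getMediaList(params = {}) {
    return this.api.send('GET', 'v4/media/', params);
  }
}
```

A consumer then only ever talks to the facade methods and receives a Promise back, which is what the next section is about.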
For the authentication process we relied on a third-party dependency called oauth-1.0a, which was thoroughly reviewed by our Infosec team before being approved for usage.
In order to provide the best development experience possible, we decided to build an interface entirely based on Promises, giving users full control over their asynchronous code. This is an example of how to use the SDK employing the classic then/catch Promise syntax:
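The original snippet is an image; a hedged equivalent using then/catch chaining, with a stub standing in for the real SDK instance, would be:

```javascript
// Classic Promise consumption with .then()/.catch(). The `bynder` object
// is a stub standing in for a real SDK instance (method name assumed).
const bynder = {
  getMediaList: () => Promise.resolve([{ id: '1', name: 'logo.png' }]),
};

bynder.getMediaList()
  .then((assets) => {
    console.log(`Fetched ${assets.length} asset(s)`);
    return assets[0].name;
  })
  .catch((error) => {
    console.error('Request failed:', error);
  });
```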
Here we have the same code but using the ES2017 async/await feature:
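Again with a stubbed client in place of the real SDK instance, the async/await version might read:

```javascript
// Same flow with ES2017 async/await; `bynder` is again a stub client.
const bynder = {
  getMediaList: () => Promise.resolve([{ id: '1', name: 'logo.png' }]),
};

async function listAssets() {
  try {
    const assets = await bynder.getMediaList();
    console.log(`Fetched ${assets.length} asset(s)`);
    return assets[0].name;
  } catch (error) {
    console.error('Request failed:', error);
    return null;
  }
}

listAssets();
```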
Building and automation processes
By the time we started working on this project, we realized we needed a way to automate the main tasks that would be easy to configure, run and maintain. The main processes would be transpiling (using Babel), bundling (using webpack) and running tests (using Jasmine).
We decided to use Gulp, which is only a development dependency: none of the SDK’s runtime code depends on it. Thanks to its big open source community, Gulp has several plugins that make the creation of simple tasks extremely easy. Simply running gulp triggers the execution of ESLint, Jasmine, Babel and JSDoc, saving the time of having to run each one of them separately and letting us control the flow based on what each task produces.
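As an illustration, a gulpfile wiring those tools together could look roughly like this. The plugin names are the common community packages and the task layout is an assumption, not the project’s actual gulpfile:

```javascript
// Sketch of a gulpfile chaining lint, tests, transpilation and docs
// (plugin names are common community packages; the layout is illustrative).
const gulp = require('gulp');
const eslint = require('gulp-eslint');
const jasmine = require('gulp-jasmine');
const babel = require('gulp-babel');
const jsdoc = require('gulp-jsdoc3');

gulp.task('lint', () =>
  gulp.src('src/**/*.js')
    .pipe(eslint())
    .pipe(eslint.format())
    .pipe(eslint.failAfterError())); // stop the flow on lint errors

gulp.task('test', () =>
  gulp.src('spec/**/*.js')
    .pipe(jasmine()));               // stop the flow on failing specs

gulp.task('build', () =>
  gulp.src('src/**/*.js')
    .pipe(babel())                   // uses the project's Babel config
    .pipe(gulp.dest('dist')));

gulp.task('docs', (done) =>
  gulp.src(['README.md', 'src/**/*.js'], { read: false })
    .pipe(jsdoc(done)));

// `gulp` with no arguments runs the whole pipeline in order
gulp.task('default', gulp.series('lint', 'test', 'build', 'docs'));
```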
Because we decided to use ES2015 and some of its features wouldn’t be compatible with the range of environments we wanted to target, we decided to use Babel. We only used the env preset, in order to make sure the features we were using were rock solid and stable.
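For reference, a .babelrc along these lines would express that setup; the exact browser targets shown here are an assumption:

```json
{
  "presets": [
    ["env", { "targets": { "browsers": ["ie >= 11"] } }]
  ]
}
```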
When we were testing the SDK in a browser, we noticed the module system wasn’t compatible with the way Babel was transpiling the code, so we needed a way for the exact same code to also work when used directly in the browser. After some research, we realized the best option was to use webpack to bundle the source code into static assets that could be used in a browser. Once again, to ensure backwards compatibility, Babel was used (with the same preset) to transpile the code, making it usable down to IE11.
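A webpack configuration in that spirit might look like the following sketch; the entry path, bundle name and global library name are illustrative assumptions:

```javascript
// Sketch of a webpack config producing a browser-ready UMD bundle,
// transpiled through Babel (paths and names are illustrative).
const path = require('path');

module.exports = {
  entry: './src/bynder-js-sdk.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
    library: 'Bynder',     // exposed as a global when loaded via <script>
    libraryTarget: 'umd',  // also usable from CommonJS and AMD
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: 'babel-loader', // same env preset as the Node build
      },
    ],
  },
};
```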
Later in this project’s development, an early user of our SDK noticed that installing it with all the latest dependencies using npm install would pull in versions of dependencies that hadn’t been tested and were causing the library to crash. We promptly debugged it and found that the dependencies were not deterministic; in other words, we could not determine at all times what our dependency tree was, creating chances for inconsistencies whenever some of the packages were updated. Because of this we decided to use Yarn, which, thanks to the creation of a lock file (that needs to be added to the repository), creates a detailed dependency tree where it’s possible to find all the dependency packages and their resolved versions. As a result, whenever yarn install is executed, regardless of where and when you’re running it, you get the exact same dependencies and sub-dependencies installed, eliminating the problem our user found in the first place. By the time we released the SDK, the latest version of npm (v5) was already able to solve this same problem with its own lock file.
Following another of our established principles, we decided that from the very beginning we should document every single function and class we committed to the repository. We did that because it’s a well-known good practice throughout the entire development community and because, if you don’t do it from the start, it’s boring as hell to catch up later.
We adopted JSDoc because it seemed to us the option that best fit our needs, being both very customizable and complete. This is an example of how we document our code in this project:
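The sample below illustrates the documentation style; the helper function itself is made up for the example and is not actual SDK code:

```javascript
// Illustrative JSDoc-style documentation on a made-up helper function.

/**
 * Build the absolute URL for an API endpoint.
 * @param {String} baseURL - Base URL of the Bynder API.
 * @param {String} endpoint - Relative path of the endpoint.
 * @returns {String} The absolute URL for the request.
 */
function createURL(baseURL, endpoint) {
  return `${baseURL}${endpoint}`;
}

console.log(createURL('https://example.bynder.com/api/', 'v4/media/'));
// → https://example.bynder.com/api/v4/media/
```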
And this is an example of the output:
Testing is pretty much the cornerstone of any software project. In order to build on top of what you have, you need to make sure it is stable enough, and adding tests to the code ensures exactly that. Besides, it is a community-wide good practice, and here at Bynder we have no problem admitting:
All code is guilty until proven innocent. — Anonymous
We take into full consideration that imperfection is part of human nature, and exactly because of that, from the beginning we created tests for each and every one of the SDK’s public methods. A release is only possible when all the tests are passing.
To create our tests, we decided to use the well-known BDD testing framework Jasmine, since it is very lightweight (it has no external dependencies) and its syntax is straightforward, making the tests both easy to read and to write. Regarding the tests themselves, we went for a rather unorthodox and, honestly, quite questionable approach: we run our tests against a development environment in a traceable cycle, meaning that all the changes made are safely undone and the environment ends up in exactly the same state it started in. This way we can test the SDK against any environment without facing any risks.
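A spec following that create-verify-clean-up cycle might be sketched like this; the method names and the getTestClient helper are assumptions for illustration, and the spec needs the Jasmine runner to execute:

```javascript
// Sketch of a Jasmine spec exercising the traceable cycle: create a
// resource, verify it, then delete it so the environment is unchanged.
// (Method names and the getTestClient() helper are hypothetical.)
describe('Metaproperties', () => {
  let bynder;

  beforeEach(() => {
    bynder = getTestClient(); // hypothetical helper returning an SDK instance
  });

  it('creates and then removes a metaproperty, leaving no trace', (done) => {
    bynder.saveNewMetaproperty({ name: 'test-property' })
      .then(() => bynder.getMetaproperties())
      .then((list) => {
        expect(list.some((meta) => meta.name === 'test-property')).toBe(true);
        return bynder.deleteMetaproperty('test-property');
      })
      .then(done)
      .catch(done.fail);
  });
});
```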
Take for example the metaproperties² test state diagram:
This approach mixes integration testing with a bit of acceptance testing, since we are testing the API behaviour and at the same time ensuring the SDK methods do exactly what they are supposed to do. There are clear inconveniences as well as some advantages, but in the end it was all a matter of choice and, especially since we had a lot of freedom in this project, a way of trying a less conventional approach.
The last, but definitely not least, step of this project’s development was the release. Before shedding the first rays of sun on our code, we had to make sure that everything was perfect and all the committed code was good. And guess what? It was not. We are only people, and people make mistakes, so we needed a way of separating our private and intimate commits from what we really wanted people to see.
Because we had already faced this problem when releasing other public SDKs, we essentially used the same solution, which is quite simple and also very effective. The method was described by Drew Olson on Braintree’s tech blog and basically consists of having two repositories, one public and one private. You keep doing all the normal work in the private one, reserving one branch only for releases. When you feel you’re ready to release, you squash all your nasty commits into one super nice and tidy commit, ensuring that all the private commit messages stay private and won’t ever be publicly available. After that, you just push that branch to the public repository et voilà.
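The flow can be sketched with plain git commands; the repositories below are created locally just to make the example self-contained, and all names are illustrative. Here the squash is done by creating an orphan branch; a git merge --squash onto the release branch is an equivalent alternative:

```shell
# Sketch of the two-repository release flow described above. Paths are
# illustrative; the public GitHub repository is simulated with a local
# bare repository so the example is self-contained.
set -e
workdir=$(mktemp -d)

# The "public" repository (GitHub, in practice)
git init --bare "$workdir/public.git"

# The private repository where day-to-day work happens
git init "$workdir/private"
cd "$workdir/private"
git config user.email "dev@example.com"
git config user.name "Dev"
git commit --allow-empty -m "initial work"
git commit --allow-empty -m "wip: messy private commit"
git commit --allow-empty -m "fixup, oops"

# Squash all history into a single tidy commit on a release branch
git checkout --orphan release
git commit --allow-empty -m "Release v1.0.0"

# Push only the clean release branch to the public repository
git remote add public "$workdir/public.git"
git push public release:master
```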
Once it is there, the last step is to publish it on npm and make it available for the whole world to use.
The article you just read is a real-life story of a project that we worked on and how we decided to approach it. We are fully aware that there are different alternatives and this is just one that has proved to work for us.
It’s also meant to share our perspective on the problem and some of the principles we find adequate for our developers to follow. Some detailed development insights were included in order to add technical value without making the article too boring to read. We know how important that is.
As a final note, this project’s source code is fully available on GitHub and any external contribution is very welcome.
¹ — A [digital] asset is something represented in a digital form that has an intrinsic or acquired value. For example: images, videos, documents, audio files, etc.
² — Metaproperties are metadata that you use to classify and define your assets better. They help categorize assets and search through them.