ARK Core 2 Technical Update Series — New API v2
As we approach the testing phase of the new ARK Core v2, we would like to review some of the technical challenges we encountered, some decisions that had to be made, and why we switched some of the technologies during development. This is the first blog post in a series of technical updates; this one in particular focuses on the new API v2.
Overview of the upcoming technical blog posts:
- API v2 (you are currently reading it),
- Testing suite in ARK v2,
- Architecture and structure of v2,
- Webhooks in v2,
- How the deployment, testing and switch of the core from v1 to v2 will occur,
- Smart Bridges, Atomic Swaps & ACES.
Our primary goal is to keep the new API v2 simple & clean and allow new developers to quickly jump in and easily extend it. This is where Hapi comes into play. It offers a rich framework and plugin system that scales from personal projects to the enterprise space.
Hapi is actively developed and maintained by a team of experienced developers (https://github.com/orgs/hapijs/people) from Walmart, Twitch, Auth0 and many more.
Hapi allows us to implement every version of the API as a plugin which makes it possible to remove a whole API version by simply commenting it out. This is in line with our goal of making it as easy as possible to modify the ARK Core v2.
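To illustrate the idea, a stripped-down setup could look something like the sketch below (the plugin paths, port and prefixes are illustrative only, not the final ARK Core v2 layout):

```js
// Minimal sketch of mounting each API version as its own Hapi plugin (hapi@17 style).
// Paths, port and prefixes are illustrative only.
const Hapi = require('hapi');

const server = Hapi.server({ host: '0.0.0.0', port: 4003 });

const start = async () => {
    // Each API version is just a plugin mounted under its own prefix.
    // Removing a whole version is as simple as commenting out its registration.
    await server.register({ plugin: require('./api/v1'), routes: { prefix: '/api/v1' } });
    await server.register({ plugin: require('./api/v2'), routes: { prefix: '/api/v2' } });

    await server.start();
};

start();
```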
A major issue with the API v1 was that it didn’t follow any standards and was not RESTful in any way. The API v2 will be fully RESTful and follow the JSON API specification as closely as possible.
What this means is that all API endpoints that serve resources like blocks and transactions will act like collections. So instead of calling `/api/blocks/get?id=` you will call `/api/blocks/{id}`, where `/api/blocks` is the collection from which the record you request via `{id}` is retrieved.
This change will allow you to understand how the API endpoints work and are structured without having to constantly check the API docs because the structure is standardized.
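As a rough example of what such a collection endpoint looks like on the Hapi side (the database call below is a hypothetical placeholder, not the actual ARK Core v2 code):

```js
// Illustrative collection-style route that serves a single block by id (hapi@17 style).
// `database.findBlockById` is a hypothetical placeholder for the real data layer call.
server.route({
    method: 'GET',
    path: '/api/blocks/{id}',
    handler: async (request, h) => {
        const block = await database.findBlockById(request.params.id);

        if (!block) {
            return h.response({ error: 'Block not found' }).code(404);
        }

        return { data: block };
    }
});
```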
The initial development of the API v2 started with Restify, because it is well established in the Node.js world when it comes to API development and offers a simple interface. But it soon became apparent that it didn’t fit the needs and goals we had for the new API.
The limitations of Restify became more and more apparent as the API grew and the structure became harder to manage. It became difficult to keep it simple and clean because Restify doesn’t offer a clean and well defined plugin system.
(Note: there is a plugin system implemented, but it is rather basic and didn’t feel smooth to work with, and during the first implementation phase we had to develop our own plugins for versioning, pagination, throttling and validation.)
The migration from Restify to Hapi went smoothly as we were able to delete hundreds of lines of code thanks to Hapi’s configuration-based approach. This includes many plugins we previously had to develop and maintain ourselves, due to the fact that the Restify community is rather inactive.
Hapi focuses on giving you the ability to write reusable application logic and takes a configuration-based approach, saving time and keeping your codebase leaner and more organized.
The main reason for the transition to Hapi (besides the cleaner code and architecture) was the plugin system. The plugin system of Hapi offers a clean structure and hooks into many events of the HTTP request flow, allowing us to support both the v1 and v2 API without having to implement any nasty hacks for versioning as needed with Restify.
An important change in Core v2 is the improved API that is now fully RESTful and allows easier and more detailed access to information about the blockchain and all entities inside of it.
An issue with APIs that allow public access is that they can be abused to perform attacks on servers by flooding them with requests, causing too many queries and killing the database. A small change to help prevent this is the introduction of request throttling.
(Keep in mind that this is not a replacement for proper DDoS mitigation, as it only protects the API itself from being flooded with requests.)
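To give a feel for how throttling hooks into the request life cycle, here is a minimal sketch of a throttling plugin built on Hapi’s `onRequest` extension point (the limits, names and in-memory counters are purely illustrative, not ARK Core v2’s actual implementation):

```js
// Minimal illustrative throttle as a Hapi plugin (hapi@17 style).
// This is only a sketch of the idea -- not ARK Core v2's actual throttling code.
const counters = new Map();

const throttle = {
    name: 'simple-throttle',
    version: '0.1.0',
    register: (server, options) => {
        // Reset all counters at the start of every window.
        setInterval(() => counters.clear(), options.windowMs).unref();

        server.ext('onRequest', (request, h) => {
            const ip = request.info.remoteAddress;
            const hits = (counters.get(ip) || 0) + 1;
            counters.set(ip, hits);

            if (hits > options.limit) {
                // Reject the request before it ever reaches a route handler.
                return h.response({ error: 'Too Many Requests' }).code(429).takeover();
            }

            return h.continue;
        });
    }
};

// Registration with hypothetical limits: 60 requests per minute and IP.
// await server.register({ plugin: throttle, options: { limit: 60, windowMs: 60 * 1000 } });
```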
You can check that the request throttling is working by performing a request to any API endpoint and checking the response headers, which should look something like this:
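The exact header names and limits depend on the throttling configuration, so take the following purely as an illustration:

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 299
X-RateLimit-Reset: 1518160980
```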
As previously mentioned we will implement request throttling to prevent flooding of the API. Also, caching of API calls will reduce the load on the database as most data in the blockchain is immutable.
We leverage Catbox which offers a wide variety of drivers for popular caching solutions like Redis, Memcached or Riak just to name a few.
As we are taking a configuration-first approach to v2 it is as easy as changing a bit of configuration to enable or disable caching and swap out engines.
The goal of caching is never having to generate the same response twice. The benefit of doing this is that we gain speed and reduce server load on the ARK Core v2.
We provide an option for ARK Core v2 node owners to choose whether caching is enabled, and to define a persistence layer for storing the cache, like Redis for example.
Hapi lets you set cache settings on routes, but that only affects the cache headers it sends down. In order to cache the server data we need to define a server method.
(You can also cache manually, but server methods are so convenient that we encourage you to use them.)
Sample cache configuration of ARK Core v2:
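A configuration along these lines, assuming hapi@17 and the catbox-redis adapter, would wire Redis in as the persistent cache and put a cached server method on top of it (names and values are illustrative, not the final ARK Core v2 configuration):

```js
// Illustrative only: Redis as a named cache via catbox-redis (hapi@17 style)
// plus a cached server method on top of it. Names and values are placeholders.
const Hapi = require('hapi');

const server = Hapi.server({
    port: 4003,
    cache: [
        {
            name: 'ark-cache',                // named cache segment (hypothetical name)
            engine: require('catbox-redis'),  // swap the engine here to change the cache backend
            host: '127.0.0.1',
            port: 6379,
            partition: 'ark'
        }
    ]
});

// The server method is what actually gets cached -- route handlers call it via
// `request.server.methods.findBlockById(id)` instead of hitting the database directly.
// `database.findBlockById` is again a hypothetical placeholder.
server.method('findBlockById', async (id) => database.findBlockById(id), {
    cache: {
        cache: 'ark-cache',       // use the named Redis cache defined above
        expiresIn: 30 * 1000,     // illustrative TTL
        generateTimeout: 2000     // give up if generating the value takes too long
    }
});
```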
We want our APIs to be fast and scalable — so we made them like this :).
Since backwards compatibility is a must-have for the foreseeable future, we had to envision a solution that won’t break services currently relying on API v1. Ultimately, we ended up offering several ways of accessing the API version of your choice.
The classic (but no longer recommended) way of handling API versioning is by simply adding the version to your URL. This will give your users the ability to use different versions of your API right in the browser.
We will offer this method so you can access an endpoint like:
`/api/wallets` (defaults to v1), `/api/v1/wallets` and `/api/v2/wallets` right inside your browser.
The recommended way nowadays is to use an `Accept` header, which has the option to be registered with the IANA.
Accessing the API with it would look like this `Accept: application/x.ark-public-api.v2+json`.
The structure of this header is `application/tree.name.version+json` and there are three different registration trees that can be used for such headers:
- The unregistered tree (x) is primarily meant for local or private environments.
- The personal tree (prs) is primarily meant for projects that are not distributed commercially.
- The vendor tree (vnd) is primarily meant for projects that are publicly available and distributed.
If you are not familiar with how these registration trees work, you can simply use an `API-Version` header, which would look like `API-Version: 2` and yield the same result as the previously mentioned method.
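Putting it all together, here is how the same endpoint could be requested in each of the three ways (the host below is a placeholder):

```bash
# URL-based versioning (defaults to v1 when no version is given)
curl http://node.example.com/api/v2/wallets

# Accept header versioning
curl -H "Accept: application/x.ark-public-api.v2+json" http://node.example.com/api/wallets

# Plain API-Version header
curl -H "API-Version: 2" http://node.example.com/api/wallets
```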
To give you some more insight into the upcoming ARK API v2, here are two more of the new endpoints (with an illustration of the response shape below):
- `/api/blockchain`
- `/api/delegates`
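For collection endpoints like `/api/delegates`, you can expect a paginated, JSON API-style envelope along these lines (the field names and values here are placeholders for illustration, not the final response format):

```json
{
    "meta": {
        "count": 51,
        "totalCount": 51,
        "next": null,
        "previous": null,
        "self": "/api/delegates?page=1&limit=100"
    },
    "data": [
        {
            "username": "dummy_delegate",
            "address": "DUMMYADDRESS000000000000000000000"
        }
    ]
}
```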
The new ARK API v2 will bring many new features and better data handling, and will provide throttling and caching right out of the box. By following the latest trends and established best practices, we’ll be ready to build upon our foundation with peace of mind for future releases.
This concludes our first technical blog post and we hope you learned something new! Stay tuned for the next post in the series, which will focus on the implementation of testing suite best practices, the hurdles we had to jump over, and what all of this means!
Originally published at blog.ark.io on February 9, 2018.