Sit back and REST for a while [Chapter Three: Mission Completed]

Mateusz Szymajda
Published in Fresha Engineering · Mar 30, 2021
Safe landing

As you already know from the previous articles — our first attempt to define a proper way of API testing in our projects was not that fruitful. Therefore, in the last article of this series, we’ll explore the final approach toward solving this problem.

Just keep pushing 💪

We were not fully satisfied with how a Postman-based implementation would perform, due to some major trade-offs we were not willing to accept at that moment, so we took a shot at the custom solution I mentioned before. At that point, we had initially aligned on a technology stack consisting of tools we already use, to avoid possible obstacles and ensure a quick start and a gentle learning curve for other engineers. We selected:

  • Fetch API as a base library to handle REST methods
  • Jest as the test runner (already in use in Cypress)
  • Custom libraries for formatting JSON requests and responses
  • Lerna as a system workflow and packages management tool

And it all seemed to work perfectly fine from the very beginning. In fact, we were quite surprised by the development experience, which was swift and painless. Additionally, the tests ran astonishingly fast, and the individual specs started asynchronously in the Node runtime, which sped the whole suite up even more.

We started with simple mechanisms to send requests and wait for responses, so that we could validate their contents and make sure that actions triggered on endpoint A are reflected by the correct data being returned from endpoint B. It was very simple and limited at the time, but at least it worked. We then extended the assertions for every request to check the response code and response time. However, keeping and rewriting multi-level JSON payloads in the test code did not seem like a good idea to us; in fact, it was terrible to read and maintain. And here come our custom Request/Response Formatters, which we again designed as a separate package holding definitions of all resources and all requests, with automatic parsing of relationships.
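To give a feel for the first part of that mechanism, here is a minimal sketch of a generic request helper built on the Fetch API that captures the status code and response time alongside the parsed body. The names (`request`, `BASE_URL`) and the exact return shape are illustrative assumptions, not our actual implementation:

```javascript
// Hypothetical sketch of a Fetch-API-based request helper.
// It returns status, response time and parsed body so that tests
// can assert on all three. BASE_URL is an assumed local target.
const BASE_URL = 'http://localhost:3000';

const request = async (path, { method = 'GET', body, headers = {} } = {}) => {
  const startedAt = Date.now();
  const response = await fetch(`${BASE_URL}/${path}`, {
    method,
    headers: { 'Content-Type': 'application/json', ...headers },
    body: body ? JSON.stringify(body) : undefined,
  });
  const durationMs = Date.now() - startedAt;
  // Tolerate empty or non-JSON bodies instead of throwing.
  const parsed = await response.json().catch(() => null);
  return { status: response.status, durationMs, body: parsed };
};
```

A test can then assert on `status` and `durationMs` for every call without any extra plumbing.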

Let’s have a look at an example of a hypothetical, simplified Sale in the system. First, we need to define the resource types, attributes, and relationships that occur in the API communication:

export const sales = registry => {
  registry.define(RESOURCE_TYPES.SALES, {});
  registry.define(RESOURCE_TYPES.SALE_REQUEST, {
    attributes: ['price', 'number'],
    relationships: {
      employee: RESOURCE_TYPES.EMPLOYEES,
    },
  });
};

What exactly happened here? We exported a function that holds the information on what resources are needed to make an actual sale request. We registered the SALES resource type and SALE_REQUEST. To make this API call, we also need some attributes: the price and number of the sale items. Additionally, the employee reference is needed so the system knows that this particular sale is associated with the staff member of the given ID; that is a separate resource-type definition with its own attributes and relationships. With this tackled, the Request Formatters lib will know how to properly parse the requests and responses. Let’s now tell our framework what the sale request will actually do, and put that in a separate file:

const post = ({ Cookie } = {}, requestParams) => {
  const payload = {
    data: registry
      .find(RESOURCE_TYPES.SALE_REQUEST)
      .resource(requestParams),
  };

  return request('fresha-api/sales', {
    body: payload,
    method: POST,
  });
};

export default {
  post,
};

At this point, we can create a spec file and start sending the requests, but in the test body we need to first get the employeeId and pass that through to the payload body of the POST sale request:

describe('Create sale', () => {
  it('POST sale with one item', async () => {
    const [{ id: employeeId }] = await employees.get({ Cookie });
    const saleResponse = await sales.post({ Cookie }, {
      saleItems: [
        {
          number: 1,
          price: 555,
          employee: employeeId,
        },
      ],
    });

    expect(saleResponse).statusToBe(200);
    expect(saleResponse.saleItems.length).toEqual(1);
    expect(saleResponse.saleItems[0].price).toEqual(555);
  });
});

It’s a very basic and simplified example, but as you can see, the framework and custom libraries make our lives easier: we don’t need to extract employeeId manually because its definition is already registered, and we don’t need to remember all the relationships and constraints of the endpoints. Retrieving interesting response attributes is easy and painless, and it also reduces the number of lines of code.

After that, we moved to the second and most important phase of the implementation. As one of the key points of this whole project was to keep maximum consistency and reusability of the technology, we decided to move the Cypress seeders and some utils to the upper scope of the codebase, so those seeders could be shared between the API and E2E tests. To do so, we had to solve a couple of additional problems:

  1. The seeders use Cypress commands and functions, so they would not work in the API tests as they are.
  2. With the Lerna setup, the seeders and some other modules basically need to become separate NPM packages.
  3. The frontend apps, Cypress, the API tests, and all of those additional packages need to build correctly as Docker images.

That wasn’t very problematic, but it took some time, as we had to extract the seeding logic from the Cypress methods and rewrite it with our Fetch-API functions. It also meant that now our seeding classes became separate packages handled by Lerna.
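To illustrate the kind of rewrite involved, here is a hedged sketch: a seeder that previously relied on Cypress commands like cy.request becomes a plain async function built on the Fetch API, usable from both API tests and Cypress tasks. All names here (`seedEmployee`, `BASE_URL`, the endpoint path and payload shape) are hypothetical, not our actual seeders:

```javascript
// Hypothetical sketch of a seeder extracted from Cypress methods
// and rewritten as a plain Fetch-API function. The function name,
// endpoint path and payload shape are illustrative assumptions.
const BASE_URL = 'http://localhost:3000';

const seedEmployee = async ({ name, role }, { Cookie } = {}) => {
  const response = await fetch(`${BASE_URL}/fresha-api/employees`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Attach the session cookie only when one was provided.
      ...(Cookie ? { Cookie } : {}),
    },
    body: JSON.stringify({ data: { name, role } }),
  });
  return response.json();
};
```

Because the function no longer depends on the Cypress runtime, it can live in its own Lerna-managed package and be imported by both test suites.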

Another thing to consider is cookie handling 🍪 in such tests. In Cypress or similar frameworks that run tests in the browser (whether it’s an actual browser window or headless mode), you get cookie-handling mechanisms for free. Here, in a Node runtime, it’s not that straightforward: you need to explicitly define the approach for getting the cookies and passing them to subsequent requests (for instance, cookies used to authenticate the user and keep a valid session). It’s not rocket science, but it takes a couple of attempts to make it neat and simple. For instance, you can implement it as part of the main request function and pass a cookies object (obtained from a POST request to the user authentication endpoint) as an argument for every other request that needs to preserve that particular session. This gives you the flexibility to control which test receives which session cookie, and even to test edge cases with invalid or expired session tokens.
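One possible shape for this approach, as a sketch only: authenticate once, capture the session cookie from the response, and hand it to every later request explicitly. The endpoint path, credentials, and response handling below are assumptions, not our exact implementation:

```javascript
// Hypothetical sketch: obtain a session cookie once and reuse it.
// The /sessions endpoint path and credential shape are assumptions.
const BASE_URL = 'http://localhost:3000';

const signIn = async (email, password) => {
  const response = await fetch(`${BASE_URL}/fresha-api/sessions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, password }),
  });
  // Node's fetch exposes the Set-Cookie header via the headers API.
  const setCookie = response.headers.get('set-cookie');
  // Keep only the name=value pair, dropping attributes like Path or Expires.
  return { Cookie: setCookie ? setCookie.split(';')[0] : null };
};

// Usage: every request that should share the session passes it explicitly,
// e.g. const { Cookie } = await signIn('qa@example.com', 'secret');
```

Passing the cookie object around explicitly is slightly more verbose than a browser’s implicit cookie jar, but it makes each test’s session state visible and controllable.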

Frankly, you might want to consider having some test cases that validate sessions, tokens, and, in general, user authentication handling in an environment that reflects the production mechanisms. It then probably makes sense to customize token/session handling for the rest of the tests, such as extending the expiry time to avoid flakiness. That is, of course, if your production tokens tend to expire shortly after being created or need frequent refreshing.

At the end of the day, our solution reached the MVP level and was released to other teams for feedback and further improvements, with a clear plan to make it part of our common development process.

Finally, the API tests became part of our standard set of GitHub checks: the test runs had to be green on every Pull Request before changes could be merged. Still, there’s plenty of work and exciting challenges ahead of us concerning the seeding strategy, reporting, KPI tracking, and response mocking, just to name a few. I will definitely write more about our next steps and the problems we’ve faced, so you won’t be that surprised when you start implementing API tests at your company; perhaps this article and the ones to follow will shed some more light on these tests’ peculiarities.

Last but not least, just a few words that might be helpful before starting such projects in your organization:

  • Start with clear business and technology argumentation. You need to be sure what problems your solution will solve and what value it will bring for both engineers and product teams, and potentially for users.
  • Plan ahead. Create a clear roadmap of how and when you’re going to implement each phase of the project. It’s extremely vital to have the bigger picture of what it takes to deliver, how much time you need, and what resources (this will also help your managers make fact-based decisions rather than rely on guesstimates).
  • Do your homework. Be prepared for questions, doubts, and discussions with people who will not agree with your point of view. And that’s okay, because this is how good solutions are made: through constructive criticism.
  • Find people in the company who are willing to make a change, take a step further, and do something awesome. Encourage them and gather up the dream team. Don’t always strive to be a one-man army; it’s no fun.
  • You will fail, and that’s completely fine as long as you draw conclusions and mark out the next steps and action points to improve in the next iteration. It usually takes a couple of tries to get something done right; that’s why it’s called ‘research & development’, ‘proof of concept’, or simply an ‘experiment’.
  • It’s a long game. Most such projects take an extensive period of time as you build up a clear vision around the idea and start to implement it. New parts will be added and additional mechanisms developed, so prepare to keep the momentum for a prolonged period.

As you can see, new projects are always challenging; there are plenty of doubts, blockers, and, generally speaking, rough times along the way. However, this is how you can make a change, not only for your organization but also for yourself. This particular project enabled Fresha engineering to take further steps toward complete Continuous Delivery. And yes, this is just one step in a process that should be ongoing and that touches many other aspects of your company, but we believe the goal is totally worth it. For me personally, it was a huge jump out of the comfort zone, but on a positive note, it was very exciting to plan this initiative and then see it come to life bit by bit. It was also fun and engaging to work with such a tremendous Fresha team! 👩‍💻 👨‍💻

And on that bombshell, it’s time to end 💣💥

I will update you more on our project’s performance and the next action points for this initiative as the dust settles down a bit and we gather more data. So, stay tuned as we’re just getting started! 📻
