Making third-party services integration contactless

Michael Kibenko
Published in NI Tech Blog
5 min read · May 22, 2022

I first encountered the subject of third-party integration at one of my previous jobs, where I was required to design an architecture for integrating a third-party payment system. Our first thought was to exchange APIs and connect directly. But that approach required us to think about API stability, backward compatibility, and potential data loss, which isn't necessarily bad, but it isn't suitable for every occasion.

In this article, I want to share a contactless third-party integration architecture and the things you need to think about when building one.

So, in what cases do you need a third-party integration to be contactless?

  • Security: if you have a closed service that isn't available to outside traffic, or suppose you don't yet have an authentication and authorization mechanism in your destination service but need to push the integration process forward. In this case, you can use your cloud provider's security mechanisms instead of exposing your service to outside traffic.
  • Data mapping: if you need to map the incoming data to your standards and want to do it outside your service to save its resources.
  • Data size: if your integration payloads can potentially be too large to pass over HTTP.
  • Development speed: if you need to deliver something really quickly because of business needs (rapid application development).
  • Replay: if, for some reason, you need to replay the integration process repeatedly.

So, how did we implement contactless third-party integration?

Architecture elements


  • Third-party service: uploads and receives integration events via the shared storage. We used JSON files with the relevant content.
  • Shared cloud-based storage: stores the integration requests and statuses (under a dedicated statuses folder). Using storage isn't mandatory; you can replace it with an API gateway or an event-driven component that triggers the serverless functions. We chose storage to avoid data loss and to handle potentially large files.
  • Serverless function: processes the integration request. You can read more here about why and when to use serverless computing compared with other architectures.
  • Destination services: process the integration, saving and publishing the data in our system.
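As a concrete sketch of how the storage and the serverless function can be wired together, here is a minimal AWS SAM template fragment. The bucket, handler path, runtime, and "requests/" prefix are illustrative assumptions, not the exact setup we used.

```yaml
# Sketch: an S3 bucket whose "object created" events trigger the
# integration function. All names here are hypothetical.
Resources:
  IntegrationBucket:
    Type: AWS::S3::Bucket

  IntegrationFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handler.handler
      Runtime: nodejs18.x
      Events:
        IntegrationFileCreated:
          Type: S3
          Properties:
            Bucket: !Ref IntegrationBucket
            Events: s3:ObjectCreated:*
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: requests/
```

The prefix filter keeps the function from firing on the status files the function itself writes back to the same bucket.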

Serverless function logic

  • Event: enable a "file creation" trigger for your serverless function. On every file-creation or file-update event, the function executes with the relevant event data, such as the added file's name and path.
  • Getting the object from your storage: fetch the file referenced by the event data. Keep this code decoupled from the rest of your logic so the trigger is easy to change.
  • Validate received data: we used validate.js for incoming-data validation, with a validation schema shared between the companies. If there are any validation errors, they finish the flow and a status with the relevant errors is sent.
  • Status notifier: we decided to write statuses back to the shared storage as JSON files, using a predefined interface agreed between the companies that looks like this:
    {
      ok: true | false,
      action: "create" | "update" | "delete" | "publish",
      content: { ...integrationContent },
      errors: [ ...errors ]
    }
  • Map the received data to your format: in this step, we map the received object to our system's format and standards. Try to make this code pluggable to reduce refactoring when you reuse this architecture for a different entity.
  • Call your inner service for integration: here we call our service to create and publish the object in our system.
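The steps above can be sketched end to end. This is a minimal, hypothetical implementation: the payload fields ("title", "action"), the key prefixes, and the injected storage and destination clients are all illustrative assumptions, and the hand-rolled validation stands in for validate.js.

```javascript
// Validate the incoming payload; returns a list of error messages.
// (A stand-in for a shared validate.js schema.)
function validatePayload(payload) {
  const errors = [];
  if (typeof payload.title !== "string" || payload.title.length === 0) {
    errors.push("title is required");
  }
  if (!["create", "update", "delete", "publish"].includes(payload.action)) {
    errors.push("action must be one of create|update|delete|publish");
  }
  return errors;
}

// Map the third party's format to our internal format (illustrative fields).
function mapToInternal(payload) {
  return {
    name: payload.title.trim(),
    operation: payload.action,
    receivedAt: new Date().toISOString(),
  };
}

// Build a status object matching the shared interface shown above.
function buildStatus(action, errors, content) {
  return { ok: errors.length === 0, action, content, errors };
}

// The storage and destination clients are injected, so the trigger is easy
// to change and unit tests can pass fakes instead of real AWS clients.
async function handleFileCreated(event, storage, destinationService) {
  const key = event.key; // e.g. "requests/1653216000000-order.json"
  const payload = JSON.parse(await storage.getObject(key));

  const errors = validatePayload(payload);
  if (errors.length > 0) {
    // Validation failed: report the errors and stop the flow.
    const status = buildStatus(payload.action, errors, null);
    await storage.putObject(`statuses/${key}`, JSON.stringify(status));
    return status;
  }

  const mapped = mapToInternal(payload);
  await destinationService.createAndPublish(mapped);
  const status = buildStatus(payload.action, [], mapped);
  await storage.putObject(`statuses/${key}`, JSON.stringify(status));
  return status;
}

module.exports = { validatePayload, mapToInternal, buildStatus, handleFileCreated };
```

Because the storage client is behind a two-method interface, swapping S3 for a queue trigger only changes how the event and object reach the handler, not the handler itself.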

So, what are the disadvantages of using contactless integration?

  • Latency: this process can be time-consuming; if you need a real-time solution, use event-driven mechanisms instead of storage.
  • Storage management: over time the storage becomes heavy; to avoid this, manage it by, for example, deleting (soft or hard) processed files and perhaps compressing them for long-term retention.
  • File uniqueness: manage the uniqueness of file names. We used timestamps as part of the file name to prevent one file from overwriting another.
  • File size: you can potentially receive a huge file; be ready for that situation (using streams, etc.) or limit the file size at the start of the process.
  • Testing: it is harder to write automated tests. We used AWS mocks for unit tests and testim.io for E2E tests in a separate environment.
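For the file-uniqueness point, a tiny sketch of timestamp-based key naming; the "requests/" prefix and entity name are hypothetical, and the random suffix is an extra guard against two uploads landing in the same millisecond:

```javascript
// Build a storage key that is unique per upload by embedding a timestamp
// and a short random suffix.
function buildRequestKey(entityName, now = Date.now()) {
  const suffix = Math.random().toString(36).slice(2, 8);
  return `requests/${now}-${suffix}-${entityName}.json`;
}
```

A key like "requests/1653216000000-k3f9x2-order.json" also sorts chronologically, which makes replaying the integration in order straightforward.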

Extra suggestions for building and using contactless integration:

  • Dev velocity: use CI/CD tools for smoother development. We used AWS SAM with predefined templates and Jenkins for the CI/CD cycle.
  • Working with the third-party company: before development, agree on your integration interfaces with the third party, such as status formats, validation schemas, and execution time ranges.
  • Rate management: if you need rate management for your integration, swap the storage trigger for an event-driven technology by passing the events through a queue service (for example, SQS) that triggers the serverless function.
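A hedged sketch of what that queue-based variant could look like in a SAM template; the queue name, concurrency cap, and batch size are illustrative assumptions:

```yaml
# Sketch: an SQS queue feeding the integration function, with a
# concurrency cap to control the processing rate. Names are hypothetical.
Resources:
  IntegrationQueue:
    Type: AWS::SQS::Queue

  IntegrationFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handler.handler
      Runtime: nodejs18.x
      ReservedConcurrentExecutions: 5   # at most 5 events processed in parallel
      Events:
        IntegrationMessages:
          Type: SQS
          Properties:
            Queue: !GetAtt IntegrationQueue.Arn
            BatchSize: 10
```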

Conclusions

We were working on an integration project that could potentially involve really large entities and image uploads, and we needed it to be super fast so we could see the business value as soon as possible. Despite this, as software developers, we wanted to make it good and production-ready from the start, because good architecture is key to building modular, scalable, maintainable, and testable applications. The typical way to communicate is through APIs, but every project has specific needs, so whenever you face something special, open your mind to find the best solution for your needs. In this article, I tried to present just one extra way to build a really quick and scalable integration project without any extra authorization mechanisms or logic in your service, which gave us the ability to push a large amount of data from a third party into our system while keeping the architecture managed and stable.

We worked on that project together with Alex Verevkine, thanks for a really interesting time :)

Happy coding :)
