F# Microservice Patterns @ Jet.com

James Novino
Sep 5, 2018

Written By: James Novino

In this post, I elaborate on how we build, design, and scale microservices in Jet’s Order Management System (OMS). This post is an extension of Abstracting IO using F#, posted previously.

Background

A microservice architecture is a method of developing software that focuses on building single-function services (modules) with well-defined operations. This architecture pattern has grown in popularity in recent years as it offers some key benefits:

  • Deployment Flexibility — Services can be deployed independently without affecting the rest of the system.
  • Isolation — Code is easier to follow since each function (service) is isolated and has fewer dependencies.
  • Modularity — Services can be built with different technologies or languages, and updating the technology stack doesn’t require an overhaul of the entire system.
  • Reusability — Services can be shared across systems or business units.

These benefits are among the main reasons Jet, Netflix, Amazon, PayPal, and other tech companies have chosen microservice architectures over the monolith. However, microservices aren’t without their challenges:

  • Deployment Complexity — While microservices provide valuable benefits when it comes to isolated deployment, achieving it often requires sophisticated deployment infrastructure.
  • Support — While microservices provide benefits when it comes to code isolation, they run in a distributed manner, which can complicate the support and maintenance of systems.
  • Monitoring — Monitoring asynchronous interactions is challenging and requires a lot of specialized tooling.

These are just a few of the trade-offs of microservices; there are many more, which are better described in other posts.

At Jet, we have been using microservices since the very beginning. In the last 4+ years, we have learned lessons about how to build, deploy, manage and support an ever-growing set of requirements and services. In the previous post on IO Abstractions, in the section on “Service Abstractions”, we briefly discussed how we use OMS.Infrastructure to build microservices. This post is intended to elaborate on those details.

Microservices

Most microservices at Jet follow the decode → handle → interpret pipeline that was discussed in the previous post. This pattern was initially motivated by the notion of a DSL (Domain Specific Language): define a DSL, and then build interpreters for it. The decode/handle/interpret flow is essentially a highly specialized version of this pattern. Note: The limitation of this approach is that it falls short of fully decoupling effects from the core logic. The handle function may need to make asynchronous calls to retrieve data in order to perform domain logic. As a result, part of the function is “pure” in that, rather than performing effects, it simply returns a description of the effects to perform (an output), but another part may not be “pure”.

The decode → handle → interpret pipeline has a few core constructs which make up the basis of a microservice.

1. A set of inputs that a microservice can handle. These inputs are commonly represented as a discriminated union in F#.

2. A set of outputs that a microservice can interpret. Note: These are not the actual outputs of the microservice but an internal representation of what those outputs should be. More on this later.

3. A decode function that deserializes incoming messages into the appropriate strongly typed input.

4. A handle phase that takes an input from the previous step, runs some business logic to calculate what side-effects should occur, and generates the corresponding output.

5. An interpret phase that takes an output and executes the side-effects that the output represents.

With some plumbing code, the above constructs can be chained together to form a single handler which makes up a microservice. Shown below is an example of the constructs and their composition.
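A minimal sketch of these constructs and their composition might look like the following. The DomainEvent stand-in, the Input/Output cases, and the mkHandler plumbing here are illustrative assumptions, not the actual OMS.Infrastructure definitions.

```fsharp
// Illustrative sketch: the names below are assumptions, not the actual OMS types.

// Stand-in for the "raw" message type produced by the consume subscription.
type DomainEvent = { EventType : string; Data : string }

// 1. The set of inputs the microservice can handle.
type Input =
    | OrderPlaced of orderId : string

// 2. The set of outputs the microservice can interpret (an internal description
//    of the side-effects to perform, not the effects themselves).
type Output =
    | StartFraudCheck of orderId : string
    | Ignore

// 3./4./5. The three phases, expressed as signatures:
//   decode    : DomainEvent -> Input option
//   handle    : Input -> Async<Output>
//   interpret : Output -> Async<unit>

// Plumbing that chains decode → handle → interpret into a single handler.
let mkHandler (decode    : DomainEvent -> Input option)
              (handle    : Input -> Async<Output>)
              (interpret : Output -> Async<unit>)
              (event     : DomainEvent) : Async<unit> =
    async {
        match decode event with
        | Some input ->
            let! output = handle input
            do! interpret output
        | None -> return ()
    }
```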

The above example demonstrates the core constructs described earlier in the post (i.e., the inputs, outputs, decode, handle, and interpret). In the full service there are also some additional pieces, such as the consume subscription, the RunWithMetrics composition, and the service entry point. I’ll elaborate on these in more detail below, but before we get there, let’s talk about the core concepts.

Decode

The above example shows how the different components are chained together to form a microservice. To start, the input type is created in the decode function:
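A sketch of such a decode function, reusing the illustrative DomainEvent and Input types from above (the event-type strings and payload handling are made up for the example):

```fsharp
// Sketch: the event-type strings and the payload handling are illustrative only.
let decode (event : DomainEvent) : Input option =
    match event.EventType with
    | "OrderPlaced" -> Some (OrderPlaced event.Data)   // deserialize the payload into a typed input
    | _             -> None                            // anything else is skipped
```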

The decode function accepts a DomainEvent (the raw message representation described in the previous post) and turns it into an input that the microservice can handle. Note: This pattern can be used on arbitrary “raw” inputs; the DomainEvent just happens to be a specialized representation of a “raw” input type used by the OMS system. Typically in our systems, we match on the event type, whose value varies by the system the DomainEvent is created from. For example, for events coming from EventStore the event type comes from the EventStore event itself, whereas a custom published field is used to set the event type for messages generated to Azure Service Bus or Kafka.

The decode does not have to match on the event type; we can also utilize things like the message data or metadata to make these determinations when necessary. Note that the decode function returns an Option (F#’s optional type), which allows us to skip any invalid messages.

Typically, for Service Bus queues or Kafka topics, we have only one type of message per queue/topic/channel, in which case we don’t necessarily need to look into the data or metadata. Another typical pattern is to assign a dedicated decode function to a single incoming stream.

The incoming stream would then be mapped over to create a list of handlers:
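A sketch of that mapping, assuming one AsyncSeq of DomainEvents per queue/topic and the mkHandler plumbing from earlier (the names are illustrative):

```fsharp
// Sketch: requires the FSharp.Control.AsyncSeq package. incomingStreams is assumed
// to be a list of AsyncSeq<DomainEvent>, one per queue/topic, and handle/interpret
// are the functions sketched in the following sections.
open FSharp.Control

let handlers : Async<unit> list =
    incomingStreams
    |> List.map (AsyncSeq.iterAsync (mkHandler decode handle interpret))
```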

Handle

The handle function accepts an input and returns an output. The handle is essentially a middleware step meant to encapsulate any business logic. For example, a typical use case is for the handle function to fetch an aggregate (a state representation of event-sourced data) to determine what type of outputs (side-effects) are required. In the example below, the service is responsible for starting a fraud check.
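A sketch of what such a handle might look like; the OrderAggregate shape and the loadOrderAggregate function are assumptions standing in for however the aggregate is actually rebuilt:

```fsharp
// Sketch: the aggregate shape and loadOrderAggregate are assumptions.
type OrderAggregate = { OrderId : string; FraudCheckStarted : bool }

let handle (input : Input) : Async<Output> =
    async {
        match input with
        | OrderPlaced orderId ->
            // Fetch the aggregate (state built up from the event-sourced order stream).
            let! (order : OrderAggregate) = loadOrderAggregate orderId
            // Business logic: only request a fraud check if one hasn't been started yet.
            if order.FraudCheckStarted then return Ignore
            else return StartFraudCheck orderId
    }
```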

The example above shows how the handle encapsulates some domain logic to determine what type of side-effects, if any, need to be executed by the interpret.

Interpret

The interpret accepts an output from the handle function and returns an asynchronous computation. The interpret, unlike the other functions, is responsible for enacting side-effects. Continuing the example from the section above, the interpret function would either do nothing or send a command to our fraud service to evaluate the order for fraud.
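A sketch of the corresponding interpret, where sendFraudCheckCommand stands in for however the command is actually published:

```fsharp
// Sketch: sendFraudCheckCommand is a stand-in for the real command publisher.
let interpret (output : Output) : Async<unit> =
    async {
        match output with
        | StartFraudCheck orderId ->
            // Side-effect: ask the fraud service to evaluate the order,
            // ignoring the result of the write.
            do! sendFraudCheckCommand orderId |> Async.Ignore
        | Ignore ->
            return ()
    }
```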

While in the example above we are only emitting a single side-effect, it’s not uncommon for an interpret to emit several different side-effects for a single output. Another common practice is to emit different side-effects based on the result of the first side-effect. For example, in the case above, instead of ignoring the write result, we could match on the outcome and emit an event to log the response:
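A sketch of that variation, assuming the command write returns a Result and that the logging helpers exist (both assumptions):

```fsharp
// Sketch: the Result cases and the logging helpers are assumptions.
let interpret (output : Output) : Async<unit> =
    async {
        match output with
        | StartFraudCheck orderId ->
            let! writeResult = sendFraudCheckCommand orderId
            // Emit a second side-effect based on the outcome of the first.
            match writeResult with
            | Ok _      -> do! logFraudCheckRequested orderId
            | Error err -> do! logFraudCheckFailed orderId err
        | Ignore ->
            return ()
    }
```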

Handle + Interpret (Handler)

A handler is a function responsible for encapsulating the “domain logic” from the handle and interpret functions and applying it to the incoming message stream passed from the consume → decode chain. A handler is created by composing the handle and interpret functions by passing them into the RunWithMetrics module.

The RunWithMetrics module lives inside the OMS.Infrastructure package, which we discussed in Abstracting IO using F#. This module contains two functions:

- a basic variant that composes handle and interpret and records metrics around each phase

- a traced variant that additionally derives tracing information (latency, producer, etc.) from a tracing header on the message

These two functions compose the handle and interpret functions into a handler that can be composed with the consume function to create the service. The implementation of these functions is fairly straightforward:
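The following sketch shows the general shape; the function names, the metric helpers, and the tracing details are all assumptions rather than the real RunWithMetrics code:

```fsharp
// Sketch: recordMetric/recordTrace and the function names are assumptions.
module RunWithMetrics =

    /// Wraps an async computation and records its elapsed time as a metric.
    let private timed (metricName : string) (work : Async<'a>) : Async<'a> =
        async {
            let sw = System.Diagnostics.Stopwatch.StartNew ()
            let! result = work
            recordMetric metricName sw.ElapsedMilliseconds   // write to the configured metric stores
            return result
        }

    /// Composes handle and interpret into a single handler, recording metrics for each phase.
    let run (name : string) (handle : 'input -> Async<'output>) (interpret : 'output -> Async<unit>) input =
        async {
            let! output = timed (name + ".handle") (handle input)
            do! timed (name + ".interpret") (interpret output)
        }

    /// Same as run, but also uses a provided function to extract a tracing header
    /// from the message and derive latency/producer information from it.
    let runTraced (name : string) extractTrace handle interpret (event, input) =
        async {
            recordTrace name (extractTrace event)
            do! run name handle interpret input
        }
```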

The main difference between the two is that the traced variant uses a provided function to extract the tracing information and requires that a tracing header be present on the message in order to derive information about latency, producer, etc.

As you can see from the implementations above, both of the functions evaluate the handle and then pipe its output into the interpret. There are some utility functions throughout this implementation, such as the timing wrapper, which are used to wrap async functions to capture the elapsed time and write a metric to the configured metric stores.

Consume

Now that we have discussed most of the required parts of the microservice example from earlier, we can build the pipeline:
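A sketch of the composed pipeline, reusing the illustrative names from the earlier snippets; OmsService.consume and the stream definition are assumptions modeled on the previous post, and the subscription is simplified to a flat stream of DomainEvents:

```fsharp
// Sketch: OmsService.consume and streamDefinition are assumptions; the stream is
// simplified to a flat sequence of DomainEvents.
open FSharp.Control

// The per-message pipeline: decode, then handle + interpret via RunWithMetrics.
let pipeline (event : DomainEvent) : Async<unit> =
    match decode event with
    | Some input -> RunWithMetrics.run "fraud-check" handle interpret input
    | None       -> async.Return ()

// The full service: consume a stream and apply the pipeline to every message.
let service : Async<unit> =
    OmsService.consume streamDefinition
    |> AsyncSeq.iterAsync pipeline
```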

One of the elements we haven’t discussed yet is the consume function. The consume function is a streaming (subscription) method which we use to connect to and consume messages from systems like Kafka and Azure Service Bus. This function is detailed in the previous post. In brief, it allows us to represent streaming messages as an AsyncSeq of incoming messages, where the incoming message type is simply a discriminated union (DU):
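A sketch of such a DU; the name and case payloads here are assumptions, and the real definition is in the previous post:

```fsharp
// Sketch: the type name and case payloads are assumptions.
type Incoming =
    | Message of DomainEvent        // a single message
    | Batch   of DomainEvent []     // an ordered collection; commit order matters
    | Bunch   of DomainEvent []     // an unordered collection; commit order does not matter
```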

The incoming message type is intended to represent three common message representations:

  • Message : A single message
  • Batch : A collection of messages in a sequence, where the commit order matters
  • Bunch : A collection of messages where the commit order does not matter

All of this is covered in more detail in the previous post. But by composing the consume, decode, and handler functions, we now have a pipeline whose signature looks like this:
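In the simplified sketches above, the composed service value would have a signature along these lines (an illustration, not the actual OMS signature):

```fsharp
// Signature of the composed pipeline in the simplified sketches above, as it
// would appear in a signature file or in F# Interactive output.
val service : Async<unit>
```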

While we have defined all the logic needed to handle our inputs and outputs, we still need an entry point to create an executable for the service — we need something to kick off our microservice. This is where the service start functions come into play.

The example takes the incoming streams as an arbitrarily sized list and passes them into the start function, which applies the pipeline function over each AsyncSeq. The implementation for the start function is below:
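A sketch of a possible shape for such a start function (the names are assumptions):

```fsharp
// Sketch: requires the FSharp.Control.AsyncSeq package; names are assumptions.
open FSharp.Control

let start (pipeline : DomainEvent -> Async<unit>) (streams : AsyncSeq<DomainEvent> list) : Async<unit> =
    streams
    |> List.map (AsyncSeq.iterAsync pipeline)   // apply the pipeline over each incoming stream
    |> Async.Parallel                           // run every stream subscription concurrently
    |> Async.Ignore
```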

This function is the final piece of the puzzle: we now have an entry point for our program. Typically, this pipeline would be called from a Program file, which is responsible for setting up all of our configuration and initializing logging before starting the pipeline.
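A sketch of a typical Program entry point; the Config and Logging helpers are placeholders for whatever configuration and logging setup the service actually uses:

```fsharp
// Sketch: Config and Logging are placeholders; pipeline and incomingStreams are
// the values sketched earlier.
[<EntryPoint>]
let main _argv =
    Config.load ()                        // read service configuration
    Logging.initialize ()                 // set up logging and metric sinks
    start pipeline incomingStreams        // kick off the decode → handle → interpret pipeline
    |> Async.RunSynchronously
    0
```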

Conclusion

This post went into more detail about how we build microservices, namely the decode → handle → interpret pipeline. There are many aspects of the boilerplate that were not discussed in this post, such as how we deal with idempotency, the implementation of the handler function, and the configuration module in OMS.Infrastructure. Another important topic that is not covered is the parallelism and scaling of these microservices, which will be the topic of a later post.

If you like the challenges of building distributed systems and are interested in solving complex problems, check out our job openings.


Thanks to Gad Berger, Lev Gorodinski, and Krishna Vangapandu
