Service Oriented Architecture and its implementation on-premise and in cloud

Rishabh Ajmera
Oct 10, 2019

In today’s cloud world, many believe that anything deployed in the cloud is a “service” or “microservice”. In my opinion, using containers, App Service, or Serverless offerings does not by itself make something a Service: one is an Architecture/Software Design concern, whereas the other is a Technical Solution/Implementation Detail.

This blog post attempts to clarify the meaning of the two, and then shows how the right Design can be implemented on-premise or by using modern-day Cloud Technologies.

A Service is the Technical Authority over a specific area of business capability. This definition comes from Udi Dahan.

A service should be autonomous. It should not share its raw business data; it can share only stable data with other services, via events.

By following the above rules, we avoid the need to shovel and move raw business data from one service to another, and thus cause less churn when the data is updated in the service that has authority over it.

As a service, you won’t allow other services to reach into your data, and you won’t share your raw business data with others. So how does one work with you?

As a service, you will provide a component that can be placed wherever access to your data is needed. All the behavior on, and access to, the service’s data goes through that component, which allows the service to remain the sole authority over that data.

A Service owns a cohesive unit of data — data that is required to be together because of invariants between its different data points. For example, the name of a customer should not affect the price of a product, hence the two should not be in the same service.
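
As a minimal sketch of this boundary (all names below are hypothetical, invented for illustration): renaming a customer is purely an operation of the service that owns customer identity, and it can never touch pricing data, because pricing lives in a different service.

```python
from dataclasses import dataclass

# Hypothetical illustration: each service owns only the data bound
# together by its own invariants.

@dataclass
class CustomerRecord:          # owned by a Customer Care service
    customer_id: str
    name: str                  # renaming a customer touches only this service

@dataclass
class ProductPricing:          # owned by a Sales service
    product_id: str
    price: float               # pricing invariants live here, nowhere else

def rename_customer(record: CustomerRecord, new_name: str) -> CustomerRecord:
    # A pure Customer Care operation: no pricing data is involved.
    return CustomerRecord(record.customer_id, new_name)
```

Because the two types live in separate services, a schema change on one side cannot ripple into the other.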

Following the above rules, you will typically end up with several services. If these services were hosted independently and users were made to access each service separately, it would be a very bad usability experience. Hence, instead of exposing the services in their raw form, components from the services are hosted in a System, which houses several components from these services to allow seamless access for the users.

Consider the example of a website: users who author the content need a different set of tools to make the content presentable than users who merely view it. Thus you typically end up with a Content Authoring System and a Presentation System.

Let’s consider the two most common ways in which this gets implemented.

If we have two different databases, the team owning each system does have the autonomy to change its database. However, the data generated in one system is not used in that system, so at some point a Sync or Move of the data will need to happen to make the generated data reach the consuming System.

With a single database, we won’t have to shovel the data. However, all the functionality is now coupled to that database: any change to its schema requires approval from all the Systems using it. Imagine the number of features in an enterprise-level application all surviving on a single database, and the risk the business would see in making any change to such a database.

The right thing to do is to create Services over cohesive units of data, and then have the components from these services deployed into whichever Systems present or operate on that data. This is different from database-per-System, since the data from the service is no longer shoveled around; instead, the component of the service is placed wherever its data is used, which keeps all the behavior associated with that data inside the service. It is also different from the single-database approach, because just like the Green service, there will be a Blue service with its own cohesive set of data (and database), behavior on that data, and its own components; and a Red service, and so on. With a single-database solution, the data of all these possibly separate services ends up in one database, giving no autonomy for change and no way to keep a change isolated.
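
A rough sketch of the idea, with entirely made-up names: the service hands out a component that encapsulates all access to its data, and any System hosts that component rather than a copy of the data. Both Systems below go through the same component type, backed by the same store, so nothing is shoveled between them.

```python
# Hypothetical sketch: a service hands out a component that encapsulates
# all access to its data; Systems host the component, not the data.

class SalesComponent:
    """Deployed into any System that needs Sales behavior; the Sales
    service remains the sole authority over this data."""

    # Class-level dict deliberately shared by all instances, standing in
    # for the single backing database owned by the Sales service.
    _store: dict = {}

    def record_order(self, order_id: str, lines: list) -> None:
        self._store[order_id] = lines

    def order_lines(self, order_id: str) -> list:
        return self._store.get(order_id, [])

# Two Systems host the same component; no raw data is copied between them.
authoring_system = SalesComponent()
presentation_system = SalesComponent()
```

An order recorded through the component in one System is immediately visible through the component in the other, with no sync or move step.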

Architectural Needs for Composing Services into System

There are three needs, from an architectural standpoint, in order to achieve composition of services into a System.

Let’s consider these needs one at a time and see how each can be achieved, or at least solved to an extent, with today’s technology offerings. To describe the needs in a more realistic manner, an eCommerce example consisting of three services has been chosen, with each service owning its raw business data as follows.

These services have been selected for ease of explanation. The solutions presented to each of the architectural needs can be applied to any Red, Green, Blue services.

Note that, when meeting the needs, the solutions presented assume that all the code for accessing the data and business logic of a particular service lives in the codebase of that service. A component — i.e. a library — is handed out of the service to be put into a System, after which the System handles the interaction and deployment of those components.

Architectural Need 1: UI Composition
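
As a minimal sketch of UI composition (all names hypothetical): each service ships a small UI fragment, and the System owns the page layout that stitches those fragments together.

```python
# Hypothetical sketch of UI composition: each service contributes a
# fragment, and the System composes them into one page.

def sales_widget(order_id: str) -> str:       # from the Sales component
    return f"<section id='order'>Order {order_id}</section>"

def shipping_widget(order_id: str) -> str:    # from the Shipping component
    return f"<section id='delivery'>Delivery options for {order_id}</section>"

def compose_checkout_page(order_id: str) -> str:
    # The System owns layout; each service owns only its own fragment.
    return "\n".join(w(order_id) for w in (sales_widget, shipping_widget))
```

Adding a new service to the page means registering one more widget with the System; no existing service’s fragment needs to change.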

Architectural Need 2: Handling Posts

Posts here means POST requests, i.e. requests from users to change state.

Single Service Post (Updating State of Single Service)

Composed Post (Updating state of multiple services)

To understand the need better, consider the Place Order use case and how the three services interact. From the end user’s standpoint, the overall action they perform is to “Place an Order”. However, this starts from the checkout page, after which they enter Delivery details (for the Shipping service), Payment details (for the Billing service) and Order-related information (for the Sales service), and then Place the Order (interacting with the Sales service). All of this interaction is provided to the user in a seamless manner by the System hosting components of these services, and can be achieved in the following manner.

Note that OrderId is a stable concept and is shared across the different services. It is passed around in the UI so that each service can associate its data for that order with the passed OrderId.

This enables the OrderPlaced event to carry just the OrderId in its payload, instead of carrying every service’s data related to the Order. If a service needs to change the data it associates with that Order, it can do so without requiring any changes to the OrderPlaced event. Likewise, any new service added in the future will not cause changes to the messages, so existing services will not need to be modified. This is possible because we share only the stable business concept of OrderId, even in the events.
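
The idea can be sketched as follows (a simplified illustration, not the actual message contracts from the demo): the event carries only the stable OrderId, and each subscribing service looks up its own data keyed by that id instead of reading raw business data out of the payload.

```python
import uuid

# The stable OrderId is generated up front and shared via the UI.
order_id = str(uuid.uuid4())

# Each service has already stored its own data, keyed by the OrderId.
billing_data = {order_id: {"card_last4": "4242"}}    # Billing's own store
shipping_data = {order_id: {"postcode": "12345"}}    # Shipping's own store

# The event payload contains no raw business data, only the stable id.
order_placed = {"event": "OrderPlaced", "order_id": order_id}

def on_order_placed(event: dict) -> dict:
    # Shipping's handler: fetch its own data rather than read the payload.
    return shipping_data[event["order_id"]]
```

If Billing later adds or reshapes its order-related data, neither the event nor Shipping’s handler is affected.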

Many times, teams follow a messaging-based architecture but share raw business data in messages across services, and thus don’t get the real benefits of Event-Driven Architecture. In some scenarios, such as integrating with a 3rd party, one does need to expose APIs, since HTTP is the most widely used protocol. Another such scenario is when Systems are deployed remotely and you need to integrate or bring over data from the remote location; there you would package data across services. But those are exceptions, not the norm.

In the case above, Shipping implements the timeout to track completion, but any of the services could implement it by subscribing to the start message (OrderPlaced) and the end-of-processing messages such as OrderBilled or OrderShipped. We can also implement multiple such timeouts, depending on how closely the business needs the process to be tracked.
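
The tracking policy can be sketched like this (a bare-bones illustration with invented names, not how NServiceBus sagas actually implement timeouts): record when an order’s processing started, clear it on completion, and flag anything whose timeout has elapsed.

```python
# Hypothetical sketch of a completion-tracking policy.

started = {}   # order_id -> time the process began

def on_order_placed(order_id: str, now: float) -> None:
    started[order_id] = now                 # start of the process

def on_order_shipped(order_id: str) -> None:
    started.pop(order_id, None)             # end of the process

def overdue(now: float, timeout: float = 60.0) -> list:
    # Orders whose timeout elapsed without an OrderShipped event.
    return [oid for oid, t in started.items() if now - t > timeout]
```

A real implementation would persist this state and be driven by the bus’s timeout messages, but the shape of the policy is the same.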

Architectural Need 3: Notification of changes and handling of notification in always running process

The components of a service involved in achieving a use case are referred to as an Autonomous Component. Dividing a service into separate islands of data creates separate Business Components. By creating multiple instances of the background process for any of the above components, we follow the competing-consumer pattern.

Typically a separate background process is created per service. One can go to a lower granularity than that by creating a worker process for specific use cases/handlers within that service. This way we end up with multiple worker processes for the service, which gives the flexibility to host them separately and also scale them out separately, as per business needs. For example, imagine that Priority shipping needs to be processed quickly compared to regular shipping. If the handling for both is in the same worker process, we cannot scale them separately. By keeping them separate, we can run more instances of the Priority shipping background process than of the regular one.
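
In-process queues and threads can stand in for the real queues and worker processes to sketch the idea (everything here is hypothetical scaffolding): priority and regular shipping get separate queues, so each can be given its own number of competing consumers.

```python
import queue
import threading

# Separate queues per use case, so each can be scaled independently.
priority_q = queue.Queue()
regular_q = queue.Queue()
handled = []   # orders processed, across all consumers

def worker(q):
    # Each running instance of this function is one competing consumer.
    while True:
        try:
            order_id = q.get(timeout=0.1)
        except queue.Empty:
            return              # queue drained; worker exits
        handled.append(order_id)
        q.task_done()

priority_q.put("p1")
regular_q.put("r1")

# Scale priority shipping to three consumers, regular shipping to one.
threads = [threading.Thread(target=worker, args=(q,))
           for q in (priority_q, priority_q, priority_q, regular_q)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In a real deployment the “threads” would be separately hosted worker processes (or function instances), scaled out per queue.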

Deployment Views

There are 4+1 views of software design. One should remember that one can follow SOA when on-premise, and fail to follow SOA even when deploying to the cloud. Having said that, let’s look at how we can deploy the above architecture both on-premise and in the cloud.

On-Premise deployment

Cloud deployment

Service Fabric reliable services are another option for hosting in the cloud, apart from Azure Functions.

Note that, with Microsoft aiming for lift and shift, many of the cloud deployment models are also available for on-premise hosting. Azure Functions and Azure Service Fabric are examples of such offerings.

Dealing with multiple Resources and Atomicity

Databases provide atomicity across a batch of operations performed against them in a single session. Now consider the need to persist data and to notify other services about it by publishing events on a bus, doing both in an atomic fashion. Let’s first understand the need for atomicity, and then the solutions.

Need for Atomicity

In architectural needs 2 and 3, where state is modified in a service, the service may also need to notify other services about the change. Hence the need for atomicity.

Solutions

  1. Distributed Transactions spanning several resources

This is becoming legacy. Microsoft in the past came up with a model wherein resources can enlist themselves into a transaction; if the transaction spans different resource types, it is escalated to a distributed transaction and the framework takes care of guaranteeing atomicity. Some Microsoft products, like SQL Server and MSMQ, support distributed transactions (via MSDTC). However, this is not the technology of the future, and it is not supported by Azure cloud resources.

2. Outboxing

A slightly different implementation can also periodically look at the outbox table, publish the events, and delete the published events from the outbox table. NServiceBus implements outboxing to meet the atomicity need; its implementation may vary slightly from what is described in this blog.
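
The core of the pattern can be sketched with SQLite (a simplified illustration, not NServiceBus’s actual implementation): the business row and the outgoing event are written in one local database transaction, and a separate relay step later publishes pending events and deletes them from the outbox table.

```python
import sqlite3

# In-memory database standing in for the service's own database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, event TEXT)")

def place_order(order_id: str) -> None:
    # One atomic local transaction covers both the state change and
    # the record of the event to be published.
    with db:
        db.execute("INSERT INTO orders VALUES (?)", (order_id,))
        db.execute("INSERT INTO outbox (event) VALUES (?)",
                   (f"OrderPlaced:{order_id}",))

published = []   # stands in for the message bus

def relay_once() -> None:
    # Periodic job: publish pending events, then delete them.
    rows = db.execute("SELECT id, event FROM outbox").fetchall()
    for row_id, event in rows:
        published.append(event)
        with db:
            db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
```

If the process crashes after the transaction but before the relay runs, the event is still in the outbox and will be published on the next pass — at-least-once delivery, with no distributed transaction needed.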

3. Orchestrator Function in Durable functions (hosted in Azure function)

Documentation on Durable Functions can be found here. Durable Functions can work with several Azure resources, not just Storage and Azure Service Bus, by operating on them inside activity functions.

The orchestrator function provides reliability for calling activity functions. It stores execution history in a storage table and then schedules the activity by adding messages to a storage queue.
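
Very roughly, the mechanism looks like the sketch below (a toy in-memory model, not the real Durable Functions runtime): before scheduling work the orchestrator records it in durable history, and on replay it skips any step the history already shows as scheduled, so each activity is enqueued exactly once.

```python
from collections import deque

history = []            # stands in for the durable history (storage table)
work_queue = deque()    # stands in for the storage queue

def schedule_activity(name: str, payload: str) -> None:
    # Replay-safe: if history already shows this step, skip re-enqueueing.
    if ("scheduled", name) not in history:
        history.append(("scheduled", name))
        work_queue.append((name, payload))

def run_activity() -> None:
    name, payload = work_queue.popleft()
    # The real runtime would execute the activity here, then record it.
    history.append(("completed", name))
```

Replaying the orchestrator (calling `schedule_activity` again for the same step) is a no-op, which is what makes crash recovery safe.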

Shortcomings and workarounds

Both NServiceBus outboxing and Azure Durable Functions provide a solution for atomicity when the actual operation is inherently asynchronous in nature. Hence, when an HTTP request comes in and the user is actually waiting for an immediate response, there is no clear path to combining persistence and messaging, at the time of writing this blog.

In the case of NServiceBus, one can use the callbacks (Request-Reply) mechanism to invoke the asynchronous handler in a synchronous manner. In the handler one can then use the context (provided by NServiceBus) to save service data using the shared database session and to publish events on the context. However, one needs to be careful about handling exceptions in the handler in this situation.

Demo and Code

A demo website implementing the above SOA concepts in an eRetailer System, deployed to Azure App Service and Azure Functions, can be found here. The final source code for the same can be found in the Azure DevOps repository here. If you want to develop a similar application, follow the steps mentioned in the Code Along.

Code Along

We will implement the eRetailer System with three services — Shipping, Billing and Sales — doing a composed post as identified in Architectural Need 2, shown below again for reference.

We will rely on the setup done as part of a different blog about Managed Identities in Azure. It was done across five “Code Along” exercises; the code repository for it can be found here. The final codebase at the end of Exercise 5 (found here) is taken as the starting point for Exercise 6.

Code for Exercises 6 and 7 can be found in Azure Devops repository here.

Exercise 6: Wiring-up events

Similar to the eRetailerSalesEndpoint (Azure Function App) created in Exercise 5, we will create eRetailerBillingEndpoint and eRetailerShippingEndpoint as Function Apps, and update the topology to allow pub-sub of the events to happen correctly. We will not be persisting data yet, hence we will not require Durable Functions, since we work with only the Azure Service Bus resource in this exercise.

Azure Devops folder for Exercise 6 in code repository can be found here.

Exercise 7: Persist data to Azure Storage Table and notify completion with email

Taking the code further to persist data and publish events using Durable Functions, we will complete the eRetailer System. The System will send out an email to the email address entered as part of DeliveryDetails.

Azure Devops folder for Exercise 7 in code repository can be found here.
