Microstructures and other Velocity Drivers — Part 3

Paul Pogonoski
6 min read · Nov 16, 2022


So, now things get interesting, if not a little controversial. Today we actually discuss Microstructures.

As I say in my Introduction, please be patient and keep an open mind as all will be revealed in the following chapters.

The previous section can be found here: https://medium.com/@paulpogonoski/microstructures-and-other-velocity-drivers-part-1-c0361b766e20

Microstructures

Microstructures are a key Velocity Driver. In fact, they are the key Velocity Driver, in that their existence influences the remainder of the Velocity Drivers.

So, what are they?

Firstly, I wish to make a clarification. I will be using the word “infrastructure” in this chapter. This is not, in any way, an endorsement (tacit or otherwise) of the use of the word, especially when a public cloud is involved. I have already reminded the reader, and will continue to do so, that the use of the word is no longer reasonable in the world of the public cloud. However, I’m using it here for two reasons:

  1. I’m trying to be agnostic and not cloud-specific.
  2. I use part of the word in the new term, Microstructure, and so wish to make the subconscious link.

As its name suggests, a Microstructure is analogous to a Microservice, in that it is a subset of the total infrastructure used to support the total solution. And just like Microservices, Microstructures can be grouped, or defined, by the functional domain they support. Luckily, however, unlike Microservices, they are more easily identified and grouped.

Simply put, a Microstructure is the grouping of the infrastructure that supports a Microservice.

So, if a Microservice is a group of containers running in a Kubernetes instance, then candidate infrastructure components would be (keeping things as agnostic as possible):

  • The Kubernetes Namespace definition
  • The Kubernetes Pod(s) definition
  • The Kubernetes Service definition
  • The Kubernetes Ingress definition
  • Any Kubernetes Secrets or Env Vars
  • The Load Balancer for the Service Ingress
  • The Firewall rules
  • The Database definition
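
To make that concrete, here is a minimal sketch of the Namespace and Service components, expressed with the Terraform kubernetes provider. This is an illustration only: the “payments” service name, ports, and labels are hypothetical.

```
# Minimal sketch: the Namespace and Service pieces of one Microstructure,
# via the Terraform kubernetes provider. "payments" is a hypothetical
# Microservice name; ports and labels are illustrative only.

resource "kubernetes_namespace" "payments" {
  metadata {
    name = "payments"
  }
}

resource "kubernetes_service" "payments" {
  metadata {
    name      = "payments"
    namespace = kubernetes_namespace.payments.metadata[0].name
  }

  spec {
    selector = {
      app = "payments"
    }
    port {
      port        = 80
      target_port = 8080
    }
  }
}
```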

Now, some of these would be less likely candidates if Kubernetes was not a cloud service. But if it were on AWS, say, then all of them would be in the Microstructure for the Microservice:

  • The EKS Namespace Terraform resource
  • The EKS Pod(s) Terraform resources
  • The EKS Service Terraform resource
  • The EKS Ingress Terraform resource
  • The AWS Secrets Manager Secret Terraform resources
  • The AWS SSM Parameter Store Terraform resources
  • The AWS ALB Terraform resource
  • Any AWS S3 bucket Terraform resources needed for the Microservice
  • The AWS Security Group Terraform resource
  • The AWS IAM Role Terraform resources needed for the Microservice
  • The AWS RDS Cluster and DB Terraform resources
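
As a hedged illustration of a few of these, the fragment below sketches the Security Group, Parameter Store, and RDS pieces. Everything here is hypothetical (the “payments” name, sizes, and variables); a real definition would need full networking and engine settings.

```
# Hypothetical fragment of a per-Microservice Microstructure on AWS.
# Shared Services (e.g. the VPC) are passed in as variables because
# they are owned by the DevOps team, not the Microstructure.

resource "aws_security_group" "payments" {
  name   = "payments-microservice"
  vpc_id = var.vpc_id # Shared Service, supplied by the DevOps team
}

resource "aws_ssm_parameter" "payments_log_level" {
  name  = "/payments/log-level"
  type  = "String"
  value = "info"
}

resource "aws_db_instance" "payments" {
  identifier          = "payments-db"
  engine              = "postgres"
  instance_class      = "db.t3.small" # sized for one service, not a shared monolith
  allocated_storage   = 20
  username            = var.db_username
  password            = var.db_password
  skip_final_snapshot = true # appropriate for lower environments only
}
```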

All of these would be defined in the same single GitHub (or equivalent) repo, and most likely the same repo as the Microservice code.

The CD pipeline would then be able to incorporate a conditional Microstructure pipeline that could be invoked to create, modify, or destroy the Microstructure.
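
One way to realise that conditional stage, keeping everything in Terraform, is to wrap the Microstructure in a module and gate it behind a single pipeline-supplied variable. The module path and variable names below are hypothetical, a sketch rather than a prescription.

```
# Sketch: the CD pipeline passes one variable, and Terraform creates,
# modifies, or destroys the whole Microstructure accordingly
# (module count requires Terraform 0.13 or later).

variable "microstructure_enabled" {
  type    = bool
  default = true
}

module "microstructure" {
  source = "./microstructure" # same repo as the Microservice code
  count  = var.microstructure_enabled ? 1 : 0

  vpc_id = var.vpc_id
}

# The pipeline stage then reduces to:
#   terraform apply -var="microstructure_enabled=true"   # create or modify
#   terraform apply -var="microstructure_enabled=false"  # destroy
```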

At this point you are probably asking:

  1. So why these components? For instance, why wouldn’t there be a single ALB, or SG, or RDS cluster; and what about the EKS definition?
  2. Hmmm, who’s responsible for the conditional Microstructure pipeline? That is, aren’t you in danger of having different parts of the pipeline owned by different teams?

Two very good questions, so let’s take them one at a time, starting with question 1.

These components were chosen because they can be uniquely, and singularly, tied to the Microservice in this instance of using the AWS public cloud. Any cloud service that is shared cannot be considered part of the Microstructure; it is a Shared Service, and the responsibility of the DevOps team (more on this later).

Further, making the ALB, SG, and RDS cluster unique and separate per Microservice has very little impact on pricing, and may cost less than a single, large, shared instance of them, because:

  • SGs are free
  • An ALB is charged per instance, per hour, but the lifecycle management costs are reduced
  • An RDS instance may well be cheaper, as costs are determined by RDS instance type, and these will clearly be much smaller and cheaper than a very large cluster for everyone. Also, the lifecycle management costs are reduced, as you can upgrade/change without affecting other services.
  • In lower environments these resources can be removed or shut down when not in use, further saving money and allowing greater flexibility
  • When the Microservice is removed from service, the supporting Microstructure can be similarly removed with a single instruction to Terraform, making the lifecycle cost of the Microservice much lower because management of it is simpler.
  • Deploying the Microservice into different regions is significantly simplified by adjusting the pipeline to run in that region, as sketched below.
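
On the multi-region point above, a sketch of how small that pipeline adjustment could be (again hypothetical):

```
# Sketch: the region becomes a pipeline input, so the same Microstructure
# can be stood up elsewhere by re-running the CD pipeline with a new value,
# e.g. terraform apply -var="region=eu-west-1"

variable "region" {
  type    = string
  default = "us-east-1"
}

provider "aws" {
  region = var.region
}
```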

This is a concrete example of the impact of replacing infrastructure with cloud services. They are cheap and simple to define and deploy, and the notion of a large, complex, non-modifiable, shared service, or infrastructure, needs to be retired, except where it genuinely makes sense (more later).

Now for the second question.

If your organisation does not require a development team to own and run its own CI/CD pipelines, then ask: why not?

CI and CD pipelines should be separate and distinct. The CI pipeline is responsible for building, testing, and packaging the software artefact, most likely a container image placed into an artefact repository. The CD pipeline may then, but not always, be initiated to take that artefact, and possibly others, and deploy the Microservice into the required environment. This separation allows the same artefact(s) to be deployed many times in many places, under different conditions. It also allows multiple development streams (branch-oriented or trunk-based) to create their own artefacts without impacting the CD pipeline.

This separation may seem the perfect reason to make the Dev team responsible for the CI pipeline, and the DevOps team responsible for the CD pipeline. Then adding the conditional Microstructure pipeline to the CD pipeline would seem to make this separation even more sensible. But it shouldn’t happen, especially if deploying to a public cloud.

As already mentioned, the Microstructure definitions are likely in the same repo as the Microservice code, because they are, in fact, symbiotic. Because of this, the Dev Team should own and be responsible for the CD pipeline. They are responsible for the entire operational ecosystem of the Microservice, except for the overarching Shared Services: the EKS cluster, the VPC the cluster is in, the network in that VPC, and the account the VPC is in.

This is a very practical example of what most people think of as “shifting left”. However, the original definition, as described in The Phoenix Project [2], was about moving testing, and proving out the solution, as early in the lifecycle of the application/Microservice as possible. Regardless of which definition you subscribe to, it still works, because the Dev team relies on no-one but themselves to make sure what they deliver is as resilient, error-free, scalable, and performant as the Product Manager’s SLAs require (more later).

So, what of infrastructure that is shared, that isn’t in a Microstructure? These services will be managed by the DevOps team as shared services they offer to the development teams. More on this in the chapter on the structuring of the DevOps team.

I said at the start of this chapter that I considered Microstructures the key Velocity Driver. How does separating and bundling infrastructure, or cloud services, increase Development and Delivery Velocity?

Delivery Velocity seems more self-evident: the Dev team is not waiting for a separate team to deploy the infrastructure before they can deploy their code, especially when that separate, centralised team is fully, or near fully, utilised. On top of that, the infrastructure may be part of some larger monolithic infrastructure, meaning there would be wait time while scheduling changes. So, separation allows the Dev Team to take ownership and modify as needed (more next chapter).

Development Velocity does seem a harder claim to make. But if the Dev Team owns the Microstructure in Production, it naturally follows that they also own it in the lower environments. If the team owns it in the lower environments, then they can create, modify, or remove it as needed, without waiting on the separate team as above, thus compounding the impact. Design changes can then be easily accommodated as issues are found in an iterative, agile, stepwise delivery of the code. Indeed, the Microstructure would be built up the same way, with an initial MVP and then additions as the code dictated. This results in much greater velocity.

What I’ve just demonstrated is that Microstructures are clearly Velocity Drivers, just like their software counterparts. But they are the key driver because they are the catalyst for other Velocity Drivers: in this case, the Dev teams becoming the owners of the Microstructures.

[1] https://en.wikipedia.org/wiki/Conway%27s_law

[2] The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win
Authors: Gene Kim, Kevin Behr, George Spafford
Publisher: IT Revolution Press
ISBN: 978-0-9882625-9-1
Published: 10 January 2013
