Minimum Viable Migrations — bringing agility to Cloud Modernization
Cloud Modernization approaches often lack the agility needed to ensure success and maximise value to users. Thinking in terms of events not only provides a path to state-of-the-art Event-Driven architectures, but also a non-terminal architectural destination ready to evolve with customer needs and technological advances. We need to move away from a “big-bang” migration mindset and use events to map a series of non-terminal Minimum Viable Migrations (MVMs) to stay current and adaptive to change. This article presents the concept of Minimum Viable Migrations with a particular focus on migrations to Serverless Cloud-Native architectures.
Legacy
Legacy, a term often whispered as if it’s an obscenity, is a fact of life for all technology companies. Legacy is, in some ways, an achievement — it worked, it was successful and now it’s time for evolution.
Migration projects are how we evolve and remain agile. The latest evolution many face is the move to Cloud-Native and Serverless, so how can we bring agility to such migrations?
Agile Migrations
Most teams will take an Agile approach to launching new products and services: starting with a Minimum Viable Product (MVP) and releasing iteratively, gathering feedback, testing assumptions and delivering value to customers more rapidly.
A minimum viable product (MVP) is a version of a product with just enough features to be usable by early customers who can then provide feedback for future product development.
MVPs allow us to test hypotheses earlier, learn faster, reduce waste, ship to customers earlier and validate assumptions.
When it comes to migration projects, however, this often goes out the window. Teams look for a “big bang” release once the new system reaches “feature parity” (an equivalent set of features and functionality) with the existing system.
Cloud Migration and Modernization
Cloud Migration, the process of moving digital business assets and operations to a cloud provider (or to another cloud provider), became extremely popular with the advent of the public cloud. These migrations often took the form of “lift-and-shift” moves, with refactoring considered later, and only when needed.
Migrations are at their heart a shift in domains, from a Domain A to a Domain B.
This could be a “Traditional” Cloud Migration from on-premises to a public cloud provider.
A migration of Language or Framework.
A Structural Migration from a Monolith to Microservices.
Or a Cloud Modernization from a classic cloud hosting solution to a Cloud-Native, Serverless architecture.
Serverless is emerging as the future of the Cloud: a range of services that allow you to build and run applications without having to think about servers. Serverless architectures reduce total cost of ownership, allow developers to deliver more business value, and are automatically scalable from day one. As such, many companies are looking to migrate their applications to Serverless through “Cloud Modernization Migrations”. For many, this involves mastering a new set of technologies along with refactoring and restructuring their application to make the best use of the cloud. Doing this in an Agile, progressive way allows an iterative approach to modernization, reducing risk and delivering value faster.
Progressive migrations, as opposed to “big bang” wholesale ones, can seem more complex on paper. We need to duplicate data, write interfaces and keep two systems in our minds. Further to this, we risk compromising the design of the new system with the integrations to the old. It becomes hard to be bold in changing systems fundamentally if we’re moving progressively; after all, “we don’t want faster horses”.
Luckily, with the right approach and technology, progressive migrations can prove simpler, more cost effective and less intimidating. Cloud Modernization Migrations are, by their nature, not simply “lift-and-shift” but require refactoring of the systems. This can add complexity, but also the opportunity to discover creative progressive migration approaches and make the most of the cloud.
Modelling Systems
Computer scientists have many tools at their disposal when it comes to modelling the systems they build, each with a different level of abstraction and standardisation.
Waterfall migration projects, typical of lift-and-shift, often model the “system-as-is” and the “system-to-be”. But what about the “progressively-evolving-systems-between”? If we are to move progressively from the system-as-is to the system-to-be, it is not a single stepping-stone state but a journey.
Modern state-of-the-art Cloud architectures are often “event-driven”, meaning that systems interact with each other via Events — signals of change in a system.
Event: Signal of change in a system
Instead of defined APIs and synchronous request-response calls, systems fire off Events (e.g. onto an AWS EventBridge Bus) and listen to events from other systems. The event structure becomes the shared interface, and systems decide which events they emit and subscribe to.
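As a concrete illustration, here is a minimal sketch of a system emitting one such business event using the AWS SDK for JavaScript (v3); the bus name, source and payload shape are illustrative, not prescribed:

```typescript
// Minimal sketch: emitting a business event onto a shared EventBridge bus.
// Bus name, source and payload shape are illustrative assumptions.
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const client = new EventBridgeClient({});

export async function emitLeadCreated(leadId: string, email: string): Promise<void> {
  await client.send(new PutEventsCommand({
    Entries: [{
      EventBusName: "business-events",   // the shared bus other systems subscribe to
      Source: "crm",                     // which system emitted the event
      DetailType: "LEAD_CREATED",        // the business event name
      Detail: JSON.stringify({ leadId, email }),
    }],
  }));
}
```

Subscribers attach EventBridge rules that match on source or detail-type, so the emitter never needs to know who is listening.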
Events are a very useful way to think about systems, both real-world and digital. If we focus on Events from the business domain, not implementation details, we can understand and reason about systems in a consistent way (regardless of the underlying technical domain).
See my previous article on EventBridge Storming for more on this.
We can understand the Legacy system through the Events it processes. The Legacy system is unlikely to be implemented in an Event-Driven way, but we can still reason about the logical system events that support business functions. In this way we can think in terms of systems, events, and the channels events travel down. This will help us draw a map and see the journey of a progressive migration.
Mapping Events
If we abstract the processing done by the systems, this becomes a set of interchanges and destinations, with paths between them.
Avoiding the temptation to apply a bunch of graph theory equations, let’s take inspiration from an example of abstraction studied by all designers: the Tube Map of the London Underground.
Famously, this schematic abstracts away the geographic positions of stations, representing instead their relative positions.
If we use the conceptual model of a tube map of stations and apply it to a legacy architecture, we get something as follows:
Now we can understand the systems and communication pathways involved. This mapping of services can be conducted at different levels of granularity: it can map isolated systems at a high level that communicate via APIs, or the internal source-code services of a monolith. We need to understand the logical locations of the processing required for different business operations in order to build an accurate mental model from which to plan our progressive migration.
Progressive “Minimum Viable Migrations” (MVMs)
Moving from the System-As-Is to the System-To-Be should not be a one-step, all-or-nothing leap. As discussed above, we should keep an agile mindset and apply the same principles we consider with MVPs for greenfield products.
There should be many steps on this journey and, in truth, there is ideally no end destination. Systems should change with customer needs and advances in technology, demanding an architecture that can facilitate MVMs: an event-driven one.
Going back to our transport map metaphor, we need to adapt the shape of the transport network, make upgrades and add stations — all while we keep the trains running on time.
Mapping the journey — Identifying MVMs
Identifying the sub-systems to migrate is a complicated tradeoff. We need to balance many factors and be guided by a hypothesis we want to test or a result we want to achieve. It could be a single scalability bottleneck we want to eliminate, validation of a technical approach, validation that the technical teams are able to work with the toolset of the target architecture, an infrastructure-related cost saving, a security deprecation, or even the need to introduce new functionality the current architecture can’t support… the list goes on.
The key is that we need a limited scope of migration: moving a set of business processes or subprocesses to the new domain, while leaving the others in the existing domain and ensuring this hybrid of old and new can go to production.
If we look at our map above we could have identified that the existing CRM (a custom solution built on legacy technology) can’t keep up with demand and is a bottleneck for the whole system’s scalability. Therefore increased scalability is the result we’re looking to achieve with this MVM.
The business events this CRM module processes must first be identified, e.g. LEAD_CREATED, LEAD_CONVERTED, CUSTOMER_DETAILS_UPDATED… We can then build a new CRM module in our new target domain.
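A lightweight contract for these events helps both domains speak the same language. Here is a minimal TypeScript sketch; the payload fields are assumptions for illustration, not a prescribed schema:

```typescript
// Illustrative contracts for the CRM module's business events.
// Payload fields are assumed for the example; the point is a shared,
// technology-neutral shape both domains agree on.
type LeadCreated = {
  detailType: "LEAD_CREATED";
  detail: { leadId: string; email: string };
};

type LeadConverted = {
  detailType: "LEAD_CONVERTED";
  detail: { leadId: string; customerId: string };
};

type CustomerDetailsUpdated = {
  detailType: "CUSTOMER_DETAILS_UPDATED";
  detail: { customerId: string; changedFields: string[] };
};

// The union of events the legacy and target domains exchange.
export type CrmEvent = LeadCreated | LeadConverted | CustomerDetailsUpdated;
```

Because the contract describes business facts rather than implementation details, the legacy CRM and its Serverless replacement can produce and consume the same events.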
Mapping the journey — Building a bridge
To release this new CRM module to customers while keeping the other systems that communicate with it in the existing domain, we must build a bridge between the two domains. The bridge allows a bi-directional flow of our events between the domains. (Note these are our business events, not technical events.)
For instance, if the new target domain is an Event-Driven Serverless architecture on AWS, Amazon EventBridge will likely be the communication bus for events between the microservices of the new system. Luckily, Amazon EventBridge has a flexible SDK and support for cross-account events. If we were, for instance, moving from an existing domain of a monolithic .NET application on AWS, we could use the AWS SDK to dispatch events directly from the existing codebase. Alternatively, database triggers or networking proxies could intercept and infer events if changes to the existing system are not possible.
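As one possible sketch of such a bridge, the AWS CDK (TypeScript) snippet below forwards the CRM business events from the new domain’s bus to a bus in the legacy account. Bus names, the account ID and the region are placeholders, and the legacy bus would need a resource policy allowing PutEvents from the new account:

```typescript
// A minimal CDK sketch of the event bridge between domains.
// All names and ARNs are placeholders for illustration.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

export class MigrationBridgeStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The bus in the new (target) domain where the migrated CRM emits events.
    const newDomainBus = events.EventBus.fromEventBusName(
      this, "NewDomainBus", "business-events");

    // The bridge bus in the legacy account. Cross-account delivery requires
    // a resource policy on that bus permitting PutEvents from this account.
    const legacyBus = events.EventBus.fromEventBusArn(
      this, "LegacyBus",
      "arn:aws:events:eu-west-1:111111111111:event-bus/legacy-bridge-bus");

    // Forward the CRM business events across the bridge so systems
    // remaining in the legacy domain keep receiving them.
    new events.Rule(this, "CrmEventsToLegacy", {
      eventBus: newDomainBus,
      eventPattern: {
        detailType: ["LEAD_CREATED", "LEAD_CONVERTED", "CUSTOMER_DETAILS_UPDATED"],
      },
      targets: [new targets.EventBus(legacyBus)],
    });
  }
}
```

A mirror-image rule in the legacy account completes the bi-directional flow.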
Once the migration of the CRM module is complete, the legacy side is logically just a dispatcher of events to the new system, and we can go live, delivering value and testing assumptions! This lets us avoid the deadlock of full feature parity preventing releases and learning. In this example we’ve achieved a scalability increase and validated some of the target domain’s technology choices.
The path of MVMs
MVMs are by their nature iterative. Releasing to real users and testing assumptions informs the next MVM. Progressively, each logical system and subsystem is migrated to the new domain, with the hybrid approach allowing full business operations to run throughout every step.
And, spoiler alert… there is no final “System-To-Be”. Instead there is a constant evolution of progressive MVMs, but this time with an Event-Driven architecture amenable to changing customer requirements and technological advances.
Note: There is definitely some overlap with the well-known Strangler Fig Application approach from Martin Fowler.
In Conclusion
Minimum Viable Migrations (MVMs) bring agility to migration projects that are typically constrained by all-or-nothing waterfall delivery. This frees us up to release earlier, test assumptions, deliver value to users faster and, most of all, learn.
MVMs are iterative stepping stones towards a likely non-terminal “system-to-be”. At their core, MVMs rely on an Event-Driven mindset and architecture, as these provide a clean bi-directional interface between domains and the freedom to keep a consistent mental model that is not impacted by the underlying technology.
Progressive migrations are not without challenges: managing data duplication and cross-domain networking, to name just two. These challenges, though, are worth it for the agility needed to be successful and to avoid feature-parity paralysis.
MVMs are naturally a good fit for Cloud Modernization Migrations to Serverless, yet the approach is applicable to many other domain migrations.
If you like content like this, consider subscribing to our weekly newsletter!