Challenges in Migration of Application to Another Platform

Pravin Tripathi
6 min read · Feb 12, 2022


Background: I am working on migrating an application from one platform to another; both were developed in-house and are managed by internal teams. The purpose of the migration is to decommission the current platform and switch to another one that offers better features and good integration with external services.

Motivation: To remove the dependency on the legacy platform, which has little flexibility for new changes; replicating features from the other platform onto the current one is costly. Even if we went for replication, it would not change the fact that we would still be using legacy code that was developed long ago and is hard to change. After weighing multiple parameters, it was decided to move the application to another platform with similar capabilities and more flexibility to change.


Let’s start the discussion on the challenges I encountered and the decisions that were made to resolve them. These issues are not uncommon; everyone may have faced something similar. This article focuses on application-level changes and on additional constraints observed on the target platform while migrating.

Refer to my earlier article to learn more about migration strategy and other challenges.

I believe an understanding of the current application, as well as of the services offered by the target platform and their API contracts, is necessary before starting the migration work.

That is something every team does but fails to document properly. Since the knowledge acquired during the analysis phase is not properly documented and transferred within the team, there is always the possibility that the folks working on the changes will make assumptions and add changes that do not match the business need.

You might argue that this happens rarely, and I agree. But we need to understand that a team following the Agile way of working always gets some modifications from the business team/product owner, and if we don’t have a sufficient understanding of the platform, the stories we are working on might spill over into the next one or two sprints. That is not good: the team has already planned the sprint based on story points, and it indirectly slows delivery to the business.

So, below are some of the understandings I feel one should have before working on such a project, where migration and delivery of new features happen in parallel.

In my previous article, I discussed the Strangler Fig pattern, asset capture, and the event interception strategy. These are used to route a request to either the legacy app or the new app.

For reference, that article discusses some of the patterns/strategies for migrating existing apps while releasing new features at the same time.
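To make the routing idea concrete, here is a minimal, hypothetical sketch of Strangler Fig routing with asset capture. All names (`AppBackend`, `StranglerRouter`, and so on) are illustrative and not from any real platform API: requests for assets that have already been migrated ("captured") go to the new app, while everything else still hits the legacy app.

```kotlin
// One common interface for both backends, so the router can stand in front of either.
interface AppBackend {
    fun handle(assetId: String): String
}

class LegacyApp : AppBackend {
    override fun handle(assetId: String) = "legacy:$assetId"
}

class NewApp : AppBackend {
    override fun handle(assetId: String) = "new:$assetId"
}

class StranglerRouter(
    private val legacy: AppBackend,
    private val target: AppBackend,
    private val migratedAssets: MutableSet<String> = mutableSetOf()
) : AppBackend {
    // Asset capture: once an asset is migrated, all future requests
    // for it are routed to the new platform.
    fun capture(assetId: String) { migratedAssets += assetId }

    override fun handle(assetId: String) =
        if (assetId in migratedAssets) target.handle(assetId)
        else legacy.handle(assetId)
}
```

The router sits at the edge, so callers never need to know which platform served them; migration then proceeds one asset at a time.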


Here are some of the issues/hurdles I discovered while working on the migration of an app from the current legacy platform to the target platform.

Meaning and value can differ for the same service in different platforms/contexts:


One of the services consumed by the application provides the percentage match between the submitted details and third-party records. When the same service is checked on the target platform, it does not provide that match percentage. So the business logic that works on the percentage match has a problem.

In my case, it was not a difficult change; I mention it here because in other situations it may not be an easy one.
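One way to bridge such a gap is a small anti-corruption adapter. The sketch below is purely illustrative and assumes (hypothetically) that the target platform's service returns only a match/no-match flag; the adapter maps it to a coarse percentage so that downstream logic expecting a percent keeps working until it can be rewritten.

```kotlin
// Shape the legacy business logic expects: a percentage.
data class LegacyMatchResult(val matchPercent: Int)

// Assumed shape of the target platform's response: a plain flag.
data class TargetMatchResult(val isMatch: Boolean)

// Coarse mapping: a confirmed match counts as 100, otherwise 0.
// This loses granularity, which is exactly why the real fix may
// require changing the business logic itself.
fun adaptMatchResult(target: TargetMatchResult): LegacyMatchResult =
    LegacyMatchResult(matchPercent = if (target.isMatch) 100 else 0)
```

The adapter keeps the change local; the trade-off is that any rule depending on intermediate percentages (say, "ask more questions between 40% and 80%") can no longer be expressed.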

Flexibility to change is affected by additional constraints from the target platform:

  • I found one problem in the core library (similar to a shared kernel) of the target platform. The data model used for the personal details of the user is similar to the one used on the current platform, and we needed to change this model so that it could accept additional details. Since the model lives in the core library of the framework, a direct change could have unknown side effects or crashes, because the platform hosts many applications; it is therefore indirectly costly. So, to handle the additional details required by the new requirement, we created a new model and kept it in the application itself.
  • We could go for a façade or adapter pattern to avoid corrupting the layer, but that still doesn’t change the fact that the new data can’t be shared with the other applications. So this approach might fail for some cases while working for the rest. For example, say you have some changes to a data type and some additional data is now available, but you cannot share that data with another service because of the possibility of data loss or partial data, which is something we don’t want.
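The workaround described above can be sketched as follows. `CorePersonalDetails` is a stand-in name for the untouchable core-library model, and `AppPersonalDetails` is the application-local extension; only the core part can ever be passed to shared services, which is exactly the limitation an adapter cannot remove.

```kotlin
// Owned by the platform's core library; changing it risks every
// application hosted on the platform, so we leave it alone.
data class CorePersonalDetails(
    val name: String,
    val mobile: String
)

// Owned by this application only: wraps the core model and adds
// the new field the shared model cannot hold.
data class AppPersonalDetails(
    val core: CorePersonalDetails,
    val preferredLanguage: String // hypothetical new requirement
)

// When calling shared services, only the core part is sent; the
// extra field never leaves this application.
fun toShared(details: AppPersonalDetails): CorePersonalDetails = details.core
</imports></imports>
```

This keeps the shared kernel stable at the cost of the new data staying private to the one application.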

Original behavior is, to some extent, not easily achieved due to capability or shared-kernel constraints:

For example, let’s say that, based on the response returned from the service that provides the percentage match between the submitted details and third-party records, we need to ask the user additional questions, and these are captured in a separate future task by a human who logs in to the platform manually.

In such a scenario, the integration between two separate components is constrained by the service that connects them. It can be something like a limit on the number of questions that can be sent, or on the information collected from users, due to the design of the connecting service.

This happened not because of poor design, but because the business never had a use case like this until now; it is a new requirement.

Another example: if a capability performs identity verification of the user via video, it needs to check the details the user provided in the previous step.

If any detail from the previous step is modified, the user must be redirected to the verification portal again. This is due to capability-level constraints in the target platform; in 90% of cases, verifying again doesn’t make sense.
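The gap between the platform's rule and the desired rule can be illustrated with a small sketch. All names here are hypothetical: the assumed platform triggers re-verification on any field change, whereas ideally only identity-relevant fields (the name, in this toy example) should matter.

```kotlin
// Details submitted in the previous step of the user journey.
data class SubmittedDetails(
    val name: String,
    val mobile: String,
    val email: String
)

// Platform-level rule (the constraint described above): any
// difference at all forces the user back to the verification portal.
fun platformRequiresReverification(old: SubmittedDetails, updated: SubmittedDetails) =
    old != updated

// What we would prefer: only changes to identity-relevant fields
// (just the name, in this sketch) should trigger re-verification.
fun ideallyRequiresReverification(old: SubmittedDetails, updated: SubmittedDetails) =
    old.name != updated.name
```

A change to only the email address makes the two rules disagree, which is precisely the 90%-of-cases friction mentioned above.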

It is better to avoid using services/code available in the platform/framework that are not suitable for your requirement:

Avoid using code that adds confusion to the application. Say you use the API approach to get details from the UI and know that the name and mobile number are required, and the core library offers code that looks like this:

class Demog(
    val name: String?,
    val mobile: String?,
    …
)

If you use it for saving a Demog object or sharing it with other services, then it is better to reject the object at creation time when partial data is provided, rather than relying on additional validation later. The same code can be refactored as:

class Demog(
    val name: String,
    val mobile: String,
    …
)

This clearly says that both fields are required to create an object of type Demog. By writing it this way, we make sure that even if the frontend sends wrong or partial data, it will get an error and be asked to resend the correct details. Thanks to Kotlin’s null safety, we don’t need to add extra null checks ourselves.
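Extending the Demog idea one step further (a sketch, not the platform's actual model): non-null types reject missing data at compile time, and an `init` block with `require` can also reject blank strings at construction time, so callers never see a half-built object.

```kotlin
class Demog(
    val name: String,
    val mobile: String
) {
    init {
        // Fail fast at construction instead of validating downstream.
        require(name.isNotBlank()) { "name must not be blank" }
        require(mobile.isNotBlank()) { "mobile must not be blank" }
    }
}
```

With this, `Demog(null, "123")` does not even compile, and `Demog("", "123")` throws an `IllegalArgumentException` immediately, keeping the validation in one place.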

The Demog class might not be the best example, but the intent is to avoid data models that invite confusion into the code, and to promote cleanliness and readability instead of adding redundant fields that contribute to code smell.

This not only saves time but also removes the confusion that other folks might have while consuming your services.

There might be other issues that new folks on the team don’t know about, since they don’t have a proper understanding of the platform and knowledge sharing wasn’t properly done. For example:

  • Setting up a mock server is required on the new platform.
  • The absence of unit tests for a few commonly used APIs in the framework is frustrating: if we need to change them, we don’t know what will break. This can be addressed through code review when a PR is raised for new features or changes, and by following the TDD approach.
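One common way to make such framework-dependent code testable without a real mock server is to depend on an interface and substitute a hand-written fake in tests. The sketch below is illustrative; `DetailsService`, `FakeDetailsService`, and `Greeter` are hypothetical names, not real framework APIs.

```kotlin
// The dependency the application code sees; the real implementation
// would call the platform service over the network.
interface DetailsService {
    fun fetchName(userId: String): String
}

// Hand-written fake for tests: no network, fully deterministic.
class FakeDetailsService(private val data: Map<String, String>) : DetailsService {
    override fun fetchName(userId: String) =
        data[userId] ?: error("unknown user: $userId")
}

// Application code under test depends only on the interface.
class Greeter(private val service: DetailsService) {
    fun greet(userId: String) = "Hello, ${service.fetchName(userId)}"
}
```

Writing code against the interface from the start (as TDD encourages) is what makes the fake substitutable later, which is exactly the property the untested framework APIs lack.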

The End


Pravin Tripathi

Software Engineer | I like books on Psychology, Personal development and Technology | website: https://pravin.dev/