Migrating an Enterprise Application to AWS

5 min read · Nov 13, 2021

Some time back I worked with a customer: a regional player present in only one country, but with global ambitions and a SaaS-based product to support them. The customer was in the process of digitising relations with existing customers, developing Internet-based sales channels, offering online customer care, and boosting their social media presence to acquire new customers.

They were looking to migrate their complete infrastructure and systems in order to support their growing user base and traffic, address repeated recent performance issues at peak load times, and improve their ability to innovate faster.

As a first step, they wanted to migrate their online customer portal, which was hosted on-premises, to the cloud. This portal allows customers to check their accounts, change subscription plans, monitor usage, and pay bills and charges.

The application was a monolith written in Java, Spring and Hibernate, deployed on WebSphere Application Server against an Oracle database, and maintained in house. The primary drivers for the migration were to address repeated recent performance issues at peak load times and to improve the customer's ability to innovate faster. While the portal itself would move to the cloud, it still had to maintain its integrations with core services that would remain on-premises, specifically billing (via IBM MQ), the rewards platform (via SOAP) and payments (via Java RMI), since the customer ran their own payment gateway and already had direct lines to banks from their data centre.

The account portal also requires a lot of text-based and identifier-based searches, and it is expected to handle more than 1M requests per day.

Proposed Solution

1. Application Migration Design

2. Network Design

3. DevOps Design

Design Decisions

  1. Design Principle : Because this was the customer's first move to any public cloud, we proposed a PaaS-first approach to avoid maintenance and support headaches.
  2. RESTful APIs are exposed in the current implementation and must be supported in the new system as well, so an NLB is used to connect to the private subnets. Had the services been plain HTTP services, the choice would have been
    API Gateway (HTTP) → VPC Link → ALB → API(s).
  3. Authentication : Authentication can be achieved either with AWS managed services such as Cognito, Amplify and IAM profiles, or by running the existing services as containers (deployed in a separate pod, or as a sidecar in every pod) that further integrate with SiteMinder/ForgeRock and Callsign to build a complete OAuth2/OpenID Connect authentication flow.
  4. ElastiCache and Elasticsearch will be used for data streaming between the write and read repositories (enabling quick text-based and identifier-based searches). Domain objects will be transformed via data-transformation pipelines that consume Kafka messages and publish them to Elasticsearch and ElastiCache.
  5. AppSync will provide a common access point via GraphQL APIs, connecting to Elasticsearch or ElastiCache based on the input type (free text or identifier).
  6. AWS Cloud Map will be used for service discovery, and App Mesh for inter-service communication.
  7. CloudWatch metrics will be used to collect and analyse all application and system metrics, and to raise alarms when a configured event occurs.
  8. Database Migration Strategy : DMS (recommended). Data-replication agents will be installed on the on-premises source database, with CDC for migrating real-time updates and SCT for schema conversion. This might require a callout for application downtime as well; its duration will depend on the database and application changes that need to be made, tested, executed and enabled. Other migration strategies/tools such as AWS Snowball can be considered, depending on the size of the data and how frequently it is updated. For extremely heavy data sets, a hybrid approach can be used: data up to a certain point in time is copied and shipped to an AWS data centre via Snowball, and an update pipeline is created for the rest to reduce application downtime. For the AWS-hosted database, Aurora was recommended, unless the customer had reservations or limitations about a particular database (e.g. preferring to lift and shift the same database into AWS during phase 1 of the migration).
  9. DevOps Strategy : There are many possible ways to deploy the core logic, such as containers on EC2 or ECS, or adding the logic to an orchestrator like Step Functions with AWS Lambda. We suggested the customer go with a containerised EKS service for the following reasons. First, the customer is currently migrating only a small set of services, but if they later migrate the rest of the application to AWS, add new features and services, and run everything on EKS, they will not have to re-write the whole logic, unlike with Step Functions and Lambda. Second, running many services directly on EC2 requires a lot of effort from the customer's side, who would have to manage monitoring, certificate management, auto-scaling and other DevOps concerns: an unnecessary cost, and a risk if the current team is not well prepared for it. Third, ECS offers only a basic level of networking and is a better fit for simpler applications; it also provides far less control over services, tasks and networking than EKS.
  10. CloudFormation is recommended, since the complete infrastructure will run on AWS managed services, making templates easier to write, validate and maintain; Terraform can also be considered depending on certain design decisions.
  11. CloudWatch : For all data metrics and logs, CloudWatch is suggested, along with Prometheus for monitoring, to provide easy and hassle-free integration with AWS components and services. Alternatively, some other open-source tools were also proposed to the customer.
  12. Security : Signed and encrypted tokens (JWS/JWE) with OAuth and federated tools like ForgeRock, if customer-managed authentication and authorisation is used; AWS Cognito and IAM, if AWS-managed AA is used (recommended). Artefacts are security-scanned in the build pipeline before deployment. To mitigate identity fraud, both the token and the source IP are checked before access is granted. To mitigate DDoS attacks, AWS Shield is recommended along with a firewall, and rate limiting is enabled on API Gateway.
  13. For network security, pods and nodes are created in private subnets, and all calls are abstracted behind a secured API Gateway. TLS encryption is used in all layer 4 and layer 7 communication, along with IP whitelisting and network security groups. WAF is integrated in front of the VPC in the AWS network, with API Gateway sitting outside the secure VPC zone.
  14. Data Security : KMS is used to encrypt the stored data. Access to the data stores is provided through RBAC connected to IAM/AD.
  15. DR Strategy : Active/Passive with Aurora async replication. Since the customer's current scope covers services that are not highly business-critical (plan change, account view, etc.), the recommendation is to go with Active/Passive replication, with data continuously synced between regions to avoid data loss.
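To make design decision 4 concrete: the read side of the pipeline consumes a domain-change message and projects it into two shapes, a denormalised document for Elasticsearch and a key/value entry for ElastiCache. A minimal sketch of that projection step follows; the field names (account id, plan, usage) are illustrative assumptions, not the customer's actual domain model, and the real pipeline would publish to Elasticsearch and ElastiCache clients rather than return dicts.

```python
def project_account_event(event: dict):
    """Transform one Kafka domain event into (search_doc, cache_entry).

    Hypothetical event shape: {"account": {"id", "name", "plan", "usage"}}.
    """
    account = event["account"]
    # Document for Elasticsearch: denormalised text for free-text search.
    search_doc = {
        "account_id": account["id"],
        "text": f"{account['name']} {account['plan']}",
        "plan": account["plan"],
    }
    # Key/value pair for ElastiCache: O(1) identifier-based lookup.
    cache_entry = (
        f"account:{account['id']}",
        {"plan": account["plan"], "usage": account["usage"]},
    )
    return search_doc, cache_entry
```

In production the same transform sits inside the Kafka consumer loop, so both read stores stay eventually consistent with the write repository.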
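Design decision 5 hinges on classifying the input: exact identifiers go to ElastiCache, free text goes to Elasticsearch. The dispatch an AppSync resolver would perform can be sketched as below; the 10-to-12-digit account-number format is a made-up assumption standing in for the customer's real identifier scheme.

```python
import re

# Hypothetical identifier format: 10-12 digits. The real pattern would
# come from the customer's account-numbering scheme.
IDENTIFIER_RE = re.compile(r"^\d{10,12}$")

def route_query(query: str) -> str:
    """Decide which read store serves this GraphQL query input."""
    if IDENTIFIER_RE.match(query.strip()):
        return "elasticache"   # exact identifier -> key/value lookup
    return "elasticsearch"     # anything else -> full-text search
```

Keeping this decision in one resolver gives clients a single GraphQL endpoint while the two read stores stay specialised for their access patterns.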
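The DMS strategy in design decision 8 is a bulk load followed by CDC events that keep the target in sync until cutover. Conceptually, each CDC record carries an operation plus a row image, and applying it amounts to the logic below (a simplified, in-memory stand-in for what DMS does against the real target database; the event shape is an assumption for illustration).

```python
def apply_cdc_event(target: dict, event: dict) -> None:
    """Apply one CDC record to an in-memory stand-in for the target table.

    Assumed event shape: {"op": "insert"|"update"|"delete",
                          "key": <primary key>, "row": {...}}
    """
    op = event["op"]
    if op in ("insert", "update"):
        target[event["key"]] = event["row"]   # upsert the row image
    elif op == "delete":
        target.pop(event["key"], None)        # tolerate already-deleted rows
    else:
        raise ValueError(f"unknown CDC operation: {op}")
```

The downtime callout in the design comes from the tail of this stream: at cutover, writes to the source stop, the last CDC events drain, and only then does the application switch to the new database.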
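Design decision 12 combines two gates before a request is served: a token-plus-source-IP check (identity fraud) and rate limiting (DDoS). A toy version of that gate is sketched below; in the real design the token check is Cognito/ForgeRock validation and the throttling lives in API Gateway, so the allow-list and limit values here are purely illustrative.

```python
import time

class ApiGate:
    """Toy access gate: valid token + allow-listed IP + sliding-window rate limit."""

    def __init__(self, valid_tokens, allowed_ips, limit_per_window, window_s=60):
        self.valid_tokens = set(valid_tokens)
        self.allowed_ips = set(allowed_ips)
        self.limit = limit_per_window
        self.window_s = window_s
        self.hits = {}  # ip -> timestamps of recent allowed requests

    def allow(self, token, ip, now=None):
        now = time.time() if now is None else now
        # Identity-fraud mitigation: both token AND source IP must pass.
        if token not in self.valid_tokens or ip not in self.allowed_ips:
            return False
        # Rate limiting: keep only timestamps inside the current window.
        recent = [t for t in self.hits.get(ip, []) if now - t < self.window_s]
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        self.hits[ip] = recent
        return True
```

The same two-step shape (authenticate, then throttle) is what API Gateway usage plans plus a Cognito authorizer give you without custom code.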



Written by Ravinder Singh Sengar

Solution Architect • Technology Enthusiast
