Motivations and Considerations for Migration from MS SQL Server to AWS Aurora PostgreSQL

Venkatesh Hari
Published in Slalom Technology
Aug 27, 2020

In the previous blog, “Migration to AWS Native Databases,” we discussed the critical aspects of migrating to AWS native databases. In this blog, we dive into a use case of migrating MS SQL Server to AWS Aurora PostgreSQL, focusing on migration motivations and considerations.

Let me begin with the motivations of enterprises considering migration from MS SQL Server to PostgreSQL.

1. Moving away from proprietary DB technology and restrictive licensing

MS SQL Server is licensed through Enterprise and Standard editions with Client Access Licenses (CALs) covering on-premises deployments, database server virtualization, and cloud deployments. Microsoft has also tightened licensing terms for running SQL Server on other cloud providers such as AWS, which restricts clients from leveraging the best of cloud platforms and increases costs.

PostgreSQL has flexible open-source licensing, enables deployment on cloud platforms, and benefits from open source add-ons and an active developer community.

2. PostgreSQL is a cross-platform database engine

Both MS SQL Server and PostgreSQL run on Windows and Linux. PostgreSQL also runs on other platforms such as FreeBSD, OpenBSD, NetBSD, macOS, AIX, HP-UX, IRIX, Solaris, Tru64 UNIX, and UnixWare. In practice, however, AWS supports specific operating systems on its VMs, and most enterprise clients use Windows or Linux.

3. AWS native databases built on PostgreSQL

AWS provides PostgreSQL-compatible database services such as Amazon RDS for PostgreSQL, Aurora PostgreSQL, and Aurora Serverless, which deliver significant cost reduction and the performance required for most workloads.

4. Modernization allows applications to leverage the best of open source databases and the latest integration and reporting services to meet business needs.

MS SQL Server offers features such as CLR integration, Reporting Services, and Integration Services, and legacy applications were often built around these native features. The enterprise landscape is evolving, and new technology options have emerged for reporting and enterprise integration needs. Organizations are trending toward PostgreSQL and adopting these newer technologies for reporting and integration to modernize workloads. This strategy separates that functionality from core database services, providing flexibility and reducing costs.

Considering all of these factors, PostgreSQL has become a very compelling option for transaction-based applications.

While PostgreSQL broadly supports most of the use cases supported by MS SQL Server, organizations need to consider the following factors for migration to Aurora PostgreSQL:

1. Technical skill set of your resources

While the SQL dialects of PostgreSQL and MS SQL Server are similar, there are specific syntax differences, as well as changes to backup procedures and database operations. DBAs and development teams will have a relatively fast learning curve because of the similarities, but you will still need to provide training for a successful migration and for future development and operations. AWS also offers many features in RDS and Aurora that reduce the effort and time spent on deployment, configuration, and operations.

2. Your application profiles

A thorough study of applications is vital to understand the performance requirements, usage patterns, and integration patterns.

For legacy applications, SQL code may be embedded in application code or APIs. You will need to assess its complexity and estimate the effort and cost of refactoring accordingly.
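To make that assessment concrete, here is a minimal, hypothetical sketch of the kind of embedded SQL that typically needs rewriting (the table, columns, and queries are illustrative only): TOP becomes LIMIT, GETDATE() becomes NOW(), and ISNULL() becomes COALESCE().

```python
# A minimal sketch of refactoring embedded T-SQL for Aurora PostgreSQL.
# The table, columns, and queries are hypothetical, for illustration only.

# Original T-SQL embedded in application code (MS SQL Server):
MSSQL_RECENT_ORDERS = """
    SELECT TOP 10 order_id, ISNULL(ship_date, GETDATE()) AS ship_date
    FROM orders
    ORDER BY order_id DESC
"""

# The same statement rewritten for Aurora PostgreSQL:
POSTGRES_RECENT_ORDERS = """
    SELECT order_id, COALESCE(ship_date, NOW()) AS ship_date
    FROM orders
    ORDER BY order_id DESC
    LIMIT 10
"""
```

An inventory of statements like these across the codebase is what drives the refactoring estimate.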

While Aurora PostgreSQL meets the performance requirements of most workloads, you will need to review your application's performance requirements and define the target architecture and database tuning strategy accordingly.

AWS also offers Aurora Serverless, which you can leverage depending on database usage patterns and the application's (and end users') tolerance for latency when the database is called for the first time after being idle.
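For example, when an Aurora Serverless cluster has scaled down to zero capacity, the first connection can take several seconds while the cluster resumes. Below is a minimal sketch, assuming psycopg2 and a placeholder cluster endpoint, of how an application might absorb that cold-start latency:

```python
import time
import psycopg2

def connect_with_retry(dsn, attempts=5, base_delay=2.0):
    """Open a connection, retrying to absorb an Aurora Serverless cold start."""
    for attempt in range(attempts):
        try:
            return psycopg2.connect(dsn, connect_timeout=10)
        except psycopg2.OperationalError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # simple exponential backoff

# Placeholder endpoint and credentials -- substitute your cluster's values.
conn = connect_with_retry(
    "host=my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com "
    "dbname=appdb user=app_user password=change_me"
)
```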

3. Size of the data to migrate

The effort, cost, and complexity of migration vary by strategy, depending on the data size and the downtime available for migration and cutover. If you have a small database and an allowed downtime window of 8 to 12 hours, you can migrate and cut over within that window. If the data size is relatively large and application availability requirements prevent you from getting the required downtime, you will need to plan a different data migration strategy, such as migrating historical data first and then applying incremental changes before cutover, or updating both databases in parallel for new transactions while a tool migrates the historical data.

It is also essential to use tools such as AWS Database Migration Service (AWS DMS), which supports heterogeneous data migration. When you need to migrate a large amount of data, it is critical to have a tool that provides automatic failover so the migration can continue.
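As a rough illustration of the "historical data plus incremental changes" strategy, the sketch below uses boto3 to create a DMS task with the full-load-and-cdc migration type. The ARNs are placeholders, and it assumes the source and target endpoints and the replication instance already exist.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Replicate every table in the source's dbo schema (adjust to your schemas).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-dbo",
        "object-locator": {"schema-name": "dbo", "table-name": "%"},
        "rule-action": "include",
    }]
}

# full-load-and-cdc migrates historical data first, then streams ongoing
# changes so the cutover window stays small.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="mssql-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```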

4. Differences between MS SQL & PostgreSQL

There are specific differences between MS SQL Server and PostgreSQL. For example, SQL Server executes stored procedures with EXECUTE, whereas PostgreSQL invokes functions with SELECT (and procedures with CALL in PostgreSQL 11 and later). It is therefore essential to use the AWS Schema Conversion Tool (AWS SCT), which highlights, recommends, and modifies syntax to make it compatible with PostgreSQL. You will also need to assess your applications to identify the differences that tools cannot automatically remediate and factor them into your migration strategy.
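For illustration, the statements below (with a hypothetical routine name) show the kind of invocation difference an assessment, or AWS SCT, would flag:

```python
# Hypothetical routine name, for illustration only.

# MS SQL Server: stored procedures are invoked with EXEC/EXECUTE.
MSSQL_CALL = "EXEC dbo.archive_orders @cutoff = '2020-01-01'"

# Aurora PostgreSQL: functions are invoked with SELECT ...
POSTGRES_FUNCTION_CALL = "SELECT archive_orders('2020-01-01')"

# ... and procedures (PostgreSQL 11 and later) with CALL.
POSTGRES_PROCEDURE_CALL = "CALL archive_orders('2020-01-01')"
```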

Once you define a migration strategy by considering the above essential factors, you will need to understand tool capabilities and limitations. We will do a deep dive of tools in the next blog.

Co-author: Lindsay Scarangello, Practice Lead, Technology Enablement

Venkatesh Hari is a Solution Principal, Cloud Enablement, at Slalom Consulting, NJ.