Going Cloud with SQL Server: Google Cloud Options

This blog is co-authored and PoC’ed by Pop Periyanesh (pop.dbig@gmail.com)

Hi folks! This blog is all about moving SQL Server to the cloud using GCP options, weighing one against the other — and it is a rather lengthy one at that :) please stick around till the end for an interesting verdict and engaging next steps…

Going cloud, migrating to cloud, data on cloud, integration to cloud, cloud presence — these are the words we hear these days when it comes to data management or processing applications. They simply mean that an organisation is moving its digital assets wholly or partially to the cloud. It could also mean migrating from one cloud to another. "Going cloud" brings a lot of benefits, to name a few:

  1. Eliminates operational overhead
  2. Far superior flexibility and agility (because of ease of deployment, no upfront cost, no infrastructure maintenance / cost, affordability)
  3. Cost-effective (pay only for what you use)
  4. Scalability
  5. Monitoring and more

This blog covers the options for hosting SQL Server on GCP. The following solutions are designed with a focus on scalability, migration effort and ease of maintenance.

Ways to host the SQL Server on the cloud

  • SQL Server in Compute Engine
  • Cloud SQL for SQL Server
  • Using Docker Container / Persistent storage

1. SQL Server in Compute Engine

  • Deployment and configuration are similar to a traditional on-premises database infrastructure
  • Vertical scaling, persistent disks / predefined machine types, and load balancing (if we are going to use a multi-instance database design) can all be utilised with this approach

VM Minimum Specifications Required

Memory : 4GB
Processor : x64, 2.0GHz
Storage : Min 6GB

  • Server failure is a possibility for the reasons below:
  • The SLA for Compute Engine is 99.99% uptime across multiple zones and 99.5% on a single instance
  • A poorly designed VM (configured by personal choice) may lead to poor SQL Server performance
  • Pre-migration cost and design planning is crucial
  • Compute Engine instance prices vary with configuration; if we don't know the detailed design at the time of configuration, we may end up paying more
  • Under-utilisation / over-utilisation of resources is also a possibility

POC — SQL Server in Compute Engine


  • GCP console > Navigation menu > Compute Engine > VM instances > Create instance
  • We can pick either a Linux-based or a Windows boot image for our purpose, since SQL Server runs on both operating systems
  • A desktop-friendly Windows VM could be used if we want to RDP into the VM and maintain SQL Server via GUI tools
  • On the boot menu we can choose from a variety of OS images including (but not limited to):
    Windows Server (licence to be purchased separately)
    SQL Server on Windows Server (licence to be purchased separately)
Boot disk creation: Select OS
  • Create the VM (for the POC we have picked Linux-based Ubuntu 18.04, since it is free)
  • Once the VM is created, SSH into it, update the package manager and then install SQL Server
  • The following set of commands has to be run on the VM to install SQL Server:

wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"

sudo apt-get update                      # Update the package manager

sudo apt-get install -y mssql-server     # Install SQL Server

sudo /opt/mssql/bin/mssql-conf setup     # Configure SQL Server

  • The firewall has to be configured to allow incoming connections on the SQL Server port (default 1433) of the VM instance
Note: We can apply a firewall rule to a particular VM by using network tags
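As a minimal sketch of that note, the rule below opens port 1433 only to VMs carrying a network tag. The rule name, tag and network are our own illustrative choices, not values from the PoC:

```shell
# Allow inbound TCP 1433 (default SQL Server port) to VMs tagged "mssql".
# "allow-mssql-1433" and "mssql" are placeholder names for illustration.
gcloud compute firewall-rules create allow-mssql-1433 \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:1433 \
  --target-tags=mssql \
  --source-ranges=0.0.0.0/0   # tighten this to your client IPs in practice
```

Then add the `mssql` tag to the VM (Edit > Network tags in the console) so the rule applies to it.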
  • Now we can connect to the SQL Server using its public IP and SQL user credentials in SSMS and other tools
  • Database creation, deployment of a DAC file and database backup were all tested and worked without any issue

Note: DAC stands for Data-tier Application, a compressed file of the entire database model including objects such as tables, views, instance objects, user logins etc.
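To sanity-check the connection from a terminal rather than SSMS, a quick sqlcmd session could look like this; the IP address and password below are placeholders, not values from the PoC:

```shell
# Connect to the VM's public IP (placeholder 203.0.113.10) on port 1433
# with a SQL login and run a quick version check.
sqlcmd -S 203.0.113.10,1433 -U sa -P 'YourStrong!Passw0rd' \
  -Q "SELECT @@VERSION;"
```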

SQL Server on GCP VM instance connected through client machine using SSMS

Use cases:

  • This type of SQL Server deployment is suggested when the database management team is used to conventional ways of DB management, and DBAs have not adopted the cloud ecosystem yet
  • This is the easiest way to get GUI-based server management tools (Windows VM)

Best practices:

  1. This method is not recommended if you are just creating a temporary server for Dev/Testing environments. Please note that this option comes with relatively high operational overhead
  2. For the development/testing environment, we suggest going with a Linux OS and SQL Server Express or a SQL Server container image on Docker. This way we can reduce the licensing cost
  3. Make sure you have the entire design including technical details like volume, timeline, transactions etc. planned out before this option is configured

2. Cloud SQL for SQL Server

Cloud SQL for SQL Server is a managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud.

  • Cloud SQL is a fully managed SQL Server from GCP
  • Cloud SQL is also Google's recommended best practice, as opposed to deploying SQL Server in Compute Engine, because it truly delivers the benefits of migrating to the cloud and removes most database management work
  • In under 5 minutes, we can create a Cloud SQL instance and migrate an existing database; auto scaling is also available
  • Up to 96 processor cores, 624 GB of RAM and 64 TB of storage are possible


  • GCP console > Navigation menu > SQL > Create instance
  • Choose the SQL Server version and other configuration as required
  • SQL instances can be chosen from predefined Development/Production instance configurations
  • SQL instance compute size and memory are also customisable
  • After creating the instance we can connect to the SQL Server through the generated public IP
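The same instance creation can be done from the CLI; as a sketch, with an instance name, tier and region of our own choosing (not from the PoC):

```shell
# Create a managed SQL Server 2019 Standard instance.
# Name, sizing, region and password are illustrative placeholders.
gcloud sql instances create my-sqlserver-poc \
  --database-version=SQLSERVER_2019_STANDARD \
  --cpu=2 \
  --memory=8GiB \
  --region=us-central1 \
  --root-password='YourStrong!Passw0rd'
```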


  • By default, Cloud SQL blocks connections from external IPs, so we have to add the client machine's or web application's IP to the authorised networks
  • Please make sure to add an IPv4 address and the PUBLIC IP; a private IPv4 address will throw an error
  • There are some exceptions to authorising public networks, please refer to this product documentation for the same: https://cloud.google.com/sql/docs/mysql/authorize-networks#limitations
  • We can authenticate users by using built-in database authentication
  • Once the configuration is complete, we can connect to CloudSQL Server just like our on premise SQL Servers
  • Connections can be authorised by:

Cloud SQL Auth proxy and Cloud SQL connector libraries for Java and Python
Self-managed SSL/TLS certificates
Authorised networks
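For the authorised-networks route, a client IP can be added from the CLI; the instance name and IP below are placeholders:

```shell
# Authorise a single client IP (placeholder) to reach the instance's public IP.
# Note: --authorized-networks replaces the existing list, so pass every
# CIDR range you still need, comma-separated.
gcloud sql instances patch my-sqlserver-poc \
  --authorized-networks=198.51.100.25/32
```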

  • SQL connections can be made using sqlcmd, SSMS, Azure Data Studio, OLE DB etc.
  • Access control for limiting access to the database can be done at two levels: instance level and database level

Instance Level is access authorisation for any client from which you are connecting to Cloud SQL
Database Level is access to data in your instance using Server-Level-Roles

  • Data residency, localisation and privacy refer to the requirements on your data about the physical location of data and local regulations. Cloud SQL helps you meet the challenges of data residency in 3 ways:

Storing data: Configuring where your data is stored (using region configuration)
Encrypting data: Controlling where your encryption keys are stored (using Customer-Managed Encryption Keys)
Accessing data: Controlling access to your data (using Access Approval, Transparency and Key Access Justification methods)

  • Database creation using scripts, DAC package and backup file can be done in Cloud SQL instances
  • Some basic SQL Server tasks such as database creation/deletion, database backup and user management can be done in the GCP console itself, but this is not recommended; there are many SQL Server administration tools to choose from
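For the backup-file route, a .bak file staged in Cloud Storage can be imported into a Cloud SQL database along these lines; the bucket, file and database names are placeholders:

```shell
# Import a SQL Server .bak backup from Cloud Storage into a Cloud SQL database.
# Instance, bucket and database names are illustrative placeholders.
gcloud sql import bak my-sqlserver-poc \
  gs://my-backup-bucket/mydb.bak \
  --database=mydb
```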

Best Practices:

  • Determine the capacity of the data load before configuring and deploying SQL Server
  • When possible, use several smaller Cloud SQL instances rather than one large monolithic instance
  • Watch the number of database tables for impact on instance response time
  • Use Connection Pooling for efficient resource management
  • Several small transactions are better than one large transaction

3. Using Docker Container

  • Microsoft has an official Docker image of SQL Server for Linux-based containers readily available on Docker Hub
  • We can pull that image and straight away launch a SQL Server instance
  • We can also back up the existing databases and build a Docker image with those backups, so all containers created from it would have the same data replicated without any extra sync or migration
  • Container storage is not persistent, so we have to use container volumes / external storage (persistent disk) for the databases; otherwise we have to design automated backup / restore of the databases
  • Using the same external storage for different SQL Server instances can also, in theory, be done, which could make SQL Server more fail-proof and able to scale horizontally, thus minimising cloud spend
  • We can also connect to SQL Server outside the container by various methods (SSMS, sqlcmd, VS code and other data analytics tools) with minimal work
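The steps above can be sketched as follows; pulling the official image and running it with a named volume so the data files survive container restarts (the container name, password and volume name are our own placeholders):

```shell
# Pull the official SQL Server 2019 Linux image from Docker Hub / MCR.
docker pull mcr.microsoft.com/mssql/server:2019-latest

# Run it detached, exposing the default port and mounting a named volume
# over /var/opt/mssql (where SQL Server keeps its data files).
# "sql1", the password and "sqldata" are illustrative placeholders.
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  -v sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest
```

After this, the instance is reachable on localhost,1433 with sqlcmd or SSMS just like a host-installed server.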

Requirements (linux container)

Memory : 2 GB
File System : XFS or EXT4
Disk space : 6 GB
Processor speed : 2 GHz
Processor cores : 2 cores
Processor type : x64-compatible only

  • By orchestrating the container using GKE and using scalable persistent storage, we could design a fail-resistant, highly available SQL Server on Google cloud

The above image shows two different SQL Server containers using the same container volume (a persistent storage) but different ports (1436/1437), so we can also load-balance SQL Server using GKE
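The orchestration idea could be sketched with a regional GKE cluster whose SQL Server workload claims a persistent disk through a PersistentVolumeClaim; the cluster name and manifest filename below are our own placeholders, not artifacts from the PoC:

```shell
# Sketch: create a regional GKE cluster, then deploy SQL Server as a
# StatefulSet with a PersistentVolumeClaim (manifest not shown here),
# so the data disk survives pod rescheduling across zones.
# "sql-cluster" and the manifest filename are illustrative placeholders.
gcloud container clusters create sql-cluster \
  --region=us-central1 \
  --num-nodes=1

kubectl apply -f mssql-statefulset.yaml
```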

Drawbacks observed with Docker containers:

  1. Theoretically we could create multiple SQL Servers with the same configuration within minutes on network or local storage, but a SQL database file and a SQL Server instance are mapped 1:1 by design, so spinning up multiple instances on the same storage does not scale a database horizontally
  2. We could easily replicate a containerised SQL Server instance (snapshot-based rather than real-time) along with its data on different machines (even on Google Cloud Compute Engine), but we have to configure the port settings and firewall rules for each machine in order to access it remotely
  3. Compared to Cloud SQL, the operational overhead is high and the operational cost is unpredictable

The Verdict

Of the 3 options PoC'ed in this blog, Cloud SQL proved to be the simplest, least-effort, fastest and most reliable approach. With its 99.95% uptime SLO, its flexible component-wise pricing structure (1. CPU and memory, 2. Storage and networking, 3. Licensing) and capacity of up to 64 TB of storage, it is highly scalable, with the ability to automatically increase storage size as needed!!

What’s next?

Why don’t you go back to the options above, try them for yourself, and feel free to reach out / comment if you have thoughts / suggestions?

As a follow-up, give these a read for some more fun:

  1. Cloud SQL Libraries and Samples: https://cloud.google.com/sql/docs/sqlserver/admin-api/libraries
  2. Deploying Multi-Regional Disaster Recovery with Google Cloud: https://cloud.google.com/architecture/deploying-microsoft-sql-server-multi-regional-disaster-recovery
  3. Reach us at:



Abirami Sukumaran

Developer Advocate, Google. With 16 years of experience in Software Development and Data, I am passionate & empathetic about impacting developers & products.