Migrating Serverless Architecture from AWS to GCP

First Line Outsourcing · Jun 11, 2020

When you consider using Serverless architecture in your project, you often think about the pros and cons of this development approach. A lot of articles have been written about that, and everyone can choose what works best for their situation.

At the same time, the most common reason we can't use it in production is vendor lock-in 🔒: if something changes or goes wrong, a future migration will be painful. Here is our story of migrating one of our serverless projects to another vendor.

Our way

At First Line Outsourcing we love the Serverless Framework (SLS) and have been using it with Node.js for more than three years. It's a perfect tool for building projects of any size. We use it mainly with Amazon Web Services (AWS): Lambda, API Gateway, S3, SQS, SNS, DynamoDB, etc.

Recently we finished a small project, but our client asked us to move it to Google Cloud Platform (GCP). GCP offers almost the same functionality, but after AWS its UI felt unfamiliar to us.

SLS is open source and extensible via plugins. GCP support is advertised on the main website, so we thought it would be a fun, short journey.

Spoiler: it wasn't 😬.

The Google Cloud Functions plugin lets you work with Cloud Functions from within SLS. You can read the documentation, write your functions, and run them locally with

sls invoke --function {functionName}

This is not the most comfortable way to develop when you change your code often and want to see results quickly. There is a powerful plugin, Serverless Offline, which serves and reloads your project automatically, but it only works with AWS.

The project

In our project we used Lambda, API Gateway, CloudWatch Events, and DynamoDB, and we needed to replace them with their GCP counterparts: Cloud Functions, Cloud Scheduler, and the Firebase Realtime Database.

We also use two more plugins in the project: Serverless Webpack, because we use TypeScript in all projects, and Env Generator. Neither of them supports Google.

The first challenge for us was to find a way to develop efficiently for GCP.

There is one big difference between the two vendors. AWS has API Gateway, which maps Lambda functions to API endpoints. Google doesn't have this middleware: one function serves any method or parameter, and all you get is a base URL for the endpoint.

For example, here is a RESTful API in serverless.yml for AWS:

functions:
  createItem:
    handler: api/items/handler.createItem
    events:
      - http:
          path: api/items
          method: post
  getItems:
    handler: api/items/handler.getItems
    events:
      - http:
          path: api/items
          method: get
  getItem:
    handler: api/items/handler.getItem
    events:
      - http:
          path: api/items/{id}
          method: get
  updateItem:
    handler: api/items/handler.updateItem
    events:
      - http:
          path: api/items/{id}
          method: put
  deleteItems:
    handler: api/items/handler.deleteItems
    events:
      - http:
          path: api/items/{id}
          method: delete

In the case of Google, it becomes:

functions:
  items:
    handler: items
    events:
      - http: path

Here, `handler: items` is the function exported from index.js in the root folder.
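For clarity, a bare-bones stub of that export (illustrative only; the routing section below shows how we actually fill it in):

// index.js in the root folder, built from our TypeScript sources
exports.items = (req, res) => {
  res.send('Hello from Cloud Functions');
};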

The scheduler didn't change that much. From AWS:

schedule:
  handler: api/schedule/handler.schedule
  timeout: 900
  events:
    - schedule: rate(3 minutes)

to GCP:

schedule:
  handler: event
  events:
    - event:
        eventType: providers/cloud.pubsub/eventTypes/topic.publish
        resource: projects/${projectName}/topics/${topicName}
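On the code side, `handler: event` must point at an exported background function that receives the Pub/Sub message. A minimal stub under that assumption (our real handler is the `schedule` export shown in the routing section below):

// index.js: background function triggered by the Pub/Sub topic
exports.event = (message, context, callback) => {
  // message.data is the base64-encoded payload set on the topic
  const payload = Buffer.from(message.data, 'base64').toString();
  console.log('Triggered by schedule:', payload);
  callback();
};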

The second challenge was to find a way to keep the same project structure we had for AWS and to do API Gateway's work manually.

The most remarkable part of SLS is automated deployment with one line in the terminal:

sls deploy --stage dev

It creates a CloudFormation template, provisions the resources, uploads files, sets everything up, and prints a report. Love it ❤️.

On the other hand, SLS can't create resources in GCP out of the box; it can only deploy functions. The third challenge was to find a way to create resources automatically.

The migration

Above, I described the three challenges we had to resolve.

1. Dev environment

Google has extensive documentation on building serverless applications and offers tools for local development, such as the Functions Framework for Node.js. It works well and serves one function on a selected port, which is enough for a simple project. There is one issue: it works only with JavaScript, while our code is written in TypeScript.

We needed to change the webpack config a little to get a correct JavaScript build, from:

module.exports = {
  entry: slsw.lib.entries,
  output: {
    libraryTarget: 'commonjs',
    path: destPath,
    filename: '[name].js',
  },
  ...
}

to

module.exports = {
  entry: ['./index.ts'],
  output: {
    libraryTarget: 'commonjs',
    path: slsw.lib.options ? destPath : __dirname,
    filename: 'index.js',
  },
  ...
}

Now we can watch changes with webpack and run all endpoints with @google-cloud/functions-framework.
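In practice, the local development loop is two terminal commands. A minimal sketch, assuming @google-cloud/functions-framework is installed as a dev dependency and the bundle lands next to package.json as index.js (which the config above produces when webpack runs outside SLS):

npx webpack --watch
npx functions-framework --target=items --port=8080

The framework doesn't reload on its own, so restart it (or wrap it with a watcher like nodemon) to pick up a fresh bundle.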

We used KMS to encrypt sensitive environment variables and a .env webpack plugin to inject them directly into the code bundle. AWS KMS can't be used with Google, so we moved to a simpler approach: a git-ignored .env file. The first challenge was resolved!
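For reference, a minimal sketch of the .env injection, assuming the dotenv-webpack plugin (the Env Generator setup we replaced differs only in where the values come from):

// webpack.config.js (excerpt)
const Dotenv = require('dotenv-webpack');

module.exports = {
  // ...
  plugins: [
    // Inlines variables from the git-ignored .env file into the bundle
    new Dotenv(),
  ],
};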

2. Routing

We moved the routing into one index file and added a middleware where we apply common actions like allowing CORS. The second challenge was resolved!

import { Request, Response } from 'express';

// Middleware: apply common actions, like allowing CORS, before every HTTP handler
function start(req: Request, res: Response, callback) {
  // Allow CORS
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');

  return callback(req, res);
}

// `items` and `schedule` are our own handlers (see the router sketch below)
exports.items = (req, res) => start(req, res, items);

// Background function triggered by the Pub/Sub topic from Cloud Scheduler
exports.schedule = async (event, callback) => {
  schedule().then(callback);
};
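The snippet above shows only the middleware; the actual routing lives in the items function passed to start. Here is a minimal sketch of what that router might look like (the handler imports and the path parsing are illustrative assumptions, not our production code):

import { Request, Response } from 'express';
import { createItem, getItems, getItem, updateItem, deleteItems } from './api/items/handler';

// One function replaces all five API Gateway routes: dispatch by HTTP method.
// Cloud Functions strips the function name from the URL, so for
// GET {baseUrl}/123 (that is, /items/123) req.path is '/123'.
function items(req: Request, res: Response) {
  const id = req.path.split('/').filter(Boolean)[0];

  switch (req.method) {
    case 'POST':
      return createItem(req, res);
    case 'GET':
      return id ? getItem(req, res) : getItems(req, res);
    case 'PUT':
      return updateItem(req, res);
    case 'DELETE':
      return deleteItems(req, res);
    default:
      return res.status(405).send('Method Not Allowed');
  }
}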

3. Resources

In our case, we needed three functions: two for API requests and one for the Scheduler.

For the first two, SLS is almost enough. Why almost? When SLS deploys your functions, it creates them, but they are not reachable by public requests. You need to change the IAM policy. You can do it through the UI or with the gcloud CLI:

gcloud functions add-iam-policy-binding {functionName} --member=allUsers --role=roles/cloudfunctions.invoker

We created a Bash script and ran it right after deploying the functions.
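Our script was essentially a loop over the function names; a sketch (the script name and argument style are illustrative):

#!/usr/bin/env bash
# Usage: ./allow-unauthenticated.sh items otherApiFunction
# Run right after `sls deploy` to make each HTTP function publicly invokable.
for fn in "$@"; do
  gcloud functions add-iam-policy-binding "$fn" \
    --member=allUsers \
    --role=roles/cloudfunctions.invoker
done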

What about the Scheduler? Yep, you can create it with a Pub/Sub topic from the UI or with the gcloud CLI:

gcloud scheduler jobs create pubsub {scheduleName} --schedule="0 1 * * *" --topic={topicName} --message-body="{payload}"

The third challenge was resolved!

With Firebase, everything is not so smooth 😕. We decided not to spend much time automating resource creation this time and set it up manually.

Tips

  • Migrating to another vendor is painful but possible; the effort depends on the project size.
  • SLS supports the other big vendors much better than GCP. Be ready to run into challenges.
  • Google has its own serverless tooling, and it might serve your purposes better than SLS.
  • If the GCP Console UX confuses you after AWS, that's normal.
  • GCP's and AWS's serverless approaches are similar but have key differences that can break your code, and you'll spend many hours rewriting it.

Conclusion

Migration between providers is a big adventure. It can end successfully if your project has experienced developers and enough resources. The Serverless Framework itself is flexible and gives you plenty of opportunities to reach almost any goal.

If I missed something, let me know in the comments 🙂. Let's share our experience and make each other more experienced 🤝.

Andrey Zaikin
Founder at First Line Outsourcing