Go for Cloud — A few reflections on FaaS with AWS Lambda
Some time ago Rakyll posted a great article about using Go for Cloud. The relevant part of the article about FaaS is pasted below, but you can read the full version here. After reading Rakyll's post I felt that a few points deserved clarification, and that's how the idea for this article came about. A small disclaimer first: everything I'm discussing here refers to AWS Lambda only.
Rakyll's post — the part about FaaS
I’d like to refer to some of the points from the excerpt above.
Because the final binary is not forked for every incoming request but is being reused:
You may have data races if you access to common resources from multiple functions.
No, you won't have data races unless the resource Rakyll mentions lives outside your function. Every time an AWS Lambda function is invoked, you're in the scope of a single incoming request. If a second request arrives at nearly the same time, the AWS Lambda binary is forked once again and you're in the scope of a new request. These requests won't compete for the same resources; they live in separate containers. The final binary is indeed reused, but only after a preceding request has finished its execution.
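To make that execution model concrete, here is a minimal sketch (assuming the github.com/aws/aws-lambda-go/lambda package and a purely illustrative invocation counter): state declared outside the handler survives warm reuses of a single execution environment, while a concurrent request is served by a separate environment with its own copy of that state.

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// invocations lives outside the handler, so it survives warm reuses of this
// execution environment. A concurrent request is served by a separate
// environment with its own copy of this variable, so requests do not race
// against each other.
var invocations int

func handler(ctx context.Context) (string, error) {
	// Within a single environment the handler processes one request at a
	// time, so this increment is safe unless you spawn your own goroutines
	// that touch shared state.
	invocations++
	return fmt.Sprintf("invocation #%d in this execution environment", invocations), nil
}

func main() {
	lambda.Start(handler)
}
```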
You may need to use sync.Once in the function to initialize some of the resources if you need the incoming request to initialize.
Of course sync.Once is one of the valid options. But a second option that should also be considered here is Go's init function. Why? Because the init stage of AWS Lambda is free of charge for up to 10 seconds. If you start initializing your code only once you already have the request details and should be ready to process them, it's simply a waste: the AWS billing clock has already started ticking. You can find more details in this great article by Michael Hart, Shave 99.93% off your Lambda bill with this one weird trick. There is also a downside to this approach: you don't have access to the Go context of the current request, and because of that it's difficult to trace any actions taken in the init function with e.g. X-Ray. But if you're just resolving dependencies for your application and doing nothing fancier, like HTTP calls, why not at least consider this approach?
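As an illustration, here is a minimal sketch contrasting the two approaches; the http.Client is just a stand-in for whatever dependency you actually need to build.

```go
package main

import (
	"context"
	"net/http"
	"sync"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
)

var (
	// Built during the init phase, before the billed handler execution starts.
	eagerClient *http.Client

	// Built lazily on the first request, when request details are available.
	once       sync.Once
	lazyClient *http.Client
)

func init() {
	// Runs once per execution environment, during the Lambda init stage.
	eagerClient = &http.Client{Timeout: 5 * time.Second}
}

func handler(ctx context.Context) (string, error) {
	once.Do(func() {
		// This work is billed as part of the first invocation.
		lazyClient = &http.Client{Timeout: 5 * time.Second}
	})
	return "clients ready", nil
}

func main() {
	lambda.Start(handler)
}
```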
Providers are not consistent about signaling the Go process before a shutdown. Expect hard terminations as soon as your function exits.
The AWS provider is quite consistent in this regard. You can prepare your function for termination by leveraging the context passed to your Lambda function along with the other invocation arguments. I described this approach in greater detail in my previous article, How to leverage AWS Lambda timeouts with Go context cancellation.
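As a rough illustration of that idea (not the exact code from the linked article), the sketch below derives a context that expires slightly before the Lambda hard timeout, leaving room to clean up; doWork and the 100 ms headroom are placeholder assumptions.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context) (string, error) {
	// Derive a context that expires slightly before the Lambda hard timeout,
	// leaving headroom to flush logs, close connections, etc.
	if deadline, ok := ctx.Deadline(); ok {
		var cancel context.CancelFunc
		ctx, cancel = context.WithDeadline(ctx, deadline.Add(-100*time.Millisecond))
		defer cancel()
	}

	result := make(chan string, 1)
	go func() {
		// doWork stands in for a long-running task that respects ctx.
		result <- doWork(ctx)
	}()

	select {
	case r := <-result:
		return r, nil
	case <-ctx.Done():
		// We are about to be terminated: clean up gracefully.
		log.Println("approaching timeout, cleaning up")
		return "", ctx.Err()
	}
}

func doWork(ctx context.Context) string {
	// Placeholder for real work.
	time.Sleep(50 * time.Millisecond)
	return "done"
}

func main() {
	lambda.Start(handler)
}
```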
All in all, Rakyll's post is great and I encourage you to read it. That's just my 2 cents added to the discussion. Thanks for reading and see you next time!
If you're interested in learning more about serverless and Go, you should definitely check out my previous articles:
Distributed tracing in Serverless with X-Ray, Lambda, SQS and Golang
How to leverage AWS Lambda timeouts with Go context cancellation.