Serverless: it’s more than FaaS

In late 2015 I was preparing for a conference talk on Cloud-Native Computing and Cloud Foundry when another new concept, FaaS (Function as a Service), came at me from left field. In the four years since then, FaaS has only come up for me once, using Spring Boot, Spring Cloud Function and Spring Cloud DataFlow for ETL-type workloads, and the term Serverless has only become prevalent, for me at least, in the last 18 months. For the purposes of this article I assume an understanding of the term FaaS, but I aim to clarify the term Serverless and explain why, in 2019, it could help achieve the culture shift we all hoped for when DevOps became prevalent.

Serverless is not just FaaS

I will admit my first port of call was to search for “Serverless vs FaaS”. Given the wealth of marketing and misinterpretation I have subsequently come across, it was only when I started actively working with serverless in an engineering team that I realised the potential shake-up this could cause in the industry.

The terms FaaS and Serverless have caused some confusion. The best explanation of the difference I have come across was this tweet by Paul Johnston:

“Serverless isn’t about FaaS although that is the primary mode of delivery at present simply because it ticks the boxes that fit in with the cultural shift”

So if FaaS is just a delivery mode for Serverless, what is Serverless about? This is the point at which it becomes less about the technology and more about the organisational and cultural changes it can promote.

Cross-functional teams

On day 1 of my Serverless journey I actually questioned the developer experience. I thought to myself: why can’t I just give it a buildpack and run cf push? PaaS offerings (Platform as a Service) like Cloud Foundry have been doing this for years. As a developer I don’t care about infrastructure; I care about business logic. Yet a few months in, I’ve never known more about the shape of my production environment. So why?

A few examples:

  • I need to know how to access downstream services. Where do they exist? Should I run my Lambda inside a VPC?
  • If we deploy a full vertical stack, including infrastructure such as API Gateway, how does it integrate with existing API gateways, or can it replace them?

These questions were not being asked after the service was built and ready to be handed over to production; they were being asked on day 1.

Serverless to some extent forces a cross-functional team structure by seeding the team with ops engineers, as the alternative is to let software engineers learn all of this on the fly. The Serverless Framework and its variations generate some form of deployment manifest (CloudFormation templates, for example) which may well be outside the comfort zone of some software developers.

Serverless can also assist with squad structure. Container-based deployments have, in my experience, tended to result in centralised platform and CI/CD teams made up of excellent ops engineers who haven’t necessarily been active software engineers. The result is a centralised platform that doesn’t do as much as it could to make the software engineer’s job easier. Working side by side instead, ops engineers and software engineers can collectively focus on business value.

We all know we should follow the cross-functional team model. Serverless can assist in changing the team formation: there is no platform to manage, so you can free the ops engineers up to work on business-value-driven products.

Multi-cloud & vendor lock-in

One element of costing models that changed with the advent of cloud computing was pay-per-usage. I totally understand the inertia created by contracts from days gone by that locked you into a vendor for ten years at x million per year, but commercially the difference I see here is that you are in control of when and whether you change. Lock-in is bad when you’re not happy with the product or the financial implications; if you were happy, you probably wouldn’t be looking to leave.

I often hear vendor lock-in cited as an argument not to go serverless. In reality, when could we ever port IaaS from one provider to another at no cost? Everything is portable at a cost, and serverless is no different. I’ve seen many services running in containers on an orchestration platform that used managed services such as S3 or SQS, and these are no different from a Lambda function that uses S3; you just have different things to refactor and migrate. Unless you spend all your engineering time writing abstraction layers, or choose products such as Spring Cloud that come with largely pre-built abstractions for every major cloud provider, you are not fully benefiting from the cloud models.
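
To illustrate the point, below is a minimal sketch of a Lambda handler that reads from S3 (the bucket name, event shape and function name are my own illustrative assumptions). The coupling to the managed service lives in the application code, so it looks much the same whether that code runs in a container or in a function, and migrating providers means refactoring it either way.

```python
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # The dependency on S3 is in the code itself, not in where the code runs.
    obj = s3.get_object(Bucket="example-bucket", Key=event["key"])
    payload = json.loads(obj["Body"].read())
    return {"statusCode": 200, "body": json.dumps(payload)}
```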

Portable deployments

You might think deployment is another area that isn’t portable with serverless. Even though the Serverless Framework is trying to alleviate the pain, it’s not trivial, because provider-specific CloudFormation scripts tend to get bolted on. Then again, I’ve seen Puppet, Ansible and Terraform all built around one cloud provider or another. So if you believe the myth that shifting from one cloud provider to another is that much harder with serverless than with other delivery platforms, in most cases, think again.

Environmental impacts

The increasing demand for technology brings an increasing demand for the power to fuel it. Cloud providers such as Amazon, Microsoft and Google, to name but a few, are sourcing more of their consumption from sustainable sources. As technologists we have a responsibility to ensure that we can sustainably fuel our products.

Now for the contentious part; this is not founded in research but is merely an observation from my experience of working with a number of the cloud providers’ platforms. The cloud providers we use excel at running our software, providing us with compute models that can run our workloads efficiently. These same cloud providers are the ones trying to fuel our technology from more sustainable sources.

When have you ever worked on a product where it was stated that the energy fuelling its execution would be responsibly sourced? I’m guessing never. Our energy consumption is going to grow exponentially. I’m hopeful that by allowing these cloud providers to run our serverless architectures, the environment will inherently benefit, as they know how to use resources more efficiently than most. This is all of our responsibility, not just the cloud providers’, but leveraging their experience of trying not to waste compute resources can only be a good thing.

I would recommend reading the ethics whitepaper “The state of data centre energy use in 2018”; it is eye-opening.

Cloud-First development

One of the first things any developer will do is get everything running locally. I fell into this trap too when I moved to serverless, and to be honest I still struggle with this mindset change. I can see a number of local development tools being promoted, but as cloud development tooling progresses throughout 2019, I think we will see less need to develop locally and a natural progression towards building everything directly in the cloud.

Community

There is a great community developing around Serverless. The ServerlessDays conference is held all over the globe and is sponsored by some of the major players in the serverless space. The next one is in Cardiff in the UK and will be a great place to enhance your knowledge. For a full list of events please visit the upcoming events list.

Accelerate

As with any culture shift, it can be difficult to get buy-in, and serverless already has its doubters. If you can get your engineering leads to pick up a copy of the wonderful Accelerate, you will see that serverless can be an enabler for streamlining processes.

Patching software can be hard. In a serverless environment you only need to worry about patching your own software. However, the managed application runtimes are upgraded fairly frequently and do come with deprecation cycles. Organisations that run their software in project-based models, where a system goes live and subsequently stays there without any attention, will have to re-think their strategy.

Cost

Some intermittent workloads are already cheaper to run serverless. The research I have seen so far that argues for running on Kubernetes or EC2 focuses on the use case of sustained high throughput, which on the face of it looks to offer a substantial saving over serverless. What this doesn’t cater for is the platform development cost, management overhead or patching effort, as it focuses on run cost alone.
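
As a back-of-envelope illustration of that trade-off, the sketch below compares run cost only. Every price is an assumption on my part (roughly the published us-east-1 list prices at the time of writing; check current pricing), and it ignores free tiers, API Gateway, data transfer and, crucially, the engineering effort discussed next.

```python
# Illustrative run-cost comparison only; every price here is an assumption.
LAMBDA_PER_MILLION_REQUESTS = 0.20    # USD per 1M invocations (assumed)
LAMBDA_PER_GB_SECOND = 0.0000166667   # USD per GB-second (assumed)
EC2_PER_HOUR = 0.0416                 # USD, e.g. a small on-demand instance (assumed)


def lambda_monthly_cost(invocations, avg_duration_ms, memory_gb):
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    request_cost = (invocations / 1_000_000) * LAMBDA_PER_MILLION_REQUESTS
    return request_cost + gb_seconds * LAMBDA_PER_GB_SECOND


def ec2_monthly_cost(hours=730):
    return EC2_PER_HOUR * hours


# Intermittent workload: 2M requests a month, 200 ms average at 512 MB.
print(round(lambda_monthly_cost(2_000_000, 200, 0.5), 2))  # ~3.73
print(round(ec2_monthly_cost(), 2))                        # ~30.37, before any ops effort
```

Run the same numbers for sustained high throughput and the instance quickly wins on run cost, which is exactly why the comparison has to include everything else in the TCO.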

You have to focus on the TCO (total cost of ownership); otherwise you are only comparing the run cost.

From “You are thinking about serverless costs all wrong” (Yan Cui, 11th Jan 2019)

Yan Cui recently wrote an article about this, “You are thinking about serverless costs all wrong”, which covers this area of serverless in detail. The diagram above clearly articulates that the operational costs (which are easy to measure) form just a small part of the TCO. To gain an accurate comparison you also need to incorporate time to market, as well as the engineering effort required to deploy, patch and generally maintain container-based versus serverless platforms.

Architecture

There are plenty of great resources on serverless architectures. The area that particularly interested me was how often the teams talk about non-functional aspects:

  • How do we make the function resilient? (see the sketch after this list)
  • Should we circuit break?
  • Choreography or orchestration?
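
As a small illustration of the resilience point, here is a minimal sketch of one approach, retries with exponential backoff (a simpler cousin of a full circuit breaker); the helper name and the downstream call are hypothetical.

```python
import time


def call_with_retries(fn, attempts=3, base_delay=0.2):
    """Retry a downstream call with exponential backoff before giving up."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # surface the failure after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 0.2s, 0.4s, ...


def handler(event, context):
    # e.g. wrap a call to a downstream HTTP API or another AWS service
    return call_with_retries(lambda: {"ok": True})
```

A full circuit breaker is harder in this model, because the open/closed state has to live somewhere shared between short-lived invocations rather than in a long-running process.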

Jeremy Daly has summarised some of the architectural patterns for AWS Lambda on his blog: https://www.jeremydaly.com/serverless-microservice-patterns-for-aws/

AWS has also released a Serverless Application Lens that provides some useful best practices for those looking at Lambda.

There are some negatives

As the tooling is still developing, there are still gaps.

I’m a big fan of centralised configuration and service discovery. At the moment, in the serverless world, this tends to be handled through sensible naming schemes, other managed services or deployment-time configuration.
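
As an example of what I mean by naming schemes, one common pattern (an assumption of a typical setup on my part, with a hypothetical parameter path) is to stand in for a config server with SSM Parameter Store and a naming convention.

```python
import os

import boto3

ssm = boto3.client("ssm")
STAGE = os.environ.get("STAGE", "dev")  # injected at deploy time


def get_config(key):
    # Hypothetical naming scheme: /<service>/<stage>/<key>
    name = f"/orders-service/{STAGE}/{key}"
    return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]


# e.g. get_config("downstream-url") rather than asking a discovery service
```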

Centralised logging is another gap: AWS Lambda logs to CloudWatch, but it’s not easy to search across functions, and getting the data into centralised solutions adds more and more cost.
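
One mitigation I have found helpful (a sketch under my own assumptions about the event shape, not a prescribed approach) is to emit structured JSON log lines with a shared correlation id, which makes searching across many functions’ log groups, for example with CloudWatch Logs Insights, far less painful.

```python
import json
import time


def log(level, message, **fields):
    # One JSON object per line so log tooling can filter on any field.
    print(json.dumps({"timestamp": time.time(), "level": level, "message": message, **fields}))


def handler(event, context):
    log("INFO", "order received",
        correlation_id=event.get("correlationId"),  # assumed event field
        function=context.function_name)
    return {"statusCode": 202}
```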

Serverless Test Approaches

When I wrote The Cloud Native QA I had already seen that a change is needed in how we test cloud-native applications.

The emphasis on unit testing will likely reduce, with a higher level of integration testing and chaos testing principles coming to the fore.
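
As a small sketch of that shift (the handler module, event shape and status code are assumptions for illustration): keep a thin unit test of the handler with a fake event and context, and let the heavier assurance come from integration tests against a deployed stage.

```python
from orders.handler import handler  # hypothetical module under test


class FakeContext:
    function_name = "orders-handler"


def test_handler_accepts_a_valid_event():
    response = handler({"correlationId": "abc-123"}, FakeContext())
    assert response["statusCode"] == 202
```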

Serverless security

Just because we no longer need to worry about patching our hardware, OS or containers doesn’t mean that security stops there. The low barrier to entry with serverless means it’s pretty straightforward to get some functions up and running, but that isn’t to say they follow secure coding practices. The functions themselves still form an attack vector, albeit a much smaller one for us to have to worry about. Things to keep in mind include:

  • Static analysis and security scans to check for library vulnerabilities and known insecure code
  • Storage of secrets (see the sketch after this list)
  • Use of IAM roles (in AWS, for example) as the security guard rails in place of traditional network-layered deployment models
  • It’s very easy with tools like the Serverless Framework to run “sls deploy” from anywhere; secure access to your environments and only allow automated pipelines to deploy, especially to production
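
To illustrate the secrets point above (the secret name and its contents are hypothetical), one approach is to fetch credentials from AWS Secrets Manager at cold start, with the function’s IAM role granting read access to just that one secret, rather than baking them into code or plain environment variables.

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# Fetched once per container (outside the handler) and reused across invocations.
_DB_CREDENTIALS = json.loads(
    secrets.get_secret_value(SecretId="orders-service/prod/db")["SecretString"]
)


def handler(event, context):
    username = _DB_CREDENTIALS["username"]  # use the secret, never log it
    return {"statusCode": 200, "body": f"connected as {username}"}
```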

Most of these are things that most organisations would be doing anyway for microservice architectures, but it’s easy to forget about these best practices when some of the control is taken away from you.

Believe the hype

Serverless platforms are becoming more and more mainstream as organisations realise that servers and containers are becoming a commodity, or perhaps, as some have coined it, “an implementation detail”. There will be those who embrace serverless platforms because of a clear strategic vision and those who just really want to use new technologies. Whichever camp you fall into, serverless is going to become more and more prevalent in 2019.

Organisations will soon realise that serverless is more than just hype.