4 Serverless Myths to Understand Before Getting Started with AWS

Lessons Learned from Architecting Serverless Applications on AWS

Tanusree McCabe
Capital One Tech
6 min read · May 6, 2019


If you are thinking about getting started with serverless applications, this article is for you. While serverless provides a number of benefits, in my experience architecting serverless applications, there are several common myths that can get in the way of success.

Myth #1: Serverless Means Functions as a Service (FaaS)

In the quest to focus development time on application code rather than the underlying infrastructure, serverless architecture is a natural evolution for cloud-based applications.

While FaaS offerings such as AWS Lambda may dominate serverless architecture, there are many other serverless options.

A cloud service can be classified as serverless if:

  • There are no servers exposed that you need to directly administer.
  • The service is elastic in that it scales automatically and is highly available.
  • You only pay for what you use.

This results in serverless being applicable not just for web based applications, but also real-time analytics and processing. The following is a representative sampling of serverless offerings from AWS to illustrate this breadth:

Myth #2: Serverless is a Monolithic Silver Bullet

Serverless, just like any other technology, is not a silver bullet for all use cases and is best suited for event-based architectures. Why?

Well, traditional client/server architecture in its simplest form looks like the diagram below.

The challenges of a traditional client/server architecture like this include:

  • Synchronous calls resulting in highly coupled systems.
  • State management required.
  • Not resilient to failure.
  • Not able to scale effectively.

Simply putting the server logic inside a Lambda function does not solve these challenges for client/server architectures; in fact, it creates additional challenges such as:

  • Lambda compute and memory limits.
  • Consistent packaging and deployment of Lambda functions.

Rather, the event-based architecture pattern — shown in a simple form here — is much more suited to serverless.

Benefits of using serverless with event-based architecture patterns:

  • Asynchronous calls enabling decoupled systems.
  • Enables immutable, persistent, shareable events.
  • Highly resilient to failure.
  • Able to scale effectively.
  • Highly observable and extensible system.
  • Independently releasable.
  • Independently optimizable.

Event-based architecture patterns are commonly found when designing and implementing microservices. While there are still challenges to this architecture — such as needing distributed tracing, managing current state, and solving for eventual consistency — the main challenge to consider is how to avoid recreating a monolithic application in serverless. Consider designing serverless microservices to create a much more flexible, loosely coupled event-based architecture. A well-designed microservice has concise logic, no orchestration logic, typically single-purpose code, and operates in an ephemeral environment. Designing your architecture using these principles will help keep the monolith at bay and let you take better advantage of serverless.
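To make these principles concrete, here is a minimal sketch of a single-purpose, event-driven Lambda handler. The event shape mirrors an SQS-style batch, and the names (`handler`, the order fields) are illustrative, not taken from any particular system: the handler is stateless, contains no orchestration logic, and does exactly one thing.

```python
import json

# In a real function, instantiate AWS clients (e.g., boto3) here at module
# scope so warm invocations reuse them instead of reconnecting each time.

def handler(event, context=None):
    """Single-purpose handler: inspect each queued order event and
    return how many were accepted. No orchestration, no shared state."""
    accepted = 0
    for record in event.get("Records", []):
        order = json.loads(record["body"])
        if order.get("status") == "created":
            # Business logic for exactly one event type lives here.
            accepted += 1
    return {"accepted": accepted}
```

Because the handler is effectively a pure function of its event, it can be tested locally, released independently, and scaled horizontally without coordination — the properties listed above.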

Myth #3: Serverless Means An End to Operational Burdens

As alluded to in the previous myth, serverless applications require a high degree of observability in order to effectively troubleshoot. Speaking from experience, it can be quite difficult to determine where, in a highly distributed serverless system, something went wrong — particularly in the case of cascading failures.

Beyond health checks, polling, and monitoring metrics, serverless observability requires the ability to interrogate the state of the system as a whole and within each part. Interrogation in this context is the ability to ask questions about the serverless system and understand the answers. To support such meaningful interrogation, you need to ensure that each component of your serverless system outputs telemetry to support downstream tracing and correlation as well as a full understanding of system state. Defining and testing failure scenarios, as well as audit configurations, helps with determining what kind of telemetry is necessary.

Since serverless offerings are managed services, agent-based telemetry is no longer an option. Instead, serverless functions need to be instrumented, along with their deploy paths and rollbacks, and logs should be extracted to queryable systems. Where instrumentation isn't possible, the cloud provider should enable metrics and log extracts that can be correlated. For example, DynamoDB can output useful events and metrics about its request state and internal stats.
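As a sketch of what such instrumentation might look like (the field names here are my own convention, not a standard), a function can emit structured JSON log lines carrying a shared correlation ID, which a downstream query system can then join on:

```python
import json
import time

def emit(component, correlation_id, event_name, **fields):
    """Emit one structured telemetry record as a JSON line.
    A shared correlation_id lets downstream tooling stitch together
    the path of a single transaction across components."""
    record = {
        "ts": time.time(),
        "component": component,
        "correlation_id": correlation_id,
        "event": event_name,
        **fields,
    }
    print(json.dumps(record))  # Lambda's stdout lands in CloudWatch Logs
    return record
```

Every component emitting records in a shared shape like this is what makes "interrogating the system" possible after the fact.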

A powerful tool in the belt of any serverless developer is distributed tracing. Distributed tracing allows for end-to-end visibility of a transaction, even as the transaction propagates requests through multiple hops. This shows service dependencies and enables you to establish a baseline of behavior and performance from which you can detect anomalous traces to determine issues. AWS X-Ray is a distributed tracing service intended to help developers troubleshoot and debug serverless applications. AWS Lambda runs the X-Ray daemon to enable X-Ray visible telemetry.

The following is a simplified example of various serverless system components interacting with each other while emitting the telemetry necessary to enable observability.

Thus, while serverless does remove the operational burden of administering servers, there is still operational effort required to effectively monitor, maintain, and scale a serverless system.

Myth #4: Serverless is Infinitely Scalable

Speaking of scaling, a major benefit of serverless services is high availability. However, being highly available does not equate to being able to infinitely scale. Each serverless service has its own limits to contend with, whether that be Lambda's memory limits or Kinesis' throughput limits. Other limits may not be as obvious, such as regionality impeding cross-region resiliency, or IP exhaustion from operating Lambda in a VPC using an ENI that takes up a subnet IP address. (If the subnet that you configure Lambda with has no available IPs, Lambda will not be able to scale.)
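To make the IP-exhaustion limit concrete: AWS reserves five addresses in every subnet, and under the per-invocation ENI model described here, each concurrent execution can consume one subnet IP. A back-of-the-envelope check (a sketch for illustration, not an official formula) might look like:

```python
AWS_RESERVED_IPS = 5  # AWS reserves the first four and the last address per subnet

def usable_ips(prefix_length):
    """Total IPv4 addresses in a subnet minus AWS's reserved ones."""
    return 2 ** (32 - prefix_length) - AWS_RESERVED_IPS

def can_scale(prefix_length, ips_in_use, desired_concurrency):
    """Rough check: will a VPC-attached Lambda run out of subnet IPs
    before reaching the desired concurrency? Pessimistically assumes
    one ENI/IP per concurrent execution."""
    return usable_ips(prefix_length) - ips_in_use >= desired_concurrency
```

A /28 subnet, for instance, leaves only 11 usable addresses — easy to exhaust under even modest Lambda concurrency, which is why subnet sizing deserves attention up front.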

Lambda has another challenge related to the fact that Lambda functions run in containers behind the scenes and require initialization steps to start. This is known as the 'cold start' problem, which can be exacerbated when Lambda is configured to run in a VPC, due to the additional steps required to create and attach a VPC ENI.

Best practices involve optimizing what you as the developer can control:

  • Instantiate clients and connections outside the event handler.
  • Schedule function execution as needed for ‘warming’.
  • Use environment variables.
  • Right-size max memory used.
  • Minimize package size.
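A minimal sketch applying several of these practices (the handler shape and names are illustrative): expensive setup happens once at module scope so warm invocations reuse it, configuration comes from environment variables, and scheduled 'warming' pings get a cheap early return.

```python
import os

INIT_COUNT = 0  # visible only so the example can show init runs once

def _init_client():
    """Stand-in for creating an expensive client or connection
    (e.g., a boto3 client or DB pool) outside the event handler."""
    global INIT_COUNT
    INIT_COUNT += 1
    # Configuration via environment variables, not hard-coded values.
    return {"table": os.environ.get("TABLE_NAME", "orders")}

CLIENT = _init_client()  # runs once per container, at cold start

def handler(event, context=None):
    # Respond to scheduled warming pings without doing real work.
    if event.get("source") == "warmup":
        return {"warmed": True}
    return {"table": CLIENT["table"],
            "processed": len(event.get("Records", []))}
```

However many times the warm container's handler is invoked, the expensive initialization has already been paid for exactly once.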

Conclusion

Serverless is a natural next step in the evolution of cloud technology. With serverless architecture we can focus on innovating applications and solutions rather than administering infrastructure. But don’t be fooled by common myths! Instead, keep in mind the underlying takeaways we covered above:

  1. Look beyond functions for serverless solutions.
  2. Apply serverless to event-based architectures instead of recreating monoliths.
  3. Understand how intrinsic advanced observability is to serverless operations.
  4. Plan for limits and failure scenarios, and optimize for serverless operations to improve the resiliency of a serverless system.

These opinions are those of the author. Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This article is © 2019 Capital One.

Tanusree McCabe is an Architect at Capital One, focused on Monitoring, Resiliency, Cloud, Containers, Serverless and DevOps.