From buzzword to meaningful infrastructure option? What’s behind “Serverless”

Ant Stanley · Published in Serverless Zone · 7 min read · Jan 25, 2018

[Interview with Alex Casalboni, translated from the original German article published at https://t3n.de/news/jeffconf-2018-hamburg-905965/]

JeffConf is a community-focused conference built from and for local serverless communities. It’s a single track, one-day event that moves away from the buzzword in order to foster learning and knowledge sharing across the community as we embrace a new way of building applications.

JeffConf was launched last year in London and Milan, with more than 300 attendees. This year, JeffConf will take place on February 16 at the Altonaer Museum in Hamburg, Germany.

Who is Alex?

Alex is a Software Engineer from Italy. He started building web products as a full stack developer back in 2011, and he joined Cloud Academy in 2013 as the first employee, where he learned how to successfully build a global product (hundreds of thousands of enterprise teams and professionals learning across 200 countries) and a distributed team (today, more than 70 highly talented individuals across the globe). He is a passionate musician and coder, and enjoys working with machine learning, AI, signal processing, and anything related to math and data science.

Alex joined the “Serverless Revolution” around early 2016, when he started experimenting with AWS Lambda for Machine Learning applications. He also loves supporting the open-source community by sharing experiments and tutorials and contributing to open projects such as the Serverless Framework.

Who or what is Jeff?

Jeff is “Serverless.” I think everybody should be like Jeff.

Jeff represents a community that autonomously formed around a misunderstood name. Indeed, “Serverless” is just a name, I’d say a provocative name, meant to shake the development community for the sake of marketing.

Of course, there are still servers (as well as CPU registers, memory allocations, physical network interfaces, etc.), but a lot of smart people really like the idea of not worrying about such underlying details, as long as there is something/someone else that can handle it for you, better than you ever could.

The word “serverless” was born in a context where running a Python or Node.js piece of code for five seconds required you to launch a compute instance, configure it, patch it, and pay for it by the hour. And, what if this simple piece of code had to run a few hundred times per second or handle unpredictable traffic peaks? Well, that would probably require a cluster of instances and an insane amount of maintenance.

In this context, we already had third-party APIs and fully managed services to handle authentication, object storage, databases, search, etc., but we missed a new abstraction layer in the compute spectrum that would glue these systems together.

That’s why FaaS (Function as a Service) allowed the community to start a serverless revolution, with the ultimate goal of speeding up the development and prototyping process, while at the same time reducing maintenance costs and effortlessly making most “prototypes” production-ready.
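To make the abstraction concrete, a FaaS function is just a small handler that runs per event, with no instance to launch, configure, or patch. A minimal sketch, assuming an AWS Lambda-style Python signature (the event shape here is a made-up example):

```python
import json

def handler(event, context):
    """Minimal Lambda-style function: it runs only when invoked,
    scales per request, and leaves the servers to the platform."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform invokes `handler` once per event; everything below that call (fleet sizing, patching, load balancing) is the provider's problem.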

While organizing JeffConf Milan in Italy, I got in touch with Soenke Ruempler (Co-founder of superluminar), who presented an amazing talk in Milan and is now co-organizing JeffConf Hamburg. This is how local communities can help each other and grow stronger together. And this is exactly how JeffConfs will keep spreading around the world in 2018 (we have great news coming up pretty soon).

Can you share some concrete examples or use-cases of companies using Jeff?

Jeff is a great fit for most use cases that align to distributed microservices architectures. Since most serverless platforms come with generous free tiers, Jeff is basically free for prototyping and for low traffic or spiky loads. Once the traffic increases or changes drastically, Jeff will make sure your compute layer scales up or down accordingly, without any change in code.

For example, you can build RESTful or GraphQL APIs, process file uploads, schedule periodic tasks, implement webhooks, build data processing pipelines, integrate third-party services into legacy architectures, optimize computation at the edge, build IoT or Chatbot applications, etc. Since serverless functions are always executed in response to an event, you can design a whole event-based system where your functions run only when something relevant happens, eventually triggering other functions downstream. In such a system, function orchestration can become a complex problem. That’s where new visual tools such as AWS Step Functions and Azure Logic Apps can help.
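The event-based style above can be sketched with a file-upload example: a function triggered by an object-created event extracts the object reference so a downstream step (another function, or a Step Functions state) can pick it up. This assumes the AWS S3 notification event shape; the downstream trigger itself is only indicated by the return value:

```python
def handle_upload(event, context):
    """Triggered by an S3 object-created event; pulls out the
    bucket/key pairs for downstream processing."""
    processed = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        processed.append({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
        })
    return {"processed": processed}
```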

Adoption of serverless is steadily increasing, and companies like Netflix, Square Enix, iRobot, Nordstrom, Airbnb, and Capital One have publicly shared their experiences with serverless.

What are the benefits of Jeff/Serverless?

As with most fully managed solutions, adopting FaaS as your compute layer allows you to offload a considerable part of the heavy lifting required to run your code. This will make your solution faster in terms of time-to-market, easier to operate thanks to its “zero-administration” side, and ultimately cheaper when you consider the total cost of ownership. Indeed, even without considering the potential cost savings related to its pay-as-you-go model (billed in 100ms increments), the overall cost of a product tends to be cheaper when you remove most of the infrastructure, network, and hardware maintenance from the equation.
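A rough sketch of how the pay-as-you-go billing adds up (the figures below are illustrative assumptions close to Lambda's 2018 list prices, not current pricing):

```python
import math

# Illustrative price assumptions (roughly Lambda's 2018 list prices).
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_SECOND = 0.00001667       # USD per GB-second

def monthly_cost(invocations, duration_ms, memory_mb):
    """Rough FaaS bill: duration is rounded up to 100 ms increments,
    and compute is charged per GB-second actually consumed."""
    billed_seconds = math.ceil(duration_ms / 100) * 0.1
    gb_seconds = invocations * billed_seconds * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# e.g. one million 120 ms invocations at 128 MB are billed
# as 200 ms each, i.e. 25,000 GB-seconds of compute.
```

The point is that an idle function costs exactly zero, which is where the free-tier and spiky-load economics come from.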

Furthermore, FaaS allows you to monitor logs, performance, and costs on a per-function basis. This gives you more granularity when it comes to performance and cost optimization, but it also opens a whole new market for tooling and monitoring solutions. Luckily, organizations such as IOpipe, Datadog, and New Relic promptly covered this space in the last 18 months.

What are the challenges for developers or operations who are new to serverless?

With every architectural “revolution” come new challenges. The ecosystem is still very young and quickly evolving, which makes it a hard fit for slower-moving enterprises, although it started gaining momentum in 2017.

Many frameworks are still emerging in the open-source world, with each solving a particular pain point. However, few of them are doing so in an exhaustive and vendor-neutral way. Although the Serverless Framework is leading the public cloud tooling spectrum, new projects such as OpenFaaS are successfully targeting the private/hybrid and container-native world.

Since most of the “lock-in” of FaaS is related to the set of integrations and services consumed by your functions, the tooling framework and platform of choice will often depend on where your data is stored, rather than the quality of its software or its community (this is sad, but true for many organizations that can’t afford to migrate).

While there have been public cases of migration from legacy/monolithic architectures to serverless, a serverless lift-and-shift strategy is not yet available. Migrating legacy components or whole systems will likely require a partial re-design of the architecture toward a more event-driven approach. Some frameworks and libraries have been solving this problem for simple use cases (e.g. Zappa for Django/Flask applications, aws-serverless-express for Express applications, etc.), but I would argue that most legacy architectures diverged from “Hello World” sample apps years ago and their porting is never as painless as promised.

In my opinion, the most underestimated challenge is due to the false expectation of #NoOps. Many public clouds have often marketed “serverless” as the death of (Dev)Ops, which is a cheap and provocative way to spread the buzzword, but this idea becomes dangerous when developers pretend that they can ignore all of the best practices (e.g. infrastructure as code, immutability, test-driven development, continuous integration, etc.). As a new community, I believe that it’s our responsibility to not disregard the past, but to learn from it and keep innovating in the right direction.

Are there also organizational challenges?

The most critical challenge is probably related to an organization’s ability to trust the cloud provider of choice. Since “serverless” is a cloud-native way of developing applications, hybrid solutions are not trivial to implement. Therefore, many organizations might struggle with the “vendor lock-in fear” even though many aspects of the technology and its tooling make it fairly easy to migrate to other clouds (maybe not as easy as a Docker container, but pretty close).

On the other hand, an organization could build its own FaaS layer on top of existing infrastructure (e.g. a Kubernetes cluster running on-prem). This would allow developers to benefit from the new abstraction layer even if the heavy-lifting related to infrastructure management is not completely removed from the organization.

Depending on your current technology stack, Jeff might be a great fit for new pilot projects or even new features that must be integrated with legacy architectures, especially if you have already embraced DevOps best practices.

Nevertheless, some organizations have skipped a few steps and migrated their COBOL code base to serverless (directly from the mainframe).

Which topic are you going to talk about at JeffConf Hamburg?

I’ve always been fascinated by big data and data analytics, therefore I’m going to discuss “Serverless Data Warehousing & Data Analysis”. Storing possibly unstructured data and querying it efficiently without managing servers is quite a challenging problem, and I’m going to explore a few techniques and gotchas that I’ve learned while implementing our solution on AWS.

This journey started in early 2016 because my data science team needed a way to deploy machine learning models, iterate quickly to optimize their predictive performance, and scale up effortlessly without needing a DevOps team (data scientists are not ssh-lovers). I chose AWS Lambda, and after a few weeks we had our first models online, implemented in Python and scikit-learn. Once the real-time predictions were operational, I started working on the data analytics side of the system, as we needed a way to improve the performance and precision of our reports. I’m going to share my experience with the data analytics stack on AWS, which includes Lambda, Kinesis, Athena, and QuickSight.
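The model-serving side of that setup can be sketched as a handler wrapping any object with a scikit-learn-style `predict()`. This is a hypothetical simplification, not the production code; in practice the model would be loaded once per container (e.g. unpickled from S3) rather than passed in:

```python
import json

def make_predict_handler(model):
    """Wrap a scikit-learn-style model in a Lambda-style handler.
    `model` is any object exposing .predict(features)."""
    def handler(event, context):
        features = event["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
        prediction = model.predict(features)
        return {
            "statusCode": 200,
            "body": json.dumps({"prediction": list(prediction)}),
        }
    return handler
```

Each invocation then serves one real-time prediction, and the platform handles concurrency as traffic to the model grows.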
