From Serverless to Blockchain

Insights from Tim Wagner — VP of Engineering at Coinbase

Forrest Brazeal
A Cloud Guru
10 min read · Aug 15, 2018


Welcome to “Serverless Superheroes”!
In this space, I chat with the toolmakers, innovators, and developers who are navigating the brave new world of “serverless” cloud applications.

In this edition, I chatted with Tim Wagner, the former general manager of AWS Lambda, API Gateway and the Serverless Application Repository, now head of engineering at Coinbase. The following interview has been edited and condensed for clarity.

Forrest Brazeal: First off, let me say congrats on your new gig as head of engineering at Coinbase! What made this a logical next step for you after being the GM of Lambda at AWS?

Tim Wagner: I really think that there are two disruptive technologies that are changing how we build and operate businesses. One of them, of course, is serverless. I spent the last six years helping to make that a successful platform and ecosystem.

Seeing the turnout at the recent A Cloud Guru ServerlessConf in San Francisco and the maturity of both the services being offered by vendors and the ecosystem that’s growing up around it, I really feel like serverless has amazing momentum now.

The second technology that I think is really disruptive and has an amazing potential for impact is blockchain. Now, that one’s still very nascent. I would say we are roughly where AWS was when I joined to help build Lambda, and it may be even a little bit earlier in the life cycle.

So I think Coinbase is really well poised to both mature those underlying technologies and to do as AWS did by formulating a business around them. I also think Coinbase and AWS share a lot of cultural values that I believe in: strong ownership, obsessing over customers, a commitment to innovation, moving fast.

And I love that Coinbase had a fantastic vision, this idea of creating an open financial system that’s accessible to everyone in the world. So I was really honored and excited that they invited me to join them in that mission, and I look forward to helping accelerate those efforts by continuing to attract world-class engineering talent to the team.

I think I speak for the community when I say we can’t wait to see what you have up your sleeve! You’ve destroyed servers in public before, so I can’t imagine what you’re going to be obliterating now.

Well, I can’t really burn stacks of cash, so blockchain doesn’t offer all the same advantages from a drama perspective!

Seriously though, I think some people have a knee-jerk reaction where they mock serverless and blockchain as technologies that are all fad, hype without substance. Yet you’ve said in the past that you view these two things as chocolate and peanut butter, great ideas that work even better together. What specifically about serverless and blockchain makes them such a natural combination?

Blockchain offers a transparent distributed data store with a really unique trust model. But today, it lacks an easy way to connect that model to arbitrary code execution, and that limits what you can do in a smart contract, as interesting as they are.

And then on the flip side, you have something like AWS Lambda with its scalable and reliable execution of third-party code from a trusted vendor, but today it doesn’t offer a mechanism to use that code as a contractual agreement between two parties. Setting that up would require application code.

So if you could bring those two capabilities together, I think we could offer businesses and organizations of all sizes this ability to create mutually trustworthy and enforceable contracts for a whole variety of goods and services. And I think that would be an incredibly interesting future.

So you would foresee serverless impacting actual blockchain transactions, rather than just handling off-chain work on private networks?

I think there’s an interesting opportunity there to combine those two pieces. Especially when we think about — not necessarily the most extreme form of blockchain, where everybody distrusts everyone else — but “enterprise blockchain”, if you will, where you probably have a very trusted vendor like AWS that’s already hosting your infrastructure.

You’ve moved on from your previous role as the GM of AWS Lambda, API Gateway and the Serverless Application Repository. Reflecting on those years, what did your team achieve that you’re most proud of today?

When I started at AWS, it really took a distributed systems expert to create an application in the cloud, somebody who could understand things like multiple availability zones and redundancy.

And when I left, we had high school students creating Alexa skills that ran on Lambda, doing voice-enabled applications, and in many cases they didn’t even know what EC2 was. They were blissfully ignorant of the challenges of building and scaling and operating and provisioning cloud infrastructure. That’s the thing I’m most proud of.

The thing I’m most excited about now is for the team to continue making the cloud accessible and easy to use, easy to integrate, easy to deliver for a much, much larger demographic than where we started six years ago.

And I think there’s a similar opportunity here with some of these emerging technologies like blockchain to take something that today is complicated — that feels difficult to access — and productize that into something that lots of people can use.

With that said, what would you say is the next step in the evolution of serverless?

Well, you know I love doing predictions. Without giving away any state secrets, I’m really excited by what Eric Jonas’s group down at Berkeley has been doing. I think they point the way to a future where not only are applications easier to build, but we can start to treat the cloud like the supercomputer that it really is.

There’s an amazing amount of silicon sitting in the cloud. And I think the opportunity to use it in a way where you only pay for the actual work you consume, and never for idle time, is something we’re going to see programming models, frameworks, and application capabilities develop around over the course of the next few years.
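For a concrete feel of that “cloud as supercomputer” idea, here is a minimal fan-out sketch in the spirit of that Berkeley work: many short Lambda invocations run in parallel, and you pay only for the time each one actually consumes. The function name and payload shape below are hypothetical.

```python
# A minimal fan-out sketch, assuming a deployed Lambda function named
# "map-worker" (hypothetical) that accepts {"chunk": [...]} and returns JSON.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def invoke_chunk(chunk):
    # Synchronous invoke: billed only for the duration of this execution.
    response = lambda_client.invoke(
        FunctionName="map-worker",            # hypothetical function name
        InvocationType="RequestResponse",
        Payload=json.dumps({"chunk": chunk}),
    )
    return json.loads(response["Payload"].read())

def parallel_map(data, chunk_size=100, max_workers=64):
    # Split the input and push every chunk to its own Lambda execution.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(invoke_chunk, chunks))
```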

I personally don’t think that we are “done” with serverless. In some ways, we’re kind of in the teenage years here. There are lots of applications where people don’t yet see serverless as the obvious solution, and I think they will very likely come to see it as an appropriate and beneficial solution over the course of the next few years.

What do you think is going to help these people connect the dots, if they are not seeing the benefit of serverless today?

One thing is that the growing pains will go away. The challenges that we’ve all recognized, like tool availability, monitoring and diagnostics, even service limits — I think those are going to keep getting better. The vendors are all laser-focused on removing those restrictions and taking that kind of friction out of application programming.

The other key improvement will be an expansion of the programming model. So for things that might be more challenging to do today, either because of limits or just because of the inability to express an application in a serverless fashion, I think we’re going to see a lot of innovation and exciting changes over the course of the next few years.

And I think those two improvements in combination will make the vast majority of applications accessible and available to a serverless approach.

I was really struck by your presentation at ServerlessConf when you were talking about what people are missing when they do a straight comparison of cost between Lambda and EC2. Could you share a little bit of how you break down these numbers for customers?

People have a tendency to forget about or underestimate all of the things that you need to put together to make an actual solution. So people will compare just one piece of the puzzle, like just the EC2 instance to Lambda, and they’ll forget that they also need to make it highly available, factor in a load balancer, and add a queue to place the job in if they’re building an asynchronous or event-based solution.

By the time they’ve composed that, we have a much more realistic comparison in terms of system architecture. But that’s usually the easier challenge to get past, because generally there’s an architect in the room who understands that and will nod in agreement.

What tends to be more of an educational journey for customers who haven’t necessarily used Lambda or another serverless solution before is the presumption that servers are always hot, that they’re 100 percent utilized. And in point of fact, we know from studies that conventional enterprise utilization is down in the ten to fifteen percent range.

Most of the time, most servers are not doing anything useful. And that huge amount of white space, that lack of efficiency, is something that serverless can take right out of the equation.
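To put rough numbers on that, here is a back-of-the-envelope sketch; the prices below are illustrative assumptions, not actual AWS rates.

```python
# Back-of-the-envelope math behind the idle-time argument.
# All figures are illustrative assumptions, not real AWS pricing.

instance_cost_per_hour = 0.10   # hypothetical always-on instance price
utilization = 0.12              # ~10-15% typical enterprise utilization

# What an hour of *useful* work really costs on the always-on server:
effective_cost = instance_cost_per_hour / utilization
print(f"${effective_cost:.2f} per useful compute-hour")  # ~0.83, roughly 8x the sticker price

# With per-request billing, the ~88% of paid-for idle time simply disappears
# (setting aside differences in per-unit compute pricing).
```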

That slogan of “never pay for idle” is a real economic game-changer. Until companies have seen it operate, they’re sometimes a little bit reluctant to trust it. But frankly, once a company has gone there, it’s usually the thing they’re the most excited about.

They love the time to market, they love the simplicity, but the thing that you hear companies like Fannie Mae and Hearst get up onstage and talk about at re:Invent or an AWS summit is often the economic win that they’ve managed to achieve.

And these can be some astonishing numbers. We were at a meeting recently where FICO talked about getting between one and two orders of magnitude of cost savings for applications that they’ve converted.

I always say the proof is in the pudding: you should try it out. And once companies have experimented in one part of their organization, we on the Lambda team have seen very rapid adoption and expansion into other use cases as well.

Do you believe that Lambda, or more broadly FaaS, is by definition part of a truly serverless architecture? Or is it possible to build these systems without using functional compute at all?

I would distinguish two things here. There is huge value in fully managed services that scale by request and play together nicely. So when you build, for example, on the AWS platform, you can plug S3 and Lambda and API Gateway and DynamoDB together to form a serverless web app, and all of that will scale up and down in a synchronized fashion.

So these modern design patterns do tend to be about fully managed services working together, which is the reason that so often the business logic gets expressed in the form of Lambda.
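As a sketch of that pattern, the business-logic piece often ends up looking something like the handler below: API Gateway in front, DynamoDB behind. The table name, fields, and request shape are assumptions for illustration.

```python
# A minimal Lambda handler for an API Gateway + Lambda + DynamoDB web app.
# The "orders" table and the request fields are hypothetical.
import json
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")   # hypothetical table name

def handler(event, context):
    # With API Gateway's proxy integration, the HTTP body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")

    item = {
        "id": str(uuid.uuid4()),
        "customer": body.get("customer", "unknown"),
        "quantity": int(body.get("quantity", 1)),
    }
    table.put_item(Item=item)

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```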

Now, all of that said, I absolutely believe that there are design patterns where you don’t necessarily need arbitrary business logic on the AWS platform. As a great example of that, you can take audit traces — for instance, from AWS CloudTrail — and then use Amazon Athena to query those and detect patterns in them, potentially handing them off to Amazon SageMaker or another machine learning service.

And a lot of that doesn’t necessarily require much in the way of coding, maybe nothing more in some cases than writing a SQL query. In fact, in the design patterns book that I’ve been working on, we actually have a chapter we’re thinking about calling “Codeless Patterns” for exactly that reason.
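As a rough illustration of that codeless direction, the “code” might amount to nothing more than kicking off an Athena query over the CloudTrail logs; the database, table, and output bucket below are placeholder assumptions.

```python
# A sketch of querying CloudTrail audit logs with Athena. The database,
# table, and S3 output location are placeholder assumptions.
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT useridentity.arn AS caller, eventname, COUNT(*) AS calls
FROM cloudtrail_logs              -- hypothetical table over the CloudTrail bucket
WHERE errorcode = 'AccessDenied'
GROUP BY useridentity.arn, eventname
ORDER BY calls DESC
LIMIT 20
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "audit"},                        # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
print("Started query:", response["QueryExecutionId"])
```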

One of your last acts as serverless czar was to bring Lambda/SQS integration to the world. This has been a feature request in the AWS community for literally years. I feel like there must be a story here. Can you pull back the curtain a bit and explain why serverless features aren’t always as simple to deliver as they may seem?

[Laughs] Oh boy. We started talking about the SQS-to-Lambda feature probably five years ago, so it was very long in the making. It had probably the longest genesis of any feature that I was associated with during my time at AWS, and it was definitely harder than it appears at first blush.

I would often tell people: imagine if you could somehow make the Hoover Dam disappear, and think about what would happen downstream. That’s exactly what would go on here, because you can put an essentially unlimited number of elements into an SQS queue before you go hook it up to a function.

So in order to give customers — and ourselves, frankly — some control over that, we had to go invent an entire new feature, concurrency controls per function in Lambda, which also meant we had to have metrics and internal infrastructure for that.

That required us to change some of our architecture to produce what we call the high-speed counting service, and so on and so forth. There’s a whole lot of iceberg below the waterline for the piece that pokes above the top.
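For context on what eventually shipped, wiring an SQS queue to a function today looks roughly like the sketch below, with the per-function concurrency control Tim describes acting as the dam gate. The function name and queue ARN are placeholders.

```python
# A sketch of the shipped SQS-to-Lambda integration plus the per-function
# concurrency control that made it safe. Names and ARNs are placeholders.
import boto3

lambda_client = boto3.client("lambda")

# Cap how many copies of the function may run at once, so an arbitrarily deep
# queue can't flood everything downstream.
lambda_client.put_function_concurrency(
    FunctionName="process-orders",     # hypothetical function
    ReservedConcurrentExecutions=50,
)

# Ask Lambda to poll the queue and invoke the function with batches of messages.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",  # placeholder
    FunctionName="process-orders",
    BatchSize=10,
)
```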

So we know this was a source of frustration, customers really wanted this feature, and we were so happy we could finally deliver it. I wish we could have done it faster, but I’ve always hoped I could tell a little bit of the story of that one and why it took us a while to get there.

Tim, I wish you all the best as you move forward with your career. It sounds like you’ll be involved in some awesome new stuff, but I hope you won’t forget about the serverless community either …

Definitely not. I’m continuing to work on the serverless design patterns book, and honestly I have a few blog posts that I’m anxious to get up. I’ve been turning over some new ideas in the back of my head, so I will definitely remain an active member contributing to and collaborating with folks in the serverless community!

Forrest Brazeal is an AWS Serverless Hero and a cloud architect at Trek10. He writes the ACG Serverless Superheroes series and draws the ‘FaaS and Furious’ cartoon series at A Cloud Guru. If you have a serverless story to tell, please let him know at @forrestbrazeal.
