A Conversation with Cloud Native Thought Leaders
I recently had the pleasure of hosting Redpoint’s second annual Cloud Native Summit in partnership with the CNCF and our friends at Amplify Partners. During the event I moderated a panel with technical leaders including Edith Harbaugh (CEO of LaunchDarkly), Preeti Somal (VP of Engineering at HashiCorp), Matt Klein (creator of Envoy), and Austen Collins (CEO of Serverless Inc).
Themes during the discussion:
1) Serverless is early and growing in popularity
2) The industry is transitioning to workflows that tie together solutions that solve specific challenges
3) Feature flags and service meshes are complementary technologies along a continuum
4) DevOps positively impacts business outcomes, but more work needs to be done around cultural best practices and bridging infrastructure and product teams
I’m excited to share an edited transcript for those who were not able to attend the event.
Astasia: Austen, you’ve been working in the Serverless ecosystem for a while now, almost since the beginning. What’s real? What’s hype? What’s actually going on?
Austen: That’s a fantastic question. I remember back in 2015 when I saw Amazon come out with Lambda, I thought, “Wow, as a developer, this is the compute service I’ve always wanted. Auto-scaling, rapid execution, event-driven microservices. Sounds like a dream.” I went around telling people in my network that this was going to be the future of compute on the cloud. This is everything. This is a convergence of all the great ideas of our time. I built this project called the Serverless Framework to help other people think this way and build applications on this new compute service. I remember posting it to Hacker News, and my first comment was, ‘This is a horrible idea.’
Astasia: That’s what they post for everything.
Austen: Yes. That’s certainly the default, like, ‘Thanks for posting to Hacker News. Welcome to the community. Your post is horrible.’
Now I’m looking at this and it’s such a trip. It’s just wild seeing all this growth. Look, we work with a ton of companies that have built amazing stuff using a Serverless architecture. Is it hype to them? No. Is it hype to a lot of people who are really skeptical about all this stuff? It could be. As far as what’s real and what’s not, some things I’ve seen from the vantage point of the Serverless Framework are: when you reduce overhead, you liberate productivity and you liberate innovation, and that is really, really cool. We see small teams provisioning hundreds of these Lambda functions.
In fact, Nordstrom has a pretty awesome Serverless team over there. They’ve been one of the biggest contributors to our framework. I know that they provision well over 100 functions, and they’re using them for all types of things. It’s not just the amount of productivity they’ve been able to reach, but the use cases that they have. Serverless isn’t perfect for everything, of course. But when you have a compute service with all those qualities, people just want to stick a whole bunch of stuff in it, because they want those qualities and they want to build systems that have lower overhead.
They’ve been building a whole bunch of stuff. They have projects where, when you deploy a Lambda function, five other Lambda functions will interrogate it with different tests, all types of cool use cases like that. That feels very real, liberating that productivity and that innovation.
Up next, enterprise usage. We’re seeing it being adopted by companies like Nordstrom and Coke. FINRA processes 75 billion events a day: stock trades, equity trades, and option trades. They’re using a lot of Lambda for that now. I think that’s pretty awesome, 75 billion events. I’m not sure what percentage of that processing runs on Lambda, but when I chatted with them, it was significant, and that feels pretty real to me.
Also, about 20%-25% of our Serverless Framework users have never used a public cloud provider before. This is their first time using this stuff. Serverless minimizes the operational burden, and the surface area of concerns is much smaller. It’s almost like we see this new diet version of the cloud, a light version, the more accessible flavor. What we see a lot of is people adopting the cloud via Serverless architectures because they’re so much more accessible. For a lot of enterprises that want to do cloud development, one of the big challenges is just getting their team up to speed on these new technologies, and Serverless makes that really easy.
I’m also a big fan of seeing people who aren’t engineers, who aren’t traditional developers, become developers, especially through serverless architectures. There’s a whole bunch of people out there who have great ideas, but they’re just intimidated by software as we’ve been doing it.
All that stuff feels very real to me. The Serverless open source platforms on Kubernetes bringing portability to functions, I think that’s great. Bringing these options, this experience, this architectural pattern to people who can’t use public cloud, I think that that’s very real and that’s very awesome.
What’s not real? Portability is going to be a challenge in the Serverless world. We’re thinking about it the same way we have coming from the container era. But with Serverless, especially the architectural pattern, it’s not just about the compute, but it’s about the other infrastructure that you’re using with that compute to make some type of outcome whether it’s the database or some storage system. It needs to also have Serverless qualities, otherwise it’s not going to scale, it’s not going to fit nicely in that model.
The event-driven plumbing is also fairly complicated, and the idea that you’re going to be able to move this stuff really easily is a big challenge. This is a huge opportunity for the CNCF. We have the Serverless working group; I attend every single one of those calls, and we’re working on standardizing a lot of these concepts so we can chip away at this problem. We can make a lot of progress here. That’s definitely a challenge.
Migrations into Serverless architectures are also a bit of a challenge. It’s a lot for companies to take on at once. First off, it’s microservices. You’re deploying all your logic in these small independent units of deployment, and that alone is a big transition for a lot of companies. Then moving from microservices to writing things as functions, thinking more about event-driven computing, all that stuff, that’s a bit of a challenge.
What’s not real is that this technology is going to take over everything. This is not a silver bullet. It is not going to win everything. That’s never real. That’s what I see.
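For readers new to the model Austen describes, the unit of deployment is a single function that the platform invokes on demand. A minimal Lambda-style handler in Python might look like this (an illustrative sketch only; the handler name and event shape are assumptions, not code from any panelist):

```python
# A minimal AWS Lambda-style handler: the platform calls handler(event, context)
# in response to an event (an HTTP request, a queue message, a file upload)
# and scales instances up and down automatically.
import json

def handler(event, context):
    # 'event' carries the trigger payload; its exact shape depends on the event source.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally here for illustration; in production the cloud provider calls it.
print(handler({"name": "Nordstrom"}, None))
```

Provisioning hundreds of such functions, as the teams Austen mentions do, becomes mostly a matter of packaging and configuration rather than server management.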
Astasia: Awesome. Thank you. Preeti, at Hashi, you guys meet with a lot of enterprise customers. Are you seeing them use any Serverless functions today?
Preeti: I think I’m going to be the contrarian view on this. A little bit, but honestly, as we are out there talking with customers, especially outside the Valley, they’re still trying to figure out how to use the cloud, and how to think about applications in a way where it’s not just a lift and shift. The classic story we hear over and over again is: yes, there was a mandate to use public cloud, and yes, we moved this application, but then we got this massive bill. The CFO stepped in and said, wait, just a lift and shift wasn’t the right approach. What we’re really seeing is that enterprises certainly want that agility and that operability. And of course, you have people who manage the service so that developers don’t have to be exposed to it. But they’re still trying to figure out how to get there. Increasingly, it’s about figuring out which projects are the right ones to start with.
Just take security, for instance. A lot of times, people don’t know which services are talking to which services, and somebody left the company, and this is the person that was pushing the firewall rules or the network requirements. How do you unwind all of this? There’s still a ton of work to do.
Astasia: It sounds like here in the Bay Area, we’re more on the bleeding edge and maybe we’re seeing more than some of the enterprise customers that you’re dealing with today at Hashi.
Astasia: Matt, as someone at one of these thought leader bleeding edge companies, Lyft, with a strong platform infrastructure team, are you guys doing Serverless? What’s the use case if you are?
Astasia: You’re not? Dig into that. That’s great.
Matt: Let’s see. We have a couple of non-critical jobs that we run on Lambda, but I would say that any Serverless we do is a rounding error on our bill. I’m a firm believer that if you look out 5, 10, 15 years from now, we’re going to move into the Serverless world. But what I typically tell people today, just to echo what was said before, is that we can barely run containers today. If you look at networking, at observability, at how people deploy, all the tooling, it’s absolutely horrendous. If you look at what is required, in my opinion, to do a real-time system using Serverless, it’s probably an order of magnitude harder problem. You’re not just dealing with more rapid auto-scaling. You’re dealing with a topology that’s always changing.
I think that we will get there, but we are years and years out, and I think we are much better served by fixing what we have today. I think batch will probably move towards Serverless first because it isn’t latency sensitive. I think we’ll see that move. But I think Serverless is a buzzword distraction today. We should move towards it, but we have a lot of problems that we need to solve first.
Astasia: Edith, it looks like you have something to say.
Edith: It’s funny. I wrote an article two years ago saying Serverless is the new electricity. What the Pinterest speaker said was awesome: developers just really care about their own experience. The underpinnings of how this gets out there don’t matter to them; they just want to ship. I still believe in the promise of Serverless. But I also agree with you that there are a lot of steps to get there first.
Edith: I think it’s taken longer than we expected just because there’s still a lot of people that are just moving out of their own data centers.
Matt: Well, it’s that, but it’s also that just like the way that we run infrastructure today, it still is just too complicated. People are still doing YAML and all of these things. We have a very long way to go, I think.
Edith: I do believe in the long term. I think a developer just wants stuff to work.
Astasia: If we were going to ask the panel for a raise of hands, do you think Serverless and functions are going to be as big as containers in the next three years?
Austen: It feels like a setup.
This question of Serverless versus containers is kind of a strange one to me, actually. It has never made sense to me. I feel like they’re on this kind of awkward collision course. I wish that Lambda used containers. It would solve so many problems in the development phase, the build phase, all that stuff. We’re breaking our backs trying to emulate the Lambda experience without containers. It’s a pain. To think of these things as separate? No. I think they’re heading in the same direction, and the developers have spoken. It’s clear. We had this kind of cultural awakening a couple of years ago, and they said, “This is what we want.” You’re right, it’s going to take a while to get there.
The fact that it has gotten this far is telling. If you really look at the options you have on Amazon, for example, to build a Serverless architecture, there are very few. And yet people are building so many use cases with these few options. It’s like, I guess we’ll try to use DynamoDB for that, even though it’s the worst database for it; they’re still trying to make it work because they want this stuff so badly. This is how it always starts: people use it for one small task or something like that. This is exactly how we see adoption happen. A developer brings it into their org and says, “I’m just going to automate this one thing that I don’t want to do anymore with a function.” As soon as they do that and feel that magic moment, their mind expands and they think, “What else can I do with this stuff?”
But you’re right, it’s early. We’ve got a long way to go here. At the end of the day, though, I think developers have spoken: this is what we want. I do think containers could help a lot of that experience. The idea that these things are antagonistic, I don’t quite understand. I’m looking forward to them just getting along and getting on with it.
Astasia: Well, I think it’s clear that there is appetite in the market. All the different open source solutions on the landscape are an example of that. I really enjoyed the community survey that Serverless Inc. put out last week. It showed that businesses are most interested in operationalizing serverless, and that the main pain points were around debugging, testing, and observability. It sounds like there is momentum, and we’re starting to see that in the new pain points coming to the fore now.
Switching gears a little bit, one thing that’s been interesting to me is the co-evolution of HashiCorp and the Kubernetes community, as Kubernetes is championed by the CNCF. Preeti, it’d be great to have you speak about the work HashiCorp is doing with its solutions suite, and reconcile that with the parallel work around Kubernetes.
Preeti: Yes, sure. I think it is definitely worth reiterating that HashiCorp’s roots are in open source. The co-founders are practitioners who have been part of the open source community, and for me, coming in and being here for six months, the focus on solving customers’ problems from a point of view of workflows, not technology, resonated.
If you look at our website and internally, what we live and breathe is creating tools that solve the workflows customers need, and we deliver in a multi-cloud, hybrid world. We know every single enterprise customer is multi-cloud, whether you define cloud as something somebody else is running, or something you are running as a Kubernetes cluster, or physical, or whatever. For us, it’s about: as customers move to this cloud operating model, this cloud-native model, what are the challenges? How do you think about security in a dynamic world? You can no longer think of security as a perimeter that nobody enters, with all your machines secure inside it. You have to think about your applications moving and security being dynamic. How do you think about provisioning, and being able to apply policy and governance as your teams are provisioning?
Again, for us, it’s about tools that help solve those problems and work across the variety of technologies that exist within a customer’s environment. More concretely with Kubernetes, for instance, we have a Terraform provider that does provisioning for Kubernetes. We’re doing work so that Vault and Consul can be used with Kubernetes more seamlessly. It’s another proof point of listening to the community, understanding what they need, and then being able to solve those problems.
Astasia: Great. Lots of work around workflows. Within the cloud-native community, we’re also hearing a lot about GitOps and Git-centric workflows. Matt, are there changes to the SDLC or processes that businesses need to go through to adopt cloud-native?
Matt: Yes, for sure. That’s a tough one. We’re moving towards this whole idea of doing “DevOps”. I say that in quotes because it means different things to different people. What it mostly means is that we have a lot of developers who aren’t as experienced at operating systems in production, who are now being told they have to deploy software, monitor software, and do that type of stuff. It’s been interesting to see the whole Git-centric approach to software; for those who don’t know, that’s just the idea that you keep your configs in Git so they’re version controlled and all that.
To be perfectly honest, using Git in that way is a hack, because we don’t have better systems. By better systems, I mean, I’m a firm believer that we need custom-built user experiences for people doing deploys, monitoring, and all the tasks DevOps entails. We don’t have those today because, historically in infrastructure, we have not hired or invested in the design skills to actually develop those tools. People are stuck using Git and GitHub because it’s the second best thing we have: it has version control, it has a UI, I can go do code review.
Today, people’s experience with editing config files, whether YAML or JSON or something else, is really quite bad. It’s quite error-prone. They’re just using GitHub because it has a code review UI, it has version control, and you can revert things; that’s the only reason it’s being used. People are basically hacking around it. They’re using Slack bots or they’re using GitHub. These are all hacks. In the future, we will hopefully have better custom UIs that people can use for their actual DevOps workflows.
Astasia: In line with some of the advancements that we’ve seen in the SDLC, one component of that is feature flagging. I’d love Edith to talk about feature flags, the motivation behind why people use them, and start to dig into whether feature flags are the same as service meshes.
Edith: Yes, well, there are a bunch of questions there. Let me start at the beginning. Feature flags are an industry best practice where you turn sections of code on and off after deployment. Martin Fowler really popularized this. You can push out code and then selectively turn it on for people. The real gain is that it opens up this whole new universe: you can ship something, and if it’s doing poorly, you can just turn it off. I’ve talked to people all over the world about how fast their deployments are. I’ve heard everything from an agonizing 14 hours, which is awful, and that’s just the number they’ll admit to me; I know it’s actually longer. Even a super fast deploy takes a couple of minutes. That’s a lifetime if something’s going wrong in the field.
People love feature flags because they can say, this isn’t working, let’s turn it off. The alternative that I lived through without feature flags was: this isn’t working, let me figure out under very tight time pressure how to fix it, test it, deploy it, and then find out that I actually made things far worse, which has happened to me many, many times under stress.
Then there’s also this whole other world where, once you have this freedom to selectively deploy something, you can do a lot of really interesting software development lifecycle things. We’re saying: okay, developer, you built it; product manager, you get to pick who gets it. That takes away, I’d say, 90% of fights in software development, not 100%, but 90%. Feature flagging lets developers focus on building. If something’s not working, give somebody else the control to turn it off. Then also let product and marketing and sales really control the access levels.
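The workflow Edith describes, shipping code dark and then flipping it on or off without redeploying, reduces to a conditional whose state lives outside the release. A hand-rolled, in-memory sketch (purely illustrative; LaunchDarkly and similar services replace this with a managed flag store plus per-user targeting rules and a dashboard):

```python
# Minimal in-memory feature flag store: flags can be flipped at runtime,
# after deployment, without shipping new code.
class FlagStore:
    def __init__(self):
        self._flags = {}

    def set_flag(self, name, enabled):
        # In a real product this write comes from a dashboard, not from code.
        self._flags[name] = enabled

    def is_enabled(self, name, default=False):
        # Unknown flags fall back to a safe default.
        return self._flags.get(name, default)

flags = FlagStore()
flags.set_flag("new-checkout", True)

def checkout():
    # New code path is wrapped in a flag check.
    if flags.is_enabled("new-checkout"):
        return "new checkout flow"
    return "old checkout flow"

print(checkout())                       # the new flow is live
flags.set_flag("new-checkout", False)   # "this isn't working, turn it off"
print(checkout())                       # instantly back to the old flow, no redeploy
```

The point of the pattern is the second flip: rolling back takes one state change rather than a fix-test-deploy cycle under pressure.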
We don’t see this as the same as a service mesh at all. We see a service mesh as a really complementary thing. A service mesh is much more about discovery and making sure that stuff is deployed correctly.
Matt: I think that’s right. The only thing I would add is that you can implement feature flags in one of two ways: you can put them in your application code, or you can have the service mesh actually act on them. I think there are cases for both. We had actually talked to you all at some point about having Envoy talk to LaunchDarkly so that we could do that directly. I definitely see that happening more in the future. Again, it comes back to what I was saying previously: people are not using the right UI for what they’re trying to do. What we see today is that people use Envoy and the service mesh for feature flagging, but they’re doing it by committing something into a config file or running some CLI or something.
No, I would like to go to the LaunchDarkly UI, run an experiment, move things around, and be able to revert it, and we can’t do that today. We just don’t have the right tool integrations that we actually need, I think.
Edith: Well, I’d love to work with you on that. I completely agree that they serve different steps of a continuum. You want to make sure that it’s getting out to the right boxes, at the right times, in the right Kubernetes cluster. Then you have business users who want to enable it, and the UIs are different.
Astasia: Preeti, HashiCorp a few weeks ago announced Consul Connect, which is kind of their take on the service mesh. What are some key decision criteria practitioners think about when picking a service mesh?
Preeti: Thank you. What we launched a couple of weeks ago was Consul Connect, which is essentially a set of features within the Consul product. The product is Consul, and this is another use case. Essentially, the problem we’re trying to solve is: how do you enable microservices to connect securely with each other, and manage those intentions in a simple-to-use way? Today, and I lived this personally when I was at Yahoo, a lot of the policies are encoded in the network. When you want to change these things, you have to go talk to your network admin. Where do all the delays in deployment happen? It’s not that you can’t get the code on the box. It’s that you can’t get traffic securely to the code on the box, or get that code talking to the other downstream services it needs to be functional.
What Consul Connect is trying to do is solve this problem of communicating securely between services, and delegate the responsibility of who can talk to whom to someone at the application level. We call these rules intentions. We can layer on top of whatever network security you might have in place, and this is a key factor when you’re looking at service meshes and deciding on your criteria. One key aspect of a lot of the HashiCorp tooling is that we don’t require operators to rip and replace. Our focus is on how we give you immediate value in the environment you have running today, and then help you move to a model where you’re running more effectively. Consul Connect will, out of the box, be able to talk with legacy applications as well. It’s out in open source. I definitely encourage all of you to take a look at it; it’s something we’re really excited about, and we’re looking forward to getting more feedback.
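The intentions model Preeti describes can be pictured as allow/deny rules keyed by source and destination service, rather than by IP addresses in a firewall. A toy sketch of the concept (an illustration only, not Consul’s actual API; the class and method names are invented):

```python
# Toy model of service-to-service "intentions": allow/deny rules expressed
# at the application level, decoupled from network firewall rules.
class Intentions:
    def __init__(self, default_allow=False):
        self._rules = {}            # (source, destination) -> allowed?
        self._default = default_allow

    def allow(self, source, destination):
        self._rules[(source, destination)] = True

    def deny(self, source, destination):
        self._rules[(source, destination)] = False

    def is_allowed(self, source, destination):
        # Unlisted pairs fall back to the default (deny, in this sketch).
        return self._rules.get((source, destination), self._default)

intentions = Intentions(default_allow=False)
intentions.allow("web", "db")      # the web tier may reach the database
intentions.deny("billing", "db")   # an explicit deny, e.g. during an incident

print(intentions.is_allowed("web", "db"))       # True
print(intentions.is_allowed("billing", "db"))   # False
print(intentions.is_allowed("cache", "db"))     # False (default deny)
```

Because the rules name services rather than network addresses, they keep working as workloads move, which is the dynamic-security point Preeti makes above.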
Astasia: Something that we’ve been thinking through is the co-evolution of serverless and service mesh. Austen, this is to you, is the work around functions actually compatible with what we’re seeing in service meshes? Can you run them together?
Austen: Is the work around service meshes compatible with functions on a public cloud provider? Yes, absolutely. Service discovery is pretty easy, I’d say, with a Lambda function; you can put a simple endpoint in front of it. With respect to the service mesh platforms being built on Kubernetes, I believe some of them are designed to have Envoy integrations out of the box. I’m not sure if you’ve seen work on that.
Group: Yes, there are a couple of solutions going after this right now, like Gloo from Solo.io.
Astasia: Have you seen anyone running your framework on Lambda with a service mesh in front of it, or anything like that?
Austen: They’re not doing a lot of that right now, but we see it coming. Especially given the maturity of some of the new tools in this space. We’re hearing more and more customer conversations talk about this.
Astasia: Well, that’s pretty exciting. Great.
Preeti: If I may just add: the Solo.io folks behind Gloo have actually done an integration with Envoy and Consul Connect too.
Astasia: Maybe check out the Gloo project if you’re interested in doing that. Switching gears: one thing I don’t think the DevOps and cloud-native ecosystem has focused on enough is the impact on business and culture. Edith, how does DevOps culture help the broader business be successful?
Edith: It’s an interesting question, because in my opinion it started with agile, which is all about moving faster. Then agile permeated back: if you’re shipping more often, you need processes in place, like DevOps, to help you do that. The real business impact I have seen for our customers is huge. We’ve seen customers move from deploying once a year to deploying once a month, which is actually a huge difference when you talk about how much value they can deliver and how it shifts the way they think about features; it just gives them more at-bats. I go to a lot of DevOps conferences, and everybody has a different idea about what they mean by DevOps. I think the business drive is just: how can we reduce risk? How can we move faster? How do we build a culture around that?
I think that’s much more than tools; it’s about a mindset. I visited a customer this week, and they said speed is a habit. Not speed as an emergency death march; I remember projects like that, and I didn’t like them. Just, how can we get to this culture where we ship all the time and figure out what works? Not everything is going to work, but we’re going to ship something else very quickly after that that will work. That’s much bigger than any tool; that’s more of a mindset.
Astasia: Interesting. Matt, being on one of these platform teams, are there any organizational changes that need to occur for DevOps or cloud-native?
Matt: In fact, I just wrote a blog post on this.
Astasia: Can you give us the cliff notes?
Matt: Sure. This is very top of mind for me right now. First, DevOps just means different things to different people. For companies that used to deploy once per year and are learning some agile techniques, great, let’s do DevOps. For hypergrowth companies like Lyft and companies in that space, the idea of DevOps is frankly pretty broken right now. It’s broken in the sense that companies like Lyft are hiring so many product engineers without any real plan for how to educate them on doing DevOps. We just expect people to know, and the tooling is still very immature. Really, the crux of the post is that because a lot of our infrastructure tooling today is still quite primitive, I believe that at these hypergrowth companies you need some team, call them whatever you want, production engineering, SRE, or something else. You need some bridge between the infrastructure team and the product team, and that’s kind of the only way to make sure these two groups of people don’t end up just hating each other.
I think that’s what ends up happening in a lot of these companies, and it leads to a lot of burnout. To the original question, from a people perspective, especially at these hypergrowth companies, really consider the human costs: how to educate people, how to do documentation. These tend to be afterthoughts, and they’re really important; people don’t think about them enough.
Astasia: Sounds like there’s still a lot of work that can be done from a team and culture perspective. We touched on a few trends today, including serverless, service meshes and some of the workflow tooling that needs to go into place. Just as a parting question to each of you, is there any one big trend that the audience should be looking for in the upcoming year? Preeti, do you want to kick it off?
Preeti: What we’re seeing is really what’s playing out in enterprises. It’s not super trendy from the point of being fashionable, but I think for us what we’re seeing is multi-cloud is certainly real. Time and again, we’ll be talking with customers and they do not want to put all their workloads in one cloud. They definitely want to go multi-cloud.
How do we make sure we can enable that developer agility and productivity, with the appropriate security and cost, in a multi-cloud environment? DevOps, call it whatever, but it’s really about how this plays out in terms of real adoption for enterprise customers this year.
Astasia: Multi-cloud. Edith?
Edith: It sounds like a cliche, but far more people use software than I think anybody, including me, ever realized. About 10 years ago, there was this thought that software was a cost center: let’s make software as cheaply as possible, let’s ship once a year, that’s fine.
The high-growth startups are scaring the old-line companies; everybody wants to be Warby Parker, nobody wants to be LensCrafters. There’s just this absolute tidal wave of we-need-to-move-faster.
Matt: I don’t think there’s any one thing over the next year. The general trend though that we’re seeing is that anytime that people are working on infrastructure for most companies, it’s basically overhead. It’s basically useless. Companies don’t care. They want people writing business logic. I think that what we’re really seeing now is a trend where the fewer people that companies can employ to do infrastructure, those solutions will end up becoming popular.
Those are better public cloud solutions that offer better container services or better Serverless platforms, or startups that can offer a SaaS service that takes over the work three people were doing at a company so they don’t have to do it anymore. There’s no one area, but it’s all the things I’ve been talking about: we need to make things easier to use. More UI, more UX, less YAML, less GitHub, less Slack. People who do that will sell product. Whether that’s a startup or a cloud provider is a separate conversation, but that’s really the trend: just make things simpler to use.
Austen: Great question. I might reiterate what all three of these panelists said. Number one, multi-cloud. Absolutely. We see this a lot, and it’s not for traditional reasons like optimizing cost or failover scenarios. In the serverless community especially, we see vendor choice. We see Google has that new thing over there; can we just stick a function over there and bring that into our architecture somehow? We see big public cloud providers bending over backwards, coming out with more and more serverless options.
More and more managed services at higher levels of abstraction that really focus on producing outcomes faster, without having to think about the underlying infrastructure. We see more and more demand for vendor choice, for ways to just take advantage of the tool that best solves the problem rather than thinking within the limitations of a specific platform. That’s on the rise, I’d say.
With respect to moving faster, absolutely. We’re still trying to solve all these ancient problems of getting stuff out to market and finding product-market fit. All this stuff is so hard; that is the thing I think businesses especially should be focused on, first and foremost. It is such a challenge.
Then we have all these new problems coming in as a result of technology invading our human reality in every regard. With IoT devices everywhere, AI systems, autonomous systems integrating into our lives, how are companies going to build integrations with these systems and have public-facing assets or presences on all of them? Anything that helps them move faster and increase innovation is absolutely a must. I just don’t see how companies are going to keep up with all this otherwise. It’s constant change out there. Then lastly, making these things simpler. Absolutely. And building the safety right into that.
Developers should not have to think about infrastructure, and they shouldn’t have to think about organizational policies and general compliance. This should be baked in. The guardrails should be in there, and they should just focus on business logic, and then see some type of error when they’re doing it wrong. I also love the idea of making these tools easier, because we see this all the time in the serverless community: people who aren’t traditionally developers are able to take this stuff and build all types of cool use cases. I think that’s a powerful thing, especially for companies who want to do more but are having a hard time finding talented people.
As these tools get more accessible and software development is democratized, I think we’re going to see big changes there, especially when you think about the creative class in general. I think there are a lot of people out there who would be great software developers but are just a bit too intimidated today. That’s another big thing to look out for.
Astasia: I like that. Make it multi-cloud, make it fast, make it easy. If only it were that easy to build solutions that support these principles.