Going Serverless for Event Driven Applications: Insights from Adobe I/O’s Sandeep Paliwal

Server-side applications have grown beyond the days of supporting just one type of client: the browser. Now there are mobile browsers, native mobile apps, and a variety of IoT devices, ranging from home automation to health monitoring. Development has also evolved, from monolithic applications to microservice architectures, and now to an even more basic unit: a function deployed on the server.

Sandeep Paliwal is a computer scientist working on the Adobe I/O Runtime, an environment that allows you to quickly deploy custom code to respond to events and access other Adobe services. The technology is built on Apache OpenWhisk, an open source project incubating at the Apache Software Foundation.

I had a chance to chat with Sandeep about why serverless is the way to go for event-driven applications.

A Serverless Future

“There’s a logical move towards a serverless environment, which I think is a very natural fit for event-driven applications,” said Paliwal.

“Your server-side applications are not just serving web pages anymore. They have become a lot more. Moving towards a more granular architecture on the server side makes it easier to cater to all these kinds of requirements.”

Good use cases, according to Paliwal, are essentially any applications that generate and process a lot of events, such as IoT devices or an analytics platform.
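To make the unit of deployment concrete, here is a minimal sketch of what such a function can look like on Apache OpenWhisk, the platform Adobe I/O Runtime is built on. OpenWhisk Python actions expose a main() function that receives the event payload as a dictionary; the device_id and temperature fields below are hypothetical, used purely for illustration.

```python
# Minimal OpenWhisk-style Python action: the platform calls main() once per
# invocation and passes the event payload as a dict of parameters.
# The device_id/temperature fields are hypothetical, for illustration only.

def main(args: dict) -> dict:
    device_id = args.get("device_id", "unknown")
    temperature = args.get("temperature")

    # Reject malformed events early; the returned dict becomes the action's result.
    if temperature is None:
        return {"error": "missing temperature reading", "device_id": device_id}

    # A tiny piece of business logic: flag readings above a threshold.
    return {
        "device_id": device_id,
        "temperature": temperature,
        "alert": temperature > 30.0,
    }
```

On OpenWhisk, an action like this is typically deployed with the wsk CLI and bound to a trigger through a rule, so the platform runs it once for every incoming event without any server to provision.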

While the term ‘serverless’ can be a little misleading, since there are still servers running somewhere, using a serverless platform means developers don’t need to manage or maintain those servers. Sandeep thinks this is the biggest advantage. By spending less time on maintenance, you can spend more time building your apps, get to market faster, and focus on your business.

Another important advantage is paying per execution: you are charged only when your function is invoked. This can reduce costs compared with running your own VMs, which may sit underused while the application is idle. With serverless, you also don’t have to worry about scaling the environment.

Sandeep highlighted another advantage of going serverless: the ability to write in different languages. “Typically, if you write your own application and deploy it to a server, you are stuck with one or two languages,” he explained. “With serverless, you can write your functions in any language and execute them. It makes it a lot more flexible.”

Avoiding Vendor Lock-In and Security Threats

While there are advantages, Sandeep also notes a few points to keep in mind when moving to a serverless platform.

“One of the important things when going serverless is you are basically going on the cloud to a vendor, so you need to make sure you don’t get into a vendor lock-in,” he cautioned. “Because sometimes you have to write code which is vendor specific. So one needs to be aware of that and manage their code.”

Sandeep also recommends breaking applications down into small, logical functions rather than writing them as one large function. The key is keeping your code flexible.
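One way to act on both pieces of advice, keeping functions small and avoiding lock-in, is to keep the business logic platform-agnostic and confine vendor-specific code to a thin entry point. The sketch below assumes OpenWhisk’s main(args) convention; process_order and its fields are hypothetical names used only to illustrate the split.

```python
# Hypothetical sketch: the core logic knows nothing about the serverless
# platform, and a thin wrapper adapts it to OpenWhisk's main(args) convention.
# Moving to a different vendor should mean rewriting only the wrapper.

def process_order(order_id: str, amount: float) -> dict:
    """Pure, platform-agnostic business logic (illustrative tax calculation)."""
    return {"order_id": order_id, "total_with_tax": round(amount * 1.18, 2)}


def main(args: dict) -> dict:
    """Platform-specific entry point: unpack the event, delegate, repack."""
    try:
        return process_order(args["order_id"], float(args["amount"]))
    except (KeyError, ValueError) as exc:
        return {"error": f"bad input: {exc}"}
```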

Another point to keep in mind is not to neglect security. While going serverless does reduce risks at the server level, there are still other threats to consider, such as managing permission policies, monitoring app dependencies, and thinking about what data you’re sharing with third-party services. Mark Boyd goes into more detail in his article, Security in Serverless: What Gets Better, What Gets Worse?

“What happens is people think that since serverless is on the cloud they don’t have to worry about security,” said Paliwal, adding that best practices need to be applied to ensure you reap the benefits, without exposing yourself to any security concerns.
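As a rough illustration of those practices inside a single function, the sketch below validates the incoming event and takes credentials from parameters bound to the action at deploy time rather than hardcoding them (OpenWhisk merges bound default parameters into the same args dictionary). The webhook_url, api_key, and payload fields are hypothetical.

```python
import json
import urllib.request

# Hypothetical sketch: validate the event payload before acting on it, take the
# API key from a parameter bound at deploy time (not from source code), and
# share only the fields the third-party service actually needs.

def main(args: dict) -> dict:
    api_key = args.get("api_key")          # bound as a default parameter
    webhook_url = args.get("webhook_url")  # hypothetical third-party endpoint
    if not api_key or not webhook_url:
        return {"error": "missing configuration"}

    event_type, event_id = args.get("event_type"), args.get("id")
    if event_type is None or event_id is None:
        return {"error": "malformed event"}

    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"event": event_type, "id": event_id}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status}
```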

For more insights from Sandeep Paliwal, check out the slides from his talk, Creating Event Driven Serverless Applications, originally given at Serverless Summit in Bangalore, and check out more of his work on GitHub.