CIOs are being bombarded with “education” about “serverless” cloud native computing, but is it right for you right now?
The idea of serverless computing is not new, but it is now gaining real buzzword steam. From an enterprise perspective, it currently seems most suitable for prototyping, particularly when infrastructure staffing is thin. The movement also advocates for microservices, which, at least in the healthcare enterprise market, are a last resort: the main reason to implement microservices is to help refactor a previously monolithic app, and the security profile of microservices is not ideal from a health system perspective.
According to Chris Aniszczyk, CTO and COO of the Cloud Native Computing Foundation (CNCF), the introduction of AWS Lambda in 2014 popularized the concept of serverless. AWS Lambda was followed by the announcements of IBM OpenWhisk on Bluemix, Google Cloud Functions, and Microsoft Azure Functions, along with the launch of a number of open-source serverless frameworks such as Fission and Fn.
However, today there is a lack of standardization and interoperability between cloud providers, which may lead to vendor lock-in, a key cloud buying criterion for enterprise CIOs. Quality documentation, best practices, and, more importantly, tools and utilities also do not yet exist.
The CNCF defines serverless computing as “the concept of building and running applications that do not require server management. It describes a deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment,” the foundation wrote in its whitepaper. The benefits include zero server ops and no compute costs when idle.
The product marketed as “serverless” does not remove the need for servers to host and run code, though. It does remove the tasks of server provisioning, maintenance, updates, scaling, and capacity planning.
“Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams. As a result, developers focus on writing their applications’ business logic. Operations engineers are able to elevate their focus to more business-critical tasks,” the CNCF wrote.
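The deployment model the CNCF describes can be pictured as a stateless handler that the platform invokes per event. Below is a minimal sketch in Go; the `Event`, `Response`, and `invoke` names are our own simplification for illustration, not any vendor's actual SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// Event is a simplified stand-in for the payload a serverless
// platform passes to a single function invocation.
type Event struct {
	Name string
}

// Response is what the function hands back to the platform.
type Response struct {
	Body string
}

// handler contains only business logic: no server setup, no port
// binding, no capacity planning. It should be stateless, because
// the platform may run it on a fresh instance every time.
func handler(e Event) Response {
	return Response{Body: "Hello, " + strings.ToUpper(e.Name)}
}

// invoke stands in for the platform: it receives an event, calls
// the function, and returns the result. Scaling, billing, and
// placement would all happen here, invisible to the developer.
func invoke(e Event) Response {
	return handler(e)
}

func main() {
	fmt.Println(invoke(Event{Name: "world"}).Body) // Hello, WORLD
}
```

The point of the sketch is the division of labor: everything outside `handler` belongs to the platform, which is exactly what the CNCF means by tasks being “completely abstracted away.”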
Notes on Security, Risk, and Compliance:
1. The risk is still yours
Your workload could end up sharing memory space and a virtual environment with a commercial app. As CIO, you are still responsible for HIPAA and security, yet you don’t know where the cloud server infrastructure happens to be at that ephemeral moment. Do you have the necessary level of control, do you understand your risks, and are you able to mitigate them? Is it even possible to comply with HIPAA? Even if someone signs a BAA, you still have to verify that they comply with HIPAA.
2. What about flow-down terms from insurers and the controls that they require?
How do you know what antivirus is running?
How are you doing role-based access control to the server and underlying memory?
3. Try running an application subject to GDPR
As CIO, you’re still responsible for knowing who has access to your data, but in the serverless paradigm, you don’t really know who has access.
In our view, the top use cases for serverless are workloads that are asynchronous, infrequent, or in sporadic demand; that show unpredictable variance in scaling requirements; and that are stateless, ephemeral, and highly dynamic. Aniszczyk explained that serverless is not a good option for users who care about startup time and performance, or for customers trying to avoid being locked into a specific cloud platform provider.
AWS Lambda was the first FaaS offering by a large public cloud vendor, followed by Google Cloud Functions, Microsoft Azure Functions, and IBM/Apache’s OpenWhisk (open source) in 2016, and Oracle Cloud Fn (open source) in 2017.
Note that the price of serverless-based “services” does NOT include the managed services you would likely need to deploy alongside them, which carry their own markups; for example, public cloud RDS is routinely marked up 73% to 84% on top of base hosting.
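To make the markup range concrete, here is a quick arithmetic sketch in Go. The 73%–84% figures come from the paragraph above; the $500/month base cost is a hypothetical number of ours, not a quoted price:

```go
package main

import "fmt"

// markedUp returns the effective cost after applying a
// percentage markup to a base hosting cost.
func markedUp(base, markupPct float64) float64 {
	return base * (1 + markupPct/100)
}

func main() {
	base := 500.0 // hypothetical monthly base hosting cost, USD

	// The article's quoted markup range for public cloud RDS.
	low := markedUp(base, 73)
	high := markedUp(base, 84)

	fmt.Printf("base $%.0f -> managed $%.0f to $%.0f/month\n", base, low, high)
	// prints: base $500 -> managed $865 to $920/month
}
```

In other words, at these markup rates a managed database can cost nearly double its underlying hosting, which belongs in any total-cost comparison of serverless architectures.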
Disclaimer: our language and framework of choice, Erlang, isn’t supported in an enterprise-grade way in the serverless paradigm. Erlang is designed for distributed, fault-tolerant systems, so it is easier to scale than a language like Go, one example of a language that is currently supported in the serverless paradigm.
We really wanted to leverage concurrency and the full power of our machines. Erlang’s schedulers have been battle-tested on 64 cores and more, which supports the development of massively distributed systems: the plumbing for connecting independent actors is already implemented and battle-tested for reliability.
Erlang also provides supervision trees, which are useful for building fault-tolerant software. Its performance model is based on “shared nothing”: no shared memory, no locks, no remote procedure calls, and, most importantly, no shared state. Shared nothing.
Further, some language compilers lack the features needed to ensure threads can be paused reliably and quickly, so whether pause times are actually low depends heavily on what kind of code you’re running. Erlang processes don’t share memory; each one has its own heap, and they can be garbage-collected independently.
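Erlang itself is not shown here, but the shared-nothing actor pattern described above can be approximated in Go with goroutines and channels: each worker owns its state, communicates only by messages, and a supervisor restarts it when it dies. This is our own illustrative sketch, far simpler than real OTP supervision trees:

```go
package main

import (
	"fmt"
	"sync"
)

// msg is the only way to reach a worker: a value to add and a
// channel for the reply. No memory is shared between actors.
type msg struct {
	n     int
	reply chan int
	crash bool
}

// worker owns its running total; nothing else can touch it.
func worker(inbox <-chan msg, done chan<- struct{}) {
	defer close(done) // signal the supervisor on exit
	total := 0
	for m := range inbox {
		if m.crash {
			return // simulate a fault: this actor dies
		}
		total += m.n
		m.reply <- total
	}
}

// supervise restarts the worker whenever it exits, in the
// Erlang "let it crash" style: the worker's state is lost,
// but the service stays available.
func supervise(inbox chan msg, restarts int, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i <= restarts; i++ {
		done := make(chan struct{})
		go worker(inbox, done)
		<-done // block until this incarnation dies
	}
}

// runDemo sends work, crashes the worker, then sends more work
// to the restarted one. The second reply is 5, not 7, because
// the fresh worker starts with a fresh heap and a zeroed total.
func runDemo() []int {
	inbox := make(chan msg)
	var wg sync.WaitGroup
	wg.Add(1)
	go supervise(inbox, 1, &wg)

	reply := make(chan int)
	var out []int
	inbox <- msg{n: 2, reply: reply}
	out = append(out, <-reply)
	inbox <- msg{crash: true} // kill the worker
	inbox <- msg{n: 5, reply: reply}
	out = append(out, <-reply)
	close(inbox)
	wg.Wait()
	return out
}

func main() {
	fmt.Println(runDemo()) // [2 5]
}
```

The Go version makes the trade-off visible: message passing and restart-on-failure are easy to imitate, but per-process heaps, independent garbage collection, and deep supervision hierarchies are what Erlang provides out of the box.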
If you care that much about memory performance characteristics, you really should be using a language that provides control over them. If you care about mobile performance, you should make memory management a killer feature, which is what we aim to do at Medigram.
We hope this helps you discern where in your portfolio of applications serverless could fit as a tool. One of our favorite sayings at Medigram is: let’s pick the right tools for the right jobs!
By: Sherri Douville, CEO & Board Member, and Eric Svetcov, CTO/CSO, at Medigram. https://www.medigram.com/