The State of OpenWhisk

Serverless computing frees developers from infrastructure management. Instead, developers focus on business value and on managing the business-critical asset: the code. Infrastructure security, scalability, maintenance, availability, monitoring, and the like are handled by the cloud provider.

Functions-as-a-service, or simply functions, is one of the building blocks of serverless computing: a developer can deploy a REST API by deploying a single function to the cloud. The function is reactive, running automatically in response to events. Moreover, the number of function instances scales automatically, whether handling one event or a thousand.
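To make the model concrete, here is what such a function looks like in OpenWhisk's Python runtime: the platform invokes `main()` with a dictionary of event parameters and expects a JSON-serializable dictionary back. The greeting logic and parameter names are illustrative, not from any particular deployment.

```python
# A minimal OpenWhisk-style Python action. The platform calls main()
# with the event's parameters as a dict and expects a dict result.
def main(args):
    name = args.get("name", "world")
    return {"greeting": "Hello, %s!" % name}

if __name__ == "__main__":
    # Simulate an invocation locally with a sample event payload.
    print(main({"name": "serverless"}))
```

Deployed to the platform, this same function would be run on demand, in its own container, once per event.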

Functions, and more generally serverless computing, are disruptive technologies. Today, companies large and small are investing and innovating rapidly in this space. For those of us already immersed in it, we are at the fast-moving frontier of a new model of computing.

Apache OpenWhisk is an open-source functions platform. It is noted [1,2,3] for its wide support of language runtimes and for an event-integration model that enables a rich ecosystem of event sources, which are essential for building valuable cloud applications.


Whisk. The project was born out of IBM Research. It started in February of 2015 with a small team of researchers. It was code-named whisk, as in "move nimbly and quickly," to reflect the developer agility afforded by serverless computing and how functions are whipped together to run cloud-native applications. The "open" prefix was added when the code was open sourced a year later on GitHub, nearly a year to the day after the first commit. At the same time, IBM announced the availability of its functions offering on the IBM Cloud. In December of 2016, IBM Cloud Functions became generally available and OpenWhisk joined the Apache Software Foundation Incubator.

Stats. As of this post, the project on GitHub has 115 contributors, more than 550 forks, and just shy of 3,000 stars. The effort is backed by partners at Adobe, IBM, and Red Hat. Many significant contributions have come via on-prem deployments worldwide. The OpenWhisk Slack community has over 770 participants and is a good source of support and information for developers interested in this space. The project topped Hacker News in February with many insightful comments. It powers cloud offerings from IBM and Adobe.

Kubernetes. The full OpenWhisk stack may be deployed in several ways. For local development, the project supports quick start options via Docker Compose (60-second quick start), Docker for Mac, and Vagrant. It may also be deployed on top of virtual machines, DC/OS (Mesos), and of course, Kubernetes. An upcoming talk at KubeCon in May will cover some of the architectural changes we're working on to take advantage of the resource management in Kubernetes (and, in the future, the Istio service mesh).

Apache. OpenWhisk is an Apache Incubator project and is gearing up for its first official Apache release. This is an important milestone on the road to graduating from the incubation phase. OpenWhisk is all in on GitHub for development: reporting defects and issues, pull requests, CI/CD automation with Travis CI, and Slack bot integration for a modern DevOps experience.

Bespoke container management. The OpenWhisk architecture consists of an API controller for managing authentication and authorization, CRUD operations, and function activations. A load balancer assigns function activations to a container pool called an invoker. The invoker in turn assigns a container to execute the function. The load balancer and invoker are critical components of the architecture, and OpenWhisk implements custom heuristics for managing function activations. The architecture is pluggable, further allowing experimentation and research into alternate scheduling heuristics and resource-management techniques.

While container orchestration systems such as Kubernetes are designed to manage containers, they cannot yet handle the scale at which containers are created and destroyed in a functions-as-a-service model. The container pool manager in OpenWhisk speculatively provisions resources and is designed to enhance container locality (i.e., reusing a warm container for a given function) to reduce system overhead. These techniques help reduce cold-start latency, a key measure of serverless system overhead.

The invoker also relies on container suspend and resume operations through lower-level protocols that bypass the Kubernetes controller. These also shorten the critical path when invoking a function. In a production setting, OpenWhisk will spawn, reuse, and destroy millions of containers a day. The bespoke resource scheduling keeps the average system overhead to 10 ms or less.
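The container locality and pause/resume ideas above can be sketched as a toy pool: keep paused warm containers per function, and resume one instead of paying a cold start. The class, method names, and cost constants below are hypothetical illustrations, not OpenWhisk's actual invoker code.

```python
class ContainerPool:
    """Toy sketch of invoker-style container reuse: warm (paused)
    containers are kept per function and resumed on the next
    activation rather than cold-starting a fresh container."""
    COLD_START_MS = 500   # illustrative cost of creating a container
    RESUME_MS = 10        # illustrative cost of resuming a paused one

    def __init__(self):
        self.warm = {}    # function name -> list of paused containers

    def acquire(self, fn):
        # Prefer a warm container for this function (locality).
        if self.warm.get(fn):
            return self.warm[fn].pop(), self.RESUME_MS
        # Otherwise pay the cold-start cost to create one.
        return {"fn": fn}, self.COLD_START_MS

    def release(self, fn, container):
        # Pause the container and keep it for the next activation of fn.
        self.warm.setdefault(fn, []).append(container)

pool = ContainerPool()
c, cost1 = pool.acquire("hello")   # first activation: cold start
pool.release("hello", c)
c, cost2 = pool.acquire("hello")   # second activation: warm resume
```

The real invoker must also bound pool size, evict idle containers, and isolate tenants, but the cost asymmetry between the two paths is the essence of why bespoke scheduling pays off.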

Composition. The functions paradigm of computing is rapidly moving toward function composition. It is a natural and necessary evolution of the programming model. By composition, I mean allowing functions to produce results that are consumed by other functions.

Deployments. OpenWhisk offers a pluggable architecture and deployment configurations to tailor an offering for an organization. This includes integration with IAM, datastores, and logging services. Furthermore, a particular deployment can offer specialized runtimes with integrated SDKs that are suitable for an organization (see IBM Cloud Functions for examples). The architecture may be deployed in a highly available configuration and with encryption for all data in motion, making it attractive for organizations affected by GDPR.

Events. Events are essential as they provide the data that functions consume. There are already event integration services with Kafka, CouchDB, and an Alarm service as part of the Apache OpenWhisk project. Members of the OpenWhisk community are also engaged with the Cloud Native Computing Foundation (CNCF) Serverless Working Group and the ongoing work to realize an open event specification.
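In OpenWhisk, event sources fire triggers, and rules bind triggers to actions so that each firing invokes the bound functions. The sketch below illustrates that wiring in miniature; the class and its methods are hypothetical stand-ins, not the platform's API.

```python
class Rules:
    """Toy sketch of trigger/rule dispatch: a rule binds a trigger
    name to an action, and firing the trigger invokes every action
    bound to it. Names and structure are illustrative only."""
    def __init__(self):
        self.bindings = {}   # trigger name -> list of bound actions

    def create_rule(self, trigger, action):
        self.bindings.setdefault(trigger, []).append(action)

    def fire(self, trigger, event):
        # On the platform, each activation runs in its own container.
        return [action(event) for action in self.bindings.get(trigger, [])]

rules = Rules()
rules.create_rule("message-arrived", lambda e: {"echo": e["msg"]})
activations = rules.fire("message-arrived", {"msg": "hi"})
```

The event providers mentioned above (Kafka, CouchDB, the Alarm service) play the role of the caller of `fire`, turning external occurrences into function activations.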

Functions. The platform supports polyglot functions, and offers runtimes for Node, Python, Swift, Java, PHP, Go, and even Docker containers. The ability to author functions in a wide range of languages creates agility and empowers developers to use the best language to solve a particular problem.

Go on, join us. We are a welcoming community looking to build the best, most versatile, and production-ready functions platform there is. Our technical roadmap includes improvements to the load balancer, the ability to integrate multiple data and object stores, enhanced support for Golang, and advanced performance optimizations to reduce cold starts for function compositions. You can reach us on Slack and GitHub; let us know if you want to contribute new features, suggest new capabilities, or join our GitHub stargazers.