Cloud-native WebAssembly in Service Mesh

Michael Yuan
Published in Wasm · Oct 11, 2021

A holy grail of modern enterprise architecture is the separation of concerns between infrastructure code and business logic code. Ideally, application developers focus on business logic and do not have to worry about how their applications are managed and scaled. That is the “serverless” paradigm.

In cloud-native environments, a commonly used approach is the service mesh. A service mesh runs application units, or microservices, in distributed runtimes or containers known as sidecars. It also provides a data plane of API proxies to manage traffic in and out of the sidecar microservices, as well as a control plane to configure policies for the data plane.

As microservices become more modular and lightweight on one hand, and take on more complex application logic (e.g., AI inference) on the other, there is a growing need for lightweight runtimes for both sidecars and data plane proxies, to improve system performance and reduce the resource consumption of the growing infrastructure. After all, it is absurd to spin up a Docker container and a guest OS just to execute a microservice function that has 10 lines of code.

The WasmEdge Runtime is a CNCF-hosted WebAssembly runtime project optimized for cloud-native applications and use cases. It can serve as a lightweight, high-performance runtime for service mesh sidecars and proxies. Based on the WebAssembly standard, WasmEdge supports multiple programming languages, including Rust and even JavaScript, and offers a safe runtime sandbox. It can be 100x faster and lighter than application containers like Docker, especially at startup time.
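To make the developer experience concrete, here is a minimal sketch of a Rust program compiled to the wasm32-wasi target and executed with the wasmedge CLI (the file and crate names are illustrative):

```rust
// src/main.rs -- a minimal WASI program that WasmEdge can run.
// Build:  rustup target add wasm32-wasi
//         cargo build --target wasm32-wasi --release
// Run:    wasmedge target/wasm32-wasi/release/hello.wasm
fn main() {
    println!("hello from a Wasm microservice");
}
```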

Lightweight runtime for sidecar microservices

For sidecar frameworks that support multiple application runtimes, we could simply embed WasmEdge applications into the sidecar through its C, Go, Rust, or Node.js SDKs. A good example is Dapr. We have a template project that showcases how to run WasmEdge microservices as Dapr sidecars.
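As a rough sketch of what host-side embedding looks like, the snippet below uses the wasmedge-sdk Rust crate; treat the Vm constructor and run_func_from_file call as assumptions, since the SDK API differs between releases (and the Go, C, and Node.js SDKs follow the same pattern):

```rust
// Host application embedding WasmEdge to run a compiled microservice.
// The file name "service.wasm" and the exported "add" function are
// illustrative, not part of any real project.
use wasmedge_sdk::{params, Vm};

fn main() {
    // Create a WasmEdge VM with the default configuration.
    let vm = Vm::new(None).expect("failed to create the WasmEdge VM");

    // Load the compiled microservice and call its exported "add"
    // function with two i32 arguments.
    let results = vm
        .run_func_from_file("service.wasm", "add", params!(2, 3))
        .expect("failed to run the wasm function");
    println!("got {} return value(s)", results.len());
}
```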

Similarly, you can also embed WasmEdge applications and functions in public clouds’ serverless services. Here are examples for AWS Lambda, Tencent Cloud, Vercel, and Netlify.

However, widely used service meshes, such as CNCF’s Linkerd, use Kubernetes to manage sidecars. To that end, WasmEdge provides an OCI-compliant wrapper for its WebAssembly runtime and supports network sockets in WebAssembly, so a Wasm application can listen for incoming API requests directly. That allows WasmEdge applications to be managed directly by container tools and act as sidecar microservices.
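The socket-listening pattern looks roughly like the sketch below. It is written against Rust’s standard std::net API for clarity; in an actual WasmEdge build the same pattern goes through WasmEdge’s WASI socket support (for example the wasmedge_wasi_socket crate, which mirrors std::net), so treat that mapping as an assumption:

```rust
// A tiny TCP server that answers every connection with a fixed HTTP
// response -- the shape of a sidecar microservice listening for API
// requests directly.
use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Listen on a port that the data plane proxy routes requests to.
    let listener = TcpListener::bind("0.0.0.0:1234")?;
    for stream in listener.incoming() {
        let mut stream = stream?;

        // Read whatever request bytes arrived.
        let mut buf = [0u8; 1024];
        let n = stream.read(&mut buf)?;

        // Reply with a minimal HTTP response.
        let body = format!("received {} bytes\n", n);
        let resp = format!(
            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(resp.as_bytes())?;
    }
    Ok(())
}
```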

Lightweight runtime extension for the API proxy

The API proxy is another crucial component in the service mesh. It manages and directs API requests to sidecars in a manner that keeps the system scalable. Developers need to script those proxies to route traffic according to changing infrastructure and ops requirements.

Envoy Proxy, another CNCF project, was the first API proxy to support WebAssembly as an alternative to the Lua scripting language.

The Easegress proxy published an excellent hypothetical case study on how a WebAssembly-based extension can orchestrate traffic spikes for an e-commerce giant’s one-day flash sale.

Seeing widespread demand for this type of application, the community came together and created the proxy-wasm spec. It defines the host interface that WebAssembly runtimes must support in order to plug into the proxy. WasmEdge is a WebAssembly runtime that supports proxy-wasm, enabling it to serve as an extension runtime for Envoy, MOSN, and other proxies.
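To give a feel for what a proxy-wasm extension looks like, here is a sketch of a minimal HTTP filter written against the proxy-wasm-rust-sdk (~v0.2); the trait method signatures vary a bit between SDK versions, and the header name is purely illustrative:

```rust
// A proxy-wasm HTTP filter that tags every request with a header
// before letting it continue to the upstream sidecar.
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

proxy_wasm::main! {{
    proxy_wasm::set_log_level(LogLevel::Info);
    proxy_wasm::set_http_context(|_context_id, _root_context_id| -> Box<dyn HttpContext> {
        Box::new(TagFilter)
    });
}}

struct TagFilter;

impl Context for TagFilter {}

impl HttpContext for TagFilter {
    // Called by the proxy when the request headers arrive.
    fn on_http_request_headers(&mut self, _num_headers: usize, _end_of_stream: bool) -> Action {
        self.add_http_request_header("x-wasm-extension", "wasmedge");
        Action::Continue
    }
}
```

Compiled to a Wasm target, a filter like this can be loaded by any proxy-wasm-compliant host, which is exactly the portability the spec is after.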

What’s next

WebAssembly in service mesh is a fast-growing field in the cloud-native landscape. Just in the past 6 months, the CNCF accepted 3 WebAssembly projects into its sandbox — WasmEdge, wasmCloud, and Krustlet — with more on the way! If you are interested in these new technologies and their applications, join us at the upcoming KubeCon + CloudNativeCon North America and Cloud Native Wasm Day, either in person in Los Angeles or online!
