Announcing Knative v0.5 Release
Once again, we are excited to announce a new release of Knative: a platform to help developers build, deploy, and manage modern serverless workloads on Kubernetes.
While the more frequent and predictable releases of Knative give us an opportunity to collect faster feedback from real-world use cases, they also mean smaller and more incremental features. Well, that’s not always the case. Knative v0.5 delivers an exciting set of updates in eventing, introducing Trigger and Broker objects, which further improve and simplify the developer experience of building event-driven systems on Knative.
In addition to eventing, this release of Knative also improves a number of metrics and the overall observability of autoscaling, queue proxy, and Istio telemetry. Let’s review these and a few other changes in more depth:
With the introduction of Trigger and Broker objects into the Eventing architecture, developers can easily build robust, complex, event-driven systems. By decoupling producing and consuming services, there is no longer a need for complex wiring or routing configuration. We are excited to see what new types of events and innovative solutions the community will develop using this new capability!
Trigger: developers no longer need to manually provision transport for their events and route them to downstream Knative services. They simply define an event Trigger that selects the source events (with any desired filtering) and sends them to the consuming service. This greatly simplifies the developer experience.
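As a rough sketch (the service and event-type names here are illustrative), a Trigger that filters for a single event type and delivers matching events to a Knative Service might look like:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  filter:
    sourceAndType:
      type: dev.knative.foo.bar      # only deliver events of this type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: my-service               # the consuming Knative Service
```

Omitting the filter would deliver all events flowing through the Broker to the subscriber.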
Broker: the events Broker serves as the central events hub to which all messages are sent. Developers and users simply write services or configure Sources that emit events to the Broker, which handles the rest. Consuming services need only create Triggers to receive the events they’re interested in from the Broker.
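A Broker itself is a small object; a minimal sketch of one named `default` in a namespace looks like:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Broker
metadata:
  name: default
```

Triggers in the same namespace then reference this Broker as the hub from which they pull their filtered events.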
New event Source: this release of Knative adds support for the Kafka event source, which brings the power and richness of the Kafka ecosystem to Knative and Kubernetes.
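As an illustrative sketch (the bootstrap address, topic, and consumer group here are placeholders, and field names may differ slightly across releases), a Kafka source that forwards messages from a topic into the events Broker might look like:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group                              # illustrative consumer group
  bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092   # illustrative broker address
  topics: my-topic                                          # Kafka topic to consume
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Broker
    name: default
```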
Autoscaling received improvements that make scaling under a variety of workloads smoother and more efficient. Autoscaling metrics were also expanded for additional visibility over different time frames.
In this release, named sub-routes now surface their URLs in the status of Service and Route resources, so there’s no more guesswork about how to target one fork of your traffic split. This is one of the first changes to result from our “v1beta1 task force,” which has been discussing the next iteration of the Serving API. Expect to see many more changes in the coming releases.
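As a sketch (revision names and percentages are illustrative), naming the targets of a traffic split looks like:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: my-service
spec:
  traffic:
  - revisionName: my-service-00001
    name: current      # named sub-route; gets its own URL in status
    percent: 90
  - revisionName: my-service-00002
    name: candidate    # addressable directly, bypassing the 90/10 split
    percent: 10
```

With this release, the status of the Route (and owning Service) reports a URL for each named target, so the `candidate` revision can be reached directly for testing.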
In addition, several of the default values populated by our webhook are now configurable through a new ConfigMap called config-defaults. We have also increased the visibility into system errors by surfacing more Kubernetes events when our controllers suffer internal errors. Last but not least, we have expanded our conformance testing to include securityContext and metadata.generateName.
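As a sketch of the new ConfigMap (the key shown is one example of a webhook-populated default; other keys are documented in the ConfigMap itself), overriding a default looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: knative-serving
data:
  # Default request timeout applied to Revisions that don't specify one
  revision-timeout-seconds: "300"
```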
The bulk of the work in the networking space this sprint focused on fixing bugs and improving cold-start times for gRPC services, as well as further improving client default authority header handling.