On Amazon EKS and Delivery Pipelines

Dirk Michel
20 min read · Apr 29, 2023


One of the fun things we get to do when building cloud-native applications is to create and extend software delivery pipelines for them. These pipeline structures can grow in complexity as the number of application components and micro-services increases. Building applications out of significant numbers of loosely coupled independent components and our ambition of shifting work into earlier stages of the software development life cycle (SDLC) tend to require more elaborate software delivery pipelines. The build platforms and tool-chain systems that underpin those pipelines have evolved into business-critical enablers.

Our expectations of software delivery pipelines continue to grow. They are a flywheel for accelerating developer productivity and improving our ability to ship software with increased security, quality, and resilience.

Commencing at the point of feature branch merges or approved pull requests, we look to the initial build stages of software delivery pipelines to retrieve the source code and its 3rd party dependencies, build and containerise the application, package Kubernetes manifests, and deliver them to registries. The practicalities of extending delivery pipelines beyond their foundational build stages may not always be obvious, as we aim to secure and evaluate release candidates against a set of additional criteria to establish their viability.

We can choose from many effective and popular systems, tooling, and projects to extend our pipeline capabilities, and the Cloud-Native Computing Foundation (CNCF) and the Continuous Delivery Foundation (CDF) ecosystem can help meet new requirements and expectations. The CNCF landscape is famously large, continues to grow, and offers various ingredients, reference frameworks, and best practices we can use.

Adopting cloud-native reference frameworks and maturity models to identify and prioritise objectives can be crucial to extend software delivery pipelines in a structured and incremental way.

Reference frameworks such as the Supply-chain Levels for Software Artifacts (SLSA), the CNCF Security TAG’s Supply Chain Security Assessment, and the Center for Internet Security (CIS) Supply Chain Security Guide define intermediate milestones and maturity levels. The Kubernetes Pod Security Standards (PSS) define graded security levels for deployment configuration packages. The CIS Kubernetes Benchmark and the Amazon EKS security best practices can also be an essential reference when extending pipelines and build system infrastructure with “pipeline clusters”. Other AWS cloud security and resilience reference frameworks are relevant when applications interact with AWS APIs to provision AWS cloud resources they depend on.

This blog explores opportunities for extending software delivery pipelines for cloud-native applications. The following diagram illustrates the frame of reference.

Reference view for a delivery pipeline that leverages CNCF projects and AWS cloud services

For those on a tight time budget: The TL;DR of the following sections is to show a way of extending software delivery pipelines for cloud-native applications with CNCF projects and AWS cloud services to improve supply chain security, deployability, and resilience. Secure 3rd-party materials, protect and sign artefacts, generate and distribute metadata. Deploy release candidates into increasingly production-like environments where additional assessments are applied to help establish confidence in their “release-ability”. Combined with standards and reference frameworks, we can structure and prioritise pipeline extensions. Creating pipelines that achieve a robust security and resilience state takes time, and working with frameworks that define milestones is essential.

Let’s consider a set of assumptions before we begin. Notice that build platforms implement the coordination functions, provide build execution environments, and chain together the steps and stages that pipelines are made of. A rich ecosystem of such platforms is available as hosted or self-managed options, including GitHub Actions, GitLab CI, CircleCI, and cloud-native build systems like Tekton. The assumption is that you have made a choice already.

We also assume that self-managed build infrastructure is categorised and managed like any other critical production system, where security, monitoring, event and incident response systems are actively operated. The various toolchain system components — such as build workers and clusters, build images, artefact stores and registries, identity and access management, and other auxiliary services — are important assets to protect, as we depend on them to produce and distribute our applications. In many ways, build infrastructure and system security have become as critical as application security.

Finally, the blog focuses on extending software delivery pipelines: Therefore, we won’t be spending time on the instrumentation of Integrated Development Environments (IDE), Source Code Management (SCM) and areas such as pre-commit hooks, code review workflows, unit and integration tests.

Now we’re all set: Let’s do it.

1. Security

There are many features that continuous integration/continuous delivery (CI/CD) pipelines can incorporate to help improve cloud-native application security. As software producers, extending pipelines incrementally with specific and targeted security features can help generate the visibility and automation we need to achieve compliance objectives. Adding foundational pipeline stages that, on the one hand, help us assess application components we control and, on the other hand, help us systematically update application dependencies we don’t own is part of a minimum viable setup: Being intentional about the 3rd-party “materials” we incorporate into our applications is one aspect of securing our supply chain. We can then add pipeline stages that help us produce and publish security-hardened, signed, and verifiable artefacts, which can be valuable for consumers with security compliance requirements and restrictive production environments.

1st-party scans: From a pipeline perspective, one of the earliest things to do with our language-specific application packages — such as wars, jars, wheels, and gems — is to run Static Analysis (SA) or Static Application Security Test (SAST) scans over them. Systems such as VeraCode, Snyk, and SonarQube use different techniques, such as code smells and taints, to identify problematic code and apply language-specific rules that validate best practices.

The scanner tools can be stand-alone utilities that the pipeline can run locally within the build execution environment as “pipeline scans” or trigger an “upload and scan” to a trusted remote scanning service and obtain the results. The pipeline can then assert quality and security gating criteria on our packages early on. Release candidates that pass the stage are then pushed into a package registry, which may be directly integrated with your build platform or a separate system such as the Nexus Repository Manager.
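As an illustration, a minimal “pipeline scan” step for a SonarQube-style gate could look like the sketch below; the project key, source path, and credential variables are placeholders, and the quality-gate wait parameter assumes a reasonably recent SonarQube version.

```bash
# Hypothetical SAST gate: run the scanner inside the build environment and
# block the stage until the server-side quality gate verdict is known.
sonar-scanner \
  -Dsonar.projectKey=payments-service \
  -Dsonar.sources=src \
  -Dsonar.host.url="${SONAR_HOST_URL}" \
  -Dsonar.login="${SONAR_TOKEN}" \
  -Dsonar.qualitygate.wait=true   # non-zero exit if the quality gate fails
```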

Package registries can also be an internal “clearing house” for externally sourced application dependencies. We systematically clear these 3rd-party “materials” by validating their signatures and checksums, for example, before adding them to the internal registry for later use. Build processes that need to resolve dependencies can then retrieve the 3rd-party materials from the package registry.
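A hedged sketch of such a clearing step might look like this, assuming a hypothetical upstream release and a Nexus raw repository as the internal store; URLs, file names, and repository paths are placeholders.

```bash
# Verify integrity and authenticity of a 3rd-party material before clearing it.
curl -fsSLO https://releases.example.org/libfoo-1.2.3.tar.gz
curl -fsSLO https://releases.example.org/libfoo-1.2.3.tar.gz.sha256
curl -fsSLO https://releases.example.org/libfoo-1.2.3.tar.gz.asc

sha256sum -c libfoo-1.2.3.tar.gz.sha256                   # checksum validation
gpg --verify libfoo-1.2.3.tar.gz.asc libfoo-1.2.3.tar.gz  # signature validation

# Only cleared materials are promoted to the internal registry for later builds.
curl -fsS -u "${NEXUS_USER}:${NEXUS_PASS}" \
  --upload-file libfoo-1.2.3.tar.gz \
  "https://nexus.internal.example.com/repository/cleared-materials/libfoo/1.2.3/libfoo-1.2.3.tar.gz"
```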

Add SA assessment stages to the pipeline and secure 1st-party materials. Create value by gating for secure development practices and reducing the amount of toil we may incur later in the software release cycle. The Open Source Security Foundation (OpenSSF) provides additional references and resources for cloud-native application developers.

3rd-party dependency scans: The 1st-party application code we write tends to leverage and incorporate 3rd-party packages and libraries that originate from outside our organisation and are published and maintained by others. We consume and ingest dependencies as 3rd-party materials at build time, ideally from an internal “clearing house”, resulting in application artefacts composed of 1st-party and 3rd-party components. We then use Software Composition Analysis (SCA) tools to analyse artefact composition and scan their 3rd-party material contents for known common vulnerabilities and exposures (CVE): The scan results help us identify and plan for version updates or replacements. The “dependency update frequency” is a useful indicator of a healthy security culture and software delivery pipeline.

Alternatively, SCA scans can be run once the application packages are containerised. Container image scanners often incorporate SCA capabilities and inspect the contents of the image layers, including the root file system of the container as well as our application packages. When run within the build execution environment, the “pipeline scan” can locally generate the scan results that the gating mechanism needs to make the “push decision” to a destination OCI registry.

The container registries themselves often provide image-scanning features as well. Use a self-hosted cloud-native OCI registry such as the CNCF project Harbor, which pre-integrates with open-source container image scanners such as trivy. Or use a managed OCI registry service such as Amazon ECR, which integrates with Amazon Inspector and the open-source scanner Clair to run SCA scans across arriving container images. The delivery pipeline can query Amazon ECR scan results asynchronously as part of its candidate evaluation flow.
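The two variants could be wired into the pipeline roughly as follows; repository names, tags, and severity thresholds are placeholders for illustration.

```bash
# Local "pipeline scan": fail the stage on HIGH/CRITICAL findings before pushing.
trivy image --severity HIGH,CRITICAL --exit-code 1 \
  "${ECR_REGISTRY}/payments-service:${GIT_SHA}"

# Asynchronous variant: query Amazon ECR scan results after the image has been
# pushed and scanned, and feed the severity counts into the gating logic.
aws ecr describe-image-scan-findings \
  --repository-name payments-service \
  --image-id imageTag="${GIT_SHA}" \
  --query 'imageScanFindings.findingSeverityCounts'
```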

Add SCA stages to help secure 3rd-party materials. Create value by reducing the probability of our applications leaking CVEs into customer production systems. In an environment where transparency requirements are rising, adding SCA gates helps systematise dependency updates.

Infrastructure as Code scans: Application developers can write AWS Cloud Development Kit (CDK) stacks directly, for example, and package them into containers for deployment and execution by workloads such as Kubernetes Jobs. With this pattern, application components can bundle and “autonomously” provision cloud resources they depend on.

The software delivery pipelines can be extended to support such cases and help provision cloud resources securely and in accordance with best practices such as the AWS CIS Benchmark and the security pillar of the AWS Well-Architected Framework. Tools such as checkov can scan Infrastructure as Code (IaC) stacks to find misconfigurations before they’re deployed. The CDK stacks can be synthesised into AWS CloudFormation (CF) files that are validated against policies and rules. Checkov provides a set of extensible pre-defined policies for CF that can be used to implement gating logic within the pipeline stage.
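A minimal sketch of such a gate, assuming a CDK application and the default checkov policy set:

```bash
# Synthesise CDK stacks to CloudFormation, then scan the output; a non-zero
# exit code from checkov fails the pipeline stage.
cdk synth                        # writes CloudFormation templates to cdk.out/
checkov --directory cdk.out --framework cloudformation --compact --quiet
```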

Add IaC security assessment stages when the application provisions AWS resources. Create value by helping consumers retain or achieve cloud security conformance and by supporting audited enterprises that must demonstrate AWS cloud security compliance.

Additionally, the Kubernetes Control Plane can be leveraged by cloud-native applications that need to interact with AWS services and their APIs. Projects such as AWS Controllers for Kubernetes enable this pattern, where Kubernetes custom resources are provided to provision cloud resources declaratively, as illustrated in this blog post.

Signing Container Images: We can protect container images the pipeline produces by adding cryptographic signatures. These digital signatures enable authenticity verification, integrity protection, and non-repudiation. We add pipeline stages that sign the base images we build and our application container images that use them. In both cases, the image promotion stage is often a convenient point to do this, as we can combine the image re-tag and signing steps. As producers of containerised applications, we can create a signature file alongside every container image with CNCF projects such as sigstore. The sigstore cosign utility can be added as a standalone binary to our pipeline build environments to sign container images and “attach” the resulting .sig file to our container repository in the OCI registry. The signing process is generally very fast, as cosign creates the signature over the container manifest file, which uniquely and immutably identifies the container image, rather than over the image tarballs themselves.

With sigstore cosign, we can sign container images keylessly, avoiding the challenges of storing and distributing permanent keys ourselves. Cosign v2 uses the keyless option by default. It generates ephemeral key pairs for every signing action, exchanges them for short-lived signing certificates from the sigstore certificate authority, creates the image signature file, and pushes it to the OCI registry. The signing process concludes with an entry of the public certificate in the append-only ledger of the sigstore transparency log, making the signatures and public keys discoverable and verifiable beyond the expiration and deletion of the short-lived signing certificate and private key.

Cosign can also work with and generate “traditional” permanent key pairs or use key pairs from a key management system that supports asymmetric keys, such as Amazon KMS. With permanent keys, we’d typically need to store, rotate, secure, and make the private key available to our build environment for signing and then manage the public key infrastructure (PKI) for the verifier clients.
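A signing step along these lines might look like the following sketch; the image digest and KMS alias are placeholders, and signing by digest keeps the signature bound to immutable content.

```bash
IMAGE="${ECR_REGISTRY}/payments-service@sha256:<digest>"   # placeholder digest

# Keyless signing (cosign v2 default): ephemeral keys, a short-lived certificate,
# and an entry in the sigstore transparency log.
cosign sign --yes "${IMAGE}"

# Alternative with a permanent asymmetric key pair held in AWS KMS.
cosign sign --yes --key awskms:///alias/image-signing "${IMAGE}"
```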

Protect container images by signing them. Create value for consumers that need to verify image authenticity as part of their security workflows and ensure that only verified images are run within production environments.

Signing SBOMs: Software producers are increasingly required to accompany their application components with a software bill of materials (SBOM), a nested inventory file for software artefacts. For cloud-native applications, SBOMs can be generated to catalogue the contents of container images, including file system packages and applications, their dependencies, versions, and licences. SBOMs are helpful for consumers that need to evaluate software security and supply chain risks for the software they use. We, as software producers, can help customers do so more efficiently at scale by making SBOMs available alongside our software artefacts so they don’t have to scan all the layers of all the images and generate them themselves. We can create SBOMs in the pipeline with utilities such as syft or trivy and generate machine-readable deep catalogue SBOM files for each container image, one SBOM file per container image. As a side note, trivy can append a supplementary section containing SCA vulnerabilities to its generated SBOM files.

Once the pipeline generates a given SBOM file, we can push and sign the SBOM files to OCI registries in ways analogous to pushing and signing container images: The cosign utility can “attach” the SBOM file to our container image hosted on our OCI registry as a .sbom artefact and then subsequently sign it like any other OCI-compliant artefact.

Cosign also supports attestations: As the signer, we attest that the SBOM we generated accurately represents the contents of our container image. With cosign attest we ingest the SBOM file, create an attestation file from it, and then cryptographically sign the attestation: Cosign attest generates an in-toto attestation file, which includes the SBOM itself, signs it, and “attaches” the signed attestation .att file to a container image. To enable verification, the attestation event is also saved into the immutable sigstore transparency log: Now, the SBOM can’t be silently deleted or modified, as any tampering would surface upon verification.
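Put together, a hedged SBOM stage could generate, attest, and verify as sketched below; image references are placeholders, and the permissive verification identities are for illustration only.

```bash
# Generate one SPDX SBOM per container image.
syft "${IMAGE}" -o spdx-json > sbom.spdx.json

# Wrap the SBOM in a signed in-toto attestation and attach it to the image.
cosign attest --yes --type spdxjson --predicate sbom.spdx.json "${IMAGE}"

# Consumers can later verify the attestation (keyless example; the identity
# regexps here are deliberately loose and would be pinned down in practice).
cosign verify-attestation --type spdxjson \
  --certificate-identity-regexp '.*' \
  --certificate-oidc-issuer-regexp '.*' \
  "${IMAGE}"
```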

Generate and sign SBOMs for container images. Create value for consumers that use them as input for automated workflows, including verification checks, provenance checks, license checks, as well as vulnerability scanning, and conformance checks.

Helm Chart Security: With Amazon EKS clusters as our application deployment target, we must accompany our container images with deployment and configuration instructions that Kubernetes understands. This is where Helm comes in. Helm is the de-facto standard package manager for Kubernetes applications and provides us with a templating engine and packaging facilities. Providing secure application container configuration creates value by adding another layer that can help protect runtime environments and reduce the risk of new, yet undefined threats.

We can define CI/CD pipeline gates that analyse Helm Chart packages and validate them against established best practices, focusing on production readiness and security. This kind of analysis is often implemented as pre-commit hooks to the source code repositories and can also play a role as part of a pipeline gate. Utilities such as KubeLinter can be added as a self-contained binary to our pipeline build environments and provide structured results and error output to evaluate release candidates. KubeLinter has pre-defined rules covering many error sources, such as identifying dangling resource definitions, duplicate environment variables, minimum replica counts, invalid target ports, and mismatching selectors. Rules specific to our linting requirements can also be defined as custom checks and included in the pipeline’s gating process.

Helm Chart security configurations such as Kubernetes Pod and container securityContext can also be validated with KubeSec or Kubescape, which ship with pre-defined security rules we can readily use. Adding such utilities to our build images can help obtain security risk analysis scores and use them as part of our pipeline gates.
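For illustration, a Chart linting gate could combine both kinds of tools as below; Chart paths and severity thresholds are placeholders, and exact flags may vary between tool versions.

```bash
# KubeLinter understands Helm Chart directories directly.
kube-linter lint ./charts/payments-service

# Kubescape scores the rendered manifests against its built-in security rules.
helm template ./charts/payments-service > rendered.yaml
kubescape scan rendered.yaml --severity-threshold high
```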

Add pipeline gates to sustain Helm Chart compliance. Create value by improving the quality and general security hygiene of the Charts we release to consumers. Being conscious of the Kubernetes resources we create on our customer environments helps create trust and enables kube-admins to operate clusters more efficiently and securely.

Pod Security: Kubernetes defines Pod Security Standards, and recent Kubernetes releases have made it easier for kube-admins to audit and enforce them through Pod Security Admission. Kubernetes application developers must be conscious of meeting these standards as their workloads are increasingly deployed into security-aware clusters that enforce them.

Therefore, we extend our CI/CD pipelines with gates that evaluate Kubernetes manifests and Helm Charts for compliance with PSS levels. Utilities such as the kyverno-cli can validate PSS compliance as part of the pipeline, giving us early feedback as part of the pipeline’s evaluation and gating process. The community maintains Kyverno policies that implement PSS requirements, and we can use them with the kyverno-cli. Depending on our approach to build images and the use of kustomize, we can opt for the stand-alone kyverno-cli binary or install it as a kubectl plugin with Krew. Adopting policy-as-code, whether more generally or within the context of CI/CD pipelines, can contribute to consistency and help reduce the overhead of aligning compliance tests at build-time and admission control at run-time. Kyverno can also run as an admission controller in Kubernetes clusters, as we will see later in this blog.
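A sketch of such a PSS gate with the kyverno-cli, assuming a local checkout of the community-maintained pod-security policies:

```bash
# Render the Chart and evaluate the output against the restricted PSS policies.
helm template ./charts/payments-service > rendered.yaml
kyverno apply ./policies/pod-security/restricted/ --resource rendered.yaml
```

The command output and exit status can then feed the stage’s pass/fail decision.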

Add Baseline PSS conformance gates and then progressively work towards Restricted PSS. Customers increasingly run Kubernetes clusters enforcing pod security, and application producers can streamline the deployment process by meeting them up-front.

Signing Helm Charts: We can package and cryptographically sign Helm Charts once the release candidate passes our static analysis pipeline gates. We have Helm Chart signing and verification options depending on how the Helm Charts are stored for distribution and consumption. Helm natively defines Helm Repositories as the target storage system and also supports OCI registries as a storage option. The Helm CLI can push to and pull from both.

With Helm repositories, we can use the built-in native provenance tools, which generate and verify signature files based on PKI and GnuPrivacyGuard (GPG). Helm can be extended and integrated with the sigstore ecosystem using the Helm Sigstore Plugin. The sigstore plugin acts on Charts that are already signed with the built-in Helm GPG feature and then uploads an entry to the sigstore transparency log, including the public key, the Chart’s provenance signature, and the hash of the Chart itself. The Helm Sigstore Plugin can be added to the build image, and we can add a Helm Chart signing step to our preferred pipeline stage before pushing Charts to our target Helm Repository.

Alternatively, Helm can push Charts to OCI-compliant registries, enabling us to use cosign for signing and verification. Akin to the pipeline stage for container signing, we add a cosign signing step to our pipeline that signs Helm Charts stored in OCI registries. At this point, consumers of the Chart can verify it through cosign. When we, as software producers, store signed Charts in OCI registries, consumers can use tools that integrate the cosign library, such as FluxCD, to automate the verification of Helm Chart signatures at scale before applying them to Kubernetes clusters. Co-locating both Helm Charts and container images in OCI registries can help harmonise the signing process for producers and improve the end-user verification experience around cosign.
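Both options could be scripted roughly as follows; key names, Chart versions, and registry hosts are placeholders.

```bash
# Option 1: classic Helm repository with built-in GPG provenance (.prov) files.
helm package --sign --key 'release-signing-key' \
  --keyring ~/.gnupg/secring.gpg ./charts/payments-service

# Option 2: OCI registry distribution, signed with cosign after the push.
helm push payments-service-1.4.0.tgz oci://"${ECR_REGISTRY}"/charts
cosign sign --yes "${ECR_REGISTRY}/charts/payments-service:1.4.0"
```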

Protect Helm Charts by signing them. Create value by storing container images, Helm Charts, and their respective signatures in OCI registries to reduce complexity for consumers implementing automated verification workflows. As always, the value of signing lies in verification.

2. Deployability

Thus far, the pipeline stages we added have focused on static analysis, metadata creation, and signatures for our cloud-native application artefacts. As micro-services pass and complete these stages, the delivery pipelines push the Helm Charts and container images to pre-release repositories. Each micro-service may have its dedicated pipeline that delivers candidates into registries.

The next set of pipeline stages pivots towards assessing deployment criteria. For this, we need to run Kubernetes clusters to deploy our release candidates to. This is where “pipeline clusters” come in. Incorporating pipeline clusters at different points in the pipeline can help separate distinct verification stages, as clusters become progressively hardened and production-like.

Pipeline clusters can be configured to identify the arrival of new release candidates in the registries across the various micro-services and deploy them. GitOps plays an effective role by continuously consuming release candidate Helm Charts from the pre-release repositories and deploying them into participating pipeline Kubernetes clusters. Pipeline clusters can be permanent and receive release candidates from all delivery pipelines. With graduated CNCF GitOps projects such as FluxCD, we can configure continuous deployment of new release candidates in a way that minimises deployment latency and avoids fixed schedules.
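With FluxCD, the wiring on a pipeline cluster could look like this sketch using the flux CLI; source names, registry URLs, version ranges, and intervals are placeholders.

```bash
# Watch the pre-release Chart location and reconcile new candidates as they arrive.
flux create source helm prerelease-charts \
  --url oci://"${ECR_REGISTRY}"/charts \
  --interval 1m

flux create helmrelease payments-service \
  --source HelmRepository/prerelease-charts \
  --chart payments-service \
  --chart-version '>=1.4.0-rc.0 <1.5.0' \
  --interval 1m
```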

Pipeline clusters create value by installing release candidates into running Kubernetes environments, further increasing our confidence in them. They also play a role when assessing impacts on other collaborating application components.

Incorporating clusters into the pipeline extends the build platform infrastructure, which means they need to be provisioned, maintained, secured and treated as critical infrastructure. With options such as Amazon EKS, we can offload some of the undifferentiated heavy lifting to AWS, as the Amazon EKS service team secures and manages the Kubernetes Control Plane components on our behalf.

Admission Control: Admission controls are typically fast-acting enforcement points when deploying workloads into Kubernetes clusters. GitOps controllers such as FluxCD can play an early role as a perimeter security control by automating Helm Chart signature validation prior to proceeding with the Chart installation, for example. Upon successful signature validation, FluxCD renders and submits manifest deployment requests to the Kubernetes API server.

Once deployment requests to the API server are authenticated and authorised, the Kubernetes admission controllers evaluate them further against a definable set of criteria. The built-in Kubernetes Pod Security Admission (PSA) can establish Pod Security Standards (PSS) compliance at deployment time and function as an enforcement point. The pipeline gating logic can then use the response from the cluster when the PSA mode is set to enforce.

The pipeline clusters can also be equipped with cluster add-ons to validate release candidates further. The Kyverno project — from which we used the kyverno-cli in earlier pipeline stages — can also be installed as a cluster add-on. Kyverno installs as a Kubernetes admission controller extension and can be very effective when applying additional validating policies. In particular, Kyverno can apply and enforce container image signature validation policies as part of our deploy-time security controls.
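As a sketch, the two enforcement points could be configured as follows on a pipeline cluster; namespace names, registry patterns, and certificate identities are placeholders and assume keyless signatures from the earlier signing stages.

```bash
# Enforce the restricted Pod Security Standard on the candidate namespace.
kubectl label namespace candidates \
  pod-security.kubernetes.io/enforce=restricted --overwrite

# Kyverno policy that only admits Pods whose images carry a valid signature.
cat <<'EOF' | kubectl apply -f -
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "123456789012.dkr.ecr.eu-west-1.amazonaws.com/*"
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/example-org/*"
                    issuer: "https://token.actions.githubusercontent.com"
EOF
```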

Pipeline clusters with GitOps operators and Kubernetes admission controllers create value by implementing signature validation and admission control enforcement points.

Runtime validation: Establishing deploy-time success for a release candidate is very useful: Still, we can extend beyond the point of deployment and include other assessments that run afterwards and over a time window. Using longer-running or “slow test” pipeline stages increases the overall duration of pipeline runs but opens up a variety of additional pipeline extensions: We can assess release candidates against application runtime stability and security criteria.

Kubernetes probes, for example, can be useful during stability validation over a longer time window, similar to a soak period, as we can catch failure modes that may not surface directly at deployment time, such as insufficient resource settings for Pods and containers.

Runtime security cluster add-ons, often found in production environments, can be used in pipeline clusters and create security events consumed by the pipeline. Adding such cluster add-ons can make the pipeline cluster more production-like: CNCF projects such as Falco deploy as a Kubernetes DaemonSet onto worker nodes and use eBPF probes or kernel modules to tap into the stream of Linux kernel system calls against which rules are asserted. Falco also supports data sources other than syscalls, such as AWS CloudTrail logs, and can emit events towards the pipeline.

Another benefit of adopting pipeline clusters is related to runtime evaluation through AWS managed services such as AWS Security Hub and Amazon GuardDuty. On the one hand, the managed services can help validate release candidates during runtime with AWS Security Hub standards and controls for the CIS AWS Foundations Benchmark and detect changes in compliance ratings. This can be particularly useful when evaluating candidates that bundle IaC stacks. The Amazon GuardDuty EKS Runtime Monitoring managed service could be an alternative to self-managed runtime monitoring solutions. On the other hand, AWS managed services can contribute to the general security management of the build infrastructure, including pipeline clusters and other build systems and servers. For example, Amazon ECR enhanced image scanning can continually assess container images for a configurable duration, which helps manage vulnerabilities that are published after the initial image scan date and deployment.
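For example, enhanced scanning with continuous re-evaluation can be switched on at the registry level; the wildcard filter below is a placeholder and would normally be scoped to the relevant repositories.

```bash
# Enable Amazon Inspector-backed enhanced scanning with continuous rescanning.
aws ecr put-registry-scanning-configuration \
  --scan-type ENHANCED \
  --rules '[{"scanFrequency":"CONTINUOUS_SCAN","repositoryFilters":[{"filter":"*","filterType":"WILDCARD"}]}]'
```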

Post-deployment runtime validation across longer time durations provides value by helping identify increasingly subtle failure modes. Kubernetes probes can indicate stability, runtime security agents can assert rules on application behaviour, and compliance ratings can be tracked against provisioned AWS cloud resources.

3. Resilience

Adding pipeline stages that focus on resiliency can provide another layer of confidence in our candidates as they progress through the pipeline and prove to be resilient against a range of pre-defined stress and failure injection events. Resilience stages can be defined with foundational failure modes we most care about and wish to ensure the release candidate can withstand. These stages fall into the “slow test” category as they need longer time windows to complete and are most meaningful alongside the other application components on the pipeline clusters. Evaluating the resilience of individual release candidates can make sense but may not be sufficient to identify downstream impacts on collaborating micro-services.

Infrastructure failure: Extending pipelines with a pipeline cluster where common infrastructure failures are implemented can be an effective place to start. Such a pipeline cluster is often the most production-like environment in the pipeline: it is security hardened and subjected to recurring failures and stress to approximate the conditions one might expect in production.

One way to achieve this is to leverage the AWS Fault Injection Simulator, which is a managed service that can create disruptive events so that we can observe if and how they impact our release candidate or application. AWS FIS can inject failure scenarios into a pipeline cluster on a recurring schedule, creating an environment in which worker nodes are intermittently terminated, Availability Zones become temporarily unavailable, and Kubernetes pods are randomly deleted. We can also apply stress to AWS resources: For example, we can exhaust compute resources on worker nodes with custom actions and SSM documents or pause i/o towards attached Amazon EBS volumes. AWS FIS can also interact with Kubernetes native chaos engineering projects such as LitmusChaos or ChaosMesh to create more complex experiments.
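A hedged sketch of a recurring node-termination experiment template for a pipeline cluster follows; account IDs, role names, tag values, and percentages are placeholders, and exact action parameters should be checked against the AWS FIS documentation.

```bash
# Create an AWS FIS experiment template that terminates a share of the
# pipeline cluster's managed node group instances.
aws fis create-experiment-template \
  --description "Terminate 30% of pipeline-cluster worker nodes" \
  --role-arn arn:aws:iam::123456789012:role/fis-experiment-role \
  --stop-conditions source=none \
  --targets '{
      "pipeline-nodegroups": {
        "resourceType": "aws:eks:nodegroup",
        "resourceTags": {"cluster": "pipeline-cluster"},
        "selectionMode": "ALL"
      }
    }' \
  --actions '{
      "terminate-nodes": {
        "actionId": "aws:eks:terminate-nodegroup-instances",
        "parameters": {"instanceTerminationPercentage": "30"},
        "targets": {"Nodegroups": "pipeline-nodegroups"}
      }
    }'
```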

Pipeline clusters typically use lower-complexity experiments because results may otherwise become harder to interpret. This blog illustrates more elaborate fault injection scenarios that are used when validating new hypotheses in a chaos-engineering context.

Create value by selecting for release candidates that pass foundational infrastructure resiliency conditions.

Application stress: Short of complete application performance load testing scenarios, resilience stages can provide a valuable opportunity to include some form of “tightly scoped” application load assessment. A practical way to achieve this is to have the pipeline clusters equipped with or connected to “load generators” that create application-specific stress.

This can be front-end stress, where tools such as k6 create traffic against web servers, for example. Application-specific “data generators” are also useful, especially for data-intensive applications that handle data ingestion and processing. Pipeline gates can then be configured to identify workload failures such as pod restarts and more application-specific performance indicators, depending on the monitoring setup and available observability data. AWS FIS, in combination with cloud-native chaos operators, can also play a role here, as they can inject stress at the application container level. Analogous to infrastructure failure injection, covering foundational stress injection use cases can help maximise “return on investment”.
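A tightly scoped load step could be as small as the following sketch; the script, virtual-user count, duration, and target URL are placeholders, and the pass/fail thresholds would live in the k6 script itself.

```bash
# Drive a bounded amount of traffic at the candidate's front-end; k6 exits
# non-zero when thresholds defined in the script (e.g. p95 latency) are breached.
k6 run --vus 25 --duration 5m \
  --env TARGET_URL=https://candidate.pipeline.example.internal \
  smoke-load.js
```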

Create value by selecting for release candidates that pass foundational application resiliency conditions.

IaC Resilience: Cloud-native applications that bundle IaC to provision their cloud dependencies can be challenging to assess for resilience. Static analysis for IaC early in the pipeline is useful, and we can build on it by analysing the deployed release candidate, evaluating compliance with cloud resilience best practices, and gauging the likelihood of meeting resilience policies for recovery time objectives (RTO) and recovery point objectives (RPO).

The AWS Resilience Hub can provide a central place for defining, validating, and tracking the resiliency of our provisioned AWS services. We can use resilience policies that define resilience targets, run assessments, and obtain resilience scores. Assessments can be executed on a schedule or via an API call and configured to emit notifications that delivery pipelines can process and use as part of their gating process.
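Triggering an assessment from the pipeline and reading back the resiliency score could look like this sketch; the application ARN, version, and naming are placeholders.

```bash
# Kick off an AWS Resilience Hub assessment for the published application version.
ASSESSMENT_ARN=$(aws resiliencehub start-app-assessment \
  --app-arn arn:aws:resiliencehub:eu-west-1:123456789012:app/example-app \
  --app-version release \
  --assessment-name "pipeline-${GIT_SHA}" \
  --query 'assessment.assessmentArn' --output text)

# Poll the assessment and extract the status and overall resiliency score.
aws resiliencehub describe-app-assessment \
  --assessment-arn "${ASSESSMENT_ARN}" \
  --query 'assessment.{status:assessmentStatus,score:resiliencyScore.score}'
```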

Extend the pipeline by triggering AWS Resilience Hub assessments. Assessment score results can create value by indicating the likelihood of meeting resiliency objectives.

Conclusions

Software consumers are increasingly likely to select suppliers and applications with verifiable security, deployability and resilience characteristics. For software producers, software delivery pipelines are a critical element in achieving and maintaining such requirements. Extending software delivery pipelines with enforcement points and measuring progress towards compliance, deployability, and resilience goals can provide the structure needed to achieve targeted outcomes that align with business priorities and evolving software consumer requirements.

Many enterprises heavily invest in protecting the perimeter of run-time production environments, and improving the security posture of applications that run on them is a ubiquitous objective: CVE management and supply chain security fundamentals, such as signing software packages and providing SBOMs, are swiftly becoming table stakes. Actively managing supply chain security helps producers maintain focus on security issues introduced by third-party dependencies and technologies used to write, build, and distribute our software.

Equally, software delivery pipelines are a key enabler for assessing application deployability and resiliency goals. Extending pipelines with “pipeline clusters” can be an effective strategy for selecting release candidates that sustain increasingly production-like assessments: A range of CNCF projects and AWS cloud services are helpful in achieving that efficiently.
