Platform Engineering in action: Deploy the Online Boutique sample apps with Score and Humanitec

Mathieu Benoit
Google Cloud - Community
11 min read · Apr 22, 2024

Lately, with KubeCon Paris 2024 and Google Cloud Next 2024, I have been preparing and delivering quite a few demos about Score and Humanitec. One of these demos uses the Online Boutique sample apps provided by Google Cloud. You can deploy it in any Kubernetes cluster (not just GKE; I do it with Azure/AKS too), with plain Kubernetes manifests, its Helm chart, or Kustomize overlays. Very convenient.

Let me show you another way to deploy the Online Boutique sample apps, with Score and Humanitec. We will see the “how”, but most importantly we will illustrate the “why”, specifically around two main personas:

  • Developers
  • Platform Engineers

In more detail, here are the use cases we will go through:

As a Developer, I want to:

  • Define my workloads and their dependencies;
  • Deploy my workloads locally;
  • Deploy my workloads to the IDP;
  • See my workloads in my portal.

As a Platform Engineer, I want to:

  • Build my own IDP on top of my existing tools and platforms;
  • Centralize the governance and security best practices;
  • Configure database dependencies for my Developers.

Let’s do it!

Developer

As a Developer, I want to focus on my apps, without dealing with Infrastructure and Kubernetes.

In this section, from the Developer’s point of view, we will see how the developer experience is improved, the velocity increased, and the cognitive load reduced.

As a Developer, I want to define my workloads and their dependencies.

As a Developer, I have my code and a way to containerize my workload. Now what I want to define is what my workload needs to be properly deployed. I don’t need to know the technical details of where it’s hosted or how to resolve the dependencies; I will let the IDP resolve all of this for me.

Let’s take an example: here is the Score file of the cartservice workload:

apiVersion: score.dev/v1b1
metadata:
  name: cartservice
containers:
  cartservice:
    image: gcr.io/google-samples/microservices-demo/cartservice:v0.10.0
    variables:
      REDIS_ADDR: "${resources.redis-cart.host}:${resources.redis-cart.port},user=${resources.redis-cart.username},password=${resources.redis-cart.password}"
    resources:
      limits:
        memory: "128Mi"
        cpu: "300m"
      requests:
        memory: "64Mi"
        cpu: "200m"
resources:
  redis-cart:
    type: redis
service:
  ports:
    grpc:
      port: 7070
      targetPort: 7070

In this first example, we can see that the Developer of the cartservice workload defines on which port the container is exposed, the resource limits and requests (if they know them), and also the dependency on a Redis database. Where this Redis database lives and what its connection string is, that’s another story: it’s abstracted away from the Developer at this stage.

Another example: here is the Score file of the frontend workload:

apiVersion: score.dev/v1b1
metadata:
  name: frontend
containers:
  frontend:
    image: gcr.io/google-samples/microservices-demo/frontend:v0.10.0
    livenessProbe:
      httpGet:
        path: /_healthz
        port: 8080
        httpHeaders:
          - name: Cookie
            value: shop_session-id=x-liveness-probe
    readinessProbe:
      httpGet:
        path: /_healthz
        port: 8080
        httpHeaders:
          - name: Cookie
            value: shop_session-id=x-readiness-probe
    variables:
      AD_SERVICE_ADDR: "${resources.adservice.name}:9555"
      CART_SERVICE_ADDR: "${resources.cartservice.name}:7070"
      CHECKOUT_SERVICE_ADDR: "${resources.checkoutservice.name}:5050"
      CURRENCY_SERVICE_ADDR: "${resources.currencyservice.name}:7000"
      ENABLE_PROFILER: "0"
      PAYMENT_SERVICE_ADDR: "${resources.paymentservice.name}:50051"
      PORT: "8080"
      PRODUCT_CATALOG_SERVICE_ADDR: "${resources.productcatalogservice.name}:3550"
      RECOMMENDATION_SERVICE_ADDR: "${resources.recommendationservice.name}:8080"
      SHIPPING_SERVICE_ADDR: "${resources.shippingservice.name}:50051"
      CYMBAL_BRANDING: "false"
      FRONTEND_MESSAGE: ""
      ENABLE_ASSISTANT: "false"
      SHOPPING_ASSISTANT_SERVICE_ADDR: "${resources.shoppingassistantservice.name}:8080"
    resources:
      limits:
        memory: "128Mi"
        cpu: "200m"
      requests:
        memory: "64Mi"
        cpu: "100m"
resources:
  dns:
    type: dns
  route:
    type: route
    params:
      host: ${resources.dns.host}
      path: /
      port: 80
  adservice:
    type: service
  cartservice:
    type: service
  checkoutservice:
    type: service
  currencyservice:
    type: service
  paymentservice:
    type: service
  productcatalogservice:
    type: service
  recommendationservice:
    type: service
  shippingservice:
    type: service
  shoppingassistantservice:
    type: service
service:
  ports:
    http:
      port: 80
      targetPort: 8080

In this second example, we can see that the Developer of the frontend workload defines more information, like the livenessProbe and the readinessProbe, but also other resources: the dependencies on the other workloads, as well as the need to expose this workload via DNS.

Again, all these dependencies are abstracted away from the Developer; the associated placeholders will be replaced by the Platform Orchestrator once this Score file is deployed.
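
To make this concrete, here is a purely illustrative sketch of what the cartservice container could end up receiving once the Orchestrator has resolved the redis-cart resource; the actual host, port, and credentials depend entirely on the resource definitions configured in your platform:

# Illustrative only: the resolved values depend on your platform's resource definitions.
variables:
  REDIS_ADDR: "redis-cart.my-app-dev.svc.cluster.local:6379,user=,password=<injected-secret>"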

As a Developer, I can focus on my code while using the well-supported golden paths of the IDP provided to me:

Authoring Score files with the Humanitec VS Code extension for more productivity, interacting with the IDP, seeing the available resources Developers can use as dependencies, etc.

As a Developer, I want to deploy my workloads locally.

As a Developer, I need to test my workloads and their dependencies locally before pushing them to my Git repo and the associated CI/CD pipelines. Sure, I can run the code locally from my IDE, but what if I want to reuse the definition of my workloads and their dependencies and deploy them locally? Here enters score-compose, a Score implementation that transforms Score files into a compose.yaml file:

score-compose init

score-compose generate score.yaml

docker compose up --build -d

The init command creates score-compose’s workspace with the default resource provisioners (for example: redis, dns, amqp, postgres).
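
For instance, after running init you should find the workspace files under a local .score-compose/ directory; a hedged sketch of what this might look like (exact file names can vary between score-compose versions):

ls .score-compose/
# state.yaml                    -> tracks the workloads and their resolved resources
# zz-default.provisioners.yaml  -> the built-in provisioners (redis, dns, amqp, postgres, ...)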

The generate command creates the compose.yaml file based on the Score files of the workloads, resolving their resource dependencies.
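
As an illustration, the generated compose.yaml could contain an entry like the following for cartservice, with the redis-cart placeholders already resolved against the in-compose redis provisioner (service names and generated credentials will differ on your machine):

# Illustrative excerpt of a generated compose.yaml; names and values are generated.
services:
  cartservice-cartservice:
    image: gcr.io/google-samples/microservices-demo/cartservice:v0.10.0
    environment:
      REDIS_ADDR: "redis-abc123:6379,user=default,password=<generated>"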

Then docker compose up can be used, and the containers can be tested locally:

[+] Running 13/13
✔ Container onlineboutique-demo-redis-iVXhMw-1 Running
✔ Container onlineboutique-demo-routing-Rv5fqb-1 Running
✔ Container onlineboutique-demo-wait-for-resources-1 Started
✔ Container onlineboutique-demo-emailservice-emailservice-1 Started
✔ Container onlineboutique-demo-cartservice-cartservice-1 Started
✔ Container onlineboutique-demo-checkoutservice-checkoutservice-1 Started
✔ Container onlineboutique-demo-frontend-frontend-1 Started
✔ Container onlineboutique-demo-recommendationservice-recommendationservice-1 Started
✔ Container onlineboutique-demo-shippingservice-shippingservice-1 Started
✔ Container onlineboutique-demo-paymentservice-paymentservice-1 Started
✔ Container onlineboutique-demo-productcatalogservice-productcatalogservice-1 Started
✔ Container onlineboutique-demo-currencyservice-currencyservice-1 Started
✔ Container onlineboutique-demo-adservice-adservice-1 Started
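
Because the frontend declared dns and route resources, the default provisioners also start a local routing container. You can then hit the frontend through it; a hedged example, assuming the routing container listens on port 8080 and using a made-up hostname in place of the one generated by the dns provisioner:

# The *.localhost hostname below is made up; use the one generated for your workspace.
curl -H "Host: dnsabc123.localhost" http://localhost:8080/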

Pretty neat, isn’t it?!

As a Developer, I want to deploy my workloads to the IDP.

As a Developer, I now need to deploy my workloads to the IDP in order to promote them to staging and production environments. With the Humanitec CLI, here is how you can accomplish this with the exact same Score files defined and used earlier:

humctl score deploy \
  --app ${APP_ID} \
  --env ${ENVIRONMENT_ID} \
  -f score.yaml

In the real world, your CI/CD pipelines will take care of this deployment as soon as you commit to the main branch or any feature branch. Often, these CI/CD pipelines are provided as templates by your Platform Team, and the Developers can use them from their own Git repositories.
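
For illustration, here is a minimal sketch of such a pipeline with GitHub Actions; the setup-cli action reference, the HUMANITEC_TOKEN secret, and the APP_ID/HUMANITEC_ORG variables are assumptions to adapt to your own setup:

# Hypothetical GitHub Actions workflow; adapt identifiers and secrets to your setup.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: humanitec/setup-cli-action@v1
      - name: Deploy the Score file
        run: |
          humctl score deploy \
            --app ${{ vars.APP_ID }} \
            --env development \
            -f score.yaml
        env:
          HUMANITEC_TOKEN: ${{ secrets.HUMANITEC_TOKEN }}
          HUMANITEC_ORG: ${{ vars.HUMANITEC_ORG }}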

Such CI/CD pipelines could also facilitate the creation of Ephemeral Environments to test feature branches. Here is an example of a Pull Request showing the information of a deployment in an Ephemeral Environment:

At this stage, what’s interesting is that we can look at the graph of the resources for this deployment. That’s where you can see that the Platform is resolving and abstracting a complex dependency graph on behalf of the Developers:
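
If you prefer the CLI over the UI, one way to inspect what was provisioned is to query the active resources of the environment; a hedged sketch using humctl’s generic API command, assuming the standard Humanitec Active Resources endpoint:

# Lists the active resources (redis, dns, route, etc.) resolved for this deployment.
humctl api get /orgs/${ORG_ID}/apps/${APP_ID}/envs/${ENVIRONMENT_ID}/resources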

As a Developer, I want to see my workloads in my portal.

Once my workloads are deployed in a specific environment, I want to be able to see their deployment status, their dependencies, etc. Here enters the notion of a Developer Portal. Humanitec provides the Humanitec Portal by default, to make Developers more autonomous with these aspects:

I can also get more information about the dependencies of a specific workload, the container logs, etc., without interacting directly with Kubernetes:

This section was all about the Developer persona. We saw how the cognitive load on the Developers has been reduced, letting them focus on describing what they want rather than thinking about all the technical implementation details in the Cloud provider and in Kubernetes. In the following section, we will see how this level of abstraction is made possible by the Platform Engineers configuring and wiring up all the components and recipes in the Platform Orchestrator.

Platform Engineer

As a Platform Engineer, I want to centralize a consistent and secure platform to make Developers more autonomous.

In this section, from the Platform Engineer’s point of view, we will see how governance, security, and observability can be standardized and abstracted down into the Platform.

As a Platform Engineer, I want to build my own IDP on top of my existing tools and platforms.

As illustrated in the reference architecture below, you build your own Internal Developer Platform (IDP) on top of your existing tools. As highlighted in yellow, you then integrate the Humanitec Orchestrator in your CI/CD pipelines, and you ask your Developers to define how they want to deploy their workloads with Score (see the previous section). Finally, the different personas, depending on their role, can visualize all the Humanitec resources through the Humanitec Portal (Backstage could be used here with the associated Humanitec plugin). The important part here is that your IDP is based on a combination of tools and teams: the Security, Observability, SRE, and Cloud teams all contribute to the IDP with their recipes, in collaboration with the Platform Engineering team.

Your own toolchain + Humanitec = Your successful IDP

Note: in this image you can see the new Google Cloud App Hub service, just announced at Google Cloud Next 2024. We had the opportunity to collaborate with Google to show how Humanitec can easily onboard projects and workloads into this new App Hub service. If you want to know more about App Hub, you can watch these two sessions: OPS105 and OPS100.

Projects and Workloads onboarded in Google Cloud App Hub via Humanitec

You can create your own platform engineering reference architectures. You can also provision the Humanitec reference architectures on Azure, AWS, Google Cloud, etc.

As a Platform Engineer, I want to centralize the governance and security best practices.

One of the issues reported by the Platform Teams (Ops, Observability, Security, etc.) is how hard it is to govern the Kubernetes manifests (Helm, Kustomize, etc.) written by the Developers: most of the time, the Developers own them, host them in their own Git repositories, and have to maintain them by themselves. That’s where the Platform Orchestrator comes into play: Platform Engineers can define a list of centralized recipes they want to standardize. These could be Kubernetes manifests, Terraform modules, etc.

Below is an example of a recipe that guarantees that any Namespace in Kubernetes will enable the Istio Service Mesh and enforce the Pod Security Standards (PSS):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-namespace
entity:
  name: custom-namespace
  type: k8s-namespace
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          name: ${context.app.id}-${context.env.id}
        manifests: |-
          namespace.yaml:
            location: cluster
            data:
              apiVersion: v1
              kind: Namespace
              metadata:
                labels:
                  pod-security.kubernetes.io/enforce: restricted
                  istio-injection: enabled
                name: {{ .init.name }}
        outputs: |
          namespace: {{ .init.name }}
  criteria:
    - {}

This is also where you could add more labels and annotations for your Observability or FinOps tools.
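
For example, the Namespace manifest in the recipe above could be extended like this; the cost-center and team label keys are purely illustrative:

metadata:
  labels:
    pod-security.kubernetes.io/enforce: restricted
    istio-injection: enabled
    # Hypothetical extra labels for FinOps and Observability tooling:
    cost-center: online-boutique
    team: platform-demo
  name: {{ .init.name }}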

Once deployed via the Humanitec Orchestrator, we can see that the workloads are now transparently onboarded into the Google Cloud Service Mesh enabled on the GKE cluster:

The Google Cloud Service Mesh Topology of the workloads deployed via the Humanitec Orchestrator.

But now, should you ask your Developers to add the securityContext to their Workloads to make them more secure? Nope! Certainly not. As a Platform Engineer, you should abstract this from them too. Here is yet another recipe that ensures and standardizes this:

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-workload
entity:
  name: custom-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/automountServiceAccountToken
              value: false
            - op: add
              path: /spec/securityContext
              value:
                fsGroup: 1000
                runAsGroup: 1000
                runAsNonRoot: true
                runAsUser: 1000
                seccompProfile:
                  type: RuntimeDefault
          {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/securityContext
              value:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
                privileged: false
                readOnlyRootFilesystem: true
          {{- end }}
  criteria:
    - {}

Your clusters and workloads are now secure by default and by design, and they are automatically monitored in your GKE Security Posture dashboard:

GKE Security Posture dashboard

As a Platform Engineer, I want to configure database dependencies for my Developers.

When a workload deployment comes in and requests a database dependency, how do we answer it? As a Platform Engineer, it’s yet another recipe and golden path that I can support.

A simplified recipe for an in-cluster Redis database could be configured like this in the Humanitec Orchestrator:

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: redis-in-cluster
entity:
  name: redis-in-cluster
  type: redis
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |-
          name: redis
          port: 6379
          username: ""
          password: ""
        manifests: |-
          deployment.yaml:
            location: namespace
            data:
              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: {{ .init.name }}
              spec:
                selector:
                  matchLabels:
                    app: {{ .init.name }}
                template:
                  metadata:
                    labels:
                      app: {{ .init.name }}
                  spec:
                    containers:
                      - name: {{ .init.name }}
                        image: redis:alpine
                        ports:
                          - containerPort: {{ .init.port }}
          service.yaml:
            location: namespace
            data:
              apiVersion: v1
              kind: Service
              metadata:
                name: {{ .init.name }}
              spec:
                type: ClusterIP
                selector:
                  app: {{ .init.name }}
                ports:
                  - name: tcp-redis
                    port: {{ .init.port }}
                    targetPort: {{ .init.port }}
        outputs: |
          host: {{ .init.name }}
          port: {{ .init.port }}
        secrets: |
          username: {{ .init.username }}
          password: {{ .init.password }}
  criteria:
    - {}

In this case we are using the humanitec/template driver to deploy arbitrary Kubernetes manifests. It’s one of the Humanitec driver types you can use for your resource definitions (i.e. recipes). Another driver type, which you can use to interact with your Cloud provider, is humanitec/terraform. Below is a simplified version of what you can define if you want to provision a Google Cloud Memorystore (Redis) database:

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: redis-memorystore
entity:
  driver_type: humanitec/terraform
  name: redis-memorystore
  type: redis
  driver_inputs:
    values:
      script: |-
        terraform {
          required_providers {
            google = {
              source = "hashicorp/google"
            }
          }
        }
        provider "google" {
        }
        resource "google_redis_instance" "memorystore" {
          name           = "redis-cart"
          memory_size_gb = 1
          redis_version  = "REDIS_7_0"
          region         = "REGION"
          auth_enabled   = true
        }
        output "host" {
          value = google_redis_instance.memorystore.host
        }
        output "port" {
          value = google_redis_instance.memorystore.port
        }
        output "username" {
          value     = ""
          sensitive = true
        }
        output "password" {
          value     = google_redis_instance.memorystore.auth_string
          sensitive = true
        }
  criteria:
    - {}

In this example, the Memorystore (Redis) instance is privately exposed in the same network as the GKE cluster, and accessible with password authentication.
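
As a side note, in a real setup you would typically pin the instance to the cluster’s VPC explicitly; a sketch of the google_redis_instance arguments that control this, where the network path is an assumption to replace with your own:

resource "google_redis_instance" "memorystore" {
  name           = "redis-cart"
  memory_size_gb = 1
  redis_version  = "REDIS_7_0"
  region         = "REGION"
  auth_enabled   = true
  # Expose the instance privately on the same VPC as the GKE cluster (hypothetical network path).
  authorized_network = "projects/PROJECT_ID/global/networks/NETWORK_NAME"
  connect_mode       = "PRIVATE_SERVICE_ACCESS"
}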

Again, at this stage, when a new deployment of a workload happens, it will seamlessly pick up this new implementation of redis; nothing needs to change for the Developers.
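
This switch is typically driven by the matching criteria of the two resource definitions. For instance, you could keep the in-cluster Redis for development environments and route production environments to Memorystore; the env_type values below are an assumption based on a typical setup:

# In redis-in-cluster:
criteria:
  - env_type: development

# In redis-memorystore:
criteria:
  - env_type: production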

That’s a wrap!

On top of your own Platform(s) with your own tools, we saw how you can increase the velocity of your Developers while abstracting away complexity and improving the stability and security of your Internal Developer Platform (IDP). And all of this with three key components: a Workload Specification, an Orchestrator, and a Portal.

3 key pillars for a successful IDP implementation and adoption.

But Platform Engineering is not just about the tools. You need to embark on a Platform-as-Product journey. This will guarantee that your Developers feel listened to and that they will use your Platform because it actually helps them: it simplifies their day-to-day job, provides them with a concrete Return On Investment (ROI), exposes well-supported golden paths, etc.

Platform Engineering is not just about tools; it’s critical to embark on a Platform-as-Product journey as early as possible too.

Happy platforming, happy sailing, cheers!
