Dapr with Score and Humanitec — Improving the Developer Experience of your Platform, on steroids!

Mathieu Benoit
11 min read · Dec 15, 2023


Update on June 13th, 2024 — This blog post now has its recorded walkthrough, which I delivered during PlatformCon 2024: Dapr + Score: Crafting and improving your developers experience — Mathieu Benoit | PlatformCon 2024 (youtube.com).

Update on June 9th, 2024 — This blog post has been revisited to reflect the new updates and features from Score, with score-compose and score-k8s. 🎉

Update on January 10th, 2024 — This content has been presented with the associated demos during the Dapr Community Call on Jan 10th 2024, enjoy! 🎉

Usually, as soon as I come back from an in-person conference, I have plenty of ideas for proofs of concept that I want to conduct, and new technologies and projects that I want to try. That’s so inspiring to me!

With KubeCon NA 2022, my focus was on Security, for example with Kyverno or Sigstore.

The last KubeCon NA 2023 was no exception. I had a blast there! My focus was on Platform Engineering (and still on Security too, because we shouldn’t stop learning in this area!).

One of the projects that I wanted to try for the first time is Dapr (Distributed Application Runtime).

More precisely, Dapr with Score and Humanitec. Let’s do it, and see why this combination is a perfect fit!

In this blog post we will:

  • Introduce Dapr
  • Install Dapr on Kubernetes
  • Deploy the Dapr Hello World apps “as-is” via kubectl
  • Define the Dapr Hello World apps with Score
  • Deploy the Dapr Hello World apps via Score, score-compose and docker compose
  • Deploy the Dapr Hello World apps via Score, score-k8s and kubectl
  • Deploy the Dapr Hello World apps via Score, humctl (i.e. Humanitec Orchestrator)

Throughout this content, we will demonstrate how to:

  • Describe Workload definitions with Score, to simplify the Developer Experience: Developers avoid dealing with Kubernetes manifests and instead use an abstract workload specification
  • Standardize, and shift down to the Platform, the technical implementation details of Dapr and its associated infrastructure.

The Golden Path to Cloud Native Apps with the right levels of abstraction.

Here is the recorded walkthrough of this blog post I delivered during PlatformCon 2024:

Dapr + Score: Crafting and improving your developers experience (platformcon.com)

Why Dapr?

Recently, Dapr was featured in several talks during the first-ever AppDeveloperCon at KubeCon NA 2023 (videos of the talks are available here).

Dapr provides integrated APIs for communication, state, and workflow. Dapr leverages industry best practices for security, resiliency, and observability, so you can focus on your code.
Dapr — Distributed Application Runtime

Yet another building block on top of Kubernetes that you may want to add to your platform’s toolchain. The promise of Dapr is quite interesting:

  • As a Developer, you add the Dapr SDK to your code and call the Dapr APIs, which abstract the actual external services: resiliency and security for dependencies, state stores, pub/sub messaging, etc.
  • As a Platform Engineer, you install Dapr in your cluster and configure the Dapr sidecar injection for any workloads in your cluster using Dapr.
  • As a Platform Engineer, you can easily swap Dapr components (YAML files with resource connection metadata) to switch from one component pointing to Azure, AWS, etc., to another.
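To make that last point concrete, here is a hedged sketch of such a swap: the same statestore Component name, backed first by the in-cluster Redis used later in this post, then by Azure Blob Storage (the account values are placeholders, following the Dapr components reference; the application code does not change):

```yaml
# Variant 1: in-cluster Redis (used in this walkthrough)
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis:6379
---
# Variant 2: Azure Blob Storage (placeholder account values)
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore # same name: the app keeps calling the same state store
spec:
  type: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: my-storage-account
  - name: accountKey
    value: "<storage-account-key>"
  - name: containerName
    value: statestore
```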
Welcome + Opening Remarks — Mark Fussell

If you want to see some demos of Dapr, here are three great resources I recently leveraged for my own learning about Dapr:

After this high-level introduction, let’s see Dapr in action!

Install Dapr in your cluster

First thing first, let’s deploy Dapr on a Kubernetes cluster:

helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
helm upgrade \
  dapr \
  dapr/dapr \
  --install \
  --create-namespace \
  -n dapr-system

Deploy Dapr Hello World apps “as-is” with kubectl:

Image updated from quickstarts/tutorials/hello-kubernetes at master · dapr/quickstarts (github.com)

As Platform Engineer, create a dedicated Namespace in Kubernetes:

kubectl create namespace dapr-hello-world

As Platform Engineer, deploy an in-cluster Redis database in Kubernetes:

kubectl create deployment redis \
  --image=redis:alpine \
  -n dapr-hello-world

kubectl expose deployment redis \
  --port 6379 \
  -n dapr-hello-world

Note: For brevity, we are deploying a simple Redis database. As a best practice, you should use a Helm chart like the one Bitnami offers, to get a more reliable and secure in-cluster Redis setup. Or you could even deploy your Redis database with your Cloud provider of choice.

As Platform Engineer, deploy the Dapr State Store for this Redis database in Kubernetes:

cat << EOF | kubectl apply -n dapr-hello-world -f -
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis:6379
EOF

As Developer, deploy the Node app in Kubernetes:

kubectl apply \
  -f https://raw.githubusercontent.com/dapr/quickstarts/master/tutorials/hello-kubernetes/deploy/node.yaml \
  -n dapr-hello-world

Same for the Python app:

kubectl apply \
  -f https://raw.githubusercontent.com/dapr/quickstarts/master/tutorials/hello-kubernetes/deploy/python.yaml \
  -n dapr-hello-world
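
For reference, the sidecar injection is driven by annotations on the Pod template; abridged from the quickstart’s node.yaml (only the Dapr-relevant fields are shown here):

```yaml
# Abridged from the Dapr quickstart node.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  template:
    metadata:
      labels:
        app: node
      annotations:
        dapr.io/enabled: "true"   # triggers the Dapr sidecar injection
        dapr.io/app-id: "nodeapp" # Dapr application ID
        dapr.io/app-port: "3000"  # port the app listens on
```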

Because these Deployments have the dapr.io/enabled: "true" annotation, we can now see the Dapr sidecar injected for both nodeapp and pythonapp:

NAME                            READY   STATUS    RESTARTS   AGE
pod/nodeapp-d64b57477-v92pl     2/2     Running   0          6m39s
pod/pythonapp-8454cc66c-h7hvn   2/2     Running   0          60s
pod/redis-78d4b8b77c-ncj6v      1/1     Running   0          16m

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                               AGE
service/nodeapp          LoadBalancer   10.28.174.216   34.48.51.179   80:31894/TCP                          6m40s
service/nodeapp-dapr     ClusterIP      None            <none>         80/TCP,50001/TCP,50002/TCP,9090/TCP   6m40s
service/pythonapp-dapr   ClusterIP      None            <none>         80/TCP,50001/TCP,50002/TCP,9090/TCP   60s
service/redis            ClusterIP      10.28.161.133   <none>         6379/TCP                              14m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nodeapp     1/1     1            1           6m40s
deployment.apps/pythonapp   1/1     1            1           60s
deployment.apps/redis       1/1     1            1           16m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/nodeapp-d64b57477     1         1         1       6m40s
replicaset.apps/pythonapp-8454cc66c   1         1         1       60s
replicaset.apps/redis-78d4b8b77c      1         1         1       16m

NAME                           AGE
component.dapr.io/statestore   10m

We can even see that the apps are working successfully with kubectl logs -l app=node -n dapr-hello-world:

Defaulted container "node" out of: node, daprd
Got a new order! Order ID: 192
Successfully persisted state for Order ID: 192
Got a new order! Order ID: 193
Successfully persisted state for Order ID: 193
Got a new order! Order ID: 194
Successfully persisted state for Order ID: 194
Got a new order! Order ID: 195
Successfully persisted state for Order ID: 195
Got a new order! Order ID: 196
Successfully persisted state for Order ID: 196

Great, success! You just used Dapr in your Kubernetes cluster. Let’s now see how Score can improve the experience of your Developers on top of that.

Define the Dapr Hello World apps with Score

So what if I don’t want my Developers defining how to deploy their apps via Kubernetes manifests? What if I need an Ingress, TLS, security contexts, NetworkPolicies, etc.? Should they be experts on all of that? Nope!

So let’s allow them to define just the strict minimum in their Workload specification and not deal with the Kubernetes world by themselves, letting them focus solely on their code. For that, we will use Score.

The driving principle is that you want your developers to describe their workload and related dependencies in abstract terms, using a workload specification, like Score.

As Developer, define the Score file for the Node app:

cat <<EOF > score-node.yaml
apiVersion: score.dev/v1b1
metadata:
  name: nodeapp
  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nodeapp"
    dapr.io/app-port: "3000"
containers:
  nodeapp:
    image: ghcr.io/dapr/samples/hello-k8s-node:latest
    variables:
      APP_PORT: "3000"
      STATE_STORE_NAME: "statestore"
service:
  ports:
    tcp:
      port: 3000
      targetPort: 3000
EOF

Same for the Python app:

cat <<EOF > score-python.yaml
apiVersion: score.dev/v1b1
metadata:
  name: pythonapp
  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "pythonapp"
containers:
  pythonapp:
    image: ghcr.io/dapr/samples/hello-k8s-python:latest
EOF

Deploy your Score workloads with Docker Compose

Now that we have the Score files, let’s deploy them locally with Docker Compose, for this we will use score-compose.

As Developer, initialize my local workspace with score-compose (with the default provisioners):

score-compose init \
  --no-sample

As Developer, generate the Compose file for the two workloads:

score-compose generate score-node.yaml
score-compose generate score-python.yaml

And finally, deploy the Compose file generated by the score-compose generate commands:

docker compose up --build -d

We can see that the two workloads and their dependencies are deployed:

[+] Running 10/10
✔ Network hello-world_default Created 0.2s
✔ Volume "redis-dYMC9Q-data" Created 0.0s
✔ Container hello-world-routing-BdZ2zG-1 Started 2.4s
✔ Container hello-world-redis-dYMC9Q-1 Started 2.4s
✔ Container hello-world-placement-1 Started 2.4s
✔ Container hello-world-wait-for-resources-1 Started 2.6s
✔ Container hello-world-pythonapp-pythonapp-1 Started 3.2s
✔ Container hello-world-nodeapp-nodeapp-1 Started 3.2s
✔ Container hello-world-nodeapp-nodeapp-sidecar-1 Started 3.4s
✔ Container hello-world-pythonapp-pythonapp-sidecar-1 Started

We could also check the logs of the apps running:

Node App listening on port 3000!
Got a new order! Order ID: 1048
Successfully persisted state for Order ID: 1048
Got a new order! Order ID: 1049
Successfully persisted state for Order ID: 1049
Got a new order! Order ID: 1050
Successfully persisted state for Order ID: 1050
Got a new order! Order ID: 1051
Successfully persisted state for Order ID: 1051
Got a new order! Order ID: 1052
Successfully persisted state for Order ID: 1052
Got a new order! Order ID: 1053
Successfully persisted state for Order ID: 1053

And that’s it, congrats! You just abstracted the Docker Compose file and the deployment method away from your Developers by using Score and score-compose.

Deploy your Score workloads in a Kubernetes cluster

With these exact same Score files, let’s now deploy them locally in an existing Kind cluster (this would work with any Kubernetes cluster).

As Developer, initialize my local workspace with score-k8s (with the default provisioners):

score-k8s init \
  --no-sample

As Developer, generate the Kubernetes manifests for the two workloads:

score-k8s generate score-node.yaml
score-k8s generate score-python.yaml

And finally, deploy the Kubernetes manifests generated by the score-k8s generate commands:

kubectl apply -f manifests.yaml

We can see that the two workloads and their dependencies are deployed:

secret/redis-zq1fhc created
statefulset.apps/redis-zq1fhc created
service/redis-zq1fhc created
component.dapr.io/redis-zq1fhc created
httproute.gateway.networking.k8s.io/route-nodeapp-68517169 created
service/nodeapp-svc created
deployment.apps/nodeapp created
deployment.apps/pythonapp created

We could also check the logs of the apps running:

Defaulted container "nodeapp" out of: nodeapp, daprd
Got a new order! Order ID: 25
Successfully persisted state for Order ID: 25
Got a new order! Order ID: 26
Successfully persisted state for Order ID: 26
Got a new order! Order ID: 27
Successfully persisted state for Order ID: 27
Got a new order! Order ID: 28
Successfully persisted state for Order ID: 28
Got a new order! Order ID: 29
Successfully persisted state for Order ID: 29

And that’s it, congrats! You just abstracted the Kubernetes manifests and the deployment method away from your Developers by using Score and score-k8s.

Deploy the Dapr Hello World apps via Score, humctl (i.e. Humanitec Orchestrator)

We can do more from a Platform Engineering perspective.

What if you want to automate and standardize the creation of the Redis database and its associated Dapr State Store? Also, what if you don’t want to deal with Kubernetes to deploy your app, and you want this part abstracted too? What if you want to centralize them and make them more dynamic and generic at scale?

For that we will use the Humanitec Orchestrator.

The Humanitec Orchestrator takes the workload and its dependencies described in abstract terms (Score), along with the relevant context, and matches them to baseline resource definitions to create or update the associated infrastructure. It then generates the config files and wires everything up, to eventually deploy them to a targeted Kubernetes cluster.

As Platform Engineer, define a Redis database resource in Humanitec. This Redis database could be in-cluster, or in AWS, GCP, Azure, etc. You can bring and define your own databases and providers by reusing the Humanitec Resource Packs.
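
Although the actual definition depends on the Resource Pack you pick, a minimal in-cluster sketch could look like the following, reusing the same humanitec/template driver as the Dapr State Store definition below. The id, the outputs, and the omitted manifests are illustrative assumptions, not a copy of the official packs:

```yaml
# Illustrative sketch only (assumed shape): an in-cluster Redis
# Resource Definition exposing the host/port outputs that the
# Dapr State Store definition consumes via ${resources.redis.outputs.*}.
# The manifests actually creating the Redis Deployment/Service are
# omitted here; see the Humanitec Resource Packs for a full example.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: redis-in-cluster
entity:
  name: redis-in-cluster
  type: redis
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          host: redis
          port: 6379
  criteria:
  - {}
```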

As Platform Engineer, define a Dapr State Store (Redis) component resource in Humanitec:

cat <<'EOF' > dapr-state-redis.yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: dapr-state-redis
entity:
  name: dapr-state-redis
  type: dapr-state-store
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |-
          name: statestore
        manifests: |-
          statestore.yaml:
            location: namespace
            data:
              apiVersion: dapr.io/v1alpha1
              kind: Component
              metadata:
                name: {{ .init.name }}
              spec:
                type: state.redis
                version: v1
                metadata:
                - name: redisHost
                  value: ${resources.redis.outputs.host}:${resources.redis.outputs.port}
        outputs: |
          name: {{ .init.name }}
  criteria:
  - {}
EOF

humctl create \
  -f dapr-state-redis.yaml

We are defining that any request for a Dapr State Store will get a Redis State Store linked to any redis resource defined in the same context.

As Developer, deploy the Node app via Humanitec:

humctl score deploy \
  -f score-node.yaml

As Developer, deploy the Python app via Humanitec:

humctl score deploy \
  -f score-python.yaml

As Developer, see that my apps are working successfully via my Developer portal (not via kubectl commands ;)):

Humanitec Portal — Active deployment of a Workload

You can also generate the associated resource graph and see all the dependencies resolved for the two workloads:

That’s it! Congrats! We abstracted any notion of Kubernetes from our Developers while standardizing the Dapr configurations down to the Platform!

That’s a wrap!

Score Workloads were deployed to 3 different runtimes:

  • Docker Compose with score-compose
  • Kubernetes with score-k8s
  • Kubernetes via Humanitec Orchestrator with humctl

The Golden Path to Cloud Native Apps with the right levels of abstraction:

To select the building blocks of your Platform, it’s always about how tools complement each other. But to make your Platform more mature at scale, it’s also about finding the right level of abstraction for both your Developers and your Platform Engineers.

  • Dapr abstracts the communication between an app and its external service dependencies.
  • Score is a developer-centric and platform-agnostic workload specification.
  • Humanitec Orchestrator abstracts and standardizes the resources of the platform and infrastructure underneath.

Hope you enjoyed that one! Happy sailing, happy platforming!

--

Mathieu Benoit

Customer Success Engineer at Humanitec | CNCF Ambassador | GDE Cloud