Setting up Gravitee API Management behind Consul Service Mesh

Kamiel Amadpour · graviteeio · May 31, 2022

In this blog post we’ll show you how to combine the full feature set of Gravitee’s API Management platform with Consul as a service mesh.


Introduction

Companies are increasingly adopting microservices, containers, and Kubernetes. This brings new requirements regarding security, communication policy, and observability. A service mesh is a dedicated infrastructure layer that handles service-to-service communication between microservices. It provides features for managing East-West traffic, on top of all the features that API management solutions like Gravitee provide for handling North-South traffic. In this article we will install Gravitee APIM behind Consul, a service mesh solution providing a fully featured control plane with service discovery, configuration, and segmentation functionality. We will first configure and install Consul for a given namespace, then install Gravitee APIM and integrate it with the rest of the services in the mesh.

Architecture overview

As shown in the diagram above, each node in the cluster gets its own consul-agent, which acts as a bridge between the Consul servers and Consul Connect (a sidecar proxy that will be injected into each pod).

Installing and Configuring Consul

You can install Consul using Helm or consul-k8s; in this tutorial we will use Helm, but the exact same configuration can be used with consul-k8s. The full configuration settings for Consul can be found here. In our example we are enabling Consul connect-inject globally on any namespace that carries the connect-inject: enabled label. This way we don’t need to modify any existing resource definitions.

Since we are running this on a local Kubernetes instance, we set the number of server replicas to one. As a result, we won’t have any issue electing a leader amongst the Consul servers in the cluster, and a cluster with one node and one Consul server is enough for our purpose. In production environments you don’t need to set this; you can use the default value provided by Consul itself.

The last part of the configuration simply enables the Consul UI component, so we can see the current status of the cluster in the browser. We will also use this for testing “Intentions” in Consul later on. Since we will use NGINX as our ingress controller, we’ll add the kubernetes.io/ingress.class: nginx annotation to the configuration:

global:
  name: consul
connectInject:
  enabled: true
  default: true
  namespaceSelector: |
    matchLabels:
      connect-inject: enabled
controller:
  enabled: true
server:
  replicas: 1
ui:
  enabled: true
  service:
    enabled: true
  ingress:
    enabled: true
    pathType: Prefix
    hosts:
      - host: consul.example.com
        paths:
          - /
    tls:
      - hosts:
          - consul.example.com
    annotations: |
      kubernetes.io/ingress.class: nginx

Save this configuration to a file called config.yaml. Now we can install Consul in our cluster using the following commands:

helm repo add hashicorp https://helm.releases.hashicorp.com

helm install consul hashicorp/consul --create-namespace --namespace consul --values config.yaml

This will create a dedicated namespace for Consul in the cluster. You can check that the Consul deployments are running with the following commands:

kubectl get deploy -n consul

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
consul-webhook-cert-manager   1/1     1            1           1m
consul-controller             1/1     1            1           1m
consul-connect-injector       2/2     2            2           1m

kubectl get statefulsets -n consul

NAME            READY   AGE
consul-server   1/1     1m

kubectl get daemonset -n consul

NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
consul-client   1         1         1       1            1           <none>          1m

Installing Gravitee APIM using Helm

For this example we will install Gravitee APIM in the default namespace. Before installing APIM, let’s first enable Consul injection on the default namespace:

kubectl label namespace default connect-inject=enabled
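As a quick sanity check before installing APIM, you can list the labels on the namespace; you should see connect-inject=enabled in the LABELS column:

kubectl get namespace default --show-labels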

Then we will install APIM:

helm repo add graviteeio https://helm.gravitee.io
helm install graviteeio-apim3x graviteeio/apim3

Running kubectl get deploy will show us current deployments:

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
graviteeio-apim3x-api       1/1     1            1           1m
graviteeio-apim3x-ui        1/1     1            1           1m
graviteeio-apim3x-portal    1/1     1            1           1m
graviteeio-apim3x-gateway   1/1     1            1           1m

If you look at the pods, you should see the Gravitee pods with their Consul sidecars already injected:

kubectl get pods

NAME                                         READY   STATUS    RESTARTS   AGE
graviteeio-apim3x-api-55994c5c6b-l94n4       2/2     Running   0          1m
graviteeio-apim3x-ui-66677875c-b9cvp         2/2     Running   0          1m
graviteeio-apim3x-portal-565b89f97b-82clk    2/2     Running   0          1m
graviteeio-apim3x-gateway-5f878756c7-4t4tf   2/2     Running   0          1m

Setting up NGINX Ingress Controller

Gravitee uses NGINX by default. However, mTLS authentication is enabled by default for service-to-service communication inside the Consul service mesh, which means that Consul will not allow any traffic from NGINX to the back-end services. To resolve this, we will also need to deploy NGINX inside the service mesh.

If you haven’t installed NGINX in your cluster yet, it is best to first create a dedicated namespace for it. We will add the required label for Consul to inject its proxies automatically:

kubectl create namespace ingress-nginx
kubectl label namespace ingress-nginx connect-inject=enabled

Next, we can install NGINX using the following command. For more configuration, you can refer to this website.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx

If you have already installed NGINX in your cluster, you can just add the label to its namespace and restart its pod.
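For example, assuming your existing controller runs as a Deployment named ingress-nginx-controller in the ingress-nginx namespace (adjust both names to match your own install), that would look roughly like this:

kubectl label namespace ingress-nginx connect-inject=enabled
kubectl rollout restart deployment/ingress-nginx-controller -n ingress-nginx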

Unfortunately, Consul always requires a Kubernetes service if you’re deploying an application that needs to be on the Consul service mesh. As a result, we also need to add a service for the NGINX deployment:

cat <<EOF | kubectl create -n ingress-nginx -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx-service
    service: nginx-service
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    name: nginx-ingress-controller
EOF
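Note that the selector above has to match the labels on your ingress controller pods, and these labels vary between NGINX distributions (compare with kubectl get pods -n ingress-nginx --show-labels and adjust if needed). A quick way to verify that the new service actually picks up the controller pod is to check its endpoints; the ENDPOINTS column should not be empty:

kubectl get endpoints nginx-service -n ingress-nginx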

With all of these settings, NGINX should now be part of the service mesh, and the Envoy proxy should be running inside its pod:

kubectl get pods -n ingress-nginx

NAME                                      READY   STATUS    RESTARTS   AGE
nginx-ingress-microk8s-controller-nkjs2   2/2     Running   0          1m

Last but not least, do not forget to add the following two annotations to your NGINX DaemonSet or Deployment configuration. Without them, Consul will not allow any external traffic from end users to reach the NGINX service:

annotations:
  consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: '80, 443'
  consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: 10.152.181.1/32

Run this command to get the right IP address for your cluster (the ClusterIP of the Kubernetes API service) and put it in the annotation above:

kubectl get svc kubernetes --output jsonpath='{.spec.clusterIP}'
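Keep in mind that these are pod annotations, so they belong in the pod template of the DaemonSet or Deployment, not in its top-level metadata. A minimal sketch of where they go (the CIDR shown is the example value from above and must be replaced with your own API server ClusterIP):

spec:
  template:
    metadata:
      annotations:
        consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: '80, 443'
        consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: 10.152.181.1/32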

The last setting we need for NGINX to work is a Consul ServiceDefaults entry for each of the services that NGINX needs to connect to directly. Otherwise NGINX will not be able to reach any of the pods, and Consul will block the traffic again:

cat <<EOF | kubectl apply -f -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: graviteeio-apim3x-gateway
spec:
  transparentProxy:
    dialedDirectly: true
EOF
cat <<EOF | kubectl apply -f -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: graviteeio-apim3x-api
spec:
  transparentProxy:
    dialedDirectly: true
EOF
cat <<EOF | kubectl apply -f -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: graviteeio-apim3x-portal
spec:
  transparentProxy:
    dialedDirectly: true
EOF
cat <<EOF | kubectl apply -f -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: graviteeio-apim3x-ui
spec:
  transparentProxy:
    dialedDirectly: true
EOF
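Since these are regular Kubernetes custom resources, you can check that Consul has accepted them by listing them; the Synced status should eventually show True for each entry:

kubectl get servicedefaults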

This is all you need to configure NGINX. If everything is working properly, we can now add our first API to Gravitee.

Adding a New API to Gravitee

Firstly, we will create a deployment for the sample Gravitee echo image in our cluster; later on we will define an API pointing to its service inside the Gravitee Console:

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gravitee-echo-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gravitee-echo-api
      version: v1
  template:
    metadata:
      labels:
        app: gravitee-echo-api
        version: v1
    spec:
      containers:
        - image: graviteeio/gravitee-echo-api
          imagePullPolicy: IfNotPresent
          name: gravitee-echo-api
          ports:
            - containerPort: 8080
EOF

If you look at the existing pods, you should now see the following:

NAME                                         READY   STATUS    RESTARTS   AGE
graviteeio-apim3x-api-55994c5c6b-l94n4       2/2     Running   0          1m
graviteeio-apim3x-ui-66677875c-b9cvp         2/2     Running   0          1m
graviteeio-apim3x-portal-565b89f97b-82clk    2/2     Running   0          1m
graviteeio-apim3x-gateway-5f878756c7-4t4tf   2/2     Running   0          1m
gravitee-echo-api-58cfc58cbb-9x9lz           2/2     Running   0          48s

Now let’s create a service to access this new pod:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: gravitee-echo-api
  labels:
    app: gravitee-echo-api
    service: gravitee-echo-api
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: gravitee-echo-api
EOF
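Before heading to the console, it is worth confirming that the service resolves to the echo pod (the endpoints list should contain the pod’s IP):

kubectl get svc,endpoints gravitee-echo-api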

Once everything is created successfully, we will log in to the Gravitee Console and define a new API pointing to our local echo service, which is running behind Consul. It is also possible to do exactly the same thing using our Kubernetes CRDs, which can be found here. If you are new to setting up an API in Gravitee APIM, you can follow this guide.

We will use “http://gravitee-echo-api.default.svc.cluster.local:80” for the backend:

To keep it as simple as possible, we just enable a Keyless (public) plan which doesn’t require any authentication to call this endpoint later:

Finally, we will create and start the API:

If we make a request to this newly created API, we should get a response similar to this one:

curl --insecure https://apim.example.com/gateway/lecho (replace apim.example.com with your own hostname)

{
  "headers" : {
    "X-Request-ID" : "e495b38a4a31e95e99c61f8cb2b546f3",
    "X-Real-IP" : "192.168.0.1",
    "X-Forwarded-For" : "192.168.0.1",
    "X-Forwarded-Host" : "apim.example.com",
    "X-Forwarded-Port" : "443",
    "X-Forwarded-Proto" : "https",
    "X-Forwarded-Scheme" : "https",
    "X-Scheme" : "https",
    "user-agent" : "curl/7.68.0",
    "accept" : "*/*",
    "X-Gravitee-Transaction-Id" : "954c6d74-952c-424c-8c6d-74952c824cb5",
    "X-Gravitee-Request-Id" : "954c6d74-952c-424c-8c6d-74952c824cb5",
    "Host" : "gravitee-echo-api.default.svc.cluster.local",
    "accept-encoding" : "deflate, gzip"
  }
}

So as you can see, we could successfully reach gravitee-echo-api from the Gravitee Gateway, with both services running behind Consul. The whole request-response cycle is shown below:

Enabling Intentions in Consul

So far our cluster is secured only using the default mTLS authentication. However, Consul also provides another layer of access control on top of that, called Intentions. Intentions define access control for services via Connect, and are used to control which services may establish connections or make requests. Intentions can be managed via the API, CLI, or the user interface (UI).

In this example, we will configure an Intention using the Consul UI, which should be accessible via the ingress host you configured when installing Consul (in our case, https://consul.example.com).

Consul UI already provides you with information about the services registered inside its service mesh, as shown in the picture below.

We will choose the gravitee-echo-api service. As you can see, at the moment all the other services are able to connect to it, because by default all services are able to communicate with each other inside the service mesh. If you don’t want this to be the case, you can change this default behaviour when installing Consul.

Firstly, we will go to the Intentions tab and create a new Intention to deny all traffic from all other services:
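For reference, the same deny-all rule can also be expressed declaratively with Consul’s ServiceIntentions CRD rather than through the UI. A rough sketch (the resource name is arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: gravitee-echo-api
spec:
  destination:
    name: gravitee-echo-api
  sources:
    - name: '*'
      action: deny
EOF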

If we try to query our API one more time, we should get the following error:

curl --insecure https://apim.example.com/gateway/lecho

< HTTP/2 502 
< date: Fri, 13 May 2022 10:33:08 GMT
< content-length: 0
< x-gravitee-transaction-id: eac8196d-5e01-4b3c-8819-6d5e01bb3c7b
< x-gravitee-request-id: eac8196d-5e01-4b3c-8819-6d5e01bb3c7b
< strict-transport-security: max-age=15724800; includeSubDomains

This time the Gravitee Gateway can’t reach the gravitee-echo-api back-end service, which is why we are getting a 502 error. Let’s now allow the traffic from graviteeio-apim3x-gateway to gravitee-echo-api:
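In the UI this is simply another Intention with an allow action. If you prefer to manage it declaratively, the equivalent ServiceIntentions resource would look roughly like this (a sketch; in Consul, the exact source name takes precedence over the '*' wildcard):

cat <<EOF | kubectl apply -f -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: gravitee-echo-api
spec:
  destination:
    name: gravitee-echo-api
  sources:
    - name: graviteeio-apim3x-gateway
      action: allow
    - name: '*'
      action: deny
EOF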

We should eventually have these settings:

And this time around we will get the expected response again:

curl --insecure https://apim.example.com/gateway/lecho

{
  "headers" : {
    "X-Request-ID" : "8d7ca9180fec48e46b5106b72cb2589c",
    "X-Real-IP" : "192.168.0.1",
    "X-Forwarded-For" : "192.168.0.1",
    "X-Forwarded-Host" : "apim.example.com",
    "X-Forwarded-Port" : "443",
    "X-Forwarded-Proto" : "https",
    "X-Forwarded-Scheme" : "https",
    "X-Scheme" : "https",
    "user-agent" : "curl/7.68.0",
    "accept" : "*/*",
    "X-Gravitee-Transaction-Id" : "b1e8a139-2cd2-4158-a8a1-392cd29158c5",
    "X-Gravitee-Request-Id" : "b1e8a139-2cd2-4158-a8a1-392cd29158c5",
    "Host" : "gravitee-echo-api.default.svc.cluster.local",
    "accept-encoding" : "deflate, gzip"
  }
}

Wrapping up

In this tutorial, we have shown you how to combine the full feature set of Gravitee’s API Management platform with Consul as a service mesh. We’ve shown you how to install and configure Consul and NGINX alongside Gravitee API Management for a given namespace, as well as how to enable Intentions for managing the traffic between registered services inside the service mesh. If you’ve got any questions, or to let us know how you get on, join us on the community forum.
