In this blog we will explore how a Consul implementation benefits users and network teams in hub and spoke, one of the most common cloud networking architecture paradigms. It’s common to hear teams across IT and the broader organization blame the network for outages, or for why they can’t deploy as fast as they would like. HashiCorp Consul, and even more so the recently released managed offering on Azure, can drastically improve the way services and users connect across a multitude of runtimes and multi-cloud environments.
HashiCorp Consul Service on Azure
From the HashiCorp blog post announcing the general availability of the HashiCorp Consul Service on Azure:
“In July 2020 HashiCorp announced that HashiCorp Consul Service (HCS) on Azure is now generally available. HCS on Azure enables a team to provision HashiCorp-managed Consul clusters directly through the Microsoft Azure portal. HCS on Azure clusters are preconfigured for production workloads, enabling a team to easily leverage Consul to secure the application networks within their Azure Kubernetes Service (AKS) or VM-based environments while offloading the operations to HashiCorp.
HCS enables easy access to a range of Consul use cases including service discovery, automated network configuration, and secure service-to-service communication with service mesh. Consul can be used as a platform to support modern application networking, progressive application delivery, zero-trust security, and service level observability.”
Azure Hub and Spoke Networks
As adoption of cloud infrastructure continues to expand, particularly in Azure, patterns are emerging for how large enterprises architect cloud networking. The hub and spoke topology, familiar from the early days of layer 2 network design, has made a resurgence and is now a very common deployment pattern. Azure solutions architects often recommend it to simplify how applications, shared services, and users connect in Azure.
The hub is a virtual network in Azure that acts as a central point of connectivity to your on-premises network. The spokes are virtual networks that peer with the hub and can be used to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN gateway connection.
The benefits of this topology include:
Cost savings by centralizing services that can be shared by multiple workloads, such as network virtual appliances (NVAs) and DNS servers, in a single location.
Separation of concerns between central IT (SecOps, InfraOps) and workloads (DevOps).
Typical uses for this architecture include:
Workloads deployed in different environments, such as development, testing, and production, that require shared services such as DNS, IDS, NTP, or AD DS. Shared services are placed in the hub virtual network, while each environment is deployed to a spoke to maintain isolation.
Workloads that do not require connectivity to each other, but require access to shared services.
Enterprises that require central control over security aspects, such as a firewall in the hub as a DMZ, and segregated management for the workloads in each spoke.
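As a rough illustration, the hub-to-spoke peering described above can be sketched in Terraform with the azurerm provider. This is a minimal example, not a complete deployment; the resource names and resource group are hypothetical, and the referenced vnet resources are assumed to be defined elsewhere.

```hcl
# Peer the hub vnet with a "frontend" spoke vnet (names are illustrative).
# Azure vnet peering must be created in both directions.
resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                         = "hub-to-frontend"
  resource_group_name          = "network-rg"
  virtual_network_name         = azurerm_virtual_network.hub.name
  remote_virtual_network_id    = azurerm_virtual_network.frontend_spoke.id
  allow_virtual_network_access = true
  allow_gateway_transit        = true # let the spoke use the hub's gateway
}

resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                         = "frontend-to-hub"
  resource_group_name          = "network-rg"
  virtual_network_name         = azurerm_virtual_network.frontend_spoke.name
  remote_virtual_network_id    = azurerm_virtual_network.hub.id
  allow_virtual_network_access = true
  use_remote_gateways          = true # route on-prem traffic via the hub gateway
}
```

The `allow_gateway_transit` / `use_remote_gateways` pair is what lets spoke workloads reach the on-premises network through the hub's ExpressRoute or VPN gateway without each spoke needing its own gateway.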
HCS in a Hub and Spoke Azure Network Architecture
HashiCorp Consul Service for Azure Hub and Spoke Network Architecture
In the above diagram we are looking at a sample “payments” application consisting of a web frontend deployed in Azure Kubernetes Service (AKS) and a payments service deployed on a VM, both of which sit in a spoke vnet called “frontend.” The backend services (API, Cache, and Currency) are deployed in a separate AKS cluster within its own spoke vnet called “backend.” User connectivity to the infrastructure all happens through a “shared services” vnet representing the hub. Within this vnet is a Virtual Network Gateway (VNG) providing routing between the spokes and to users in the enterprise WAN. (The application is exposed via an external load balancer within the frontend spoke, much the way a DMZ would behave in a traditional network.)
The challenge for network operators becomes how to securely enable spoke-to-spoke communication, or spoke to shared services within the hub, while maintaining the required isolation of each spoke. Typically this would have to be accomplished via firewall rules within the hub and routes on the VNG, leading to many of the same challenges faced with on-prem networking services: extended time to make changes to the network or implement new patterns.
HCS is deployed within this Azure subscription into a HashiCorp-managed resource group containing a vnet, which is then peered with the hub vnet. Connectivity between the services in this sample application is enabled by Consul Connect, a service mesh made up of an Envoy proxy data plane residing alongside VMs and within AKS pods as sidecars, with the HCS servers providing the control plane. The Envoy sidecar proxies are injected dynamically as services are provisioned, using a Kubernetes mutating webhook injector. This injector is deployed with the Consul Helm chart and configured to use the HCS-managed Consul servers. A Consul agent and a separate Envoy proxy are deployed alongside the VM-based “payments” service, using Terraform for automation. Consul Connect secures service-to-service communication with mTLS and authorizes it through rules called intentions.
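For the VM-based “payments” service, the registration that Terraform would template onto the VM is a Consul agent service definition. The following is a sketch under the assumption that the service listens on port 9090 (the name and port are illustrative):

```hcl
# payments.hcl -- a Consul service definition, registered with the local
# agent on the VM, e.g. via `consul services register payments.hcl`.
service {
  name = "payments"
  port = 9090 # illustrative port

  connect {
    # Declare a managed sidecar registration for this service. The Envoy
    # proxy itself is started separately, e.g. with
    # `consul connect envoy -sidecar-for payments`.
    sidecar_service {}
  }
}
```

Once registered, all inbound and outbound traffic for the service flows through the Envoy sidecar, which handles the mTLS handshake with the rest of the mesh.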
Because of the HCS implementation, the developers of the application can define their own Consul Connect intentions to enable communication between the services in their application across spokes. This simplifies deployment and testing without introducing any risk to the security posture of the hub and spoke network, since all traffic between the proxies is secured with mTLS. Further, because HCS is a managed service offering from HashiCorp, it removes the need to operate the servers and infrastructure required to run Consul.
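Since Terraform is already in the workflow, one way developers might express those intentions is with the Terraform Consul provider. A minimal sketch, assuming the service names from the diagram (the default-deny rule mirrors a zero-trust posture):

```hcl
# Allow the web frontend to call the payments service over the mesh.
resource "consul_intention" "web_to_payments" {
  source_name      = "web"
  destination_name = "payments"
  action           = "allow"
}

# Deny every other service from reaching payments.
resource "consul_intention" "deny_all_to_payments" {
  source_name      = "*"
  destination_name = "payments"
  action           = "deny"
}
```

Because intentions are enforced by the sidecar proxies rather than by firewall rules in the hub, a change like this ships with the application instead of waiting on a network change window.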
Please visit the HCS Learn Documentation to get started.