Continuous Configuration using AWS AppConfig for Microservices

Aritra Nag · Published in Towards AWS · Jan 10, 2023 · 9 min read

Introduction

One of the most important security recommendations and best practices is to externalize all configurable properties from the application code. Enabling feature flags to toggle functionality inside an application is equally important for improving the dynamic capabilities of a product. In microservices, we normally externalize properties either in a .properties file (for Java-based applications) or inject them while building the uber jar/tar during compilation and build tasks. In this blog, we will discuss how AWS AppConfig can be used to store these properties and update configurations at runtime without any downtime.

Configuration Changes and Feature Flags

One of the traditional ways of rolling out configurations is the "push-and-pray deployment", where the application with the changes is fully rolled out to the production environment and the results are inferred from how customers use the new changes. Previously, releasing a feature meant first building the code, then testing it in QA, and then getting ready to deploy it to production. With multiple developers working on the same code base, the code merge and deployment have to be aligned with the date and time the business wants the release. On release day, we push the new code to production and hope everything goes smoothly.

Here is an example of externalized configuration properties used in an application.
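The original gist is not reproduced here; a minimal sketch of what such an application.properties file could look like, with illustrative keys and values:

# application.properties -- illustrative keys, not the original gist
app.feature.dark-mode.enabled=false
app.profile.supported-regions=eu-west-1,us-east-1
app.http.timeout-ms=5000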

The configuration above can then be consumed in the application as follows.
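A hedged sketch of how these properties might be injected into a Spring Boot component (the property keys and field names are illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Illustrative only: binds the externalized properties shown above.
@Component
public class FeatureProperties {

    // Injected from application.properties (or overridden via environment variables)
    @Value("${app.feature.dark-mode.enabled:false}")
    private boolean darkModeEnabled;

    @Value("${app.http.timeout-ms:5000}")
    private int httpTimeoutMs;

    public boolean isDarkModeEnabled() {
        return darkModeEnabled;
    }

    public int getHttpTimeoutMs() {
        return httpTimeoutMs;
    }
}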

In the modern way of development, "feature flags" are one way to release new functionality: they separate the code from the configuration and provide a toggle that can be flipped based on customer and business needs. There are different use cases for and types of feature flags, depending on the need. As noted above, a release flag can be used to develop a new feature in our application: we develop the feature, deploy it while it is hidden behind a feature flag, and then gradually make it available to customers, all while monitoring the application's health.

Continuous Configuration

Nowadays, we roll out configurations gradually using methods such as A/B testing and canary deployments, where a small portion of traffic is exposed to the changes so we can verify that everything is working properly. We also add guardrails in the form of alarms on the payloads to check that these configuration changes do not introduce unintended errors.

AWS AppConfig is an AWS service used to centralize the management of configuration data. It gives us the ability to create, manage, and deploy configuration changes separately from code, so a configuration change no longer requires a full application deployment. It also provides rollback capabilities based on Amazon CloudWatch alarms, which can be preconfigured either on AWS managed metrics or on custom metrics generated by the application.

Benefits of AWS AppConfig

Some of the benefits of using AWS AppConfig are discussed below:

1. AWS AppConfig can be configured to roll back to a previous version of a configuration in response to Amazon CloudWatch alarms.

2. We can add validators to the configurations to avoid rolling out unintended values for the feature flags. A validator can be either an AWS Lambda function that is invoked when a configuration is updated or a JSON Schema-based validation.

Sample configuration using JSON:
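The exact configuration from the original gist is not shown here; a representative freeform JSON configuration could look like this (all keys and values are illustrative):

{
  "profileDetails": {
    "showPremiumFeatures": false,
    "supportedRegions": ["eu-west-1", "us-east-1"],
    "maxRecordsPerPage": 20
  }
}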

Here is an example of JSON schema validation of the above configuration:
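A matching JSON Schema validator for the configuration above might look like the following sketch:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["profileDetails"],
  "properties": {
    "profileDetails": {
      "type": "object",
      "required": ["showPremiumFeatures", "supportedRegions", "maxRecordsPerPage"],
      "properties": {
        "showPremiumFeatures": { "type": "boolean" },
        "supportedRegions": { "type": "array", "items": { "type": "string" } },
        "maxRecordsPerPage": { "type": "integer", "minimum": 1, "maximum": 100 }
      },
      "additionalProperties": false
    }
  }
}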

3. We can also create a deployment strategy for rolling out the changes to environments. AWS also provides recommended deployment strategies that can be chosen based on the frequency and importance of the configuration changes.

Components of AWS AppConfig

Usage of AWS AppConfig services starts with creating an application.

After creating the application, we create a configuration profile for it, which comes in multiple flavors:

We can either create "Feature Flags" to toggle features inside an application or set up full-blown configurations using "Freeform Configuration". We will demonstrate freeform configuration in the following demo.

There are multiple providers for creating the configuration profiles in the AWS AppConfig application.

We can store the configurations in AWS S3 buckets and retrieve them when the application invokes the AWS AppConfig service. We can also use AWS SSM Parameter Store parameters or Systems Manager documents to store the configurations. Later in this blog, we will run the demo using configurations hosted in AWS AppConfig itself.

Once we have stored the configurations and enabled validators (either JSON Schema-based or Lambda-based), we move on to creating the environment for deploying the configurations.

These environments are logical isolations used to set up monitoring alarms that roll back a deployment in case of failures or unintended changes being pushed. They also help keep track of the deployment frequency of any configuration.

Apart from all the above, there are multiple deployment strategies that can be attached to an environment and that control how changes are rolled out. We can also create custom deployment strategies with a custom bake time, deployment duration, and growth (step) percentage.

We will showcase the power of AWS AppConfig in the following sections by creating a demo using the services in a microservices landscape:

Using AWS AppConfig — Solution Architecture and Demo

1. Solution Architecture

In this simple demo, we will create a VPC with public and private subnets. In the private subnets, we will spin up an ECS cluster that ingests dynamic configuration changes from AWS AppConfig at runtime without any downtime. We will send payloads via an AWS ALB placed in the public subnets of the VPC.

2. Technology and AWS Services Used

I. A Java Spring Boot application that implements the microservice and exposes REST APIs

II. Docker for packaging the service so it can run on the AWS ECS Fargate container service

III. Infrastructure is deployed using AWS CDK

The ECS service is written in Spring Boot and consists of one API:

/profile/details → This API is used, via the AWS load balancer, to have the ECS service fetch the dynamic configuration values from AWS AppConfig. The service fetches the stored value in JSON format via the AWS AppConfig SDK and reconstructs it into a String before displaying it in the browser.
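A minimal sketch of what such a controller could look like. AppConfigService is a hypothetical name for a helper that wraps the AWS AppConfig SDK calls shown later in this post:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/profile")
public class ProfileController {

    // Hypothetical helper that wraps the AWS AppConfig SDK calls (see the snippet later in this post)
    private final AppConfigService appConfigService;

    public ProfileController(AppConfigService appConfigService) {
        this.appConfigService = appConfigService;
    }

    // Returns the latest configuration fetched from AWS AppConfig as a String
    @GetMapping("/details")
    public String profileDetails() {
        return appConfigService.getLatestConfigurationAsString();
    }
}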

Moving on to the AWS Services used with AWS CDK to create the infrastructure:

a. AWS VPC: We create a multi-AZ VPC with a CIDR range of 10.215.0.0/16 to host the microservice and the load balancer.

b. AWS ECS: We create an ECS cluster to deploy the microservice into the private subnets of the VPC.

c. AWS Fargate Service: We create the ECS service with the required roles and permissions so that it can fetch the required values from AWS AppConfig and send proper logs to Amazon CloudWatch. Here is the public gist for the Fargate service; a hedged sketch of this wiring follows below.
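Since the gist itself is not embedded here, the following is a hedged Java CDK sketch of the VPC, cluster, and Fargate service described in points a-c (construct IDs, image path, and sizing are illustrative):

import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.ec2.IpAddresses;
import software.amazon.awscdk.services.ec2.Vpc;
import software.amazon.awscdk.services.ecs.Cluster;
import software.amazon.awscdk.services.ecs.ContainerImage;
import software.amazon.awscdk.services.ecs.patterns.ApplicationLoadBalancedFargateService;
import software.amazon.awscdk.services.ecs.patterns.ApplicationLoadBalancedTaskImageOptions;
import software.constructs.Construct;

public class AppConfigDemoStack extends Stack {

    public AppConfigDemoStack(final Construct scope, final String id) {
        super(scope, id);

        // Multi-AZ VPC with the CIDR range mentioned above (public and private subnets by default)
        Vpc vpc = Vpc.Builder.create(this, "DemoVpc")
                .ipAddresses(IpAddresses.cidr("10.215.0.0/16"))
                .maxAzs(2)
                .build();

        // ECS cluster hosting the microservice in the private subnets
        Cluster cluster = Cluster.Builder.create(this, "DemoCluster")
                .vpc(vpc)
                .build();

        // Fargate service fronted by an internet-facing ALB in the public subnets
        ApplicationLoadBalancedFargateService service =
                ApplicationLoadBalancedFargateService.Builder.create(this, "DemoService")
                        .cluster(cluster)
                        .cpu(512)
                        .memoryLimitMiB(1024)
                        .desiredCount(1)
                        .publicLoadBalancer(true)
                        .taskImageOptions(ApplicationLoadBalancedTaskImageOptions.builder()
                                .image(ContainerImage.fromAsset("./microservice")) // local Dockerfile of the Spring Boot service
                                .containerPort(8080)
                                .build())
                        .build();
    }
}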

The most important permissions to fetch the configurations from AWS AppConfig are as follows:

"appconfig:GetEnvironment",
"appconfig:GetHostedConfigurationVersion",
"appconfig:GetConfiguration",
"appconfig:GetApplication",
"appconfig:StartConfigurationSession",
"appconfig:GetLatestConfiguration",
"appconfig:GetConfigurationProfile"

d. AWS AppConfig: To implement the AWS AppConfig service in the solution, we use the L1 constructs of the current CDK version. As discussed in the previous section, we have built the IaC in the same order, starting from creating an application through to defining a deployment strategy. Here is the link to the public gist for the fully constructed configuration setup.

The most important part of this construct is creating a customized profile with schema validation and a deployment strategy.
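A hedged sketch of those L1 constructs in Java CDK, placed inside the stack constructor from the earlier sketch (names, schema, content, and timings are illustrative rather than the exact values from the gist):

import java.util.List;
import software.amazon.awscdk.services.appconfig.CfnApplication;
import software.amazon.awscdk.services.appconfig.CfnConfigurationProfile;
import software.amazon.awscdk.services.appconfig.CfnDeploymentStrategy;
import software.amazon.awscdk.services.appconfig.CfnEnvironment;
import software.amazon.awscdk.services.appconfig.CfnHostedConfigurationVersion;

// 1. The AppConfig application
CfnApplication application = CfnApplication.Builder.create(this, "DemoAppConfigApp")
        .name("profile-service")
        .build();

// 2. A freeform profile hosted in AppConfig itself, guarded by a JSON Schema validator
CfnConfigurationProfile profile = CfnConfigurationProfile.Builder.create(this, "DemoProfile")
        .applicationId(application.getRef())
        .name("profile-details")
        .locationUri("hosted")
        .validators(List.of(CfnConfigurationProfile.ValidatorsProperty.builder()
                .type("JSON_SCHEMA")
                .content("{\"type\":\"object\",\"required\":[\"profileDetails\"]}") // illustrative schema
                .build()))
        .build();

// 3. The initial configuration version stored in AppConfig
CfnHostedConfigurationVersion initialVersion =
        CfnHostedConfigurationVersion.Builder.create(this, "DemoVersion")
                .applicationId(application.getRef())
                .configurationProfileId(profile.getRef())
                .contentType("application/json")
                .content("{\"profileDetails\":{\"showPremiumFeatures\":false}}") // illustrative content
                .build();

// 4. The environment the configuration is deployed into
CfnEnvironment environment = CfnEnvironment.Builder.create(this, "DemoEnvironment")
        .applicationId(application.getRef())
        .name("production")
        .build();

// 5. A custom deployment strategy: roll out in 25% steps over 10 minutes, then bake for 10 minutes
CfnDeploymentStrategy strategy = CfnDeploymentStrategy.Builder.create(this, "DemoStrategy")
        .name("linear-10-minutes")
        .deploymentDurationInMinutes(10)
        .finalBakeTimeInMinutes(10)
        .growthFactor(25)
        .replicateTo("NONE")
        .build();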

We will use the AWS CDK CLI to deploy the CDK constructs (see the CDK CLI documentation in the references for the required tooling).

In the microservice, we need to include the following dependencies in pom.xml to start using the AWS AppConfig SDK.
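Assuming the AWS SDK for Java v2, the relevant dependency looks roughly like this (the version number is illustrative; using the SDK BOM is equally valid):

<!-- AWS AppConfig Data client from the AWS SDK for Java v2 -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>appconfigdata</artifactId>
    <version>2.20.0</version> <!-- illustrative version; pick a current 2.x release -->
</dependency>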

More details about the AWS AppConfig SDK can be found in the link. In the demo we use the methods startConfigurationSession and getLatestConfiguration to fetch the latest configuration based on the application, profile, and environment identifiers.

Here is the sample snippet used in the demo to fetch the configuration values.
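The gist is not embedded here, but with the AWS SDK for Java v2 the flow looks roughly like this (the application, environment, and profile identifiers are illustrative):

import software.amazon.awssdk.services.appconfigdata.AppConfigDataClient;
import software.amazon.awssdk.services.appconfigdata.model.GetLatestConfigurationRequest;
import software.amazon.awssdk.services.appconfigdata.model.GetLatestConfigurationResponse;
import software.amazon.awssdk.services.appconfigdata.model.StartConfigurationSessionRequest;

public class AppConfigFetcher {

    private final AppConfigDataClient client = AppConfigDataClient.create();

    public String fetchLatestConfiguration() {
        // 1. Start a configuration session for the application/environment/profile
        StartConfigurationSessionRequest sessionRequest = StartConfigurationSessionRequest.builder()
                .applicationIdentifier("profile-service")            // illustrative identifiers
                .environmentIdentifier("production")
                .configurationProfileIdentifier("profile-details")
                .build();
        String token = client.startConfigurationSession(sessionRequest).initialConfigurationToken();

        // 2. Fetch the latest configuration with the session token
        GetLatestConfigurationResponse response = client.getLatestConfiguration(
                GetLatestConfigurationRequest.builder()
                        .configurationToken(token)
                        .build());

        // The body is empty when the caller already has the latest version; in the demo the
        // JSON payload is converted to a String before being returned to the browser.
        return response.configuration().asUtf8String();
    }
}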

Finally, we are done with the setup. The application is deployed using the following commands:

cdk synth
cdk deploy

CDK deploys both the application and the AWS AppConfig resources as CloudFormation stacks. Once the stacks have finished deploying, we receive the AWS Application Load Balancer endpoint.

Using the endpoint in the browser, we get the following response:

This value is based on the initial setup we created in the AWS AppConfig service via the AWS CDK deployment.

Now, we will make a change to the configuration, either in the console or via IaC.

Once the change is made, we deploy it using the previous deployment strategy, or a new one, in the same environment.

As we can see, AppConfig has picked up the change and started deploying the updated values. The deployment progresses by moving its state from "Deploying" to "Baking" and finally to "Completed".

Without redeploying the microservice, we can see the changes reflected in the browser.

The advantage of using validators is that they ensure we do not introduce breaking changes into the configuration. Here is an example of the error received when we try to apply a change that does not match the validation schema.

Apart from the validators, we can also add Amazon CloudWatch alarms on metrics like "HTTPCode_ELB_5XX_Count" and "ActiveConnectionCount" to check that the configuration changes do not break any current features or introduce unintended errors.
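As a sketch, such an alarm could be defined in the CDK stack constructor from the earlier sketches and its ARN registered as a monitor on the AppConfig environment (the load balancer dimension value below is illustrative):

import java.util.Map;
import software.amazon.awscdk.Duration;
import software.amazon.awscdk.services.cloudwatch.Alarm;
import software.amazon.awscdk.services.cloudwatch.Metric;

// 5XX errors returned by the ALB after a configuration deployment
Metric elb5xx = Metric.Builder.create()
        .namespace("AWS/ApplicationELB")
        .metricName("HTTPCode_ELB_5XX_Count")
        .dimensionsMap(Map.of("LoadBalancer", "app/demo-alb/1234567890abcdef")) // illustrative dimension
        .period(Duration.minutes(1))
        .statistic("Sum")
        .build();

Alarm configRollbackAlarm = Alarm.Builder.create(this, "ConfigRollbackAlarm")
        .metric(elb5xx)
        .threshold(5)
        .evaluationPeriods(1)
        .build();

// The alarm ARN (configRollbackAlarm.getAlarmArn()) can then be attached as a monitor on the
// AppConfig environment so that a failing configuration deployment is rolled back automatically.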

AWS AppConfig Pricing

Assume we have one application configuration that updates three times a day. Also assume that we have 5,000 targets in our fleet requesting configuration data via the API every 2 minutes to check whether an updated configuration is available. Each time an updated configuration is available, AWS AppConfig sends it in response to the request. Over a month, our targets will receive a total of 450,000 (updated) configurations, and the bill would be as follows:

Cost of configuration requests = 1 (configuration) * 5,000 (targets) * 0.5 (calls per minute) * 60 (minutes) * 24 (hours) * 30 (days) * $0.0000002 (price per request)
= $21.6

Cost of configurations received = 1 (configuration) * 5,000 (targets) * 3 (updates per day) * 30 (days) * $0.0008 (price per configuration received)
= $360

Total monthly cost = $381.6

Difference between AWS AppConfig and AWS Parameter Store

Both services can store configuration data, but they solve different problems. AWS Systems Manager Parameter Store is primarily a hierarchical key-value store for parameters and secrets that applications read directly, whereas AWS AppConfig layers validation, environments, controlled deployment strategies, and automatic rollback on CloudWatch alarms on top of the stored data. As mentioned in the components section, Parameter Store can even act as one of the configuration sources for AWS AppConfig.

Conclusion

Finally, we have come to the end of the demo and blog, where we tried to showcase the power of AWS AppConfig for enabling continuous configuration within an application: changing configs, enabling or disabling features behind a flag, or incorporating allowlist/blocklist capabilities, all without any downtime for the changes.

References:

1. https://aws.amazon.com/blogs/mt/application-configuration-deployment-to-container-workloads-using-aws-appconfig/
2. https://mng.workshop.aws/appconfig.html
3. https://aws.amazon.com/blogs/mt/introducing-aws-appconfig-python-helper-library/
4. https://docs.aws.amazon.com/cdk/v2/guide/cli.html
