Logging and Searching Operations Using .NET 6 and AWS OpenSearch Service → Hands-On → Session 1

Emreiissever
DFDS Development Center Istanbul
31 min read · Dec 14, 2022

Hello, everyone. In this article I’ll walk through a logging project using AWS OpenSearch and a .NET 6 application. In addition, I’ll use Azure DevOps to create a repository, run build and release pipelines, and deploy my AWS services. This first part covers only logging; in the second part, I will explain the search processes.

Introducing the Azure DevOps system

Before I begin, I should mention that the services and environments used in this article incur some fees, though not large ones. For that reason, it is best to do this project through a company account, or as an individual only if you are comfortable with the cost. I will point out where charges apply as we go.

Open an account in Azure DevOps

I’ll help you choose which account to create on Azure DevOps before I begin the project’s coding.

https://azure.microsoft.com/en-us/pricing/details/devops/

You can easily manage all the Azure DevOps actions we’ll perform if you select “basic plan” under “user licenses” on this website.

Individuals who want to advance the project may also require a “basic + test plans” account, of course.

For now, I will use the Azure DevOps organization to which my employer has granted me full access for this project.

You have the option to register with your personal or business email after selecting the start free button.

Then, using the project area of the organization you just created, you will create a project for yourself.

This screen will pop up once you enter the project you just made.

After that, you may manage all of your Azure DevOps operations from this place.

I will do my work in a repository that I will create inside a project that already exists in the company’s Azure DevOps organization.

I will also carry out all of these operations with my business email.

Creating a new repo on the Azure DevOps

The “New Repository” will be opened by selecting the “Repos” option in the upper left corner of the screen.

The repository was opened as “Elasticsearch-.NET-Project”.

Once the repository has been opened, you need to use “git”, which should be installed on your computer, to clone it to the desired location.

After pressing the “clone” button above

Copy the HTTPS URL on the screen above

After creating an empty directory, right-click inside it and select “Git Bash Here”, as shown below.

Open a project in Visual Studio

Then, in the Git Bash prompt, type “git clone <URL>” with the URL copied from Azure DevOps and press Enter. As a result, the repository is cloned to your local machine.

After cloning the repository, we must create a project in Visual Studio. We will select “Create a new project” in Visual Studio and choose a “Class Library” project. To keep the project name distinct from the solution name, I named the project “LoggingAndSearchingProject” and the solution “LoggingAndSearchProject”. We will create this solution in the repository we just pulled, setting the repository path as the location.

I’ll now show you the directory containing the entire project, and then we’ll go through each file separately, as shown below.

A deployment folder separate from “LoggingAndSearchingProject” will also be created; the project and this deployment folder will sit at the same level of the hierarchy. From here, we will move step by step, from the very beginning to the very end, explaining as we go.

I didn’t need to build any domain models for this project; a data transfer object will be enough for us. As the data transfer object, the “LoggingDto” class will be created in the “Dtos” folder. The “LoggingDto” class will have the LogLevel, ExceptionDetails, TimeStamp, Message, and Username properties.

The Dtos class is visible in the image below.
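
Since the original screenshot is not reproduced here, a minimal sketch of the DTO follows. The property names come from the description above; the property types and the namespace are my assumptions.

```csharp
using System;

namespace LoggingAndSearchingProject.Dtos
{
    // Data transfer object carrying a single log entry to Elasticsearch.
    public class LoggingDto
    {
        public string LogLevel { get; set; }
        public string ExceptionDetails { get; set; }
        public DateTime TimeStamp { get; set; }
        public string Message { get; set; }
        public string Username { get; set; }
    }
}
```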

In the second phase, we must handle the Elasticsearch configuration so that we can construct an Elasticsearch client. In Elasticsearch, a client is required for every operation; the client is simply the object that connects to the Elasticsearch server. Every operation targets an index. By analogy, an index is a separate database, a type is a table, and each record placed in an index is a document, i.e. a new row.

The NEST library is used to create the ElasticClient, which will be created just once for the lifetime of the application and reused throughout. I suggest reading this page if you want more in-depth information on the NEST client.

Due to this, we will construct Elasticsearch services after writing Elasticsearch configuration procedures.

Now, after creating a folder named “Configuration”, we will create a class called “ElasticClientConfiguration” in this folder.

In this static class, we will first create a method called “ConfigureElasticClient” that returns “IServiceCollection”. It will take “services” and “configuration” parameters. Later, this method will be called from the configuration code in Program.cs.

Inside this method, four crucial pieces are needed to connect with the Elasticsearch client: the Uri, the pool, the config, and the client.

The “SingleNodeConnectionPool” class from the “Elasticsearch.Net” package will be used first, and we will pass it a new Uri object as a parameter. To build this Uri, we will use the value of the “Host” key under the “ElasticConnectionSettings” section of the “appsettings.json” files. The domain endpoint URL given to us after the Elasticsearch domain is created will be the value of this Host URL. This gives us our pool object.

Next, we’ll build the configuration object. The “Nest” package provides the “ConnectionSettings” class that we will use here; its first parameter will be our pool object. With the “AwsHttpConnection” class in “Elasticsearch.Net.Aws”, we will also specify the AWS region we’ll be using. At the same time, we will throw an exception with a relevant error message if there is a problem with the configured host or region name. With that, our config object is created.

Third, we will create the client object. We will pass our newly created config object to the constructor of the “ElasticClient” class in the NEST package and assign the result to an object called “client”. Therefore, once this “ConfigureElasticClient” method has run, we will have an Elasticsearch client ready for use.

The “ElasticClientConfiguration” code file is shown in the image below.
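
As the screenshot is not reproduced here, below is a rough sketch of what such a configuration class could look like, written as an extension method. The configuration key names (the “Region” key in particular) and the exact “AwsHttpConnection” constructor overload are assumptions that depend on your appsettings layout and the version of “Elasticsearch.Net.Aws” you use.

```csharp
using System;
using Elasticsearch.Net;
using Elasticsearch.Net.Aws;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Nest;

namespace LoggingAndSearchingProject.Configuration
{
    public static class ElasticClientConfiguration
    {
        // Registers a single IElasticClient for the lifetime of the application.
        public static IServiceCollection ConfigureElasticClient(
            this IServiceCollection services, IConfiguration configuration)
        {
            // "ElasticConnectionSettings:Host" holds the OpenSearch/Elasticsearch
            // domain endpoint URL; the "Region" key is an assumed addition.
            var host = configuration["ElasticConnectionSettings:Host"]
                       ?? throw new ArgumentException("Elasticsearch host is not configured.");
            var region = configuration["ElasticConnectionSettings:Region"]
                         ?? throw new ArgumentException("AWS region is not configured.");

            var pool = new SingleNodeConnectionPool(new Uri(host));

            // AwsHttpConnection signs requests with AWS credentials; the exact
            // constructor overload depends on the Elasticsearch.Net.Aws version.
            var config = new ConnectionSettings(pool, new AwsHttpConnection(region));

            var client = new ElasticClient(config);
            services.AddSingleton<IElasticClient>(client);

            return services;
        }
    }
}
```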

We will now create the service needed for transmitting the logs using this ElasticSearch client as seen below.

We will start by creating an interface called “IElasticSearchService”. We’ll add a method called “PostLog” to this interface that returns a Task and has two input parameters. When a method returns a Task, it can be executed asynchronously: a Task is an object that represents some work to be done, and it gives us a result once the job or operation is finished.

The first parameter will be a string-type “index” (we will receive an index name with the request). We declare that the “loggingPostRequest” post request with the “LoggingDto” data type will be sent here as the second parameter.

In the second step, the interface we just created will be implemented by a class called “ElasticSearchService”. We’ll add a private field called “_client” whose data type is “IElasticClient” from the NEST package; its value will be set in the constructor. We will then implement our asynchronous “PostLog” method with the Task return type (which delivers its result asynchronously) and the parameters defined in the interface.

Then, using the post request content (the JSON body) and the index name, we will asynchronously index the document through this client; the index is created automatically if it does not yet exist.
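
A minimal sketch of the interface and its implementation, following the description above; the namespace and the shape of the NEST “IndexAsync” call are assumptions rather than the author’s exact code.

```csharp
using System.Threading.Tasks;
using LoggingAndSearchingProject.Dtos;
using Nest;

namespace LoggingAndSearchingProject.Services
{
    public interface IElasticSearchService
    {
        // Sends a single log entry to the given index.
        Task PostLog(string index, LoggingDto loggingPostRequest);
    }

    public class ElasticSearchService : IElasticSearchService
    {
        private readonly IElasticClient _client;

        public ElasticSearchService(IElasticClient client)
        {
            _client = client;
        }

        public async Task PostLog(string index, LoggingDto loggingPostRequest)
        {
            // IndexAsync serializes the DTO as JSON and stores it as a document;
            // the index is created automatically if it does not yet exist.
            await _client.IndexAsync(loggingPostRequest, i => i.Index(index));
        }
    }
}
```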

Next, we will create the “Program.cs” and “FunctionHandler.cs” files. The “Program.cs” class will be used to perform configuration tasks, including configuring services and creating logging procedures.

Once “Program.cs” is created, the “Kralizek.Lambda.Template” package needs to be installed first, as it is required for this project. This package, which formerly only supported .NET Core 3.1, is now .NET 6 compatible. The class in “Program.cs” will inherit from the abstract “RequestResponseFunction” class in this package. The “Amazon.Lambda.APIGatewayEvents” package will then be added to the project via NuGet. We will use the API Gateway event types for the “TInput” and “TOutput” type parameters required by this abstract class (TInput = APIGatewayProxyRequest, TOutput = APIGatewayProxyResponse).

Second, we’ll override three protected methods inside this class. Before I discuss them, I’d like to give a quick overview of the protected access modifier: members marked protected are accessible only from the base class itself and from classes derived from it. An overridden method provides a new implementation of a method inherited from the base class.

Now we can talk about these 3 protected methods.

In the first protected method, we will build the application’s settings, which should be fairly standard.

In the second protected method, the services used by the application will be configured.

In the third protected method, we will configure the logging tools needed to keep logs. The AWS service we’ll use for this is CloudWatch, which will also let us view our logs in the cloud.

You can view the code of “Program.cs” below.
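
Because the screenshot is not reproduced here, the sketch below illustrates the three protected overrides described above. The exact base-class method signatures, the “Configuration” property, and the “RegisterHandler” call depend on the version of “Kralizek.Lambda.Template” in use, and the logging call assumes the “Amazon.Lambda.Logging.AspNetCore” package, so treat this as an outline rather than the author’s exact file.

```csharp
using Amazon.Lambda.APIGatewayEvents;
using Kralizek.Lambda;
using LoggingAndSearchingProject.Configuration;
using LoggingAndSearchingProject.Services;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace LoggingAndSearchingProject
{
    // TInput = APIGatewayProxyRequest, TOutput = APIGatewayProxyResponse.
    public class Program : RequestResponseFunction<APIGatewayProxyRequest, APIGatewayProxyResponse>
    {
        // 1) Application settings: load appsettings.json and environment variables.
        protected override void Configure(IConfigurationBuilder builder)
        {
            builder.AddJsonFile("appsettings.json", optional: true)
                   .AddEnvironmentVariables();
        }

        // 2) Service registration: the Elasticsearch client, our service, and the handler.
        protected override void ConfigureServices(IServiceCollection services, IExecutionEnvironment executionEnvironment)
        {
            // "Configuration" is assumed to be exposed by the base class after Configure() runs.
            services.ConfigureElasticClient(Configuration);
            services.AddScoped<IElasticSearchService, ElasticSearchService>();

            // Registers FunctionHandler as the request/response handler for this Lambda.
            RegisterHandler<FunctionHandler>(services);
        }

        // 3) Logging: the Lambda logger writes to CloudWatch Logs.
        protected override void ConfigureLogging(ILoggingBuilder logging, IExecutionEnvironment executionEnvironment)
        {
            logging.SetMinimumLevel(LogLevel.Information);
            logging.AddLambdaLogger(); // from Amazon.Lambda.Logging.AspNetCore (assumed)
        }
    }
}
```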

Let’s look at the “FunctionHandler” class now. We will create a field called “_logger” of type “ILogger” from the “Microsoft.Extensions.Logging” package.

The “ILogger” instance is supplied to us through dependency injection.

There will also be “IElasticSearchService” and “IMapper” fields. To represent a collection of keys and values, we will also create a field of the Dictionary class from the “System.Collections.Generic” namespace.

The “HandleAsync” function will then be created to analyze the request after the fields have been added and their parameters have been assigned in the constructor method.

The “HandleAsync” method takes the incoming “APIGatewayProxyRequest” and the “ILambdaContext” interface from the “Amazon.Lambda.Core” package as parameters. ILambdaContext is an object that gives you access to useful information about the Lambda execution environment.

Here, the return type of the “HandleAsync” method must be a Task containing an “APIGatewayProxyResponse”.

We first open a try-catch block; when errors occur, they will be caught in the catch block.

For instance, if the request body is empty or null, the first if condition will produce an error.

In this FunctionHandler class, we will now open a private function called “HandleRequest.”

With “await”, we’ll wait for the execution of this function.

If everything goes as planned, we will send back an “APIGatewayProxyResponse”.

If an issue occurs, it will be caught by the catch block and the appropriate error response will be returned. The error will also be visible in the local logs.

We will use CloudWatch to monitor these issues in the cloud.

Let’s look at our “HandleRequest” method right now. When a post request comes in, we will examine it and send it to the appropriate ElasticSearch service.

The body of the “APIGatewayProxyRequest” sent to the “HandleRequest” method will first be deserialized into the “LoggingDto” type.

Following that, we will receive the path parameter that was included in the request.

The index name parameter that will be provided to us in this path parameter will be extracted.

Then, we’ll create an if condition.

In this case, if the deserialized request data is given as null or empty, we shall issue an error.

If we pass this check, the “PostLog” method of “_elasticSearchService” is called with “indexName” as the first argument and the request data as the second.

And using “await,” we will wait for this asynchronous operation.

If this operation completes without an exception, our function has succeeded.

You can review the code for “FunctionHandler.cs”, including the part that throws the “BadRequestException”, below.
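
A condensed sketch of the handler described above. The “IRequestResponseHandler” interface shape comes from “Kralizek.Lambda.Template”; the contents of the header dictionary, the use of “System.Text.Json”, and the plain “ArgumentException” standing in for the article’s “BadRequestException” are my simplifications, and the “IMapper” field is omitted since its usage is not shown.

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Kralizek.Lambda;
using LoggingAndSearchingProject.Dtos;
using LoggingAndSearchingProject.Services;
using Microsoft.Extensions.Logging;

namespace LoggingAndSearchingProject
{
    public class FunctionHandler : IRequestResponseHandler<APIGatewayProxyRequest, APIGatewayProxyResponse>
    {
        private readonly ILogger<FunctionHandler> _logger;
        private readonly IElasticSearchService _elasticSearchService;

        // Default headers returned with every response (contents assumed).
        private readonly Dictionary<string, string> _headers = new()
        {
            ["Access-Control-Allow-Origin"] = "*"
        };

        public FunctionHandler(ILogger<FunctionHandler> logger, IElasticSearchService elasticSearchService)
        {
            _logger = logger;
            _elasticSearchService = elasticSearchService;
        }

        public async Task<APIGatewayProxyResponse> HandleAsync(APIGatewayProxyRequest input, ILambdaContext context)
        {
            try
            {
                if (string.IsNullOrEmpty(input?.Body))
                {
                    throw new ArgumentException("Request body cannot be null or empty.");
                }

                return await HandleRequest(input);
            }
            catch (Exception ex)
            {
                // The error is written to the local logs (and to CloudWatch when deployed).
                _logger.LogError(ex, "Failed to handle request.");

                return new APIGatewayProxyResponse
                {
                    StatusCode = (int)HttpStatusCode.BadRequest,
                    Headers = _headers,
                    Body = ex.Message
                };
            }
        }

        private async Task<APIGatewayProxyResponse> HandleRequest(APIGatewayProxyRequest request)
        {
            // Deserialize the JSON body into the DTO.
            var loggingDto = JsonSerializer.Deserialize<LoggingDto>(request.Body);

            // The {indexName} path parameter defined in API Gateway.
            request.PathParameters.TryGetValue("indexName", out var indexName);

            if (loggingDto is null || string.IsNullOrEmpty(indexName))
            {
                throw new ArgumentException("Request data or index name is missing.");
            }

            await _elasticSearchService.PostLog(indexName, loggingDto);

            return new APIGatewayProxyResponse
            {
                StatusCode = (int)HttpStatusCode.OK,
                Headers = _headers,
                Body = "Log has been saved."
            };
        }
    }
}
```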

Deployment on Azure DevOps

To develop this project on Azure DevOps, we will now create the “azure-pipeline.yml” file.

Then we will write some Terraform code using Terraform, the open-source infrastructure-as-code tool that lets us define and modify AWS services in code.

Following that, we will write the Terragrunt codes.

You can specify your Terraform code only once with Terragrunt, and then promote a versioned, immutable “artifact” of that exact same code from one environment to another.

Before creating the deployment folder inside the “LoggingAndSearchingProject” we have built, we will create another deployment folder at the same level as the project folder.

This deployment folder will be referred to as the “Root Deployment”.

We will build the Pipeline, Terraform, and Terragrunt folders in this deployment folder after it has been created as seen below.

Because the ElasticSearch and Cognito services will act as external services, we deploy them from here, since we want the ElasticSearch domain and its users to be created only once.

For instance, it would not be efficient to recreate or modify ElasticSearch on every Azure deployment triggered by a code change in the project.

Likewise, when we need to delete the project’s deployment, the deployment code for ElasticSearch would be deleted with it, causing an unnecessary disruption in service.

Because that is an outcome we do not want, we chose to keep these resources outside of the project.

Inside the Pipeline folder, we will create the “azure-pipeline.yml” file.

The codes contained in this file will be used in the pipeline we design on Azure DevOps.

I’ll explain what this code does, but since there is a lot of tool-specific detail, I’ll ask you to look over the documentation and do some research of your own for the rest.

The “azure-pipeline.yml” codes I created are displayed below. Our pipeline can be set up into jobs. There is at least one job for each pipeline.

A job is made up of several steps that are executed one after the other. The smallest unit of work that can be scheduled to run is a job, to put it another way.

Each step performs a specific task, and each task has its own set of inputs.

Terraform and Terragrunt Codes (infrastructure as code software tool)

We’ll now discuss the Terraform codes. This link will provide you with instructions on writing Terraform codes.

We will start the Terraform code with the “main.tf” file. In “main.tf”, AWS is declared as the provider first, and you must specify the region for this provider. We will use “var.aws_region” to read the region from the “variables.tf” file we will create shortly; variables are defined in the file named “variables.tf”.

Then the “terraform” block required by the configuration will be opened. A “backend” block is written so that the Terraform state is kept in an S3 bucket on AWS.

The required Terraform version and providers will also be declared there.

Some values that we will use across several Terraform files will be defined in the “locals” block.

name_suffix = the environment name appended, with a dash, to the resources that use it (such as -non-prod or -prod).

application_name = the name of the application, which we will specify here.

constructed_name = the combination of the resource environment and the application name.

clients = used to iterate over the clients defined in “var.clients” one by one.

common_tags = tags recording who created the project, what was created, and the application name.

The variables that “main.tf” needs are defined in “variables.tf” as shown.

Now we will write the “resource.cognito.tf” file. First, we need to create the “aws_cognito_resource_server” resource. You can access the information on how to fill in the required or optional information in this document. For detailed information;

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_resource_server

I won’t get too far into details. Send me a message if you’d like additional information or have any questions. Next, we will create a simple user pool client with aws_cognito_user_pool_client. For detailed information;

As another resource, we will create a user pool with “aws_cognito_user_pool” and give it the required pool name. For detailed information;

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user_pool

Then we will create a domain with the “aws_cognito_user_pool_domain” resource. For detailed information;
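
Since the screenshots are not reproduced here, the sketch below shows roughly how these four Cognito resources fit together; the resource labels, the scope definition, and the names built from “local.constructed_name” are illustrative assumptions, not the article’s exact values.

```hcl
# Sketch of resource.cognito.tf (names and scopes are placeholders).

resource "aws_cognito_user_pool" "pool" {
  name = "${local.constructed_name}-user-pool"
  tags = local.common_tags
}

resource "aws_cognito_user_pool_client" "client" {
  name         = "${local.constructed_name}-client"
  user_pool_id = aws_cognito_user_pool.pool.id
}

resource "aws_cognito_user_pool_domain" "domain" {
  domain       = "${local.constructed_name}-auth"
  user_pool_id = aws_cognito_user_pool.pool.id
}

resource "aws_cognito_resource_server" "resource_server" {
  identifier   = "logging-and-searching"
  name         = "${local.constructed_name}-resource-server"
  user_pool_id = aws_cognito_user_pool.pool.id

  scope {
    scope_name        = "write"
    scope_description = "Write access for the logging API"
  }
}
```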

Let’s look at the “resource.elasticsearch.tf” file right now.

For the ElasticSearch domain, we will open an “aws_cognito_identity_pool” and set an “identity_pool_name” for it. For detailed information;

We will write the IAM roles and policies required for ElasticSearch into “resource.elasticsearch.tf”. To start, we’ll separate authenticated and unauthenticated users into two distinct “aws_iam_role” resources.

An IAM role resource for ElasticSearch Cognito will be created concurrently. For detailed information;

After that, we’ll create an IAM policy document data for the ElasticSearchCognito. For detailed information;

Data Resource → Data sources are used to bring information from the outside world into the configuration; they should be regarded as read-only.

Resource → The most significant component of the Terraform language is resources. Each resource block describes one or more infrastructure items, including computing instances, virtual networks, and higher-level elements like DNS records.

We will write the “aws_iam_role_policy_attachment” resource for the ElasticSearch Cognito role.

For Authenticated and unauthenticated we will write “aws_iam_policy_document”.

“es:ESHttpGet”, “es:ESHttpPost”, and “es:ESHttpPut” have been added to the actions in this policy document written for authenticated users. For detailed information;

We will create an IAM role policy resource for authenticated and unauthenticated users on Cognito. For detailed information;

We will write a resource for attaching identity pool roles. For detailed information;

We’ll also create the “assume_policy” data resource for the ElasticSearch domain. Writing the correct actions for the “esdomain” is essential here.

Now we will write the resource code required to create the ElasticSearch domain. The code in this part describes the core settings that must be included when building an ElasticSearch domain. Instead of classic ElasticSearch, we’ll use OpenSearch version 1.3. The “instance_type” will be “t3.small.elasticsearch” (other instance types can be quite costly, but they improve speed, performance, and storage). We’ll set the “instance_count” to 2 by default. For now, we will set the EBS volume size to 20 GB by default, but you are free to use more if you wish. The “node_to_node_encryption” and “encrypt_at_rest” options will be enabled.

The “automated_snapshot_start_hour” setting will be set to 23, so the daily automated snapshot is taken at hour 23 (11 PM). “Cognito_options” will be enabled, and the previously created Cognito user pool information will be included here, because the clients in this Cognito pool will be granted read and write access to ElasticSearch. For detailed information;
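
Below is a rough sketch of the domain resource with the settings mentioned above; the resource labels, the variable names, and the references to the Cognito user pool, identity pool, and Cognito IAM role are placeholders standing in for the resources described earlier.

```hcl
# Sketch of the ElasticSearch (OpenSearch) domain resource.

resource "aws_elasticsearch_domain" "esdomain" {
  domain_name           = "${local.application_name}${local.name_suffix}"
  elasticsearch_version = "OpenSearch_1.3"

  cluster_config {
    instance_type  = "t3.small.elasticsearch"
    instance_count = var.instance_count # default 2
  }

  ebs_options {
    ebs_enabled = true
    volume_size = var.ebs_volume_size # default 20 (GiB)
  }

  node_to_node_encryption {
    enabled = true
  }

  encrypt_at_rest {
    enabled = true
  }

  snapshot_options {
    automated_snapshot_start_hour = 23 # daily snapshot starting at 23:00
  }

  cognito_options {
    enabled          = true
    user_pool_id     = aws_cognito_user_pool.pool.id
    identity_pool_id = aws_cognito_identity_pool.identity_pool.id
    role_arn         = aws_iam_role.es_cognito_role.arn
  }

  tags = local.common_tags
}
```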

Now I will share the remaining “variables.tf” codes for the ElasticSearch domain with you.

Now we will create the Terragrunt files. For this root deployment, we will create only the non-production and production environments, because we do not need separate ElasticSearch and Cognito setups for each of the acceptance, test, and development environments; those environments are handled in the “LoggingAndSearchingProject”. The “prod” and “non-prod” folders will be created as shown below.

Within the “non-prod” folder, we’ll create another folder named “non-production”, and at the same level we will create a “terragrunt.hcl” file. We will also generate a “terragrunt.hcl” file inside this “non-production” folder. The “prod” folder will follow the same structure.

Terragrunt → a thin wrapper around Terraform that provides extra tooling for keeping Terraform configurations DRY (don’t repeat yourself), working with multiple Terraform modules, and managing remote state. Let’s now examine the “terragrunt.hcl” file that sits at the same level as the “non-production” folder. We’ll define a “remote_state” block there; for the state, an S3 bucket and a DynamoDB lock table will be created in our AWS account.

You can look at this document for further information.

If we look inside the “terragrunt.hcl” under “non-production”, we specify the Terraform source there, making clear which path the module is loaded from. We also provide the inputs. When Terragrunt runs, the resource environment variable in our “variables.tf” takes whatever value is given for it in the inputs.
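
As a rough illustration of the two files just described, the sketch below shows a parent “terragrunt.hcl” with the “remote_state” block and a child one with the source and inputs; the bucket, table, region, and variable names are placeholders.

```hcl
# non-prod/terragrunt.hcl — shared remote state settings (names are placeholders).
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state-bucket"
    key            = "root-deployment/${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "my-terraform-lock-table"
  }
}

# non-prod/non-production/terragrunt.hcl — Terraform source and environment inputs.
include {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../terraform"
}

inputs = {
  resource_environment = "non-production"
}
```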

We will do the same operations in production as we did for non-production.

Now we will examine the Deployment folder in the “LoggingAndSearchingProject” project.

The Pipeline folder in the Deployment folder will be opened first. We will create the azure-pipeline.yml file and place it in the pipeline folder. Our project will be built and released using the steps listed in the steps section. If you’re interested in learning more, look here.

You can click the link to go there.

You can review the azure-pipeline.yml code below.

We’ll create the Terraform folder next. This folder will contain the files “main.tf”, “resource.apigateway.tf”, “resource.iam.tf”, “resource.lambda.tf”, and “variables.tf”. The “main.tf” file comes first. We will write the appropriate provider, terraform, and locals blocks as shown below, the same as in the root deployment Terraform folder. The “cognito_user_pool_name” and “cognito_oauth_scope” values will be added, and the crucial point is that their values must match the ones in Cognito.

Second, we will talk about “resource.apigateway.tf”. We will create a rest API on the API Gateway with the “aws_api_gateway_rest_api” resource. For detailed information;

Then we need to create two API gateway resources. We will create the “/LoggingAndSearching” endpoint as “path_part” in the first API gateway resource. In the other, we will create the “{indexName}” path parameter in the “LoggingAndSearching” endpoint. For detailed information;

We’ll now create an “aws_api_gateway_method”. On this API Gateway method, we will set “http_method” to “POST”. As “authorization”, we choose “COGNITO_USER_POOLS”. For the “authorization_scopes” value, we will use the “cognito_oauth_scope” list from the “locals” block in “main.tf”. For detailed information;

The “aws_api_gateway_integration” resource will be discussed next. Here, we’ll set the “integration_http_method” argument to “POST” and the “type” argument to “AWS_PROXY” (other integration types are also available; you can look at the linked document in more depth). For the “uri” argument, we’ll use the “invoke_arn” attribute of the “aws_lambda_function” resource. When we want to use an attribute or argument of one resource inside another resource’s argument, we interpolate it in the string as “${…}”. For detailed information;
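
To make the pieces discussed so far concrete, here is a sketch of the REST API, the two resources, the “POST” method, and the Lambda proxy integration; the resource labels and the references to the authorizer and Lambda function (which are defined further below) are illustrative assumptions.

```hcl
# Sketch of the core resource.apigateway.tf resources.

resource "aws_api_gateway_rest_api" "api" {
  name = "${local.application_name}-api"
}

resource "aws_api_gateway_resource" "logging" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "LoggingAndSearching"
}

resource "aws_api_gateway_resource" "index_name" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_resource.logging.id
  path_part   = "{indexName}"
}

resource "aws_api_gateway_method" "gateway_method" {
  rest_api_id          = aws_api_gateway_rest_api.api.id
  resource_id          = aws_api_gateway_resource.index_name.id
  http_method          = "POST"
  authorization        = "COGNITO_USER_POOLS"
  authorizer_id        = aws_api_gateway_authorizer.cognito.id
  authorization_scopes = local.cognito_oauth_scope
}

resource "aws_api_gateway_integration" "lambda_integration" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.index_name.id
  http_method             = aws_api_gateway_method.gateway_method.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.logging_lambda.invoke_arn
}
```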

The next step is to write the “aws_api_gateway_deployment” resource. The important nuance here is that we will assign the “stage_environment” value in “variables.tf” to the “stage_name” argument. For detailed information;

Now let’s look at the “aws_api_gateway_usage_plan” resource. Here we will create the “api_stages” argument to be used. For detailed information;

An authorizer for the API Gateway needs to be specified. The pool of authorizer Cognito users will supply this. Making the proper Cognito authorizer settings will enable us to establish our “aws_api_gateway_authorizer” resource. For detailed information;

Since we will use the name of the “aws_cognito_user_pools” resource in “resource.apigateway.tf”, we will add the data resource here. For detailed information;

Now we will create one more API Gateway method resource. Here, however, the resource’s local name will be “gateway_method_cors”, and we will define it for CORS operations. The value of the “http_method” argument changes to “OPTIONS” and the “authorization” argument changes to “NONE”.

Of course, an integration resource is also needed for this CORS method. Here, we will use the “http_method” value of the “gateway_method_cors” method as the “http_method” argument, and we’ll choose “MOCK” for the “type”. For detailed information;

We will write the response resource of the API Gateway method given above. When the correct request is thrown, we will want “Status_code” to return 200, so we will assign the value “200” to the “status_code” argument. To check if the response parameters are correct, we will assign an object to the “response_parameters” argument as shown below. For detailed information;

The “aws_api_gateway_integration_response” resource for our cors function will then be written as shown below. For detailed information;

We’ll look at our “resource.iam.tf” file in the next part of the article. First, two “aws_iam_policy_document” data sources will be created. In the data source labeled “lambda_policy_document”, we will define a statement with the “sts:AssumeRole” action. The second data source will contain two statements: in the first, we permit the GET, POST, and PUT actions on ElasticSearch; in the second, we permit the actions needed to write to CloudWatch Logs.

Then “aws_iam_role”, “aws_iam_policy”, “aws_iam_role_policy_attachment” resources will be written as shown below and those statements will be used as JSON files in roles and policies.
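A sketch of how those statements, roles, and policies could be wired together; the labels, the specific CloudWatch actions, and the “elasticsearch_domain_arn” variable are assumptions standing in for the article’s exact values.

```hcl
# Sketch of resource.iam.tf: the Lambda trust policy plus a policy allowing
# Elasticsearch HTTP calls and CloudWatch Logs access.

data "aws_iam_policy_document" "lambda_policy_document" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "lambda_permissions" {
  statement {
    actions   = ["es:ESHttpGet", "es:ESHttpPost", "es:ESHttpPut"]
    resources = ["${var.elasticsearch_domain_arn}/*"]
  }

  statement {
    actions   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
    resources = ["arn:aws:logs:*:*:*"]
  }
}

resource "aws_iam_role" "lambda_role" {
  name               = "${local.constructed_name}-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_policy_document.json
}

resource "aws_iam_policy" "lambda_policy" {
  name   = "${local.constructed_name}-lambda-policy"
  policy = data.aws_iam_policy_document.lambda_permissions.json
}

resource "aws_iam_role_policy_attachment" "lambda_policy_attachment" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_policy.arn
}
```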

Now we create the “resource.lambda.tf” file, which connects the Lambda function to the API Gateway. Here, the important part for us is that the “role” argument is set to the “arn” attribute of the “aws_iam_role” resource from the “resource.iam.tf” file. The “handler” argument will point to the “FunctionHandlerAsync” method exposed via “Program.cs”. The zip file we build for the Lambda function will be hashed with the “filebase64sha256” function and assigned to the “source_code_hash” argument. For information in-depth;

For the Lambda function to be invocable from API Gateway, we must create the Lambda permission resource. I will briefly discuss the “source_arn” argument, which is the crucial part here: we take the “execution_arn” attribute of the API Gateway REST API resource, append “/*/” to the end of this value, then the “http_method” from the method labeled “gateway_method”, and finally the path from the API Gateway resource. For information in-depth;
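
Putting the last two paragraphs together, a sketch of the function and its permission might look like the following; the handler string, runtime, package path variable, and timeout are assumptions.

```hcl
# Sketch of resource.lambda.tf: the function packaged as a zip and the
# permission that lets API Gateway invoke it.

resource "aws_lambda_function" "logging_lambda" {
  function_name    = "${local.constructed_name}-function"
  role             = aws_iam_role.lambda_role.arn
  runtime          = "dotnet6"
  handler          = "LoggingAndSearchingProject::LoggingAndSearchingProject.Program::FunctionHandlerAsync" # assembly path assumed
  filename         = var.lambda_package_path
  source_code_hash = filebase64sha256(var.lambda_package_path)
  timeout          = 30
}

resource "aws_lambda_permission" "apigateway_invoke" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.logging_lambda.function_name
  principal     = "apigateway.amazonaws.com"

  # execution_arn + /*/ + http_method + resource path, as described above.
  source_arn = "${aws_api_gateway_rest_api.api.execution_arn}/*/${aws_api_gateway_method.gateway_method.http_method}${aws_api_gateway_resource.index_name.path}"
}
```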

You can examine the variables in the “variables.tf” file where the variable values used in the Terraform resource files are kept.

Now I will briefly talk about the Terragrunt files of the “LoggingAndSearchingProject” project.

We will make the same settings as in the root deployment as shown in the folder above. The only difference is that we will also add acceptance, development, and test environments in non-prod here.

The key value of the “remote_state” scope in the “terragrunt.hcl” file under the “non-prod” folder will be different from the path in the root deployment.

Below is the “terragrunt.hcl” file under the Acceptance folder.

The “terragrunt.hcl” file under the development folder is below.

Below is the “terragrunt.hcl” file under the test folder.

The path in the root deployment will differ from the key value of the “remote_state” scope in the configuration in the “terragrunt.hcl” file inside the prod folder.

Below is the “terragrunt.hcl” file under the production folder.

Now we will write the “appsettings.json” files where the application settings required for each environment will be given as JSON data.

I will share the “appsettings.Acceptance.json”, “appsettings.Development.json”, “appsettings.Production.json”, “appsettings.Test.json” JSON files with you.

“appsettings.Test.json” is the default file, and it is valid.

The “SecretName” and “TableSuffix” keys are created for the test environment, as can be seen above. In the “appsettings.json” files, the “Host” key under “ElasticConnectionSettings” is assigned the non-production ElasticSearch domain endpoint URL for non-production environments, while for production it is assigned the endpoint URL of the ElasticSearch domain created for production.

Below you will see that the development and acceptance environments also take the non-prod ElasticSearch domain endpoint URL as the host.

For the production environment, a different ElasticSearch domain endpoint URL will be needed. Because I did not release to production on Azure DevOps, I did not set up a production ElasticSearch domain. If you go far enough to release to production, you will need to update this endpoint URL.

Before moving on to the Azure DevOps pipeline, I will talk about how to connect to the AWS capability with SAML. I will use the AWS capability that my company has allocated to me; naturally, I will be using my own AWS account.

I’m going to use SAML to authenticate my credentials while connecting to this capability, because connecting via SAML is considerably safer than using credentials directly. If you are unable to connect via SAML, you can connect directly with the login information of an IAM user with admin permissions that you create in your AWS root account.

The configuration file for the Lambda deployment is called “aws-lambda-tools-defaults.json”. Likewise, the “launchSettings.json” file in the Properties folder is the configuration file needed by the “Mock Lambda Test Tool”, which lets us test the project locally.

To obtain access to the AWS capability that we will use with SAML, we follow the steps below. Since I installed it using a URL internal to my organization, I cannot share that part with you, but you can easily find installation instructions for “saml2aws” on Google. I’ll describe the procedure directly: to log into the AWS capability, we type the command “saml2aws login”.

You will write the e-mail you use as the username and enter your e-mail password.

You can log in to the capability that will appear later, with a certain expiration time. (The information here is not shared because it is internal to the company).

Executing the build and release pipeline on Azure DevOps

Now, in Azure DevOps, we commit and push all of our code to the main branch. On the pipeline that we open on Azure DevOps, we will do our build processes. Then we’ll carry out our release operations and set up our AWS services.

We’ll create a new pipeline after selecting the Pipelines tab. At the “Where is your code?” prompt, we’ll go with the “Azure Repos Git (YAML)” option. Then we will point to the YAML file in the repository, in the pipeline folder under the deployment files.

We will first release Cognito and ElasticSearch under root deployment after building the pipeline file under root deployment. The build procedure for Root Deployment was successful, as may be seen below.

We can proceed to the release procedure once this build process has been completed successfully without any issues. The new release pipeline option must be chosen, and then we must define one artifact and two stages. We will select the build pipeline as the artifact (ElasticSearch-.NET-Project). The “Non-Production” and “Production” stages will then be created by selecting the “empty job” option after clicking the New stage button. The release pipeline will be referred to as “Logging and Searching Root Deployment” in this instance.

In these stages, we will create a job and a task. The focus of this job is the bash scripts required to install and run Terragrunt and Terraform. Of course, we’ll create a separate task group specifically for this job, but as it contains internal company information, I cannot share that task group with you. Once it is written, we will add this task group to the release pipeline. At the same time, we will select “Azure Pipelines” as the agent pool for the hosted agent job and “ubuntu-20.04” as the “agent specification”. We will fill in the Pipeline variables section as follows.

To fill the Variable groups section, we will write two variable groups in the library in the pipelines tab. First, we will register the necessary versions for Terraform and Terragrunt.

Secondly, we will enter the value of the “access key ID” and “secret access key” information of the AWS account we will use for this project here.

Then we will include these two libraries in variable groups.

Then we will release only for non-production and we will see that our release process is approved.

The build and release pipelines for the project we opened in the solution will now be created. The build pipeline will follow the blueprint we established for root deployment. The build pipeline will now be named “ElasticSearch-.NET-Project (1).” (you can name it whatever you want).

We will make the release pipeline settings the same as the release pipeline in root deployment. We will name it “Logging And Searching Project Deployment”.

We will use the “Bash Install and Apply Terragrunt/Terraform” task group again in the task group settings. We will do the agent job settings in the same way again. Pipeline variables settings will be as follows.

The same variable groups in the release pipeline written for root deployment will be used as variable group.

It will have 4 stages. These will be Development, Test, Acceptance, and Production environments. We will run our project only in Non-Production and Test environments.

You can see that the project’s release pipeline is running successfully.

View of created AWS services

After all deployment procedures, we can see the AWS services we established. Below, you can view the AWS OpenSearch (ElasticSearch) service that we developed.

We can see the AWS Cognito User Pool service we have created below.

Below is a Cognito domain given to us by Cognito.

We’ll choose the Users and groups option and create a user. We will be able to access the Kibana dashboard using this user.

Let’s examine the API Gateway service we created with the Release pipeline.

Let’s take a look at the IAM roles we created with the Release pipeline.

We have verified that every one of our services has been set up, and we will now check Kibana to see the logs received from our project and keep them in the index there.

Making POST requests and viewing logs on Kibana dashboard

With the user information we created in the “Users and Groups” option in Cognito user pools, we will now log in to the Kibana dashboard.

You will see the OpenSearch Kibana Dashboard as shown in the image below.

We will see the menu tabs after selecting the menu option in the left corner.

You will then find that no index has been established when you select the “index patterns” option under the “stack management” page.

But as we’ll see, the initial request test process that we’ll perform later creates the “logging-and-searching” index. The Invoke URL will have “indexName” added at the end, as displayed below.

An “index” will be created on the Kibana dashboard used for ElasticSearch by utilizing this “indexName” value in the “PostLog” function of the “ElasticSearchService” class.

Then, to verify that the “body” (the log data supplied to us as JSON) and the “indexName” are processed without errors, we will use a Lambda test event and confirm that the log data sent to ElasticSearch is saved in the correct index, which we can view in Kibana.

When creating a test event (a “POST” request), we will choose “API Gateway AWS Proxy” as the template and then provide the “Event JSON” data. It is shown in the picture below.

The AWS Mock Lambda Test Tool can also be used to test it locally to see if the Request process is operating correctly. View the images below to see examples of this.

You can see that the “POST” request is functioning properly after submitting it by looking at the response’s “statusCode”:200 value, as seen below.

As described in this post, by sending a “POST” request to our “LoggingAndSearchingProject”, we stored the logging data in an ElasticSearch index and viewed it on the Kibana dashboard.

I’ve completed the story I was going to deliver during this session. We will conduct various experiments using log searching in the upcoming session.

I’m looking forward to seeing you on the upcoming episode. I hope I was able to give you some relevant information.

Please feel free to get in touch with me via my email address or another way if you have any questions. With love, bye for the time being.
