App2Container: Easily migrate a Java JBoss web application to AWS EKS

Edoardo Favorido
Published in Storm Reply
Jul 31, 2024

Abstract

In this blog post, we will explore how to migrate a Java JBoss application from an on-premises server to Amazon Elastic Kubernetes Service (EKS). We will use AWS App2Container, a tool that simplifies the containerization and deployment of existing applications and streamlines the migration process to the AWS cloud.

AWS App2Container is a command-line tool that helps you modernize your existing applications by migrating them to containerized applications running on AWS. It analyzes and transforms your on-premises applications into containerized applications, generating the artifacts needed to deploy them on AWS container services such as App Runner, Amazon EKS, and Amazon ECS. App2Container simplifies the process by automating tasks such as generating Dockerfiles, Kubernetes manifests, and CI/CD pipeline configurations.

Use Case:

Migrating a Java JBoss application running on an EC2 instance (or on-premises server) to EKS, showcasing how App2Container can streamline the migration and deployment processes. By leveraging App2Container, we can easily containerize the application and deploy it to AppRunner, EKS, or ECS, ensuring a seamless transition to the cloud.
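At a high level, the workflow we will follow boils down to a handful of App2Container CLI commands. The sketch below summarizes them (the application ID shown is the one discovered later in this demo; yours will differ):

$ sudo app2container init
$ sudo app2container inventory
$ sudo app2container analyze --application-id java-jboss-98aa213e
$ sudo app2container containerize --application-id java-jboss-98aa213e
$ sudo app2container generate app-deployment --application-id java-jboss-98aa213e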

Hands-On

Steps:

  1. Download and install App2Container for Linux
  2. App2Container initialization
  3. Analysis — Prepare for containerization
  4. Containerization
  5. Deploy your application on EKS
  6. Clean up
  7. Conclusion

NB: We assume that the App2Container environment is already in place: an Amazon Linux 2 EC2 instance with JBoss 6.4.0 running, the IAM role and instance profile, an EKS cluster, and tools such as the AWS CLI, Docker, and kubectl already installed and configured.

Important

If the procedure is performed on an on-premises virtual machine (VM), you will need to set up an AWS profile.
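For example, on an on-premises VM you could configure a named AWS CLI profile and supply it when app2container init prompts for an AWS profile (the profile name below is only an illustration):

$ aws configure --profile app2container
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: eu-west-1
Default output format [None]: json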

Prerequisites:

To follow along with this demonstration, you will need to:

  1. At least one t2.large EC2 instance with at least 20 GB of free space, so that the App2Container agent can run.
  2. Root access on the servers.
  3. The latest versions of the AWS CLI, Docker, and kubectl (compatible with App2Container) installed.
  4. The IAM roles and instance profile used in the App2Container workflow configured (e.g., role CustomRole).
  5. One or more Java applications running (we have installed JBoss 6.4.0 as mentioned above).
  6. You can check App2Container compatibility here.

For more information click here.
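Before starting, you can quickly confirm that the required tools are installed on the server (a simple check; your versions will differ):

$ aws --version
$ docker --version
$ kubectl version --client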

Step 1: Download and install App2Container for Linux

1. Download the installation file:

· Use the curl command to download the App2Container installation package from Amazon S3.

$ curl -o AWSApp2Container-installer-linux.tar.gz https://app2container-release-us-east-1.s3.us-east-1.amazonaws.com/latest/linux/AWSApp2Container-installer-linux.tar.gz

2. Extract the package to a local folder on the server.

$ sudo tar xvf AWSApp2Container-installer-linux.tar.gz

3. Run the install script that you extracted from the package and follow the prompts.

$ sudo ./install.sh

Or upgrade to the latest version if already installed:

$ sudo app2container upgrade

Step 2: App2Container initialization

Run the init command as follows.

$ sudo app2container init

Workspace directory path for artifacts[default: app2Container]:
Use AWS EC2 Instance profile 'arn:aws:iam::#######:instance-profile/CustomRole' configured with this instance? (Y/N)[default: y]: y
Which AWS Region to use?[default: eu-west-1]:
Optional S3 bucket for application artifacts[default: app2containerartifact]:
Report usage metrics to AWS? (Y/N)[default: n]:
Automatically upload logs and App2Container generated artifacts on crashes and internal errors? (Y/N)[default: n]:
Require images to be signed using Docker Content Trust (DCT)? (Y/N)[default: n]:
Configuration saved
All application artifacts will be created under app2Container. Please ensure that the folder permissions are secure.

You are prompted to provide the following information.

· A local directory where App2Container can store artifacts during the containerization process. The default is /root/app2container.

· AWS profile — Contains information needed to run App2Container, such as your AWS access keys. If App2Container detects an instance profile for your server, the init command prompts if you want to use it.

· You can optionally provide the name of an Amazon S3 bucket where you can extract artifacts using the extract command.

· You can optionally upload logs and command-generated artifacts automatically to App2Container support when an app2container command crashes.

· You can optionally allow App2Container to collect information about the host operating system, application type, and the app2container commands that you run. The default is to allow the collection of metrics.

· You can optionally require that images are signed using Docker Content Trust (DCT). The default is no.

We use all the default values, allowing the use of the EC2 instance profile.

Step 3: Analysis — Prepare for containerization

On the application server, follow these steps to prepare to containerize the applications.

1. Run the inventory command as follows to list the Java applications that are running on your server.

$ sudo app2container inventory

The output includes a JSON object collection with one entry for each application. Each application object will include key/value pairs as shown in the following example.

{
  "java-jboss-98aa213e": {
    "processId": 8628,
    "cmdline": "java ... javax.xml.jaxp-provider org.jboss.as.standalone -Djboss.home.dir=/opt/jboss -Djboss.server.base.dir=/opt/jboss/standalone ",
    "applicationType": "java-jboss",
    "webApp": ""
  }
}

2. Locate the application ID for the application to containerize in the JSON output of the inventory command, and then run the analyze command as follows, with the application ID that you located (in our case “java-jboss-98aa213e”).

$ sudo app2container analyze --application-id java-jboss-98aa213e

✔ Created artifacts folder app2Container/java-jboss-98aa213e
✔ Generated analysis data in app2Container/java-jboss-98aa213e/analysis.json
👍 Analysis successful for application java-jboss-98aa213e

💡 Next Steps:
1. View the application analysis file at app2Container/java-jboss-98aa213e/analysis.json.
2. Edit the application analysis file as needed.
3. Start the containerization process using this command: app2container containerize --application-id java-jboss-98aa213e

The output is a JSON file, analysis.json, stored in the workspace directory that you specified when you ran the init command.
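If you have jq installed, you can quickly inspect the detected start command before editing it (path relative to the workspace, as generated above):

$ sudo jq '.analysisInfo.cmdline' app2Container/java-jboss-98aa213e/analysis.json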

3. (Optional) You can edit the information in the containerParameters section of analysis.json as needed before continuing to the next step.

Examining the file, we notice that we have to modify the application start command. We changed the cmdline option of the analysis.json file from:


"analysisInfo":{
"_comment2": "*** NON-EDITABLE: Analysis Results ***"
"processId": 21073,
"appId": "java-jboss-98aa213e",
"userId": "0",
"groupId": "0"
"cmdline": [
"java",
"-D[Standalone]",
"-server",
"-XX:+UseCompressedOops",
"-verbose:gc",
"-Xloggc:/opt/jboss/standalone/log/gc.log",
"-XX:+PrintGCDetails",
"-XX:+PrintGCDateStamps",
"-XX:+UseGCLogFileRotation",
"-XX:NumberOfGCLogFiles=5",
"-XX:GCLogFileSize=3M",
"-XX:-TraceClassUnloading",
"-Xms1303m",
"-Xmx1303m",
"-XX:MaxPermSize=256m",
"-Djava.net.preferIPv4Stack=true",
"-Djboss.modules.system.pkgs=org.jboss.byteman",
"-Djava.awt.headless-true",
"-Djboss.modules.policy-permissions=true",
"-Dorg.jboss.boot.log.file=/opt/jboss/standalone/log/server.log",
"-Dlogging.configuration=file:/opt/jboss/standalone/configuration/logging.properties",
"-jar",
"/opt/jboss/jboss-modules.jar",
"-mp",
"/opt/jboss/modules",
"-jaxpmodule",
"javax.xml.jaxp-provider",
"org.jboss.as. standalone",
"-Djboss.home.dir=/opt/jboss",
"-Djboss.server.base.dir=/opt/jboss/standalone",
],
"webApp": ""

to:

"analysisInfo": {
"_comment2": "*** NON-EDITABLE: Analysis Results ***",
"processId": 21073,
"appId": "java-jboss-98aa213e",
"userId": "0",
"groupId": "0",
"cmdline": [
"/opt/jboss/bin/standalone.sh",
"-b",
"0.0.0.0"
],
"webApp": "",

You can find the correct start command for the application with the command below:

$ ps -aux | grep jboss
root 8550 0.0 0.0 119972 2808 pts/1 S 13:52 0:00 bash -x /opt/jboss/bin/standalone.sh

For more information on how to edit the analysis.json file click here.

Step 4: Containerization

We are now ready for containerization. The transform phase creates the containers that your application runs in after you deploy it to Amazon ECS, Amazon EKS, or App Runner, if eligible.

To containerize the application, run the containerize command on the application server (with the same application ID that you located above) as follows.

$ sudo app2container containerize --application-id java-jboss-98aa213e

✔ Docker prerequisite check succeeded
✔ Tar prerequisite check succeeded
✔ Extracted container artifacts for application
✔ Entry file generated
✔ Dockerfile generated under app2Container/java-jboss-98aa213e/Artifacts
✔ Generated dockerfile.update under app2Container/java-jboss-98aa213e/Artifacts
✔ Generated deployment file at app2Container/java-jboss-98aa213e/deployment.json
✔ Deployment artifacts generated.
✔ Pre-validation succeeded.
👍 Containerization successful. Generated docker image java-jboss-98aa213e

💡 You're all set to test and deploy your container image.

Next Steps:
1. View the container image with "docker images" and test the application with "docker run --name java-jboss-98aa213e -Pit java-jboss-98aa213e".
2. When you're ready to deploy to AWS, adjust the appropriate fields in app2Container/java-jboss-98aa213e/deployment.json to generate the desired deployment artifact. Note that by default "createEcsArtifacts" is set to true.
3. Generate deployment artifacts using "app2container generate app-deployment --application-id java-jboss-98aa213e".

The output is a set of deployment files that are stored in the workspace directory.

Then if we run:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
java-jboss-98aa213e latest 191adb9fad14 About a minute ago 1.23GB

We can see the image of our application, and we can verify that it works:

$ docker run --name java-jboss-98aa213e -p 8080:8080 -d java-jboss-98aa213e

Since the previous command maps container port 8080 to port 8080 on the host, we can test the application with a request on port 8080:

$ curl localhost:8080

We get the correct response from the containerized JBoss:

<!DOCTYPE html>
<html>
<head>

<title>EAP 6</title>
<!-- proper charset -->
<meta http-equiv="content-type" content="text/html;charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8" />
<link rel="stylesheet" type="text/css" href="eap.css" />
<link rel="shortcut icon" href="favicon.ico" />
</head>

<body>

<div id="container" style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px;">

<!-- header -->
<div class="header-panel">
<div class="header-line">&nbsp;</div>
<div class="header-top">
<img class="prod-title" src="images/product_title.png"/><span class="prod-version">6</span>
</div>
<div class="header-bottom">&nbsp;</div>
</div>


<!-- main content -->
<div id="content">

<div class="section">

<h1>Welcome to JBoss EAP 6</h1>

<h3>Your Red Hat JBoss Enterprise Application Platform is running.</h3>

<p>
<a href="/console">Administration Console</a> |
<a href="https://access.redhat.com/site/documentation/JBoss_Enterprise_Application_Platform/">Documentation</a> |
<a href="https://access.redhat.com/groups/jboss-enterprise-middleware">Online User Groups</a> <br/>
</p>

<sub>To replace this page set "enable-welcome-root" to false in your server configuration and deploy
your own war with / as its context path.</sub>

</div>

</div>


<div id="footer">&nbsp;</div>

</div>

</body >
</html>
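After this local test, you can stop and remove the temporary container before moving on (standard Docker commands, not generated by App2Container):

$ docker stop java-jboss-98aa213e
$ docker rm java-jboss-98aa213e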

deployment.json

When you run the containerize command, a deployment.json file is created for the application specified in the --application-id parameter. The generate app-deployment command uses this file, along with others, to generate application deployment artifacts. All of the fields in this file are configurable as needed so that you can customize your application container deployment before running the generate app-deployment command.

Important

The deployment.json file includes sections for both Amazon ECS and Amazon EKS. If your application is suitable for App Runner, there is a section for that too. Set the Boolean deployment flag to true for the section that matches your target container management service, and set the other flags to false.

While all fields are configurable, the following fields should not be changed: a2CTemplateVersion, applicationId, and imageName. For key-value pairs that do not apply to your deployment, set string values to an empty string, numeric values to zero, and Boolean values to false.
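As a convenience, the three deployment flags can also be toggled from the command line with jq instead of a text editor; a minimal sketch, assuming jq is available and the command is run from the application workspace directory:

$ jq '.eksParameters.createEksArtifacts = true | .ecsParameters.createEcsArtifacts = false | .appRunnerParameters.createAppRunnerArtifacts = false' deployment.json > deployment.json.tmp && mv deployment.json.tmp deployment.json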

For more information on customizing the deployment.json file click here.

Step 5: Deploy your application on EKS

When you run the generate app-deployment command, App2Container creates an Amazon ECR repository where it stores your application container artifacts for deployment. It also creates deployment configuration files that you can deploy as follows:

  • You can customize the deployment files, and have complete control over the deployment by running the AWS commands for your destination container management environment. When you run the generate app-deployment command without the --deploy option, App2Container returns instructions that you can use to deploy manually.
  • If you’re sure that you won’t need to customize your deployment files, App2Container can optionally deploy your application containers directly to the container management environment that you have configured. To choose this option, run the generate app-deployment command with the --deploy option.

We’ll see the deployment only on an existing EKS cluster.

We have also deployed automatically to App Runner with the --deploy option, but that is not covered in this article.

EKS deploy:

To deploy on EKS we have to follow these steps:
  1. Generate the configuration artifacts for EKS and push the image to Amazon ECR.
  2. Manually deploy to an existing cluster. We can customize the deployment by setting various parameters in the deployment.json file; for more information click here.

Generate EKS artifacts:

We configure the deployment.json file by setting “createEksArtifacts” to true and disabling all other deployment options, as described in the guide.
Here is an example of how the deployment.json file is set:

{
  "a2CTemplateVersion": "1.0",
  "applicationId": "java-jboss-98aa213e",
  "imageName": "java-jboss-98aa213e",
  "exposedPorts": [
    {
      "localPort": 8080,
      "protocol": "tcp"
    },
    {
      "localPort": 4447,
      "protocol": "tcp"
    },
    {
      "localPort": 9990,
      "protocol": "tcp"
    },
    {
      "localPort": 9999,
      "protocol": "tcp"
    }
  ],
  "environment": [],
  "ecrParameters": {
    "ecrRepoTag": "latest"
  },
  "ecsParameters": {
    "createEcsArtifacts": false,
    "ecsFamily": "java-jboss-98aa213e",
    "cpu": 2,
    "memory": 4096,
    "dockerSecurityOption": "",
    "publicApp": true,
    "stackName": "a2c-java-jboss-98aa213e-ECS",
    "resourceTags": [
      {
        "key": "example-key",
        "value": "example-value"
      }
    ],
    "reuseResources": {
      "vpcId": "",
      "reuseExistingA2cStack": {
        "cfnStackName": "",
        "microserviceUrlPath": ""
      },
      "sshKeyPairName": "",
      "acmCertificateArn": ""
    },
    "gMSAParameters": {
      "domainSecretsArn": "",
      "domainDNSName": "",
      "domainNetBIOSName": "",
      "createGMSA": false,
      "gMSAName": ""
    },
    "deployTarget": "FARGATE",
    "dependentApps": []
  },
  "fireLensParameters": {
    "enableFireLensLogging": false,
    "logDestinations": [
      {
        "service": "cloudwatch",
        "regexFilter": "^.*.$",
        "streamName": "All-Logs"
      }
    ]
  },
  "eksParameters": {
    "createEksArtifacts": true,
    "cpu": 1.5,
    "memory": 3072,
    "stackName": "a2c-java-jboss-98aa213e-EKS",
    "reuseResources": {
      "vpcId": "vpc-0450a920f83f86654",
      "reuseExistingA2cStack": {
        "cfnStackName": ""
      },
      "sshKeyPairName": "",
      "rootDomain": "",
      "acmCertificateArn": ""
    },
    "gMSAParameters": {
      "domainSecretsArn": "",
      "domainDNSName": "",
      "domainNetBIOSName": "",
      "createGMSA": false,
      "gMSAName": ""
    },
    "ingress": "alb",
    "dnsRecordName": "a2ctest.com",
    "applicationPath": "/",
    "resourceTags": [
      {
        "key": "example-key",
        "value": "example-value"
      }
    ],
    "dependentApps": []
  },
  "appRunnerParameters": {
    "createAppRunnerArtifacts": false,
    "stackName": "a2c-java-jboss-98aa213e-AppRunner",
    "serviceName": "a2c-java-jboss-98aa213e",
    "autoDeploymentsEnabled": true,
    "resourceTags": [
      {
        "key": "example-key",
        "value": "example-value"
      }
    ]
  }
}

In this demo we decided to use a private hosted zone but we can also use a public hosted zone.

An ALB ingress controller will then be created (you can also use an NGINX controller by changing the ingress parameter).

Now we can launch the generate command (with the same application ID that you located above) as follows:

$ sudo app2container generate app-deployment --application-id java-jboss-98aa213e

⚠️ To enable HTTPS, please provide the ARN of an ACM certificate for your DNS record in the AcmCertificateArn field
✔ Docker prerequisite check succeeded
✔ AWS prerequisite check succeeded
✔ Processing application java-jboss-98aa213e...
✔ ECR repository 767398037520.dkr.ecr.eu-west-1.amazonaws.com/java-jboss-98aa213e already exists
✔ Pushed docker image 767398037520.dkr.ecr.eu-west-1.amazonaws.com/java-jboss-98aa213e:latest to ECR repository
✔ Generated CloudFormation Master template at: app2Container/java-jboss-98aa213e/EksDeployment/amazon-eks-entrypoint-existing-vpc.yaml
✔ Generated CloudFormation Master template at: app2Container/java-jboss-98aa213e/EksDeployment/amazon-eks-application-workload.yaml
✔ Uploaded CloudFormation resources to S3 Bucket: app2containerartifact
👍 CloudFormation templates and additional deployment artifacts generated successfully for application java-jboss-98aa213e

💡 You're all set to use AWS CloudFormation to manage your application stack.

Next Steps:
1. Edit the CloudFormation template as necessary.
2. Create an application stack using the AWS CLI or the AWS Console. AWS CLI command:

aws cloudformation deploy --template-file app2Container/java-jboss-98aa213e/EksDeployment/amazon-eks-entrypoint-existing-vpc.yaml --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND --stack-name a2c-java-jboss-98aa213e-EKS


3. Set up a pipeline for your application stack using app2container:
app2container generate pipeline --application-id java-jboss-98aa213e

We can find various EKS configuration files generated:

$ pwd
/root/app2Container/java-jboss-98aa213e

$ ll
total 76
-rw-r--r-- 1 root root 42288 Jul 11 13:57 analysis.json
drwxr-xr-x 3 root root 17 Jul 11 14:40 app2Container
drwxr--r-- 2 root root 133 Jul 11 14:10 Artifacts
-rwxr--r-- 1 root root 3837 Jul 11 14:51 deployment.json
drwxr--r-- 2 root root 79 Jul 11 14:52 EksDeployment
-rw-r--r-- 1 root root 867 Jul 11 14:52 eks_deployment.yaml
-rw-r--r-- 1 root root 533 Jul 11 14:52 eks_ingress.yaml
-rw-r--r-- 1 root root 812 Jul 11 14:52 eks_service.yaml
-rw-r--r-- 1 root root 1161 Jul 11 14:53 pipeline.json
-rw-r--r-- 1 root root 89 Jul 11 14:52 validate.json

An ECR image has been created, and a copy of the configuration files has been pushed to the S3 bucket.
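Both can be verified quickly from the CLI; a small check using the repository and bucket names from this demo:

$ aws ecr describe-images --repository-name java-jboss-98aa213e --region eu-west-1
$ aws s3 ls s3://app2containerartifact/ --recursive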

EKS manual deploy:

After generating the artifacts, we have to create the private hosted zone that we defined earlier in the dnsRecordName field of the deployment.json file.

Since the EKS cluster is in a different VPC from the one in which the EC2 instance is located, we associate the hosted zone with both VPCs; a sketch of the corresponding AWS CLI commands is shown below.
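A minimal sketch, using the EKS VPC ID from our deployment.json and placeholders for the hosted zone ID and the EC2 instance's VPC ID:

$ aws route53 create-hosted-zone --name a2ctest.com --caller-reference a2c-demo-$(date +%s) --vpc VPCRegion=eu-west-1,VPCId=vpc-0450a920f83f86654
$ aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=eu-west-1,VPCId=<ec2-instance-vpc-id>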

In this demo we will operate on a preconfigured EKS cluster directly from the EC2 instance where the EKS files were generated.

So we move all the generated eks_*.yaml files to a new directory and then run the kubectl apply command as shown below:

$ pwd
/root/app2Container/java-jboss-98aa213e

$ mkdir eks

$ mv eks_*.yaml eks/

$ cd eks

$ ll

total 12
-rw-r--r-- 1 root root 867 Jul 11 14:52 eks_deployment.yaml
-rw-r--r-- 1 root root 533 Jul 11 14:52 eks_ingress.yaml
-rw-r--r-- 1 root root 812 Jul 11 14:52 eks_service.yaml

$ kubectl apply -f .

deployment.apps/java-jboss-98aa213e-deployment created
Warning: annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead
ingress.networking.k8s.io/java-jboss-98aa213e-ingress created
service/java-jboss-98aa213e-service created

Now all the EKS resources and the ALB controller have been created.

For a quick check of the application, we can log on to the pod and send a curl request to localhost:8080, as shown below.
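For example (the pod name is whatever kubectl reports for the java-jboss deployment, and this assumes curl is available inside the container image):

$ kubectl get pods
$ kubectl exec -it <java-jboss-pod-name> -- curl -s localhost:8080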

Alternatively, check the application behavior directly from your workstation by following these steps:

  1. Set an Alias record in our private hosted zone pointing to the load balancer DNS name.

  2. Temporarily insert an entry into the /etc/hosts file on our local workstation, following these steps:

  • Resolve the load balancer IP address:
nslookup <loadBalancer-address>

· Then modify the /etc/hosts file by adding the IP address obtained from the load balancer resolution together with the domain name (see the example after this list).

· Finally, we check that the application works from our browser by navigating to the domain name.
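For illustration, assuming the load balancer resolves to 203.0.113.10 (a placeholder address) and using the a2ctest.com record configured earlier, the /etc/hosts entry can be added like this:

$ echo "203.0.113.10 a2ctest.com" | sudo tee -a /etc/hosts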

Step 6: Clean up

Make sure that you tear down any application stacks that might have been created, and verify that you have removed any artifacts that were created in the process.

To remove App2Container from your application server, delete the /usr/local/app2container folder where it is installed, and then remove this folder from your path.
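A minimal clean-up sketch for this demo, assuming only the resources created above (the CloudFormation stack applies only if you deployed it; double-check names before deleting anything in your account):

$ kubectl delete -f /root/app2Container/java-jboss-98aa213e/eks/
$ aws cloudformation delete-stack --stack-name a2c-java-jboss-98aa213e-EKS
$ aws ecr delete-repository --repository-name java-jboss-98aa213e --force
$ sudo rm -rf /usr/local/app2container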

Step 7: Conclusion

App2Container has proven to be a reliable tool for migrating and modernizing legacy applications while standardizing the deployment and operations of your applications.
