WebLogic modernization on Oracle Cloud Infrastructure — Part 1

Omid Izadkhasti
Oracle Developers
Jul 15, 2023 · 12 min read

In my previous blog post, I explained how to migrate a simple WebLogic domain to Oracle Kubernetes Engine (OKE).

In this series of blog posts, I will explain how to migrate and modernize existing on-premises WebLogic domains to Oracle Cloud Infrastructure (OCI) using cloud-native, automated solutions.

This first post in the series explains how to automate the migration of a WebLogic domain to OCI and a containerized environment.

As I explained in the previous post, the steps to migrate an existing domain to a Kubernetes environment are the same.

In this article, I will focus more on the automation of this process.

Here's a list of implementation steps:

  • Automate provisioning of infrastructure on OCI (including networking, OKE, a client VM, etc.).
  • Automate deployment of the WebLogic operator and ingress controller on the OKE cluster.
  • Automate migration of the on-premises WebLogic domain to the OKE cluster.

Automate provisioning of infrastructure on OCI (including networking, OKE, client VM, etc.)

I used Terraform to provision the infrastructure. I will not walk through the Terraform code in detail; instead, I will highlight the important parts, which you can use as samples to create your own code.

Here is a high-level networking architecture of my implementation (private API endpoint, private worker nodes, private pods, public load balancer, and the OCI-native CNI). As you can see in the code, before provisioning the OKE cluster you need to provision one VCN (Virtual Cloud Network), four subnets inside the VCN (API endpoint, worker nodes, pods, and load balancer subnets), and four network security groups (NSGs), one for each resource type (please follow this link for the networking configuration).
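As a sketch of what that networking layer looks like in Terraform, here are the VCN, one subnet, and its NSG (the resource names and CIDR ranges are illustrative assumptions, not values from my environment; the other three subnets and NSGs follow the same pattern):

```hcl
# Sketch only: names and CIDRs are illustrative assumptions.
resource "oci_core_vcn" "oke_vcn" {
  compartment_id = var.compartment_ocid
  cidr_blocks    = ["10.0.0.0/16"]
  display_name   = "oke-vcn"
}

resource "oci_core_network_security_group" "api_endpoint_nsg" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.oke_vcn.id
  display_name   = "api-endpoint-nsg"
}

resource "oci_core_subnet" "api_endpoint_subnet" {
  compartment_id             = var.compartment_ocid
  vcn_id                     = oci_core_vcn.oke_vcn.id
  cidr_block                 = "10.0.0.0/29"
  display_name               = "api-endpoint-subnet"
  # Private endpoint in this design, so no public IPs on VNICs.
  prohibit_public_ip_on_vnic = true
}

# The worker-node, pod, and load-balancer subnets and NSGs follow the same
# pattern; only the load-balancer subnet allows public IPs in this design.
```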

Here is the Terraform code to provision the OKE cluster and node pool.

variable "kubernetes_version" {
  description = "Kubernetes Version"
  default     = "v1.25.4"
}

variable "containerengine_node_pool_name" {
  description = "Container Engine Node Pool name"
  default     = "app-node-pool"
}

variable "containerengine_node_pool_size" {
  description = "Container Engine Node Pool size"
  default     = "2"
}

variable "max_pods_per_node" {
  description = "Maximum number of pods per node in the node pool"
  default     = "30"
}

variable "containerengine_node_shape" {
  description = "Container Engine Node Shape"
  default     = "VM.Standard2.2"
}

variable "containerengine_cluster_pod_network_options_cni_type" {
  default = "OCI_VCN_IP_NATIVE"
}

variable "enable_kubernetes_dashboard" {
  description = "Enable Kubernetes Dashboard (true/false)?"
  default     = "true"
}

variable "enable_pod_security_policy" {
  description = "Enable Pod Security Policy (true/false)?"
  default     = "false"
}

resource "oci_containerengine_cluster" "app_cluster" {
  #Required
  compartment_id     = var.compartment_ocid
  kubernetes_version = var.kubernetes_version
  name               = var.containerengine_cluster_name
  vcn_id             = var.vcn_id

  cluster_pod_network_options {
    cni_type = var.containerengine_cluster_pod_network_options_cni_type
  }

  options {
    add_ons {
      is_kubernetes_dashboard_enabled = var.enable_kubernetes_dashboard
    }
    admission_controller_options {
      is_pod_security_policy_enabled = var.enable_pod_security_policy
    }
    kubernetes_network_config {
      pods_cidr = var.pods_subnet_cidr
      #services_cidr =
    }

    service_lb_subnet_ids = [var.lb_subnet_ocid]
  }

  endpoint_config {
    is_public_ip_enabled = var.api_endpoint_subnet_type
    nsg_ids              = [var.api_endpoint_nsg]
    subnet_id            = var.api_endpoint_subnet_ocid
  }
}

resource "oci_containerengine_node_pool" "app_node_pool" {
  #Required
  cluster_id         = oci_containerengine_cluster.app_cluster.id
  compartment_id     = var.compartment_ocid
  kubernetes_version = var.kubernetes_version
  name               = var.containerengine_node_pool_name
  node_shape         = var.containerengine_node_shape

  node_source_details {
    image_id    = data.oci_core_images.oraclelinux-8.images[0]["id"]
    source_type = "IMAGE"
  }

  node_config_details {
    placement_configs {
      availability_domain = data.oci_identity_availability_domain.ad.name
      subnet_id           = var.workernodes_subnet_ocid
    }
    size = var.containerengine_node_pool_size

    node_pool_pod_network_option_details {
      cni_type          = var.containerengine_cluster_pod_network_options_cni_type
      max_pods_per_node = var.max_pods_per_node
      pod_nsg_ids       = [var.pods_nsg]
      pod_subnet_ids    = [var.pods_subnet_ocid]
    }
    nsg_ids = [var.workernodes_nsg]
  }

  ssh_public_key = tls_private_key.app_cluster_ssh_key.public_key_openssh
}

data "oci_containerengine_cluster_kube_config" "app_cluster_kube_config" {
  cluster_id = oci_containerengine_cluster.app_cluster.id
}

Also, we need to provision a compute instance to use as the Kubernetes client (to deploy resources inside the cluster).

We can use the following Terraform code to configure the compute instance as a Kubernetes client. The code connects to the instance over SSH, installs the OCI CLI, kubectl, and Helm, and then uses the OCI CLI to create the kubeconfig file on the instance.

resource "null_resource" "create-kubeconfig-folder" {
  provisioner "remote-exec" {

    connection {
      agent   = false
      timeout = "30m"
      # var.bastion_public_ip: public IP address of the compute instance
      host = tostring(join(",", var.bastion_public_ip))
      user = "opc"
      # var.ssh_key: SSH private key to access the compute instance
      private_key = var.ssh_key
    }

    # ${oci_containerengine_cluster.app_cluster.id}: OCID of the OKE cluster
    # ${var.region}: OCI region
    inline = [
      "sudo dnf -y install oraclelinux-developer-release-el8",
      "sudo dnf -y install python36-oci-cli",
      "mkdir ~/.kube",
      "export OCI_CLI_AUTH=instance_principal",
      "echo export OCI_CLI_AUTH=instance_principal >> ~/.bash_profile",
      "oci ce cluster create-kubeconfig --cluster-id ${oci_containerengine_cluster.app_cluster.id} --file $HOME/.kube/config --region ${var.region} --kube-endpoint PRIVATE_ENDPOINT",
      "curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\"",
      "chmod +x ./kubectl",
      "sudo mv ./kubectl /usr/local/bin/kubectl",
      "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3",
      "chmod 700 get_helm.sh",
      "./get_helm.sh"
    ]
  }
}

Moreover, you need to create a dynamic group and policy to allow this compute instance to manage the OKE cluster using instance principal.

Dynamic Group (app-dg) and policy code:

ALL { instance.compartment.id = '<OCID of compute instance compartment>'}
Allow dynamic-group app-dg to manage cluster-family in compartment id <Compartment OCID of OKE Cluster>

After executing your Terraform code, the infrastructure is ready for the next steps.

Automate deployment of WebLogic operator and ingress controller on OKE Cluster

In this step, we automate the installation of the WebLogic operator and an ingress controller on the OKE cluster.

I am using Ansible to install the WebLogic operator and the Traefik ingress controller in the OKE cluster, but you can use any other IT automation tool to automate these steps.

Here is the Ansible script and variable file:

- name: Install pre-requisites
  pip:
    name:
      - kubernetes
      - openshift
      - PyYAML
    state: present
  become: true

- name: Create WebLogic Operator namespace
  k8s:
    name: "{{ weblogic_operator_namespace }}"
    api_version: v1
    kind: Namespace
    state: present
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Create WebLogic Operator service account
  k8s:
    state: present
    definition:
      api_version: v1
      kind: ServiceAccount
      metadata:
        name: "{{ weblogic_operator_service_account }}"
        namespace: "{{ weblogic_operator_namespace }}"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Create Traefik namespace
  k8s:
    name: "{{ traefik_namespace }}"
    api_version: v1
    kind: Namespace
    state: present
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Add WebLogic Operator Helm repository
  kubernetes.core.helm_repository:
    name: "weblogic_operator_helm_chart"
    repo_url: "{{ weblogic_operator_helm_chart }}"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Add Traefik Helm repository
  kubernetes.core.helm_repository:
    name: "traefik_helm_chart"
    repo_url: "{{ traefik_helm_chart }}"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Deploy WebLogic Operator
  kubernetes.core.helm:
    name: "weblogic-operator"
    chart_ref: "weblogic_operator_helm_chart/weblogic-operator"
    release_namespace: "{{ weblogic_operator_namespace }}"
    wait: true
    values:
      serviceAccount: "{{ weblogic_operator_service_account }}"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Deploy Traefik
  kubernetes.core.helm:
    name: "traefik"
    chart_ref: "traefik_helm_chart/traefik"
    release_namespace: "{{ traefik_namespace }}"
    wait: true
    values:
      ports:
        web:
          nodePort: 30305
        websecure:
          nodePort: 30443
      kubernetes:
        namespaces:
          - "{{ traefik_namespace }}"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

Variable file:

oci_auth: instance_principal
weblogic_operator_namespace: weblogic-operator-ns
weblogic_operator_service_account: weblogic-operator-sa
weblogic_operator_helm_chart: https://oracle.github.io/weblogic-kubernetes-operator/charts
traefik_namespace: traefik
traefik_helm_chart: https://helm.traefik.io/traefik

You can execute the Ansible playbook as below:

ansible-playbook <Ansible playbook name>

Automate migration of on-premises WebLogic domain to OKE cluster

In our final step, we migrate the on-premises domain to the OKE cluster. First, I used the following Ansible playbook to discover the source domain and generate the on-premises domain model file, archive file, and properties file.

Model file: includes the domain structure in YAML format.

Archive file: includes all artifacts (EAR, WAR, and JAR files) deployed in the source domain.

Properties file: includes all variable information that needs to be updated in the destination environment, such as WebLogic credentials and data source configuration.
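For reference, a discovered model file generally has this shape (a simplified, illustrative sketch rather than real discovery output; `@@PROP:name@@` tokens refer to keys in the properties file, and the names here are assumptions):

```yaml
# Illustrative WDT model sketch; names and ports are assumptions.
domainInfo:
    AdminUserName: '@@PROP:AdminUserName@@'
    AdminPassword: '@@PROP:AdminPassword@@'
topology:
    Name: apps_domain
    AdminServerName: AdminServer
    Cluster:
        app-cluster: {}
    Server:
        AdminServer:
            ListenPort: 7001
        server-1:
            ListenPort: 7003
            Cluster: app-cluster
appDeployments:
    Application:
        sample-app:
            SourcePath: wlsdeploy/applications/sample-app.war
            Target: app-cluster
```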

- name: Copy WDT archive to on-premises server
  copy:
    src: ../files/weblogic-deploy.zip
    dest: "/home/{{ onprem_user }}"
    owner: "{{ onprem_user }}"
    mode: '0755'

- name: Unzip WDT archive
  unarchive:
    src: "/home/{{ onprem_user }}/weblogic-deploy.zip"
    dest: "/home/{{ onprem_user }}"
    remote_src: yes

- name: Create on-prem domain folder
  file:
    path: "/home/{{ onprem_user }}/onprem-domain"
    state: directory
    mode: '0755'

- name: Discover source domain
  shell: ~/weblogic-deploy/bin/discoverDomain.sh -oracle_home "{{ ORACLE_HOME }}" -domain_home "{{ DOMAIN_HOME }}" -archive_file "/home/{{ onprem_user }}/onprem-domain/onprem.zip" -model_file "/home/{{ onprem_user }}/onprem-domain/onprem.yaml" -variable_file "/home/{{ onprem_user }}/onprem-domain/onprem.properties" -domain_type "{{ DOMAIN_TYPE }}"
  args:
    executable: /bin/bash
  environment:
    JAVA_HOME: "{{ JAVA_HOME }}"

- name: Find all files in the onprem-domain folder
  find:
    paths: "/home/{{ onprem_user }}/onprem-domain"
    patterns: ".*"
    use_regex: True
  register: file_2_fetch

- name: Copy onprem domain files to local
  fetch:
    src: "{{ item.path }}"
    dest: /tmp/
    flat: yes
  with_items: "{{ file_2_fetch.files }}"

Variable file:

ORACLE_HOME: /u01/app/oracle/middleware/
DOMAIN_HOME: /u01/data/domains/apps_domain/
DOMAIN_TYPE: WLS
JAVA_HOME: /u01/jdk
onprem_user: oracle

On-prem properties file:

AdminPassword=
AdminUserName=
SecurityConfig.NodeManagerPasswordEncrypted=

We need to update the values of these properties before creating the image.
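You can fill in these blanks by hand, or script it as the playbook's `ansible.builtin.replace` task does. A minimal Python sketch of the same idea (the `fill_properties` helper and the example password are illustrative, not part of the original playbooks):

```python
# Hypothetical helper: fill in the blank credential properties that
# discoverDomain leaves empty (e.g. "AdminPassword=").
def fill_properties(text, values):
    """Replace bare 'key=' lines with 'key=value' for each key in values."""
    lines = []
    for line in text.splitlines():
        key, sep, current = line.partition("=")
        # Only touch lines that have an '=' but no value yet.
        if sep and not current and key in values:
            line = f"{key}={values[key]}"
        lines.append(line)
    return "\n".join(lines)

props = (
    "AdminPassword=\n"
    "AdminUserName=\n"
    "SecurityConfig.NodeManagerPasswordEncrypted=\n"
)
# "Welcome1" is a dummy example value, not a real credential.
updated = fill_properties(props, {"AdminUserName": "weblogic",
                                  "AdminPassword": "Welcome1"})
print(updated)
```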

In this sample, I am going to use an auxiliary image. In the original implementation of the WebLogic Operator for Kubernetes, there is only one Docker image, which includes both the WebLogic binaries (for your WebLogic version, for example 12.2.1.4) and the domain files (the model, archive, and properties files generated during domain discovery with the WDT tool).

However, this image is quite big (because of the size of the WebLogic binaries), so if you want to update the domain (for example, add a new data source), you need to regenerate the image and update the base image of the Kubernetes pods, which takes time. Newer WebLogic Operator releases add a feature called auxiliary images, which lets you separate the WebLogic binaries and the domain into different images. With this approach, updating the domain no longer requires regenerating the big image, and updating or patching WebLogic does not touch the domain image. Instead, you can use the WebLogic base images provided by Oracle in the public Oracle Container Registry (you first need to log in and accept the terms for the specific image you want to use).

Here is a sample Ansible script that I used to:

  • Generate the auxiliary image with the WebLogic Image Tool, then tag and push it to OCI Container Registry (OCIR).
  • Import the WebLogic base image (in our case WebLogic 12.2.1.4) into OCIR (pull from Oracle Container Registry, tag, and push).
  • Update the domain.yaml file (replace variables such as domain name and namespace).
  • Update the on-premises domain model and variable files (removing some lines from the model, such as Machine configuration, if present).
  • Create the domain namespace and the domain secrets (WebLogic credential secret, WebLogic encryption secret, and Docker registry secret).
  • Create an ingress route for the sample application and upgrade the Traefik Helm release to include the domain namespace.
  • Finally, deploy the WebLogic domain.

Lots of tasks! But don't worry: all of them are handled by the Ansible automation script.

domain.yaml

apiVersion: "weblogic.oracle/v9"
kind: Domain
metadata:
  name: ##domain_name##
  namespace: ##domain_namespace##
  labels:
    weblogic.domainUID: ##domain_name##

spec:
  configuration:
    model:
      auxiliaryImages:
        - image: "##ocir_url##/##tenancy##/##ocir_repository##/##domain_name##:##image_version##"
          imagePullPolicy: "Always"
      runtimeEncryptionSecret: ##encryption_secret##

  domainHomeSourceType: FromModel
  domainHome: ##domain_home##

  image: "##ocir_url##/##tenancy##/##weblogic_repo##:##weblogic_version##"
  imagePullPolicy: "IfNotPresent"
  imagePullSecrets:
    - name: ##ocir_secret##

  webLogicCredentialsSecret:
    name: ##weblogic_credential_secret##

  includeServerOutInPodLog: true
  serverStartPolicy: IfNeeded

  serverPod:
    env:
      - name: JAVA_OPTIONS
        value: "##java_options##"
      - name: USER_MEM_ARGS
        value: "##mem_args##"
    resources:
      requests:
        cpu: "250m"
        memory: "768Mi"

  replicas: ##number_of_nodes##

  clusters:
    - name: ##cluster_name##

  restartVersion: '1'
  introspectVersion: '1'

---

apiVersion: "weblogic.oracle/v1"
kind: Cluster
metadata:
  name: ##cluster_name##
  namespace: ##domain_namespace##
  labels:
    weblogic.domainUID: ##domain_name##

spec:
  replicas: 2
  clusterName: ##cluster_name##
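The `##...##` tokens in domain.yaml are substituted by the `ansible.builtin.replace` tasks in the playbook; the operation amounts to simple token replacement, sketched here in Python (the `render` helper name is illustrative):

```python
# Minimal sketch of the ##token## substitution the Ansible replace tasks perform.
def render(template, variables):
    """Replace each ##name## token with its value from `variables`."""
    for name, value in variables.items():
        template = template.replace(f"##{name}##", str(value))
    return template

snippet = "metadata:\n  name: ##domain_name##\n  namespace: ##domain_namespace##\n"
rendered = render(snippet, {"domain_name": "app-domain",
                            "domain_namespace": "app-domain"})
print(rendered)
```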

A Python script to update the source domain model (remove unnecessary elements):

import yaml

def read_domain(filename):
    with open(f'{filename}.yaml', 'r') as f:
        output = yaml.safe_load(f)
    return output

def write_domain(filename, domain):
    with open(f'{filename}.yaml', 'w') as f:
        yaml.dump(domain, f, sort_keys=False)

domain = read_domain('onprem')

servers = []
for item in domain["topology"]["Server"]:
    servers.append(item)

if "MigratableTarget" in domain["topology"]:
    del domain["topology"]["MigratableTarget"]

if "UnixMachine" in domain["topology"]:
    del domain["topology"]["UnixMachine"]

for server in servers:
    if "Machine" in domain["topology"]["Server"][server]:
        del domain["topology"]["Server"][server]["Machine"]
    if "SSL" in domain["topology"]["Server"][server]:
        del domain["topology"]["Server"][server]["SSL"]

write_domain("onprem", domain)

Ansible playbook:

- name: Copy Image Tool archive to the server
  copy:
    src: ../files/imagetool.zip
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'

- name: Copy JDK archive to the server
  copy:
    src: "../files/{{ jdk_filename }}"
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'

- name: Copy WDT archive to the server
  copy:
    src: "../../discover-source-domain/files/weblogic-deploy.zip"
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'

- name: Copy onprem domain files to the server
  copy:
    src: "{{ item }}"
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'
  with_fileglob:
    - "/tmp/onprem*"

- name: Copy ingress.yaml to the server
  copy:
    src: ../files/ingress.yaml
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'

- name: Copy domain.yaml to the server
  copy:
    src: ../files/domain.yaml
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'

- name: Copy Python script to update the on-premises domain model file (remove Machine, SSL, etc. from the model)
  copy:
    src: "../files/{{ python_script }}"
    dest: "/home/{{ user }}/"
    owner: "opc"
    mode: '0755'

# Log in to Oracle Container Registry
- name: Login to Oracle Registry
  shell: "docker login {{ oracle_repo_server }} -u {{ oracle_repo_user }} -p {{ oracle_repo_credential }}"

# Pull the WebLogic base image to the local repository
- name: Pull WebLogic Image
  shell: "docker pull {{ weblogic_image }}"

# Tag the WebLogic image for OCIR
- name: Tag WebLogic Image
  shell: "docker tag {{ weblogic_image }} {{ ocir_url }}/{{ tenancy }}/{{ weblogic_repo }}:{{ weblogic_version }}"

# Push the WebLogic image to OCIR
- name: Push WebLogic Image
  shell: "docker push {{ ocir_url }}/{{ tenancy }}/{{ weblogic_repo }}:{{ weblogic_version }}"

# Update domain.yaml variables
- name: Update domain variables in domain.yaml
  ansible.builtin.replace:
    path: "/home/{{ user }}/domain.yaml"
    regexp: '{{ item.itemName }}'
    replace: '{{ item.itemValue }}'
  with_items:
    - "{{ domain_variables }}"

# Update ingress.yaml variables
- name: Update ingress variables in ingress.yaml
  ansible.builtin.replace:
    path: "/home/{{ user }}/ingress.yaml"
    regexp: '{{ item.itemName }}'
    replace: '{{ item.itemValue }}'
  with_items:
    - "{{ ingress_variables }}"

# Update onprem domain variables
- name: Update onprem domain variables
  ansible.builtin.replace:
    path: "/home/{{ user }}/onprem.properties"
    regexp: '{{ item.itemName }}'
    replace: '{{ item.itemValue }}'
  with_items:
    - "{{ onprem_domain_variables }}"

# Update the onprem domain model file
- name: Update onprem domain model
  shell: python3 "/home/{{ user }}/{{ python_script }}"

- name: Unzip Image Tool archive
  unarchive:
    src: "/home/{{ user }}/imagetool.zip"
    dest: "/home/{{ user }}/"
    remote_src: yes

- name: Unzip JDK archive
  unarchive:
    src: "/home/{{ user }}/{{ jdk_filename }}"
    dest: "/home/{{ user }}/"
    remote_src: yes

- name: Add weblogic-deploy.zip to image tool cache
  shell: /home/{{ user }}/imagetool/bin/imagetool.sh cache addInstaller --type wdt --version latest --path "/home/{{ user }}/weblogic-deploy.zip"
  args:
    executable: /bin/bash
  environment:
    JAVA_HOME: "/home/{{ user }}/jdk{{ jdk_version }}"

- name: Create Auxiliary Image from onprem domain
  shell: /home/{{ user }}/imagetool/bin/imagetool.sh createAuxImage --tag "{{ domain_name }}:{{ image_version }}" --wdtModel "/home/{{ user }}/onprem.yaml" --wdtArchive "/home/{{ user }}/onprem.zip" --wdtVariables "/home/{{ user }}/onprem.properties" --wdtHome "/auxiliary" --wdtModelHome "/auxiliary/models" --wdtVersion "latest"
  args:
    executable: /bin/bash
  environment:
    JAVA_HOME: "/home/{{ user }}/jdk{{ jdk_version }}"

- name: Login to OCIR
  shell: docker login -u "{{ ocir_user }}" -p "{{ ocir_password }}" "{{ ocir_url }}"
  args:
    executable: /bin/bash

- name: Tag image
  shell: docker tag "localhost/{{ domain_name }}:{{ image_version }}" "{{ ocir_url }}/{{ tenancy }}/{{ ocir_repository }}/{{ domain_name }}:{{ image_version }}"
  args:
    executable: /bin/bash

- name: Push image to OCIR
  shell: docker push "{{ ocir_url }}/{{ tenancy }}/{{ ocir_repository }}/{{ domain_name }}:{{ image_version }}"
  args:
    executable: /bin/bash

- name: Create domain namespace in OKE
  k8s:
    name: "{{ domain_namespace }}"
    api_version: v1
    kind: Namespace
    state: present
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Label domain namespace with weblogic-operator=enabled
  shell: "kubectl label ns {{ domain_namespace }} weblogic-operator=enabled"
  ignore_errors: true
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Upgrade Traefik deployment and add domain namespace
  shell: helm upgrade traefik "{{ traefik_helm_chart }}/traefik" --namespace "{{ traefik_namespace }}" --reuse-values --set "kubernetes.namespaces={'{{ traefik_namespace }}','{{ domain_namespace }}'}"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Create ingress route for application
  shell: kubectl apply -f "/home/{{ user }}/ingress.yaml"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Create WebLogic credential secret
  shell: "kubectl create secret generic {{ weblogic_credential_secret }} --from-literal=username={{ weblogic_user }} --from-literal=password={{ weblogic_credential }} -n {{ domain_namespace }}"
  ignore_errors: true
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Create encryption secret
  shell: "kubectl create secret generic {{ encryption_secret }} --from-literal=password={{ weblogic_credential }} -n {{ domain_namespace }}"
  ignore_errors: true
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Create OCIR secret
  shell: "kubectl create secret docker-registry {{ ocir_secret }} --docker-server={{ ocir_url }} --docker-username='{{ ocir_user }}' --docker-password='{{ ocir_password }}' --docker-email='{{ oracle_repo_email }}' -n {{ domain_namespace }}"
  ignore_errors: true
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

- name: Deploy WebLogic domain in OKE
  shell: "kubectl apply -f /home/{{ user }}/domain.yaml"
  environment:
    OCI_CLI_AUTH: "{{ oci_auth }}"

ingress.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ##ingress_route_name##
  namespace: ##ingress_route_namespace##
spec:
  routes:
    - kind: Rule
      match: PathPrefix(`##ingress_route_path_prefix##`)
      services:
        - kind: Service
          name: ##ingress_route_service_name##
          port: ##ingress_route_service_port##

Ansible variable file:

jdk_filename: jdk-8u371-linux-x64.tar.gz
python_script: updateSourceDomain.py
jdk_version: 1.8.0_371
user: opc

traefik_namespace: traefik
traefik_helm_chart: traefik_helm_chart

oci_auth: instance_principal

ocir_url: mel.ocir.io
ocir_user: <OCIR user tenancy/username>
ocir_password: <OCIR password, user auth token>
ocir_repository: <OCIR Repository name>
ocir_secret: ocirsecret
tenancy: <Tenancy name>

domain_namespace: app-domain
domain_name: app-domain
domain_home: /u01/data/domains/app-domain
cluster_name: app-cluster
number_of_nodes: 1
java_options: -Dweblogic.StdoutDebugEnabled=false
mem_args: -Djava.security.egd=file:/dev/./urandom -Xms512m -Xmx512m
weblogic_user: weblogic
weblogic_credential: <WebLogic Credential>
encryption_secret: app-domain-runtime-encryption-secret
weblogic_credential_secret: app-domain-weblogic-credentials
image_version: v1

oracle_repo_user: <Oracle Container Registry username>
oracle_repo_email: <Oracle Container Registry email>
oracle_repo_credential: <Oracle Container Registry credential>
oracle_repo_server: container-registry.oracle.com
oracle_repo_secret: weblogic-repo-credentials
weblogic_image: container-registry.oracle.com/middleware/weblogic:12.2.1.4
weblogic_repo: <OCIR Repository name for WebLogic image>
weblogic_version: 12.2.1.4

domain_variables:
  - {itemName: "##domain_name##", itemValue: "app-domain"}
  - {itemName: "##domain_namespace##", itemValue: "app-domain"}
  - {itemName: "##domain_home##", itemValue: "/u01/data/domains/app-domain"}
  - {itemName: "##cluster_name##", itemValue: "app-cluster"}
  - {itemName: "##java_options##", itemValue: "-Dweblogic.StdoutDebugEnabled=false"}
  - {itemName: "##mem_args##", itemValue: "-Djava.security.egd=file:/dev/./urandom -Xms512m -Xmx512m"}
  - {itemName: "##encryption_secret##", itemValue: "app-domain-runtime-encryption-secret"}
  - {itemName: "##weblogic_credential_secret##", itemValue: "app-domain-weblogic-credentials"}
  - {itemName: "##oracle_repo_user##", itemValue: "<Oracle Container Registry username>"}
  - {itemName: "##oracle_repo_email##", itemValue: "<Oracle Container Registry email>"}
  - {itemName: "##oracle_repo_credential##", itemValue: "<Oracle Container Registry credential>"}
  - {itemName: "##oracle_repo_server##", itemValue: "container-registry.oracle.com"}
  - {itemName: "##oracle_repo_secret##", itemValue: "weblogic-repo-credentials"}
  - {itemName: "##weblogic_image##", itemValue: "container-registry.oracle.com/middleware/weblogic:12.2.1.4"}
  - {itemName: "##tenancy##", itemValue: "<Tenancy name>"}
  - {itemName: "##ocir_secret##", itemValue: "ocirsecret"}
  - {itemName: "##ocir_repository##", itemValue: "<OCIR Repository Name>"}
  - {itemName: "##ocir_url##", itemValue: "mel.ocir.io"}
  - {itemName: "##ocir_user##", itemValue: "<OCIR user tenancy/username>"}
  - {itemName: "##ocir_password##", itemValue: "<OCIR Credential>"}
  - {itemName: "##image_version##", itemValue: "v1"}
  - {itemName: "##number_of_nodes##", itemValue: "1"}
  - {itemName: "##weblogic_repo##", itemValue: "<OCIR Repository name for WebLogic image>"}
  - {itemName: "##weblogic_version##", itemValue: "12.2.1.4"}

onprem_domain_variables:
  - {itemName: "AdminPassword=", itemValue: "AdminPassword=<WebLogic Password>"}
  - {itemName: "AdminUserName=", itemValue: "AdminUserName=weblogic"}
  - {itemName: "SecurityConfig.NodeManagerPasswordEncrypted=", itemValue: "SecurityConfig.NodeManagerPasswordEncrypted=<Encryption Password>"}

ingress_variables:
  - {itemName: "##ingress_route_name##", itemValue: "sample-app"}
  - {itemName: "##ingress_route_service_name##", itemValue: "app-domain-apps-server-1"}
  - {itemName: "##ingress_route_service_port##", itemValue: "7003"}
  - {itemName: "##ingress_route_path_prefix##", itemValue: "/sample-app"}
  - {itemName: "##ingress_route_namespace##", itemValue: "app-domain"}

Finally, you can execute the Ansible playbook using the following command:

ansible-playbook app-cluster-configuration.yaml

If everything goes as planned, you will see the WebLogic admin server and managed server pods created inside the domain namespace.

kubectl get pods -n app-domain
NAME READY STATUS RESTARTS AGE
app-domain-apps-adminserver 1/1 Running 0 19h
app-domain-apps-server-1 1/1 Running 0 19h

You can also access the sample application deployed on the WebLogic managed server using the OCI load balancer public IP address.

kubectl get svc -n traefik
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP          PORT(S)                      AGE
traefik   LoadBalancer   10.96.252.8   <OCI LB Public IP>   80:31451/TCP,443:30443/TCP   3d22h

Conclusion

In this blog post, I explained how to automate provisioning of an OCI environment (networking, Oracle Kubernetes Engine, etc.), deploy the WebLogic Operator inside the OKE cluster, discover the source WebLogic domain, create and update all the artifacts needed for deployment to OKE, build the target WebLogic domain images, push them to OCIR, and finally deploy the domain in OKE, using Terraform, Ansible, and Python scripts. This was a very simple domain; you can extend the approach to more complex domains (including data sources, for example).

In the next post, I will discuss how to automate updating the domain.

References

WebLogic Kubernetes Toolkit: https://docs.oracle.com/en/middleware/fusion-middleware/weblogic-server/kubernetes_toolkit.html

Oracle Kubernetes Engine: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengoverview.htm

OCI Terraform Provider: https://registry.terraform.io/providers/oracle/oci/latest/docs


Principal Cloud Solution Architect @Oracle. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.