WebLogic modernization on Oracle Cloud Infrastructure — Part 4

Omid Izadkhasti
Jul 1, 2024



Introduction

In this article, I will focus on logging: how to view WebLogic logs from outside the Kubernetes cluster. We will use Fluentd to collect the WebLogic pod logs and OpenSearch (specifically, OCI’s managed OpenSearch service) to explore, enrich, and visualize the log data.

Architecture

Here is the high-level solution architecture:

  • Fluentd will be deployed as a DaemonSet, so one collector pod runs on every worker node; each pod tails the container log files and ingests them into OpenSearch.
  • The OCI-managed OpenSearch cluster stores the logs, and OpenSearch Dashboards provides search and visualization on top of them.

Implementation

We will use Ansible to install and configure the applications discussed in this article.

OpenSearch Cluster Provisioning

First, we need to provision an OpenSearch cluster in the OCI tenancy. Ensure you have the necessary policies created in OCI. You can use the OCI CLI or OCI console for provisioning.

Create prerequisite policies

Create the following policies in your tenancy (replace <NETWORK_RESOURCES_COMPARTMENT> with the compartment name that includes network resources and <CLUSTER_RESOURCES_COMPARTMENT> with the compartment name where you want to provision your cluster).

Allow group SearchOpenSearchAdmins to manage vnics in compartment <NETWORK_RESOURCES_COMPARTMENT>
Allow group SearchOpenSearchAdmins to manage vcns in compartment <NETWORK_RESOURCES_COMPARTMENT>
Allow group SearchOpenSearchAdmins to manage subnets in compartment <NETWORK_RESOURCES_COMPARTMENT>
Allow group SearchOpenSearchAdmins to use network-security-groups in compartment <NETWORK_RESOURCES_COMPARTMENT>
Allow group SearchOpenSearchAdmins to manage opensearch-family in compartment <CLUSTER_RESOURCES_COMPARTMENT>
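If you prefer the CLI for this step too, the same statements can be attached to a policy in one call. This is a sketch: the policy name is illustrative, and <TENANCY_OR_COMPARTMENT_OCID> is a placeholder for wherever the policy should live.

# Create an IAM policy containing the five statements above.
# The policy name here is illustrative; adjust to your naming standard.
oci iam policy create \
  --compartment-id <TENANCY_OR_COMPARTMENT_OCID> \
  --name "opensearch-provisioning-policy" \
  --description "Prerequisite policies for provisioning an OCI OpenSearch cluster" \
  --statements '["Allow group SearchOpenSearchAdmins to manage vnics in compartment <NETWORK_RESOURCES_COMPARTMENT>",
    "Allow group SearchOpenSearchAdmins to manage vcns in compartment <NETWORK_RESOURCES_COMPARTMENT>",
    "Allow group SearchOpenSearchAdmins to manage subnets in compartment <NETWORK_RESOURCES_COMPARTMENT>",
    "Allow group SearchOpenSearchAdmins to use network-security-groups in compartment <NETWORK_RESOURCES_COMPARTMENT>",
    "Allow group SearchOpenSearchAdmins to manage opensearch-family in compartment <CLUSTER_RESOURCES_COMPARTMENT>"]'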

Provision cluster

Update the data, master, and dashboard node counts, OCPU counts, memory, and storage amounts to suit your workload. Also, update the compartment OCID, VCN OCID, subnet OCID, and display name to match your environment.

export compartment_id=<Cluster compartment OCID>
export data_node_count=1
export data_node_host_memory_gb=16
export data_node_host_ocpu_count=4
export data_node_host_type=FLEX
export data_node_storage_gb=50
export display_name=<Cluster Display Name>
export master_node_count=1
export master_node_host_memory_gb=16
export master_node_host_ocpu_count=1
export master_node_host_type=FLEX
export opendashboard_node_count=1
export opendashboard_node_host_memory_gb=16
export opendashboard_node_host_ocpu_count=1
export software_version=2.11.0
export subnet_compartment_id=<Subnet Compartment OCID>
export subnet_id=<Subnet OCID>
export vcn_compartment_id=<VCN Compartment OCID>
export vcn_id=<VCN OCID>

oci opensearch cluster create \
  --compartment-id $compartment_id \
  --data-node-count $data_node_count \
  --data-node-host-memory-gb $data_node_host_memory_gb \
  --data-node-host-ocpu-count $data_node_host_ocpu_count \
  --data-node-host-type $data_node_host_type \
  --data-node-storage-gb $data_node_storage_gb \
  --display-name $display_name \
  --master-node-count $master_node_count \
  --master-node-host-memory-gb $master_node_host_memory_gb \
  --master-node-host-ocpu-count $master_node_host_ocpu_count \
  --master-node-host-type $master_node_host_type \
  --opendashboard-node-count $opendashboard_node_count \
  --opendashboard-node-host-memory-gb $opendashboard_node_host_memory_gb \
  --opendashboard-node-host-ocpu-count $opendashboard_node_host_ocpu_count \
  --software-version $software_version \
  --subnet-compartment-id $subnet_compartment_id \
  --subnet-id $subnet_id \
  --vcn-compartment-id $vcn_compartment_id \
  --vcn-id $vcn_id
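Provisioning takes a while. As a quick way to watch progress and then capture the endpoint Fluentd will need, something like the following works (a sketch; the JMESPath field names follow the OpenSearch cluster model, so verify them against your CLI version):

# Watch the lifecycle state of the clusters in the compartment.
oci opensearch cluster list --compartment-id $compartment_id \
  --query 'data.items[*].{name:"display-name",state:"lifecycle-state"}' \
  --output table

# Once the cluster is ACTIVE, read its API FQDN; this becomes the
# opensearch_endpoint value used in the Fluentd configuration below.
oci opensearch cluster get --opensearch-cluster-id <Cluster OCID> \
  --query 'data."opensearch-fqdn"' --raw-output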

Deploy Fluentd

We will use the Fluentd Helm chart to deploy Fluentd inside the Kubernetes cluster. Here is an Ansible playbook that we can use to deploy the Fluentd Helm chart.

Ansible Playbook

# Copy the Fluentd custom values file to the server
- name: Copy Fluentd custom values to the server
  template:
    src: ../templates/fluentd-values.yaml.j2
    dest: "{{ target_folder_path }}/fluentd-values.yaml"

# Create the logging namespace
- name: Create logging namespace
  shell: |
    export PATH=$PATH:/usr/local/bin \
    && kubectl create ns {{ logging_namespace }}
  ignore_errors: true

# Install Fluentd from the Helm chart
- name: Install Fluentd
  shell: |
    export PATH=$PATH:/usr/local/bin \
    && helm repo add fluent {{ fluent_helm_repo }} \
    && helm repo update \
    && helm install fluentd fluent/fluentd -f "{{ target_folder_path }}/fluentd-values.yaml" -n {{ logging_namespace }}
  ignore_errors: true
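Assuming these tasks are wrapped in a playbook (the file and inventory names below are illustrative, not from the original project), a run looks like this:

# Run against a host that has kubectl and helm configured for the target cluster.
ansible-playbook -i inventory.ini deploy-logging.yml \
  -e "logging_namespace=logging" \
  -e "target_folder_path=/tmp/logging"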

You need to update the following Ansible variables to match your environment:

# Logging configuration
opensearch_endpoint: <OpenSearch Cluster API endpoint>
opensearch_port: <OpenSearch Cluster API Endpoint port>
opensearch_user: <OpenSearch username>
opensearch_credential: <OpenSearch Credentials>
opensearch_index: <Index name>
logging_namespace: logging
fluent_helm_repo: https://fluent.github.io/helm-charts

Fluentd Values File Template

image:
  repository: "fluent/fluentd-kubernetes-daemonset"
  pullPolicy: "IfNotPresent"
  tag: "v1.15-debian-opensearch-1"

fileConfigs:
  01_sources.conf: |-
    ## logs from podman
    <source>
      @type tail
      @id in_tail_container_logs
      @label @KUBERNETES
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type regexp
        expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
      </parse>
      emit_unmatched_lines true
    </source>

    # expose metrics in prometheus format
    <source>
      @type prometheus
      bind 0.0.0.0
      port 24231
      metrics_path /metrics
    </source>

  02_filters.conf: |-
    <label @KUBERNETES>
      <match kubernetes.var.log.containers.fluentd**>
        @type relabel
        @label @FLUENT_LOG
      </match>

      # <match kubernetes.var.log.containers.**_kube-system_**>
      #   @type null
      #   @id ignore_kube_system_logs
      # </match>

      <filter kubernetes.**>
        @type kubernetes_metadata
        @id filter_kube_metadata
        skip_labels false
        skip_container_metadata false
        skip_namespace_metadata true
        skip_master_url true
      </filter>

      <match **>
        @type relabel
        @label @DISPATCH
      </match>
    </label>

  03_dispatch.conf: |-
    <label @DISPATCH>
      <filter **>
        @type prometheus
        <metric>
          name fluentd_input_status_num_records_total
          type counter
          desc The total number of incoming records
          <labels>
            tag ${tag}
            hostname ${hostname}
          </labels>
        </metric>
      </filter>

      <match **>
        @type relabel
        @label @OUTPUT
      </match>
    </label>

  04_outputs.conf: |-
    <label @OUTPUT>
      <match **>
        @type opensearch
        @id opensearch
        @log_level debug
        include_tag_key true
        type_name _doc
        host "{{ opensearch_endpoint }}"
        port {{ opensearch_port }}
        user {{ opensearch_user }}
        password {{ opensearch_credential }}
        index_name {{ opensearch_index }}
        scheme https
        ssl_verify false
        ssl_version TLSv1_2
        suppress_type_name true
      </match>
    </label>

As the values file shows, Fluentd tails the container log files under /var/log/containers/*.log (you can update this path to match your container runtime), enriches each record with Kubernetes metadata via the kubernetes_metadata filter, and ships the result to OpenSearch through the output defined in 04_outputs.conf.
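Before moving on to the dashboard, it is worth confirming that the DaemonSet is healthy. A minimal check, assuming the chart's default labels:

# One Fluentd pod should be Running per worker node.
kubectl get daemonset,pods -n logging -l app.kubernetes.io/name=fluentd

# Fluentd logs connection errors here if it cannot reach OpenSearch.
kubectl logs -n logging -l app.kubernetes.io/name=fluentd --tail=50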

Testing

After provisioning the OpenSearch cluster and deploying Fluentd inside the Kubernetes cluster, navigate to OpenSearch Dashboards, create an index pattern for the index that Fluentd writes to, and search the logs.
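Ingestion can also be verified directly against the OpenSearch REST API. A minimal sketch, assuming shell variables that mirror the Ansible configuration above and network reachability to the cluster endpoint:

# Count the documents Fluentd has indexed so far. The -k flag skips
# certificate verification, matching the ssl_verify false setting above.
curl -sk -u "$opensearch_user:$opensearch_credential" \
  "https://$opensearch_endpoint:$opensearch_port/$opensearch_index/_count"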

Conclusion

With this solution in place, we no longer need to connect to the WebLogic pods to view their logs. As an alternative, a combination of OCI Logging, Connector Hub, and OCI Log Analytics can serve as a second solution for ingesting WebLogic logs into OCI Log Analytics.

In the next article of this series, I will explain the second logging solution and describe how to deploy a JRF domain in Kubernetes.


Omid Izadkhasti

Principal Cloud Solution Architect @Oracle. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.