Creating Applications In OpenShift

Munevver Onyay
Published in turkcell
Aug 16, 2024

Application

We have a common application development platform that was developed in-house. Different software applications can be built vertically on this platform, which offers the ready-made modules required by almost all enterprise software. Software applications can be divided into many modules according to their responsibilities. Our application consists of approximately 20 sub-modules developed on .NET Core 3.1, and each module is deployed separately as a frontend and a backend code base. The business-oriented applications built vertically on the platform work with its backend and frontend baseline and use the same technology, .NET Core. The platform modules are shown below, and each runs as a microservice.

The sub-modules and microservices of the application platform

Current Installation and Preparation Before OpenShift

We host our infrastructure, which consists of many different microservices, as applications on Microsoft IIS running on virtual machines. For more flexible load and resource management, we created Docker images for each module of the application. Dockerizing our .NET Core applications was one of the most important steps in our transition to the cloud. You can read the Docker documentation to learn more about this topic: https://docs.docker.com/samples/dotnetcore/

Create a Dockerfile for an ASP.NET Core application:

# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]

Pipeline Structure:

Our current pipeline runs on Jenkins. Jenkins is a self-contained, open-source automation server that can be used to automate all sorts of tasks related to building, testing, and delivering or deploying software. Jenkins can be installed through native system packages or Docker, or can even run standalone on any machine with a Java Runtime Environment (JRE) installed. If you are interested in Jenkins, you can read the documentation at https://www.jenkins.io/doc/. We extended our existing pipeline structure: each pipeline run now also builds our Docker image and deploys it to the OpenShift environment, as in the steps shown below.

Code sample for deploying to OpenShift:

stage('Deploy to Openshift') {
    when {
        allOf {
            expression { "${env.BRANCH_NAME}" == 'dev' }
            expression {
                (publishDocker && publishExistPackage == 'true')
                    || builtUISuccess == true
                    || builtApiSuccess == true
                    || builtWindowsServiceSuccess == true
            }
        }
    }
    steps {
        script {
            openshiftClient {
                openshift.apply(openshift.process(readFile(file: "openshift/deploy/configmap-${modulePublishName}-env-appsettings.yml")))
                def registryUrl = "${dockerRegistryBaseUrl}/${bitbucketProjectName}/${modulePublishName}:${currentArtifactVersion}"
                openshift.apply(openshift.process(readFile(file: "openshift/deploy/deployment-${modulePublishName}.yml"),
                    "-p", "REGISTRY_URL=${registryUrl}",
                    "-p", "NAMESPACE=pars",
                    "-p", "DEPLOYMENT_NAME=${modulePublishName}",
                    "-p", "PULL_PUSH_SECRET=${imagePullSecret}"))
                def dc = openshift.selector('dc', "${modulePublishName}")
                if (dc.exists()) {
                    dc.scale("--replicas=0")
                    dc.rollout().status()
                    dc.scale("--replicas=1")
                } else {
                    dc.rollout().status()
                }
            }
        }
    }
    post {
        always {
            echo "Openshift deployment is finished."
        }
        success {
            script {
                echo "Openshift deployment is completed successfully."
            }
        }
        failure {
            script {
                echo "Openshift deployment failed!"
            }
        }
    }
}
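For reference, the `deployment-${modulePublishName}.yml` file processed by the pipeline could look roughly like the following OpenShift Template. This is a hypothetical sketch: the parameter names (`REGISTRY_URL`, `NAMESPACE`, `DEPLOYMENT_NAME`, `PULL_PUSH_SECRET`) match those passed with `-p` above, but the rest of the structure is an assumption.

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: module-deployment-template
parameters:
  - name: REGISTRY_URL
    required: true
  - name: NAMESPACE
    required: true
  - name: DEPLOYMENT_NAME
    required: true
  - name: PULL_PUSH_SECRET
    required: true
objects:
  - apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig          # matches the 'dc' selector used in the pipeline
    metadata:
      name: ${DEPLOYMENT_NAME}
      namespace: ${NAMESPACE}
    spec:
      replicas: 1
      selector:
        app: ${DEPLOYMENT_NAME}
      template:
        metadata:
          labels:
            app: ${DEPLOYMENT_NAME}
        spec:
          imagePullSecrets:
            - name: ${PULL_PUSH_SECRET}   # secret for pulling from the private registry
          containers:
            - name: ${DEPLOYMENT_NAME}
              image: ${REGISTRY_URL}      # full image reference built in the pipeline
              ports:
                - containerPort: 8080     # assumed application port
```

`openshift.process` substitutes the `-p` parameters into this template, and `openshift.apply` then creates or updates the resulting DeploymentConfig.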

OpenShift

OpenShift is a Red Hat product that addresses gaps in Kubernetes and offers added-value services to the user. It is an open-source development platform that enables developers to build and deploy their applications on cloud infrastructure easily. As a result, many companies currently use OpenShift in their production environments.

OpenShift is a layered system, in which each layer is tied to the others through Kubernetes and the Docker container runtime. The architecture of OpenShift is designed to support and manage Docker containers, which are hosted on top of all the layers using Kubernetes.

Docker is an open-source containerization platform. It is not a virtual machine, so it differs from virtualization products such as VirtualBox and VMware; Docker does not use a hypervisor. Hypervisors play a major role in virtualization and are used in all systems where virtual machines run, and almost all cloud computing services use hypervisors to manage their virtual machines. Docker, in contrast, is used where microservices are used. Where a DevOps methodology is in place, it is better to use Docker, since it is faster and provides container setups for different environments and different levels of code. In addition, Docker accesses the operating system through the Docker Engine and uses shared system tools, which lets it consume fewer system resources than traditional VMs.

Virtual machine vs. Docker architecture

Like OpenShift, Kubernetes enables rapid and large-scale application development, deployment, and management. Kubernetes offers more flexibility as an open-source framework and can be installed on almost any platform, whereas OpenShift is limited to Red Hat platforms. OpenShift has stricter security policies; for instance, running a container as root is forbidden, and it offers secure-by-default options to enhance security. Kubernetes does not come with built-in authentication or authorization capabilities, so developers must set up bearer tokens and other authentication procedures manually. In short, OpenShift closes these Kubernetes shortcomings by increasing security for both Kubernetes and the applications running on it, and it makes Kubernetes easier to manage.
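As an illustration of the no-root policy, a pod spec can declare a non-root security context explicitly; under OpenShift's default restricted security context constraints, a pod that tries to run as root is rejected. This is a minimal sketch, and the image name is a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-example
spec:
  securityContext:
    runAsNonRoot: true                      # OpenShift's restricted SCC enforces non-root by default
  containers:
    - name: app
      image: example/aspnet-module:latest   # placeholder image name
      securityContext:
        allowPrivilegeEscalation: false     # further hardening alongside the non-root rule
```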

OpenShift comes with a web console that developers can use to browse and manage applications. The web console can be accessed only by users authorized through OpenShift authentication. OpenShift also comes with built-in source code integration: the source code repository is integrated with the built-in CI/CD tooling to build Docker images and push them to the Docker registry.

OpenShift makes management easier than plain Kubernetes with some additional tools. Let's take a look at these tools:

· SCM (Source Code Management): It makes use of widely available source code management tools such as Bitbucket and GitHub. It currently only supports Git-based solutions.

· Pipeline: It aims to abstract development and deployment processes and make them workable on all platforms.

· OCR (OpenShift Container Registry): It is used for a series of operations such as creating and storing images, pulling the images we need while creating a container, and managing images.

· SDN (Software Defined Network): It makes it possible for containers to communicate with each other; it is implemented as an overlay network.

· API: There are APIs on OpenShift for management operations. These are REST-based tools used to manage both resources and processes.

· Governance: It is used to manage the access of individuals or teams to applications. In this way, unauthorized access is also prevented.
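As a rough sketch of how SCM, Pipeline, and OCR fit together, a BuildConfig can pull source from a Git repository, build an image from its Dockerfile, and push the result to the internal registry as an ImageStream tag. The repository URL and names below are hypothetical.

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-module
spec:
  source:
    type: Git
    git:
      uri: https://bitbucket.example.com/scm/pars/sample-module.git  # hypothetical repo (SCM)
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile        # built by the pipeline layer
  output:
    to:
      kind: ImageStreamTag
      name: sample-module:latest        # stored in the internal registry (OCR)
```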

The Architecture of OpenShift

The OpenShift Web Console provides cluster management. Users can perform operations through this interface according to their authorization.

Developers store and version the projects they develop in the system via SCM.

The pipeline pushes the built application to the OCR, where it is stored as an image.

When a container is to be started, the OCR is searched; if a matching image exists, it is pulled from there and the container is created.

Containers live inside Pods. A Pod can technically contain multiple containers, but as a best practice it is recommended that a Pod host only one container. When we want to create a Pod, we do so through deployments.

We define Services on top of deployments. Services represent each of our running applications; we basically define and use multiple Services within the deployment.

IP and port information is added on the Service page.

URL information is added on the Routes page.
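The Service and Route described above could look like this minimal sketch (names, ports, and the hostname are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-module
spec:
  selector:
    app: sample-module        # targets the pods created by the deployment
  ports:
    - port: 80                # service port (the IP/port information)
      targetPort: 8080        # assumed container port
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: sample-module
spec:
  host: sample-module.apps.example.com   # placeholder URL exposed by the route
  to:
    kind: Service
    name: sample-module
```

The Route exposes the Service at an external URL, which is the information shown on the Routes page.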
