Experiences using the tWAS Docker container

John Alcorn
AI+ Enterprise Engineering
12 min read · Aug 2, 2019

When modernizing from WebSphere ND to Open Liberty isn’t feasible, consider using the new certified Docker image for the traditional WebSphere Application Server.

Generally, for Java-based microservices, we strongly recommend running them in the Open Liberty container, atop Kubernetes. Open Liberty is a modern, cloud-ready, open source Java application server that supports the latest Java Enterprise Edition (EE 7 and 8) and MicroProfile (MP 1, 2, and 3) standards, and is a great fit for creating new cloud-native microservices when your developers have deep Java programming skills. It is also a good target for modernization scenarios, as you migrate your traditional on-premises applications to a Docker/Kubernetes environment (whether that Kube cluster itself runs on-premises, or in the public cloud). That being said, a small percentage of applications are too difficult or time-consuming to migrate straight to Liberty, and for those, there is now an alternative.

IBM recently delivered a containerized version of its traditional WebSphere Application Server, which we often lovingly refer to as tWAS. This is a stand-alone profile of the app server you’ve known and loved for decades, not the full Network Deployment (ND) version. Kubernetes itself serves many of the roles that ND used to do for us, like clustering, configuring, high availability, scaling, and more. You simply define a Deployment that refers to the tWAS container hosting your application, and then let Kube determine things like how many pods to start, and how to route work to it (from within the cluster and from outside).
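For instance, such a Deployment could look something like the minimal sketch below (the names and label values are illustrative, and the DockerHub image name is my assumption; the real yaml for my microservice is linked later in this article):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification-twitter        # illustrative name
spec:
  replicas: 2                       # Kube keeps this many pods running
  selector:
    matchLabels:
      app: notification-twitter
  template:
    metadata:
      labels:
        app: notification-twitter
    spec:
      containers:
      - name: notification-twitter
        image: ibmstocktrader/notification-twitter:twas  # the tWAS flavor of the image
        ports:
        - containerPort: 9080       # HTTP port the app listens on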

Such tWAS-based pods take a bit longer to start and require a bit more memory and CPU than Liberty-based pods. But they offer the full programming model that people have coded to for decades, making it easier to get your complex, legacy applications into an orchestrated containerized environment without code changes. For example, if the Transformation Advisor tool (or the version of it now built into the tWAS 9.0.5.x admin console, which we'll see later) reports a large number of Severe errors and a large estimate for the time to address them (such as if your legacy app was using outdated technologies like JAX-RPC), then the tWAS container may be a good choice.

To get some experience with deploying a microservice to the tWAS container, I decided to back-port my notification-twitter microservice from Liberty to tWAS. I picked this one because it didn't happen to be using any MicroProfile technologies, which aren't supported in tWAS (for example, it uses Twitter4J to send tweets, rather than an mpRestClient). Also, it is an optional microservice that many people don't bother setting up when deploying my IBM Stock Trader sample (many choose the notification-slack version that posts to a Slack channel instead, or don't bother configuring MQ messaging at all).

As a reminder, let's review the Stock Trader architectural diagram that we saw in my recent blog entry on using an umbrella helm chart to deploy all of Stock Trader (that helm chart still works with this tWAS-based image, by the way; just enter "twas" as the Tag, rather than "latest", for it to grab that flavor of the image off of DockerHub). Usually, this diagram shows all of the Java-based microservices in a light blue color, and those all run on Open Liberty. But now we have an excuse to use WebSphere's traditional dark purple color for this particular microservice.

We should point out that the caller of this microservice (a Liberty-based microservice calling it via an mpRestClient), and the thing it calls (Twitter), are completely unaware that we moved it to tWAS. It still responds to the same REST API call, expecting to be passed the same data and returning the same data it always has. Said another way, the OpenAPI for this microservice is completely unchanged by this alternate choice of app server (although, sadly, tWAS has no mpOpenAPI support, so you can't hit the pod's /openapi/ui endpoint to see its OpenAPI). And it still contains mostly the same war file; we'll discuss the few minor changes I had to make shortly.

First of all, let's look at the Dockerfile used to construct this image. Just like when working with Liberty, we start from the Universal Base Image (UBI) flavor of tWAS (previously it was based on Ubuntu, but the UBI from Red Hat, based on a heavily pared-down version of RHEL, is a more strategic and lighter-weight flavor of Linux). Then we copy in our app, plus Jython scripts to install it and to load the SSL cert for Twitter into the trust store. See https://github.com/IBMStockTrader/notification-twitter/blob/master/Dockerfile.twas for the full Dockerfile, with comments, etc.

FROM ibmcom/websphere-traditional:latest-ubi

COPY --chown=was:root target/notification-twitter-1.0-SNAPSHOT.war /work/app/NotificationTwitter.war
COPY --chown=was:root installApp.py /work/config
COPY --chown=was:root registerTwitterSSLCertificate.py /work/config

ENV ENABLE_BASIC_LOGGING=true

RUN /work/configure.sh

The pattern here is to put your application(s) in the image's /work/app directory, and any configuration scripts they need in /work/config. Then, when configure.sh runs, it executes those scripts to install and configure your application(s). Note that I just copied the installation Jython script from the Hello World sample for tWAS and changed the name of the .war file to match my app's. The other script, which registers the SSL certificate for Twitter into the app server's trust store, we'll discuss below.
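For reference, the install script looks roughly like the following (a minimal sketch; the exact wsadmin options used in the real sample script may differ):

# installApp.py - a minimal sketch of the install script, adapted from the
# Hello World sample; exact options in the real script may differ
AdminApp.install('/work/app/NotificationTwitter.war',
    '[-appname NotificationTwitter -usedefaultbindings]')
AdminConfig.save()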

In the spirit of full disclosure, let's discuss a few issues I hit when initially doing a docker build with this. First off, I immediately realized that the tWAS image is so big (about 1.9 GB, compared to about 0.2 GB for Liberty) that I didn't have enough disk allocated to the Docker engine on my Mac. I had to click the Docker whale icon in my upper-right "tray" of system icons, go to the Disk tab of the resulting dialog, and move the slider to the right.

With more disk space available to Docker, I got past the FROM statement, but hit problems when the configure.sh script ran. I hit a DeploymentDescriptorLoadException, which pointed to my web.xml and said it was invalid. I looked, and realized it didn’t like the (MicroProfile-related) MP-JWT in my login-config stanza. So I had to delete this part from my web.xml:

<login-config>
    <auth-method>MP-JWT</auth-method>
    <realm-name>MP-JWT</realm-name>
</login-config>

Of course, if you were coming from a legacy app that had been running on tWAS for years, you wouldn’t have had that in your web.xml, since it wasn’t an option there.

The second issue I hit was that I had to add the IBM proprietary deployment descriptor binding and extension files. I don't need those in Open Liberty, but they were pretty much mandatory back in tWAS. Those, too, you would already have if you were working with an app that has run on tWAS for years, so this is mostly a consequence of me doing the unusual thing of back-porting from Open Liberty to tWAS. After some quick searching on Stack Overflow, I figured out I needed this in a new ibm-web-bnd.xml in my war's WEB-INF directory:

<?xml version="1.0" encoding="UTF-8"?>
<web-bnd
    xmlns="http://websphere.ibm.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee
                        http://websphere.ibm.com/xml/ns/javaee/ibm-web-bnd_1_2.xsd"
    version="1.2">
    <virtual-host name="default_host" />
</web-bnd>

And I needed this in my ibm-web-ext.xml:

<web-ext>
    <context-root uri="notification"/>
</web-ext>

With those added, the Docker image built cleanly, and I was able to run it. Initially I just ran the Docker image directly on my Mac, via docker run -p 9080:9080 -p 9043:9043 notification:latest, and I was able to see things start cleanly. However, I remembered that my microservice needs four Twitter-specific environment variables, so I added those via -e params, to specify the OAuth values Twitter requires, like the consumer key and its secret, and the access token and its secret, for the @IBMStockTrader account I created on Twitter for this sample.
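Putting that all together, the command looked something like this (the environment variable names here are placeholders for illustration; check the microservice's source for the exact names it expects):

docker run -p 9080:9080 -p 9043:9043 \
  -e CONSUMER_KEY=xxx -e CONSUMER_SECRET=xxx \
  -e ACCESS_TOKEN=xxx -e ACCESS_TOKEN_SECRET=xxx \
  notification:latest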

Note the port numbers I specified above (which will also need to appear in the Kube yaml we'll see soon). I exposed port 9080 (for the default_host virtual host), so I can call my JAX-RS service. And I exposed port 9043 (for the admin_host virtual host), since that's where the admin console lives. As a blast from the past, let's hit that admin console URL (https://localhost:9043/admin) and see tWAS in its full glory, like many of us remember from our "past lives".

The built-in ID (if you haven't run Jython scripts to wire it up to an LDAP or whatever) is wsadmin, and the password is in a file named /tmp/PASSWORD in the Docker image. If you do a docker ps to see what containers you have running, you'll see the ID of your container. Copy that, then do a docker exec -it {container-id} bash to get into the container, and do a cat /tmp/PASSWORD to see the value (or you could do a docker cp to copy the file off the container to your local disk). Enter those on the login page, and you'll see the old-school administrative console.
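Spelled out as commands, that sequence is:

docker ps                             # note the ID of the running container
docker exec -it {container-id} bash   # open a shell inside the container
cat /tmp/PASSWORD                     # print the wsadmin password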

I personally spent a lot of time here back in the day, working on products that “stacked” on top of tWAS, like IBM BPM and IBM Business Monitor. Expand the Applications section on the left and we’ll see our app that our Jython script installed.

Notice the Liberty Advisor in the console. This is a built-in version of Transformation Advisor, which will analyze your applications and tell you how hard it would be to migrate each from tWAS to Liberty. Interestingly, if you view the report it generated for my app, it reports two severe issues, but they are just complaints about stuff used in the Twitter4J jar file in my war, and in this case are false positives that aren't anything to be concerned about; obviously, since I usually run this war file on Liberty!

Let's look at one last thing in the console: how to get an SSL certificate into the trust store. In the past, I've always done this via the console. However, doing it that way only changes that one running copy of the Docker image. If I stop the container and do another docker run of it (or, in Kubernetes, kill the pod and let it start a fresh one), everything will be back to the state the Dockerfile left it in, with none of the configuration that had been done on that earlier running instance. Therefore, the proper answer here is to write a Jython script that configure.sh will run during the Docker build, to import Twitter's SSL cert into the trust store.

The good news is that the tWAS admin console can record a script of everything you do in it. Go to System Administration -> Console Preferences, check the "Log command assistance commands" checkbox, and hit Apply.

Then just use the console like usual to import the SSL certificate for the api.twitter.com site on port 443.

Now, if you docker exec into your running container, as discussed earlier, you'll find a log of all of the wsadmin commands that actually got run as you clicked buttons in the console UI. It is under /logs/server1/commandAssistanceJythonCommands_wsadmin.log, and will contain statements like the following, which you can paste into a Jython script that configure.sh runs for you when you build your image:

# [7/31/19 19:27:41:676 UTC] SSL certificate and key management > SSL configurations > NodeDefaultSSLSettings > Key stores and certificates > NodeDefaultTrustStore > Signer certificates > Retrieve from port

AdminTask.retrieveSignerFromPort('[-keyStoreName NodeDefaultTrustStore -keyStoreScope (cell):DefaultCell01:(node):DefaultNode01 -host api.twitter.com -port 443 -certificateAlias twitter -sslConfigName NodeDefaultSSLSettings -sslConfigScopeName (cell):DefaultCell01:(node):DefaultNode01 ]')

AdminConfig.save()

Now that the server will trust the Twitter SSL certificate, let's actually try out our microservice. Just like when it runs on Liberty, we can test it via curl, passing in the 3-field JSON structure it expects, with values for the owner, old, and new levels (and escaping each quote with a backslash). Note that I haven't turned on security in the server, so I don't have to pass any credentials here. If we were really deploying this into production, I'd need to figure out how to get the JWT support enabled in tWAS, and would re-enable the Role-Based Access Control (RBAC) in my web app to only allow in properly authenticated users (perhaps as described here: https://www.ibm.com/support/knowledgecenter/SSEQTP_9.0.5/com.ibm.websphere.base.doc/ae/tsec_jwt_auth_conf.html).

Johns-MacBook-Pro-8:notification-twitter jalcorn$ curl -X POST -d "{\"owner\": \"Scotty\", \"old\": \"Basic\", \"new\": \"Dilithium\"}" -H "Content-Type: application/json" http://localhost:9080/notification
{"message":"On Friday, July 26, 2019 at 7:59 PM UTC, Scotty changed status from Basic to Dilithium. #IBMStockTrader", "location":"Twitter"}
Johns-MacBook-Pro-8:notification-twitter jalcorn$

As you can see, we got back a successful result, seeing Scotty reach Dilithium level (“sir, the engines can’t take the strain!” — lol). And sure enough, if I log into Twitter, I see the tweet, since I follow the @IBMStockTrader account.

The last thing I’ll point out is that the deployment yaml I usually use to deploy this microservice to a Kubernetes environment, such as the OpenShift Container Platform, needed to be updated a bit, since tWAS has higher memory and CPU requirements than Liberty. I also had to remove the stanzas for the readinessProbe and livenessProbe, since I was getting those for free by enabling the mpHealth (MicroProfile Health) feature in Liberty, but there’s no equivalent in tWAS (I would have to manually implement the /health endpoint myself). I also don’t have the /metrics endpoint available in tWAS, like I do via having the mpMetrics feature enabled in Liberty, so I had to remove the stanza that tells Prometheus to scrape that endpoint. See https://github.com/IBMStockTrader/notification-twitter/blob/master/manifests/deploy-twas.yaml for the full yaml.
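For reference, here's a sketch of the kind of probe stanzas I had to remove from the container spec (the /health path and port reflect what mpHealth provides on Liberty; treat the exact values as illustrative):

# These container-spec stanzas had to go: tWAS provides no /health endpoint
readinessProbe:
  httpGet:
    path: /health
    port: 9080
livenessProbe:
  httpGet:
    path: /health
    port: 9080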

Once we run that deployment yaml (which, to be clear, pulls the notification-twitter:twas image from DockerHub, not from the environment's local Docker image registry), our tWAS-based container will be live and running in our Kubernetes environment. I deployed it to our Tribbles2 environment, which is OpenShift 4.1 running atop Amazon Web Services (AWS), via the usual one-liner shown below, and now we can see the results.
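This is the standard kubectl invocation, assuming you've cloned the repo and are logged in to your cluster (on OpenShift, oc apply works the same way):

kubectl apply -f manifests/deploy-twas.yaml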

As you can see, it is there in our stock-trader namespace, alongside all of the other microservices that comprise this application. And other than the Jenkins pod, it is clearly the largest microservice in terms of memory usage, due to dragging along all of tWAS, whereas the others are all based on Liberty (which only enables features as needed by the apps installed to it). Let’s take a closer look at one of the pods, to better understand its resource utilization.

As you can see in the time-based graphs, the pod initially used a LOT of memory and especially CPU. This is due to the way the tWAS server starts: it briefly consumes essentially all of the CPU it can get, but then settles down once the server is fully up and garbage collection has run. Also, it starts faster if you give it more memory and CPU; if run at the lower settings I usually use for Liberty, it takes around 15 minutes to start, but with the settings below, it starts in 2-3 minutes (still not the sub-minute start time of Liberty, but not too bad):

resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
  requests:
    cpu: 250m
    memory: 256Mi

To summarize: I'd always recommend Open Liberty over tWAS if possible. But if you need to start a modernization journey and see some results quickly, with applications that heavily use legacy features of tWAS that aren't in Liberty, then the tWAS container should be a consideration. Just remember to factor into your plans a second stage, where you complete your modernization journey and make it to Liberty (which often means re-coding old stuff like JAX-RPC to JAX-WS, for example, or uplifting old Java EE 6 or earlier to at least Java EE 7). But at least you'll have gotten your legacy app running in a Kubernetes-orchestrated containerized environment, whether that is in your own private cloud or hosted in a public cloud.

Thanks again for following our blog, and feel free to leave us some feedback, ask questions, or suggest future blog entries.

Originally published at https://www.ibm.com on August 2, 2019.
