Cover image: Knickerbocker Hospital (https://commons.wikimedia.org/wiki/File:Knickerbocker_Hospital.jpg)

Develop an Event Driven Application based on Kafka, Camel K and a Telegram bot on OpenShift step by step

Carlos Vicens
Jan 16 · 21 min read

Intro

Some months ago I started playing with Camel K and I instantly fell in love with it (and I didn't have any previous experience with Camel): easy to deploy, fast development, tons of components, you name it. As usually happens (not just to me, right? ;-) ), we tech people try a good new thing and, if we love it, we tend to use it everywhere. That was my case: I started to think of crazy scenarios, but one of them stood out and, after some refinement, ended up being this project.

NOTE: Ideally you will read, do, and see results… and (hopefully) learn some basics to create your own EDA (Event Driven Application). This lab is not really about creating everything from scratch; it is more about getting an example project up and running and then exploring possibilities while learning.

You can find the code and this guide here

The Idea

The idea came to me after remembering how many times I had to go to the Emergency Room with my youngest daughter (nothing serious, but recurrent), all those times waiting for results, moving to the next stage and informing my wife (who was with my other daughter). Wouldn't it be nice to receive a message in my Telegram App every time the state of the patient changes? Well, that's the aim of this project: serving as an example EDA (Event Driven Application) running on Kubernetes/OpenShift that sends out notifications via a Telegram Bot whenever the state/status of a patient changes.

In order to give our example EDA a bit more (fictitious) context, let's imagine there's a Health Information System (HIS) at a hospital called Black Mountain Hospital. This application has a UI where you can change the status of a patient, and every time a change happens a proper HL7 message is sent.

NOTE: this is an oversimplified HIS; health-related professionals, please don't get too mad at me, I know how complex a real HIS is ;-)

The scenario portrayed by this example application of the fictitious Black Mountain Hospital comprises these elements:

  • HIS frontend (Angular JS) where you can change the status of a patient
  • HIS backend (Spring Boot REST API) exposing the patient info API; data is persisted in a PostgreSQL database. This piece sends out HL7 messages to a Kafka topic
  • Integration layer (Camel K) that translates HL7 events to plain events you can send to a human
  • Telegram Bot (Node JS) where you can sign up with your ID; again, data is persisted in a PostgreSQL database

Prerequisites

You need access to an OpenShift 4.2+ cluster and to be cluster-admin (or ask your administrator to install a couple of elements for you). You can also run your own local 4.x cluster using CodeReady Containers.

Scope

In this guide we’ll cover:

  • the deployment of infrastructure elements in OpenShift: databases, kafka cluster, etc.
  • local development of UI, services and also the integration layer
  • the deployment of services on OpenShift

Get yourself ready

Git clone this repository and change dir… the usual.

Deployment of infrastructure

We need to deploy a couple of databases, Kafka and also Camel K… let’s get to it.

In order to ease the deployment of Kafka we’re going to use the Red Hat AMQ Operator (go here to learn more about operators).

WARNING: This task should be run by a cluster administrator

Log in as a cluster admin to your cluster and go to Operators->Operator Hub.

In the search field start typing amq. Then click on AMQ Streams.

Click on Install.

Leave the default values and click Subscribe; as you can see, we're going to install the operator so that it's available to all namespaces. This will allow you to use the operator as a normal user in any namespace.

Wait until Status changes to InstallSucceeded.

If status is InstallSucceeded you have installed the operator successfully in namespace openshift-operators.

Now you could start creating custom resources managed by the AMQ Streams Operator, such as Kafka, Kafka Connect, etc.

As we have explained before, we need a couple of Camel integrations: one to translate the HL7 messages coming in through a Kafka topic and another one to send those translated messages to a Telegram Bot. Well, in order to run these Camel integrations (routes) we could do it manually in a Java project, or use Camel K. For all the reasons mentioned before, and more, we're going to use an operator: the Camel K Operator.

WARNING: This task should be run by a cluster administrator

Log in to your cluster as cluster-admin and go to Operators->Operator Hub.

In the search field start typing camel. Then click on Camel K.

Click on Continue.

Click on Install.

Leave the default values and click Subscribe; as you can see, we're going to install the operator so that it's available to all namespaces. This will allow you to use the operator as a normal user in any namespace.

Wait until Status changes to InstallSucceeded; if that's the case, you have installed the operator successfully in the namespace openshift-operators.

Now you could start creating custom resources managed by the Camel K Operator, such as Integration, Build, etc.

As we mentioned before, we need a couple of topics, one for HL7 events and another one for translated events… and of course we need a Kafka cluster to support them.

NOTE: This task and the next ones don't require special permissions apart from being able to create namespaces, deployments, pods, etc.

We have prepared a set of numbered shell scripts; please have a look at the one numbered 00, where some base environment variables are set. You may need to change the project name to be sure it's unique in your cluster...

You can set the environment at any time by sourcing that script. Please run it now; we'll need the $PROJECT_NAME environment variable later:
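
. ./00-environment.sh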

In this step we will run ./01-deploy-kafka.sh. Please have a look at this script; there are a couple of important bits there.

First, it creates a project to hold all the elements. Here is the excerpt:

oc new-project ${PROJECT_NAME}

IMPORTANT: if you run the scripts in order (and you don't create another project in between), the default project will automatically be set to the project just created, that is, the one set by the $PROJECT_NAME environment variable, see 00-environment.sh. If you have created another project or just want to be sure the default project is set correctly, please use: oc project. If you need to set the default project back to $PROJECT_NAME do this: . ./00-environment.sh && oc project $PROJECT_NAME

Second, it also creates a Custom Resource (CR) of type Kafka that defines a Kafka cluster with 3 replicas and three listeners: plain, secure (TLS) and an HTTPS-based external one. It also creates a CR of type KafkaTopic for each of the Kafka topics we need.

NOTE 1: The AMQ Streams Operator reacts to the creation/update/delete of a set of Custom Resource Definitions, go here for further details

NOTE 2: The external listener is needed only while running the backend logic locally, because the Kafka cluster is running in OpenShift. In general this external listener is not needed when the logic runs in the same cluster as the Kafka cluster.
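
To give you an idea of what the script applies, here is a trimmed-down sketch of a Kafka CR plus one of the KafkaTopic CRs. It is not a copy of the script's YAML (storage, partition counts and listener details here are assumptions), so check 01-deploy-kafka.sh for the real thing.

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: state-machine-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: route   # HTTPS listener exposed via an OpenShift Route, only used for local development
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: hl7-events-topic          # there is an analogous CR for events-topic
  labels:
    strimzi.io/cluster: state-machine-cluster
spec:
  partitions: 1
  replicas: 1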

Now please run the script.

WARNING: Be sure you’re logged in, if unsure run oc whoami

To monitor the status of the deployment you can run the next command.

As you can see, there are 3 replicas both for the Kafka cluster and for the ZooKeeper cluster, and all of them are running.

$ oc get pod -n $PROJECT_NAME | grep state-machine-cluster
state-machine-cluster-entity-operator-55d6f79ccf-dckht 3/3 Running 16 5d15h
state-machine-cluster-kafka-0 2/2 Running 11 5d15h
state-machine-cluster-kafka-1 2/2 Running 11 5d15h
state-machine-cluster-kafka-2 2/2 Running 11 5d15h
state-machine-cluster-zookeeper-0 2/2 Running 8 5d15h
state-machine-cluster-zookeeper-1 2/2 Running 8 5d15h
state-machine-cluster-zookeeper-2 2/2 Running 8 5d15h

Another test you can run, this one to check if our topics were created properly.

$ oc rsh -n $PROJECT_NAME state-machine-cluster-kafka-0 bin/kafka-topics.sh --list --bootstrap-server localhost:9092
Defaulting container name to kafka.
Use 'oc describe pod/state-machine-cluster-kafka-0 -n state-machine-assistant' to see all of the containers in this pod.
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
__consumer_offsets
events-topic
hl7-events-topic

This step is quite easy… it only requires downloading the Camel K CLI binary, kamel.

Please run this script.

NOTE: open the script 00-environment.sh if you want to change the cli version

./02-install-camel-k-cli.sh

This script leaves the kamel binary in the same directory where the script is run, so it's not in your PATH; be sure to execute it like this, ./kamel, or add it to your PATH.

We’re going to use oc new-app commands to deploy two single node PostgreSQL databases, one for the HIS backend and another one for the Telegram Bot.

If you have a look at script 03 you'll find the next couple of commands; these are normal oc new-app commands that create the needed descriptors to run the OCI image centos/postgresql-10-centos7.

Pay attention to the following:

  1. We are deploying the same image twice because we need two databases
  2. The name given to the application is different: backend-database vs telegram-bot-database
  3. Finally, there are a couple of environment variables that set user and password with the same values for both databases. Yes, not very secure… but remember this is for development purposes
oc new-app -e POSTGRESQL_USER=luke \
  -e POSTGRESQL_PASSWORD=secret -e POSTGRESQL_DATABASE=my_data \
  centos/postgresql-10-centos7 --name=backend-database

oc new-app -e POSTGRESQL_USER=luke \
  -e POSTGRESQL_PASSWORD=secret -e POSTGRESQL_DATABASE=my_data \
  centos/postgresql-10-centos7 --name=telegram-bot-database

Now run the script.

./03-deploy-databases.sh

Check the status of the deployment with this command. Eventually you should see something like this, where the status of our database pods is Running, like backend-database-1-ttq2b in this example output.

$ oc get pod -n $PROJECT_NAME | grep database
backend-database-1-deploy 0/1 Completed 0 5m
backend-database-1-ttq2b 1/1 Running 4 4m
telegram-bot-database-1-7qv4d 1/1 Running 4 4m
telegram-bot-database-1-deploy 0/1 Completed 0 5m

One of the goals of the lab is to help you with the local development stage. For instance, regarding the HIS backend, our Spring Boot service (Java) needs to be able to connect to an HTTPS listener, hence you need the root CA certificate related to that listener added as a trusted CA to a keystore.

To do so, we have prepared this script ./04-prepare-development-env.sh. It basically does the following:

Extracts the root CA cert from a secret

oc extract secret/${CLUSTER_NAME}-cluster-ca-cert -n ${PROJECT_NAME} --keys=ca.crt --to=- > src/main/resources/ca.crt

Adds it to a Java Keystore

$ keytool -delete -alias root -keystore src/main/resources/keystore.jks -storepass password -noprompt
$ keytool -import -trustcacerts -alias root -file src/main/resources/ca.crt -keystore src/main/resources/keystore.jks -storepass password -noprompt

How does the Spring Boot application know which keystore to use to connect to Kafka?

Well, there's a class called KafkaConfig that prepares the configuration to connect to Kafka. Open the file backend/src/main/java/com/redhat/his/service/KafkaConfig.java to see how we use NONE to exclude some properties when running in OpenShift.

Then we have two different profiles, default and openshift, so when developing we use default (application.properties) and when running in OpenShift we use openshift (application-openshift.properties).

Next you can find the relevant properties in the default profile.

# Kafka Bootstrap Servers => ./06-run-backend.sh sets KAFKA_SERVICE_HOST automatically for local dev
kafka.bootstrap-servers=${KAFKA_SERVICE_HOST}:443
# Client and Group
kafka.clientId  = kafkaClientHisBackend
kafka.groupId   = kafkaHisBackendConsumerGroup
# Topics
kafka.topic = hl7-events-topic
kafka.security.protocol = SSL
kafka.ssl.truststore.location = src/main/resources/keystore.jks
kafka.ssl.truststore.password = password
kafka.ssl.keystore.location = src/main/resources/keystore.jks
kafka.ssl.keystore.password = password

And this is the openshift profile counterpart.

# Kafka Bootstrap Servers
kafka.bootstrap-servers=state-machine-cluster-kafka-brokers:9092
# Client and Group
kafka.clientId  = kafkaClientHisBackend
kafka.groupId   = kafkaHisBackendConsumerGroup
# Topics
kafka.topic = hl7-events-topic
kafka.security.protocol = NONE
kafka.ssl.truststore.location = NONE
kafka.ssl.truststore.password = NONE
kafka.ssl.keystore.location = NONE
kafka.ssl.keystore.password = NONE
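
For reference, here is a minimal sketch of how a KafkaConfig-style class could turn these properties into a consumer configuration, adding the SSL entries only when they are not set to NONE. This is not the actual class from the repo (check backend/src/main/java/com/redhat/his/service/KafkaConfig.java for that), just an illustration of the idea:

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaConfig {

    @Value("${kafka.bootstrap-servers}") private String bootstrapServers;
    @Value("${kafka.clientId}") private String clientId;
    @Value("${kafka.groupId}") private String groupId;
    @Value("${kafka.security.protocol}") private String securityProtocol;
    @Value("${kafka.ssl.truststore.location}") private String truststoreLocation;
    @Value("${kafka.ssl.truststore.password}") private String truststorePassword;

    public Properties consumerProperties() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // The openshift profile sets these to NONE, so the SSL block is skipped
        // when the backend talks to the in-cluster listener on plain port 9092.
        if (!"NONE".equalsIgnoreCase(securityProtocol)) {
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, securityProtocol);
            props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, truststoreLocation);
            props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, truststorePassword);
        }
        return props;
    }
}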

The other thing done by ./04-prepare-development-env.sh is installing all the dependencies needed by the Telegram Bot and the HIS Frontend, both of them Node JS applications.

Now please run this command.

./04-prepare-development-env.sh

Developing our EDA locally

So there are 4 pieces we need to be able to run locally first and then move to our OpenShift cluster, namely:

  • Telegram Bot
  • HIS Backend
  • Integrations: HL7 to Events and Events to Telegram Bot
  • HIS Frontend

Let's run and test them one by one.

First you need to use BotFather in order to create your Telegram Bot.

Go to your Telegram App. Here we'll show you how to do it with the desktop app, but it should work on any supported platform.

Look for BotFather.

Now create a new bot with /newbot. Just follow the steps, when done you should receive a token, please copy it.

Time to paste this token, open ./05a-run-telegram-bot.sh and paste it when required.

Run this script in a new terminal window. Be aware that it will use oc port-forward to open a tunnel to the database running in OpenShift. By the way, this script could also work completely locally if you uncomment the required lines (using Docker to run a database).

$ . ./00-environment.sh
$ ./05a-run-telegram-bot.sh
PASTE TOKEN: YOUR_TOKEN
USING TOKEN YOUR_TOKEN

> telegram-bot@1.0.0 start /Users/cvicensa/Projects/openshift/tap/state-machine-assistant/telegram-bot
> node app.js

node-telegram-bot-api deprecated Automatic enabling of cancellation of promises is deprecated. In the future, you will have to enable it yourself. See https://github.com/yagop/node-telegram-bot-api/issues/319. internal/modules/cjs/loader.js:959:30
DEV: true
body-parser deprecated bodyParser: use individual json/urlencoded middlewares app.js:21:9
body-parser deprecated undefined extended: provide extended option node_modules/body-parser/index.js:105:29
Health Assistant Bot has started. Start conversations in your Telegram.
Telegram Bot started at: Tue Jan 14 2020 16:06:36 GMT+0100 (Central European Standard Time) on port: 9090
Error: connect ECONNREFUSED 127.0.0.1:5432
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1129:14) {
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 5432
}
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432

Hopefully your Telegram Bot is running in localhost and is connected to PostgreSQL (local or remote). Let’s test it.

First, open your Telegram App and look for your bot; in our case searching for state machin already finds our bot. Second, click on START.

As you can see some help is displayed.

Create a user (in our case the user already existed).

Finally, send a message and see the result in your Telegram App; something like Patient JOHN SMITH with ID(PATID1234) has been admitted (ZZZ).

./telegram-bot/send-message.sh 9876543210W http://localhost:9090
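
If you're curious about what send-message.sh does under the hood, it essentially POSTs a JSON payload to the bot's /new-message endpoint (the same endpoint the integration layer will use later). The payload below is an assumption for illustration; the real script is in ./telegram-bot/send-message.sh:

curl -H 'Content-Type: application/json' -X POST \
  -d '{"personalId":"9876543210W","message":"Patient JOHN SMITH with ID(PATID1234) has been admitted (ZZZ)"}' \
  http://localhost:9090/new-message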

Before we can run the integration layer we need to deploy the Telegram Bot to OpenShift.

INFO: This is needed because the integration layer runs in the cluster, so your local Telegram Bot would have to listen on an external IP reachable from the cluster.

We're going to deploy our application using Nodeshift. Nodeshift helps us deploy our Node JS application from the command line, using Source-to-Image behind the scenes. In order to do that you have to provide minimal information in the shape of YAML descriptors in a folder named .nodeshift, and be logged in to an OpenShift cluster.

Here’s a list of descriptors already prepared for deploying the Telegram Bot:

  • credentials-secret.yml is a proper k8s Secret object containing the database credentials
  • route.yml is a fragment of a Route object that routes to the Service object
  • deployment.yml is a fragment of a DeploymentConfig object. You’ll notice that there are some environment variables there for database host (literal, hardcoded), credentials (value comes from the secret mentioned above) and the Telegram token (value refers to a ConfigMap)

Hmm, maybe you've noticed some things are missing here:

  • Where’s the ConfigMap to get the token from? It’ll be created when you run ./05b-deploy-telegram-bot.sh
  • Where are the Service, ImageStream, BuildConfig objects? Those are generated by Nodeshift, along with the missing pieces when an object is not complete but just a fragment, as in the case of deployment.yml
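
As an illustration, a deployment.yml fragment along the lines described above could look roughly like this. The environment variable names and keys are assumptions; only the secret (telegram-bot-database-secret) and ConfigMap (telegram-bot) names are taken from what the script actually creates:

spec:
  template:
    spec:
      containers:
        - name: telegram-bot
          env:
            - name: DB_HOST
              value: telegram-bot-database          # literal, hardcoded
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: telegram-bot-database-secret
                  key: user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: telegram-bot-database-secret
                  key: password
            - name: TELEGRAM_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: telegram-bot
                  key: token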

Now please run this command and provide the Telegram Token you obtained before.

INFO: If you happen to forget the token, you can always go to BotFather and ask him with /mybots ;-)

$ ./05b-deploy-telegram-bot.sh
PASTE TOKEN: YOUR_TOKEN
USING TOKEN YOUR_TOKEN
> telegram-bot@1.0.0 openshift /Users/cvicensa/Projects/openshift/tap/state-machine-assistant/telegram-bot
> nodeshift --strictSSL=false --dockerImage=registry.access.redhat.com/rhoar-nodejs/nodejs-10
2020-01-15T11:02:43.945Z INFO loading configuration
2020-01-15T11:02:43.952Z INFO using namespace state-machine-assistant at https://api.cluster-kharon-be2a.kharon-be2a.example.opentlc.com:6443

2020-01-15T11:04:07.398Z TRACE npm info ok
2020-01-15T11:04:07.398Z TRACE

2020-01-15T11:04:08.393Z TRACE Getting image source signatures
2020-01-15T11:04:08.716Z TRACE Copying blob sha256:04dbaef9d44294aedd58690d85eb37e5b57dd98d9e922be36e4ae4e019b21619

2020-01-15T11:04:09.214Z TRACE Copying config sha256:1e16c354e3774215a388a4a0b94c1376c6f042f3e0965d1c7ac16972de8b7a1c
2020-01-15T11:04:09.556Z TRACE Writing manifest to image destination
2020-01-15T11:04:09.559Z TRACE Storing signatures
2020-01-15T11:04:10.957Z TRACE 1e16c354e3774215a388a4a0b94c1376c6f042f3e0965d1c7ac16972de8b7a1c
2020-01-15T11:04:14.130Z TRACE
2020-01-15T11:04:14.131Z TRACE Pushing image image-registry.openshift-image-registry.svc:5000/state-machine-assistant/telegram-bot:latest …
2020-01-15T11:04:14.132Z TRACE Getting image source signatures
2020-01-15T11:04:14.223Z TRACE Copying blob sha256:5af42566e7d1943de0196a7d22dc5abb18d916ae5cdb762dffd28d305a11ad41

2020-01-15T11:04:15.572Z TRACE Copying config sha256:1e16c354e3774215a388a4a0b94c1376c6f042f3e0965d1c7ac16972de8b7a1c
2020-01-15T11:04:16.615Z TRACE Writing manifest to image destination
2020-01-15T11:04:16.708Z TRACE Storing signatures
2020-01-15T11:04:16.710Z TRACE Successfully pushed image-registry.openshift-image-registry.svc:5000/state-machine-assistant/telegram-bot@sha256:f49ed9fbd0baf545ccbba03cf6246ba1ef29c647012d049fb3768e817599c9b3
2020-01-15T11:04:16.719Z TRACE Push successful
2020-01-15T11:04:19.345Z INFO build telegram-bot-s2i-6 complete
2020-01-15T11:04:19.392Z INFO openshift.yaml and openshift.json written to /Users/cvicensa/Projects/openshift/tap/state-machine-assistant/telegram-bot/tmp/nodeshift/resource/
2020-01-15T11:04:19.808Z INFO creating new configMap telegram-bot
2020-01-15T11:04:19.846Z INFO creating new secret telegram-bot-database-secret
2020-01-15T11:04:19.849Z INFO creating new service telegram-bot
2020-01-15T11:04:19.855Z INFO creating new route telegram-bot
2020-01-15T11:04:19.857Z INFO creating deployment configuration telegram-bot
2020-01-15T11:04:19.999Z INFO route host mapping telegram-bot-state-machine-assistant.apps.cluster-kharon-be2a.kharon-be2a.example.opentlc.com
2020-01-15T11:04:20.003Z INFO complete

HIS Backend

Now it’s time to run, locally, our Spring Boot HIS API. In order to do so we run ./06-run-backend.sh. Open a new terminal and run it:

INFO: This script runs our application, which connects to the Kafka topic $HL7_EVENTS_TOPIC. If you're wondering how it knows where the Kafka cluster is, the answer is the environment variable KAFKA_SERVICE_HOST. When running locally it's filled with the result of this command: KAFKA_SERVICE_HOST=$(oc -n ${PROJECT_NAME} get routes ${CLUSTER_NAME}-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'); when running in OpenShift the value is predefined in application-openshift.properties and equals state-machine-cluster-kafka-brokers:9092.

./06-run-backend.sh

If it all works properly you should get something like this:

...
2020-01-14 19:22:44.229  INFO 80644 --- [  restartedMain] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values:
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [state-machine-cluster-kafka-bootstrap-state-machine-assistant.apps.cluster-kharon-be2a.kharon-be2a.example.opentlc.com:443]
    check.crcs = true
    client.id =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = kafkaHisBackendConsumerGroup
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.LongDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = SSL
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = src/main/resources/keystore.jks
    ssl.keystore.password = [hidden]
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = src/main/resources/keystore.jks
    ssl.truststore.password = [hidden]
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2020-01-14 19:22:44.391  INFO 80644 --- [  restartedMain] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 2.0.1
2020-01-14 19:22:44.391  INFO 80644 --- [  restartedMain] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : fa14705e51bd2ce5
>>> init() with topics = hl7-events-topic
2020-01-14 19:22:44.898  INFO 80644 --- [  restartedMain] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2020-01-14 19:22:44.953  WARN 80644 --- [  restartedMain] aWebConfiguration$JpaWebMvcConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2020-01-14 19:22:45.024  INFO 80644 --- [       Thread-8] org.apache.kafka.clients.Metadata        : Cluster ID: JiCToAYZTHCDpbNYVyMlKA
2020-01-14 19:22:45.029  INFO 80644 --- [       Thread-8] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-1, groupId=kafkaHisBackendConsumerGroup] Discovered group coordinator state-machine-cluster-kafka-1-state-machine-assistant.apps.cluster-kharon-be2a.kharon-be2a.example.opentlc.com:443 (id: 2147483646 rack: null)
2020-01-14 19:22:45.038  INFO 80644 --- [  restartedMain] o.s.b.a.w.s.WelcomePageHandlerMapping    : Adding welcome page: class path resource [static/index.html]
2020-01-14 19:22:45.040  INFO 80644 --- [       Thread-8] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-1, groupId=kafkaHisBackendConsumerGroup] Revoking previously assigned partitions []
2020-01-14 19:22:45.040  INFO 80644 --- [       Thread-8] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-1, groupId=kafkaHisBackendConsumerGroup] (Re-)joining group
2020-01-14 19:22:45.270  INFO 80644 --- [  restartedMain] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 2 endpoint(s) beneath base path '/actuator'
2020-01-14 19:22:45.340  INFO 80644 --- [  restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-01-14 19:22:45.342  INFO 80644 --- [  restartedMain] com.redhat.his.BoosterApplication        : Started BoosterApplication in 6.678 seconds (JVM running for 7.135)
2020-01-14 19:22:47.966  INFO 80644 --- [       Thread-8] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-1, groupId=kafkaHisBackendConsumerGroup] Successfully joined group with generation 102
2020-01-14 19:22:47.968  INFO 80644 --- [       Thread-8] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-1, groupId=kafkaHisBackendConsumerGroup] Setting newly assigned partitions [hl7-events-topic-0]

Let's run some tests; for instance, let's get all the patients in the database. Remember that when run locally the default profile is used and H2 is the database, not PostgreSQL. Please run this command from a different terminal.

$ curl http://localhost:8080/api/patients
[{"patientId":1,"personalId":"0123456789Z","firstName":"JOHN","lastName":"SMITH","stage":"idle"},{"patientId":2,"personalId":"9876543210W","firstName":"PETER","lastName":"JONES","stage":"idle"}]

The integration layer comprises two Java classes implementing two Camel routes:

  • HL7 to Events: receives HL7 messages, translates them to human-readable text and sends them to a different topic
  • Events to Telegram Bot: receives those events and passes them through to the Telegram Bot

These are the relevant parts of class HL7ToEvents (./integrations/HL7ToEvents.java in the repository).

There is just one route, hl7-to-patient-info, which:

1. starts from a kafka topic: from("kafka:{{kafka.from.topic}}...")

2. then, sets some catches for exceptions: .onException(...)

3. later, it translates the HL7 messages using the good old HAPI libraries: .process(exchange -> ...

4. finally, after some conversion, the human-readable message is sent to another topic: .to("kafka:{{kafka.to.topic}}...")
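
Putting those four steps together, a minimal sketch of what such a route could look like follows. This is not the actual class (see ./integrations/HL7ToEvents.java for that); in particular the HAPI parsing details and the Kafka endpoint options are assumptions:

import org.apache.camel.builder.RouteBuilder;

import ca.uhn.hl7v2.DefaultHapiContext;
import ca.uhn.hl7v2.HL7Exception;
import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.parser.Parser;

public class HL7ToEvents extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // 2. catch parsing problems so a malformed message doesn't kill the route
        onException(HL7Exception.class)
            .handled(true)
            .log("Could not parse HL7 message: ${exception.message}");

        // 1. start from the HL7 topic (broker and consumer options omitted here)
        from("kafka:{{kafka.from.topic}}")
            .routeId("hl7-to-patient-info")
            // 3. translate the HL7 payload with the HAPI parser
            .process(exchange -> {
                String raw = exchange.getIn().getBody(String.class);
                Parser parser = new DefaultHapiContext().getPipeParser();
                Message hl7 = parser.parse(raw);
                // ... extract patient id, name and event code from the parsed message ...
                exchange.getIn().setBody("{\"message\":\"human readable event built from " + hl7.getName() + "\"}");
            })
            // 4. send the human-readable event to the destination topic
            .to("kafka:{{kafka.to.topic}}");
    }
}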

Next you can find the relevant lines of the second part of the integration layer, class EventsToTelegramBot (./integrations/EventsToTelegramBot.java in the repository).

This integration comprises two routes; the 1st one, events-to-bot, is even simpler:

1. Again it starts from a topic: from("kafka:{{kafka.from.topic}}...")

2. Then some exception handling: .onException(...)

3. Passes the message on to the other route: .to("direct:send-event-to-bot")

The 2nd route, send-event-to-bot:

1. Sets some headers to configure the HTTP component: .setHeader(...)

2. Sends the message to the Telegram Bot using the HTTP component: .to("http://{{telegram-bot.host}}:{{telegram-bot.port}}/new-message")
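
Again as a sketch rather than the real code, the two routes described above could look roughly like this:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class EventsToTelegramBot extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // 2. generic safety net for delivery problems
        onException(Exception.class)
            .handled(true)
            .log("Could not deliver event to the bot: ${exception.message}");

        // 1st route: consume translated events from the topic...
        from("kafka:{{kafka.from.topic}}")
            .routeId("events-to-bot")
            // 3. ...and pass the message on to the second route
            .to("direct:send-event-to-bot");

        // 2nd route: configure the HTTP call and send the event to the bot
        from("direct:send-event-to-bot")
            .routeId("send-event-to-bot")
            .setHeader(Exchange.HTTP_METHOD, constant("POST"))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            .to("http://{{telegram-bot.host}}:{{telegram-bot.port}}/new-message");
    }
}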

Let’s run our integrations in two different (additional) terminal windows:

  • 07a-run-integration.sh in charge of translating from HL7 to human readable text events
  • 07b-run-integration.sh in charge of sending those events to the Telegram Bot

So, please in a different terminal run:

$ ./07a-run-integration.sh

And yet in another terminal window, run:

$ ./07b-run-integration.sh

Let’s run a test that generates a change in the status of a patient:

NOTE: In order for this test to work you should have signed up with the personalId used below. You should have done this before, when testing the Telegram Bot.

curl -H 'Content-Type: application/json' -X PUT \
  -d '{"patientId":2,"personalId":"9876543210W","firstName":"PETER","lastName":"JONES","stage":"admission"}' \
  http://localhost:8080/api/patients/2

If it all works properly no error should be received and a message should be sent to the Telegram App.

If you use the desktop app you'll see this.

Then in the Telegram App you’ll see the message.

So for now we have the HIS Backend running locally, both integrations running in OpenShift (although we see their logs locally) and the Telegram Bot also running in OpenShift.

Next step is to run the frontend locally pointing to the backend running also locally.

This time we have to run a Node JS application that contains the code of an Angular JS application (with Material Design) along with a proxy that sends all /api/patients requests to http://localhost:8080/api/patients.

INFO: The trick is in the dev script in ./frontend/package.json, which runs client and server in parallel. client uses --proxy-config server.conf.js to set the proxy rules. Open the file and check the line npm run dev.

$ ./08-run-frontend.sh

> frontend@0.0.0 dev /U.../state-machine-assistant/frontend
> npx concurrently --kill-others "npm run client" "npm run server"
...

Now open a browser and point to http://localhost:4200. You should see something like this.

Now add a Reason and click on Next for patient PETER JONES (9876543210W). This way you change the status from admission to triage.

If you go to the backend terminal you should see something like:

Record sent to partition 0 with offset 74

On integration 7a:

[2] 2020-01-15 11:49:40.131 INFO  [Camel (camel-k) thread #1 - KafkaConsumer[hl7-events-topic]] hl7-to-patient-info - body: TVNIfF5+XCZ8QURUMXxNQ018TEFCQURUfE1DTXwxOTg4MDgxODExMjZ8U0VDVVJJVFl8QURUXkEwOHxNU0cwMDAwMXxQfDIuNA1FVk58QTA4fDIwMDUwMTEwMDQ1NTAyfHx8fHwNUElEfHwyfDk4NzY1NDMyMTBXfHxKT05FU15QRVRFUnx8MTk2MTA2MTV8TXx8MjEwNi0zfDEyMDAgTiBFTE0gU1RSRUVUXl5HUkVFTlNCT1JPXk5DXjI3NDAxLTEwMjB8R0x8KDkxOSkzNzktMTIxMnwoOTE5KTI3MS0zNDM0fig5MTkpMjc3LTMxMTR8fFN8fDk4NzY1NDMyMTBXNTAwMV4yXk0xMHwxMjM0NTY3ODl8OS04NzY1NF5OQw1QVjF8MXxJfEVSXl5eXl5eQnxVfHx8MzdeTUFSVElORVpeSk9ITl5eXl5eXkFjY01ncl5eXl5DSXx8fDAxfHx8fDF8fHwzN15NQVJUSU5FWl5KT0hOXl5eXl5eQWNjTWdyXl5eXkNJfDJ8NDAwMDc3MTZeXl5BY2NNZ3JeVk58NHx8fHx8fHx8fHx8fHx8fHx8fHwxfHxHfHx8MjAwNTAxMTAwNDUyNTN8fHx8fHwNR1QxfDExMjJ8MTUxOXxKT0hOXkdBVEVTXkENSU4xfDAwMXxBMzU3fDEyMzR8QkNNRHx8fHx8MTMyOTg3DUlOMnxJRDE1NTEwMDF8U1NOMTIzNDU2NzgN
[2] Encoded Message TVNIfF5+XCZ8QURUMXxNQ018TEFCQURUfE1DTXwxOTg4MDgxODExMjZ8U0VDVVJJVFl8QURUXkEwOHxNU0cwMDAwMXxQfDIuNA1FVk58QTA4fDIwMDUwMTEwMDQ1NTAyfHx8fHwNUElEfHwyfDk4NzY1NDMyMTBXfHxKT05FU15QRVRFUnx8MTk2MTA2MTV8TXx8MjEwNi0zfDEyMDAgTiBFTE0gU1RSRUVUXl5HUkVFTlNCT1JPXk5DXjI3NDAxLTEwMjB8R0x8KDkxOSkzNzktMTIxMnwoOTE5KTI3MS0zNDM0fig5MTkpMjc3LTMxMTR8fFN8fDk4NzY1NDMyMTBXNTAwMV4yXk0xMHwxMjM0NTY3ODl8OS04NzY1NF5OQw1QVjF8MXxJfEVSXl5eXl5eQnxVfHx8MzdeTUFSVElORVpeSk9ITl5eXl5eXkFjY01ncl5eXl5DSXx8fDAxfHx8fDF8fHwzN15NQVJUSU5FWl5KT0hOXl5eXl5eQWNjTWdyXl5eXkNJfDJ8NDAwMDc3MTZeXl5BY2NNZ3JeVk58NHx8fHx8fHx8fHx8fHx8fHx8fHwxfHxHfHx8MjAwNTAxMTAwNDUyNTN8fHx8fHwNR1QxfDExMjJ8MTUxOXxKT0hOXkdBVEVTXkENSU4xfDAwMXxBMzU3fDEyMzR8QkNNRHx8fHx8MTMyOTg3DUlOMnxJRDE1NTEwMDF8U1NOMTIzNDU2NzgN
IN2|ID1551001|SSN12345678||132987^JOHN^^^^^^AccMgr^^^^CI|||01||||1|||37^MARTINEZ^JOHN^^^^^^AccMgr^^^^CI|2|40007716^^^AccMgr^VN|4|||||||||||||||||||1||G|||20050110045253||||||89|9-87654^NC
[2] sendingApplication ADT1
[2] >>> HL7 code: ADT event: A08
...

On integration 7b:

[1] 2020-01-15 11:49:40.148 INFO [Camel (camel-k) thread #2 - KafkaConsumer[events-topic]] events-to-bot - Route started from Kafka Topic events-topic
[1] 2020-01-15 11:49:40.148 INFO [Camel (camel-k) thread #2 - KafkaConsumer[events-topic]] events-to-bot - body: {"personalId":"9876543210W","patientId":"2","message":"Patient PETER JONES with ID(9876543210W) has been updated (A08) in Black Mountain"}
[1] 2020-01-15 11:49:40.148 INFO [Camel (camel-k) thread #2 - KafkaConsumer[events-topic]] events-to-bot - Sending message to telegram bot http://telegram-bot:tcp://172.30.247.180:8080/new-message: {"personalId":"9876543210W","patientId":"2","message":"Patient PETER JONES with ID(9876543210W) has been updated (A08) in Black Mountain"}
...

And hopefully you’ll see a notification:

And the whole message as before:

So for now we have the HIS Frontend and Backend running locally, both integrations running in OpenShift (although we see their logs locally) and the Telegram Bot also running in OpenShift.

The next step is to deploy the HIS Frontend and Backend.

Deploy the remaining components to OpenShift

To deploy the frontend, as we did before for the Telegram Bot, we're going to use Nodeshift. On the other hand, for the backend we cannot use Nodeshift because it is a Java/Maven app... but no worries, we have a similar tool: the Fabric8 Maven Plugin.

For now just run this command:

./09-deploy-frontend-and-backend.sh

Eventually you’ll see in Projects->YOUR_PROJECT->Workloads something like this:

Previously we had two scripts to run the integrations HL7 to Events and Events to Telegram Bot; now we only have one, ./10-deploy-integration.sh. The other difference is that we don't want to see the logs locally, so there is no need for the --dev flag any more.

This is the relevant piece in our script:

./kamel run --configmap=hl7-to-events \
  -d camel-gson -d mvn:ca.uhn.hapi:hapi-base:2.3 -d mvn:ca.uhn.hapi:hapi-structures-v24:2.3 \
  ./integrations/HL7ToEvents.java

./kamel run --configmap=events-to-bot ./integrations/EventsToTelegramBot.java

Now please run the script; you should receive an output like this:

./10-deploy-integration.sh
integration "hl7to-events" created
integration "events-to-telegram-bot" created

Final tests

Now that we have everything deployed in OpenShift, it’s time to do the final test.

We only need the route of our HIS Frontend application; you can get it by running this command:

oc get route/frontend -n $PROJECT_NAME

You can also use the web console: go to Projects->YOUR_PROJECT->Workloads, click on frontend, then look for Resources->Routes.

Open a browser and change the status of PETER JONES at will, you should expect the same results as before.
