Using Docker and Refinitiv Real-Time Connector for testing Streaming Market Data applications

Umer Nalla
LSEG Developer Community
21 min read · Jan 19, 2022

Knowledge Prerequisite:

  • Working knowledge of Windows, developer tools, Refinitiv RT-SDK and basic Linux + Windows command line usage
  • Some experience of developing with Refinitiv Streaming APIs such as EMA, ETA and/or Websocket API

Introduction

The Refinitiv Real-Time SDK, consisting of the high-level Enterprise Message API (EMA) and the lower-level Enterprise Transport API (ETA), is recommended for developers writing or migrating a real-time streaming data Consumer, Posting (Contributor) or Provider application in Java or C++.
Alternatively, if you are using Python or other languages then you can also consider using the Websocket API for Consumer + Contributor applications.

Testing your code

Under normal circumstances, developers can test their:

  • EMA, ETA or Websocket Consumer applications by connecting to a live feed:
    - over the internet via Real-Time Optimised (RTO),
    - via a deployed Refinitiv Real-Time Distribution System (RTDS)
    - or the Refinitiv Real-Time Connector (RTC).
  • Provider type applications generally require an RTDS to publish data to, and from which to consume the published data.
  • Contributor (Posting) applications require access to either
    - the Refinitiv Contributions Channel (for external contributions)
    - an RTDS (plus a Contribution gateway, for external contributions)

However, there can be scenarios when you as a developer may want to test your code without a live feed or access to an RTDS — for example:

  • no access to an internet connection
  • no credentials or user ID available yet for consuming data
  • test with a specific payload/content set
  • no access to your corporate network
  • your Market Data team has not yet configured your RTDS system to receive data from your Provider
  • you want to test a Provider or Posting application with different payloads etc without causing any issue on the RTDS system
  • and so on…

In my previous article — Testing EMA Consumer and Provider Applications, I covered a few different options such as
– direct connect between Provider and Consumer
– capture and playback of data
– playback of canned data
– using the Refinitiv Real-Time Connector (formerly known as ADS POP)

Of the above, I would say my favourite is probably using the Refinitiv Real-Time Connector. A simplistic way of describing RTC is ‘a cut down RTDS in a single application’ — a description which no doubt undersells RTC — but from a developer’s perspective, it provides what I need to test my applications without accessing (or disturbing) a real RTDS deployment. You may already be aware of the similar ADS POP application — which in simple terms was the precursor to RTC.

Why do I prefer using RTC to the other methods? Because it allows greater flexibility in my testing e.g.:
· some control over the ADS/ADH-type functionality, e.g. drop a service, log out a user, check various statistics and so on
· the ability to Post (contribute) data to a Cache service that I define on the RTC
· the ability to connect & test my Interactive and Non-Interactive providers without a dependency on the Market Data team

Do you have access to an RTC?

As mentioned in the article above, RTC is a licenced product — so it may not be an option for all developers. However, many of our client organisations have the required licences which allow them access to RTC — so please check with your Market Data team or your Refinitiv Account team.

If you plan to attempt the following steps, you will need a valid Licence file for RTC in order to successfully complete the process.

If you do have an RTC licence available to you, in this article I am going to show how you can use RTC deployed in Docker Desktop to test your Consumer, Provider and Contributor applications on your localhost Windows PC. If, like me, you are a relative Docker newbie, I will share some of what I have learnt about Docker along the way too.

If you are a competent Linux user, you should be able to adapt the notes below and achieve the same on your Linux device as well.

NOTE: Before proceeding, please be aware of two key points:

· Refinitiv does not officially support RTC running in a (Windows) Docker Desktop instance — official support is available with a standard Linux Docker deployment. However, I have found the above to be quite stable and given the next point — this should not be a major issue ….
· Whilst RTC is very convenient for local testing during the development phase, you are strongly advised to test all applications within the appropriate ‘real’ UAT environment before moving to UAT and Production stage. The live system will almost certainly be configured/optimised differently from the RTC container instance we are about to create.

An article of two halves

Due to the detailed nature of this article, it has been split into two parts

  • This 1st half covers Basic installation and configuration to get an instance of RTC running inside a Docker container — supporting Consumer functionality
  • The 2nd half details the additional configuration of RTC to support Providers and Posting (contributions) — as well as a brief look at the RTC monitoring tool

Part 1 - Deploying RTC inside Docker

Firstly, you need to ensure you have Docker Desktop installed on your host PC. If you have not already done so, you can follow the official guides on the Docker website, e.g. Install Docker Desktop on Windows — guides for other OSs are available on the same website.

Assuming you have obtained an RTC Licence file from your Market Data team or Refinitiv Account team, you can then deploy an RTC Docker image and build the Docker container as detailed below.

Local config, log and licence files

Firstly, I am going to create a localhost directory structure, where I store and access my RTC config, log files and licence file — outside of the Docker container.
I do this so that these files persist outside of the RTC container — allowing me to

  • deploy the licence file, edit the config file, view the log files easily from Windows
  • rebuild the docker container (e.g. when a new version is released) easily

So, for example, I created the following structure:

My RTC container directory structure

where the:

  • config folder contains my RTC config file and licence file
  • the debug folder is where any RTC debug will be stored
  • and log folder will contain the RTC log files

The above folders are on my localhost (Windows) drive. However, the RTC container also needs to access the config files and to output debug and log files. Therefore, I will need to define some bind mounts, to mount the above directories inside my RTC container.

Why am I creating bind mounts? Docker does not store persistent data inside containers and files written to a container’s storage won’t be accessible once the container is stopped. Also, accessing files inside a container’s storage even while it’s running is not easy. By using bind mounts, we can ensure these files persist outside the container and are easily accessible using the standard host’s file system tools.

In the config folder, I will place the
· RTC config file — rtc.cnf (see later for details)
· RTC License file — REFINITIV_LICENSE (you should have already obtained)

My Docker Compose file

The bind mounts and other docker operations & config can be performed on the command line using docker commands. However, I prefer to use a docker-compose.yml file which will allow me to put my related config in one place and do the following with a single docker command:

· specify and pull my RTC image
· define my ports
· define my bind mounts
· compose my container

I created a docker-compose.yml file that looks like this:

version: "3"
services:
  rtc:
    container_name: rtc
    hostname: rtc
    restart: "always"
    image: refinitivrealtime/rtc:3.5.4.L1
    ports:
      - 14002:14002
      - 14005:14005
      - 14003:14003
      - 15000:15000
      - 8080:8080
      - 8101:8101
    volumes:
      - "c:/docker/rtc354/config:/opt/distribution/config"
      - "c:/docker/rtc354/debug:/opt/distribution/debug"
      - "c:/docker/rtc354/log:/opt/distribution/log"

For now, I copied the above docker-compose.yml file into my c:\docker\rtc354 folder — and I will come back to this later.

Let’s break down the above in a bit more detail:

1. set the service, container and host name all to rtc
2. set to restart policy to automatically restart the container if it stops (unless manually stopped) — full details at Start containers automatically | Docker Documentation
3. specify the docker image to pull from the Docker Hub — at the time of writing that was version 3.5.4.L1
4. publish the ports for the RTC container that I want to interact with from my Windows OS:
· 14002: RSSL Consumer connecting to RTC
· 14003: Non-interactive provider connecting to RTC
· 14005: Interactive provider to listen for RTC connection (non-default value)
· 15000: WebSocket Consumer to RTC
· 8101: Legacy MarketFeed SSL subscriber to RTC

5. specify my bind mounts e.g., c:/docker/rtc354/config:/opt/distribution/config

  • Where c:/docker/rtc354/config is my local host path for my config files and
  • Where /opt/distribution/config is the RTC docker container config file path

By default, an Interactive Provider and the RTC would communicate on port 14002 — however, since I may want to run both Consumer and IProvider on my host PC, I have changed the IProvider port to 14005 (I will come back to this in the 2nd half).
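For reference, if you prefer not to use Compose, the same container can be created with a single docker run command, passing the ports and bind mounts as flags. This is only a sketch, assuming the same image tag and localhost paths used in this article:

```shell
# One-off equivalent of the docker-compose.yml above
# (image tag, ports and local paths are from my setup - adjust to yours)
docker run -d --name rtc --hostname rtc --restart always \
  -p 14002:14002 -p 14003:14003 -p 14005:14005 \
  -p 15000:15000 -p 8080:8080 -p 8101:8101 \
  -v c:/docker/rtc354/config:/opt/distribution/config \
  -v c:/docker/rtc354/debug:/opt/distribution/debug \
  -v c:/docker/rtc354/log:/opt/distribution/log \
  refinitivrealtime/rtc:3.5.4.L1
```

Note that the `\` line continuations above are for a Linux-style shell; in a Windows cmd console you would use `^` instead, or put the command on one line.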

Before I can create my RTC container, I need to configure my RTC as per my requirements. For my usage scenarios, I want to configure my RTC to:

  • Connect to a Refinitiv Real-Time Optimized (RTO) feed for my live streaming data.
  • Define a cache service so that I can test Contributions code by Posting data to the cache
  • Configure at least one Interactive and one Non-Interactive service

Initial RTC Configuration

For the initial phase, I configured only the basics required to get an instance of RTC up and connected to RTO (the Refinitiv Real-Time internet based streaming data service). If you don’t have access to RTO and want to use your own internal RTDS feeds — speak to your Market Data team for the necessary config parameters.

I am not an RTDS or RTC expert, so I took a vanilla rtc.cnf file from an RTC package and just ‘hacked’ it to meet my above requirements. I have included a vanilla rtc.cnf for your reference.
I have also provided a copy of my modified rtc.cnf file — minus my credentials. So, in theory, you could just add your credentials/connectivity params and run.
However, I think it would be useful to look at the key changes I made — to provide some understanding for when you want to customise RTC to your needs. Just to be clear, whichever file you use, it will need to be renamed to rtc.cnf

If you have an internal Market Data team — they should be able to help you configure your RTC as per your needs.

NOTE: for some of the following rtc.cnf entries, you will be updating existing entries — for others, you will be adding new entries — so note the wording I use.

Connect to an RTO feed

This step is only required if you are using Refinitiv Real-Time Optimised — if you are sourcing your real-time data from an internal ADS, speak to your Market Data team — they should be able to help you configure your RTC appropriately.

Most of the RTO related changes I made were based on my colleague’s excellent article — Configuring adspop docker to connect to Refinitiv Real-Time Optimized (adspop is the precursor to RTC)
In the rtc.cnf, update the routeList configuration to cloud so it will connect to the Refinitiv Real-Time Optimized.

*rtc*routeList : cloud

Next, I need to provide my RTO MachineID and Password — however, the password needs to be obfuscated first

To obfuscate my password, I used the dacsObfuscatePassword tool that comes with the RTC package — which your Market Data team should be able to supply you with.

If not, you can download the same tool from our Developer Portal’s RT-SDK Tools Downloads page — Refinitiv Real-Time SDK — Tools. On that page, you will note there is a Windows and Linux version of ‘RCC — Dacs Obfuscate Login Tool’.

Once you unpack the above zip file, you will find the dacsObfuscatePassword tool, which you can run as follows:

C:> DacsObfuscateLogin -e <your MachineID password>
Or
# ./dacsObfuscatePassword -e <your MachineID password>

to generate an obfuscated version of your password.

And then update the userName and obfuscated userPassword entries in the rtc.cnf as follows:
*rtc*cloud.route*userName : <your RTO machine ID>
*rtc*cloud.route*userPassword : <your obfuscated password>

You can also update your location entry — so that RTO will connect to the closest endpoints — the default is us-east. However, as I am based in the UK, I updated mine to:
*rtc*cloud.route*location : eu-west

If you are in APAC, you could update it to:
*rtc*cloud.route*location : ap-southeast

Note: Whilst the RTC container is running, it will be logged into RTO using the above MachineID. So, if you need to use the same MachineID in another application — you can stop the RTC container, to avoid any login conflicts.

Licence File

I also updated the location of my licence file to match the bind mount folder I defined earlier in my docker-compose.yml file:
*rtc*licenseFile : /opt/distribution/config/REFINITIV_LICENSE

For reference, the licence file you have been provided should look something like this:

{
LICENSEID = <xxx>
KEY = <license key>
LICENSEE = <your details>
NODEID = * (for temp licence)
EXPIRATION = <Date>
LICENSETYPE = <Type>
PRODUCT = RTC
VERSION = 3.5
}

Dictionary + Logfile folders

I also updated the log file path to match the container folder defined in my docker-compose.yml file:
*rtc*logger*file : /opt/distribution/log/rtc.log

RTC needs dictionary files to decode the data — a set of which are included in the RTC docker image. When the container is created, it places them in the container’s /opt/distribution/etc folder.
However, the default rtc.cnf looks elsewhere, so I updated the locations as below:
*fieldDictionary : /opt/distribution/etc/RDMFieldDictionary
*enumFile : /opt/distribution/etc/enumtype.def

Deploy and run the RTC container — finally(!)

Once I copied the updated rtc.cnf and REFINITIV_LICENSE files into my config folder, I then proceeded to build and run my RTC container.
If you recall, earlier on I copied my docker-compose.yml file into my c:\docker\rtc354 folder — which I then ‘run’ using the following command.

c:\docker\rtc354> docker-compose up -d

The above command creates and starts my RTC instance in detached mode — i.e., runs the container in the background. The first time you run the above, the output should look something like this:

[+] Running 16/16
- rtc Pulled 16.1s
- 1e7cdd8e55f9 Pull complete 9.4s
- ac8f959ec7d9 Pull complete 9.5s
- fd08f01e9913 Pull complete 9.8s
- db7070756e15 Pull complete 9.8s
- 491754c3df24 Pull complete 9.9s
- 97edfd880226 Pull complete 9.9s
- 6334591e144e Pull complete 10.0s
- 4a0d7610dbad Pull complete 10.0s
- 61fa6819281f Pull complete 10.1s
- 28ebdd3b2b7c Pull complete 10.2s
- 7d59bce42f1c Pull complete 11.5s
- 6e0927527697 Pull complete 11.6s
- 2596cef2f39d Pull complete 11.7s
- b2240f2b1c86 Pull complete 13.1s
- f75501f123b7 Pull complete 13.1s
[+] Running 1/1
- Container rtc Started 1.7s

To confirm the RTC container is running, you can execute:

C:>docker inspect -f "{{.State.Running}}" rtc

which should return true.

You could also use the Docker Desktop Dashboard to check the status:

Docker dashboard showing RTC is up and running

If the RTC fails to start, then you can check the log folder e.g., c:\docker\rtc354\log for the rtc.log file. If the content of the log file does not help you identify the issue, please speak to your Market Data team or raise a support ticket on My.Refinitiv for the Refinitiv Real-Time Distribution System product.

Using VS Code to interact with Docker

I should mention here that if, like me, you are a fan of Visual Studio Code, you can also use that to deploy Docker Containers and interact with them — by installing the various Docker related extensions for VS Code — Docker — Visual Studio Marketplace

For example, if I open the folder containing my docker-compose.yml file inside VS code, I can then right-click the yml file and run Compose Up / Down / Restart etc.

I can also inspect the Docker Containers and Images from within VS Code, as well as restart/stop a container or delete an Image/Container and so on:

Assuming the RTC container started up and RTC is running, you can then connect a Websocket API or RT-SDK consumer example to consume some data from RTC using localhost:14002 (for RSSL connection) or localhost:15000 (for Websocket connection).
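Before pointing an application at the container, it can help to confirm the published ports are actually reachable from the host. Here is a minimal, generic sketch using plain Python sockets (nothing RTC-specific, and the function name is mine):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # The ports published in my docker-compose.yml
    for port in (14002, 14003, 14005, 15000):
        state = "open" if port_open("localhost", port) else "closed"
        print(f"localhost:{port} -> {state}")
```

If a port shows as closed while the container reports as running, re-check the `ports:` section of the compose file.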

For instance, I can test it with any Consumer example such as 100_MP_Streaming that is supplied with EMA Java or C++.
I just need to ensure I have
· changed the code to consume data from the ELEKTRON_DD service
· and the host is set to localhost:14002

OmmConsumer consumer( OmmConsumerConfig().host( "localhost:14002" ).username( "user" ));

consumer.registerClient( ReqMsg().serviceName( "ELEKTRON_DD" ).name( "IBM.N" ), client );

When I run the Consumer example, it will connect to the RTC instance on port 14002 and request some data from the ELEKTRON_DD service.

If you are using the Websocket API and normally request data without specifying a service name e.g.:

{
  "ID": 2,
  "Key": {
    "Name": "TRI.N"
  }
}

there is one more change you will need to make to the rtc.cnf file — i.e., specifying a default service for WebSocket requests — see towards the end of the article for details.
In the meantime, you could just change your code to specify a service name in the JSON request message e.g.:

{
  "ID": 2,
  "Key": {
    "Name": "TRI.N",
    "Service": "ELEKTRON_DD"
  }
}
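If you are scripting Websocket API tests in Python (as mentioned in the introduction), a small helper can build these request messages for you. This is a sketch based only on the JSON shapes shown above; the helper name is mine, not part of any API:

```python
import json

def item_request(name, service=None, stream_id=2):
    """Build a Websocket API item request like the JSON examples above.

    When `service` is None, the Service key is omitted and the request relies
    on the RTC's configured default JSON service (covered later in the article).
    """
    msg = {"ID": stream_id, "Key": {"Name": name}}
    if service is not None:
        msg["Key"]["Service"] = service
    return json.dumps(msg)

print(item_request("TRI.N", service="ELEKTRON_DD"))
# {"ID": 2, "Key": {"Name": "TRI.N", "Service": "ELEKTRON_DD"}}
```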

Part 2 — Configure RTC for Providers and Posting

Once I had the basic Docker install, deployment and RTC configuration out of the way, I moved onto the focus of these articles — i.e., configuring the RTC to allow me to:

  • Post data to a cache
  • Test both Interactive and Non-Interactive Providers more easily
  • Access some functionality normally only available to the MDS admin team

Configure RTC for a Posting Cache and NIProvider service

The first thing I added was a service that would allow me to test Post (Contributor) type applications as well as Non-Interactive provider code.

To enable the NI-Provider functionality of the RTC, I activated the RsslServer as follows (in the rtc.cnf file):

!!! RSSL
*rtc*enableRsslServer : True
*rtc*rsslServerPort : distribution_rssl_source
*rtc*distribution_rssl_source*serverToClientPings : True
*rtc*distribution_rssl_source*clientToServerPings : True
*rtc*distribution_rssl_source*pingTimeout : 30
*rtc*distribution_rssl_source*minPingTimeout : 6
*rtc*distribution_rssl_source*maxConnections : 100
*rtc*distribution_rssl_source*guaranteedOutputBuffers : 200
*rtc*distribution_rssl_source*maxOutputBuffers : 400
*rtc*distribution_rssl_source*compressionType : 0
*rtc*distribution_rssl_source*zlibCompressionLevel : 3
*rtc*distribution_rssl_source*interfaceName :
*rtc*distribution_rssl_source*reusePort : False
*rtc*distribution_rssl_source*connectionType : 0
*rtc*distribution_rssl_source*serverCert :
*rtc*distribution_rssl_source*serverPrivateKey :

In the above section, I set the first parameter to True and removed the ‘!’ comment from the remaining lines in the section.

I then defined a service called NI_PUB that I could:

  • Post (contribute) data to, from a Posting application
  • publish data to, from a NIProvider application:
!!! Non-interactive Cache for Posting + NIPROV
*rtc*cacheServiceList : NI_PUB
*rtc*NI_PUB*cacheLocation : ssl
*rtc*NI_PUB*cacheType : sourceDriven
*rtc*NI_PUB*maxCache : 50000
*rtc*NI_PUB*domainsCached : ALL
*rtc*NI_PUB*capabilityList : 6, 7, 8, 9, 10
*rtc*NI_PUB*buildCacheFromUpdates : False
*rtc*NI_PUB*markItemsOkOnUpdate : False
*rtc*NI_PUB*forwardPostUserInfo : False
*rtc*NI_PUB*dumpAccessLockFileName :
*rtc*NI_PUB*dumpAuditUserInfoFileName :
*rtc*NI_PUB*convertMfeedDataToRWF : False
*rtc*NI_PUB*validateMfeedToRWFConvForPosts : False
*rtc*NI_PUB*enableOverrideInsertRecordResp : False
*rtc*NI_PUB*overrideClearCache : False

Once I saved the above changes to my rtc.cnf file, I restarted my RTC container — via the Docker Dashboard, VS Code or from the cmd line e.g.

C:>docker restart rtc

Where rtc is the container name — as defined in the yml file.

To test if the above changes were valid, I used an EMA Posting example and NIProvider example.

NOTE: For all the test scenarios below, I am using EMA C++ examples — the same tests can be done with EMA Java — by making similar changes to the equivalent EMA Java example. Likewise, ETA developers should also be able to perform similar testing with the appropriate changes to the equivalent ETA examples.

Testing the Post (contribution) functionality

For a Posting test, we can use the EMA example 341_MP_OffStreamPost with the following changes in the main() and onRefreshMsg() methods:

// changes to main() method
UInt64 handle = consumer.registerClient( ReqMsg()
.domainType( MMT_MARKET_PRICE )
.serviceName( "NI_PUB" ).name( "MYTEST.RIC" ), client, closure );

// changes to onRefreshMsg() method:
_pOmmConsumer->submit( PostMsg().postId( postId++ )
.serviceName( "NI_PUB" )
.name( "MYTEST.RIC" ).solicitAck( true ).complete().payload(
RefreshMsg().payload(FieldList()
.addReal(25,35,OmmReal::ExponentPos1Enum ).complete())
.complete() ), ommEvent.getHandle() );

Once you build and run the example, you should see some output like:

Received:    AckMsg
Item Handle: 1999905924208
Closure: 0000000000000001
Item Name: not set
Service Name: not set
Ack Id: 1

Received: RefreshMsg
Handle: 1999902871904 Closure: 0000000000000001
Item Name: MYTEST.RIC
Service Name: NI_PUB
Item State: Open / Ok / None / ''
Name: ASK Value: 350

As you will note, we first get

  • the AckMsg acknowledging the Post was accepted by the RTC server
  • then we see the RefreshMsg confirming that the MYTEST.RIC subscription request to the RTC server was successful
  • and the payload has an ASK field as per the Post message we submitted earlier.

NOTE: As a colleague (thank you Zoya!) rightly pointed out — the first time you test the Posting functionality described above, you may see an ‘F10 — Not in Cache’ error. This would be the case if the RIC you are contributing to, does not yet exist (and we are requesting it in the main() method).

Test the NI-Provider functionality

For the NIProvider testing, we can use any of the EMA NIProvider examples, such as 100_MP_Streaming from the NIProvider folder (not the Consumer folder).

The example as provided should already have the host set to localhost:14003 and the service name should be set to NI_PUB in the source code.

If you run the NIProv100 example, you should see some warnings about missing Dictionary configuration, followed by something like:

loggerMsg
TimeStamp: 16:11:17.744
ClientName: ChannelCallbackClient
Severity: Success
Text: Received ChannelUp event on channel Channel_7
Instance Name Provider_1_1
Connected component version: rtc3.5.3.L1.linux.rrg 64-bit
loggerMsgEnd

The above message confirms that the NIProvider successfully connected to the RTC instance. The example will then run for around 60 seconds publishing updates to the RTC server before exiting. To see some evidence that the data is really being published, we will need to fire up a Consumer example connecting to the RTC and subscribing to the service and instrument name as used by the NIProvider.

We can re-use the Consumer 100_MP_Streaming example — just change the service and instrument name as follows:

consumer.registerClient(ReqMsg()
.serviceName("NI_PUB").name("IBM.N"), client );

If you then run NIProv100 in one console and Cons100 in a 2nd console, you should see Cons100 receiving a Refresh and Updates for IBM.N from NI_PUB:

>Cons100
RefreshMsg
streamId="5"
name="IBM.N"
serviceName="NI_PUB"
Payload dataType="FieldList"
FieldList FieldListNum="0" DictionaryId="0"
FieldEntry fid="22" name="BID" dataType="Real" value="33.94"
FieldEntry fid="25" name="ASK" dataType="Real" value="39.94"
FieldEntry fid="30" name="BIDSIZE" dataType="Real" value="13"
FieldEntry fid="31" name="ASKSIZE" dataType="Real" value="19"
FieldListEnd
PayloadEnd
RefreshMsgEnd

UpdateMsg
streamId="5"
name="IBM.N"
serviceId="10000"
serviceName="NI_PUB"
Payload dataType="FieldList"
FieldList
FieldEntry fid="22" name="BID" dataType="Real" value="33.95"
FieldEntry fid="30" name="BIDSIZE" dataType="Real" value="14"
FieldListEnd
PayloadEnd
UpdateMsgEnd

Configure for Interactive Provider services

Finally, I wanted to configure the RTC to support a couple of IProvider services — i.e., the RTC mounting an IProvider application running on my local (Windows) host PC. When the RTC receives any data requests for those services, it will forward those requests onto my IProvider — which can then choose to fulfil or reject the requests.

The first step was to define a route for my Interactive Provider service — so I updated the existing route list:

*rtc*routeList : cloud,iprov

And then declared the services for that route — note that I am also changing the port for this route to 14005

!!! IPROV on Windows Host Route
*rtc*iprov.route*serviceList : IPROV, DIRECT_FEED
*rtc*iprov.route*port : 14005
*rtc*iprov.route*hostList : host.docker.internal
*rtc*iprov.route*userName : umer.nalla
*rtc*iprov.route*protocol : rssl
*rtc*iprov.route*singleOpen : True
*rtc*iprov.route*allowSuspectData : True
*rtc*iprov.route*disableDataConversion : True
*rtc*iprov.route*compressionType : 1
*rtc*iprov.route*pingInterval : 2
*rtc*iprov.route*maxCache : 50000
*rtc*iprov.route*IPROV*appServiceName : IPROV
*rtc*iprov.route*DIRECT_FEED*appServiceName : DIRECT_FEED

You could just define a single service — however, I wanted some flexibility for testing purposes. Most of the EMA IProvider examples and the default EMA config file are hardcoded with the DIRECT_FEED service name. Defining an additional IPROV service just allows me to validate any config changes etc I make (to ensure I am not inadvertently relying on some defaults).

I defined the services as follows:

!!! Interactive Provider
*rtc*serviceList : IPROV, DIRECT_FEED
*rtc*IPROV*cacheLocation : srcApp
*rtc*IPROV*cacheType : sinkDriven
*rtc*IPROV*maxCache : 50000
*rtc*IPROV*domainsCached : ALL
*rtc*IPROV*capabilityList : 6, 7, 8, 9, 10
*rtc*IPROV*buildCacheFromUpdates : False
*rtc*IPROV*markItemsOkOnUpdate : False
*rtc*IPROV*forwardPostUserInfo : False
*rtc*IPROV*dumpAccessLockFileName :
*rtc*IPROV*dumpAuditUserInfoFileName :
*rtc*IPROV*convertMfeedDataToRWF : False
*rtc*IPROV*validateMfeedToRWFConvForPosts : False
*rtc*IPROV*enableOverrideInsertRecordResp : False
*rtc*IPROV*overrideClearCache : False
!!! Interactive Provider
*rtc*DIRECT_FEED*cacheLocation : srcApp
*rtc*DIRECT_FEED*cacheType : sinkDriven
*rtc*DIRECT_FEED*maxCache : 50000
*rtc*DIRECT_FEED*domainsCached : ALL
*rtc*DIRECT_FEED*capabilityList : 6, 7, 8, 9, 10

As before, once the modified rtc.cnf file had been saved, I restarted the RTC container — via the Docker Dashboard, VS Code or from the cmd line e.g.

C:>docker restart rtc

Test the Interactive-Provider functionality

Once restarted, I can test my IProvider application. However, I need to ensure that the IProvider application will listen on port 14005 (for the inbound RTC connection) — rather than the default 14002 (I covered the reason for doing this in part 1).

For instance, as I was using the IProv100 example, I changed the OmmProvider port configuration as follow:

OmmProvider provider( OmmIProviderConfig()
.port( "14005" ), appClient );

Once I make the above change, rebuild and fire up IProv100, I should see something like:

>IProv100.exe
loggerMsg
TimeStamp: 15:38:49.023
ClientName: ServerChannelHandler
Severity: Success
Text: Received ChannelUp event on client handle 2214874061056
Instance Name Provider_1_1
Connected component version: rtc3.5.3.L1.linux.rrg 64-bit
loggerMsgEnd

To test the IProvider, I can then run a Consumer example connecting to the RTC and requesting data from the DIRECT_FEED service. Once again, I can re-use the Consumer 100_MP_Streaming example — by just changing the service name as follows:

consumer.registerClient(ReqMsg()
.serviceName("DIRECT_FEED").name("TRI.N"), client );

NOTE: Just to be clear, the consumer will still be using port 14002 — connecting to the RTC, which will forward any item requests onto the IProvider on port 14005.

Provided the IProv100 (pun intended!) is still running, the Consumer should get some data back e.g.:

RefreshMsg
streamId="5"
domain="MarketPrice Domain"
Solicited
RefreshComplete
state="Open / Ok / None / 'Refresh Completed'"
name="TRI.N"
serviceName="DIRECT_FEED"
Payload dataType="FieldList"
FieldList
FieldEntry fid="22" name="BID" dataType="Real" value="39.90"
FieldEntry fid="25" name="ASK" dataType="Real" value="39.94"
FieldEntry fid="30" name="BIDSIZE" dataType="Real" value="9"
FieldEntry fid="31" name="ASKSIZE" dataType="Real" value="19"
FieldListEnd
PayloadEnd

And so on….

When you move on from your development phase & your Interactive Provider application is ready for UAT testing and connecting to your real RTDS system, remember to change the port back to the default 14002 (or whichever port your RTDS system uses).

Admin functions with rtcmon

The RTC package also includes an admin and monitoring tool called rtcmon (short for RTC Monitor), which can be found in the /opt/distribution folder of the RTC container.

So, if I attach a shell to the RTC container with:
C:> docker exec -it rtc /bin/bash
And then run rtcmon:
# rtcmon

OR as pointed out by a reader (thank you!), you can use the one-line alternative:
C:> docker exec -it -w /opt/distribution/ rtc ./rtcmon
(where the -w param is setting the working directory to the location of rtcmon before executing it)

You should see something like:

You can then experiment with the tool to see things like how many items each connected application has open or to drop services to observe the impact on your application etc.

Note: you will need to use Linux style navigation keys e.g. H, J, K, L for cursor movement, Tab for changing sections etc — as detailed in the RTC System Admin manual.

For example, to drop a service I perform the following:
· Select ‘Source Routes and Application Proxies’
· Tab to the ‘List of Source Routes’
· Select the route for the service you want to drop e.g. ‘cloud’
· Select the routeEnabled value (Enter to edit, Ctrl+U to delete the existing value)
· Replace the True with False to drop the service

My application should then receive a Status Message indicating ‘Service down’ for any open instruments.
Once I have checked my application behaviour, I can set the above value back to True and the instrument should recover with a Refresh Message.

Configure a default service for WebSocket Requests

As mentioned earlier, the RTC can be configured such that when making Websocket API requests, you don’t need to explicitly specify a service name — RTC can default to a configured service.
This requires an additional change to the rtc.cnf file as follows:
*rtc*defaultJsonServiceId : 10002

Where the 10002 represents the default service to use — however, this value may differ on your RTC installation.
To work out the correct Service Id, I fired up rtcmon and selected the ‘Service Statistics’ option and noted the ID for ELEKTRON_DD — which in my case was 10002:
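Alternatively, if you prefer a programmatic route, the service IDs can also be read from the Source Directory response that a Websocket API session receives after login. The sketch below is hedged: the nesting of the directory message is an assumption based on the typical RDM Source Directory layout, so check it against what your RTC actually returns (the helper name and sample message are mine):

```python
import json

# Illustrative Source Directory refresh, trimmed to the parts we need.
# NOTE: the exact nesting is an assumption - inspect the real message
# your RTC returns and adjust the lookups accordingly.
directory_refresh = json.loads("""
{
  "ID": 2, "Type": "Refresh", "Domain": "Source",
  "Map": {
    "KeyType": "UInt",
    "Entries": [
      {"Action": "Add", "Key": 10002,
       "Elements": {"Info": {"Elements": {"Name": "ELEKTRON_DD"}}}},
      {"Action": "Add", "Key": 10005,
       "Elements": {"Info": {"Elements": {"Name": "NI_PUB"}}}}
    ]
  }
}
""")

def service_ids(directory_msg):
    """Map service Name -> numeric service ID from a directory refresh."""
    ids = {}
    for entry in directory_msg.get("Map", {}).get("Entries", []):
        name = (entry.get("Elements", {})
                     .get("Info", {})
                     .get("Elements", {})
                     .get("Name"))
        if name:
            ids[name] = entry["Key"]
    return ids

print(service_ids(directory_refresh))  # {'ELEKTRON_DD': 10002, 'NI_PUB': 10005}
```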

Once I added the above ServiceId to my rtc.cnf, I restarted rtc as before

C:>docker restart rtc

I should then be able to make Websocket API requests from ELEKTRON_DD, without specifying the service name:

{
  "ID": 2,
  "Key": {
    "Name": "TRI.N"
  }
}

Other rtcmon + RTC functionality

There are probably other stats/figures and functions you can discover & experiment with in the relative safety of your own little ‘RTDS’ installation — by referring to the RTC System Admin manual linked below (your MDS admin team may also be able to advise you further).

You can also experiment with/test other scenarios to observe their impacts on your application e.g.

  • stop/restart RTC container
  • kill RTC service from the docker bash shell
  • restart docker
  • deploy multiple RTC instances — to test scenarios like
    - failover (with ChannelSets) when one RTC instance is stopped (and possibly restarted later)
    - publishing to multiple servers
    - source aggregation

Summary

As developers coding and testing Refinitiv Real-Time Consumer, Provider and Posting (Contribution) applications, certain of our test scenarios rely on our local Market Data Admin teams. These teams are often quite stretched and may not be able to provide access to RTDS testing resources in a timely manner — potentially causing a bottleneck in our testing regime.

Having access to our own Refinitiv Real-Time Connector allows us to perform some of these tests at a pace and time of our choosing — before we do our final testing in the ‘real’ RTDS environment. I hope the above guide helps you in the same way it has helped me in my RTDS testing regime, without having to vie for the Market Data admins’ time.

Documentation/Articles referenced:

NOTE: As mentioned before, I am by no means an RTDS/RTC or Docker expert — the above is based on my own experimentation and on pulling together materials from various Refinitiv sources — feel free to contact me with corrections/improvements.

You can find additional links to related articles and the official Developer Portal pages for the RT-SDK etc in the right-hand panel at the top of this article.
