How to Make Your OPC-UA Based Data Accessible and Useable with the Tempus IIoT NiFi OPC-UA Bundle

Access and Query an Unmodified, Original OPC-UA Data Stream for a Full Range of Use Cases and Consumption Patterns

--

by Chris Herrera

A Big Ask for OPC-Based Data

One common theme that I continue to hear from clients and at conferences is “help me connect to my facility or enterprise historians”.

Existing systems that collect these data streams today (SCADA networks, historians, etc.) can be very challenging to integrate into a modern data and analytics pipeline. At the same time, the demand for these datasets continues to increase across industry use cases as noted in the graphic below.

A Foundational, Template-Based Accelerator Approach

Our engineering approach to these types of challenges remains the same, namely, to continue building foundational, open source, easily consumable templates as part of Hashmap’s Tempus IIoT Accelerator and to extend the value of our industry memberships — such as the OPC Foundation — as much as possible.

So, if you are limited by the analytics capabilities of your traditional systems and are looking to further leverage your Big Data platform to capture, collect, and securely transmit OPC-based data to your various business groups, I’d encourage you to check out our OPC-UA Analytics Accelerator Foundations Data Sheet and continue reading for all the details on the NiFi-OPCUA-Bundle below which can also be accessed on GitHub.

Get Started with the NiFi-OPCUA-Bundle

These processors and the associated controller service give NiFi read-only access to OPC UA servers. The bundle provides 2 processors, GetOPCNodeList and GetOPCData. GetOPCNodeList lists the tags that are currently in the OPC UA server, while GetOPCData takes a list of tags and queries the OPC UA server for their values. The StandardOPCUAService controller service provides the connectivity to the OPC UA server so that multiple processors can leverage the same connection/session information.

Key Features

This bundle aims to provide a few key features:

  • Access to list the tag information currently in the OPC UA server
  • Access to query the data currently in the OPC UA server
  • Optional null value exclusion
  • Configurable timestamp selection
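
To make the null value exclusion feature concrete, here is a rough sketch of the kind of filtering involved. The record layout, tag names, and file path below are made up for illustration; the bundle's actual output format may differ:

```shell
# Hypothetical GetOPCData output: one "tag,timestamp,value" record per line.
# (The bundle's real output format may differ; this is only an illustration.)
cat <<'EOF' > /tmp/opc_data.csv
ns=2;s=SimulatedChannel.SimulatedDevice.Tag1,2017-08-01T08:27:00Z,42.5
ns=2;s=SimulatedChannel.SimulatedDevice.Tag2,2017-08-01T08:27:00Z,null
EOF

# With null value exclusion enabled, records whose value is null are dropped,
# which is roughly equivalent to:
grep -v ',null$' /tmp/opc_data.csv
```
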

System Requirements

  • JDK 1.8 at a minimum
  • Maven 3.1 or newer
  • Git client (to build locally)
  • OPC Foundation Stack (instructions to build below)

Build the OPC Foundation Stack

Clone the OPC Foundation GitHub repository:

git clone https://github.com/OPCFoundation/UA-Java.git

Change directory into the UA-Java directory

cd UA-Java

Execute the package phase. (NOTE: at the time of this writing there were test failures due to invalid tests; PRs exist to address these, but they have not yet been merged into master, so we need to skip tests.)

mvn package -DskipTests

Setup the local build environment for the processor

To build the library and get started first off clone the GitHub repository

git clone https://github.com/hashmapinc/nifi-opcua-bundle.git

Copy the jar built in the previous step from the cloned OPC Foundation repo (where {repo_location} is the location of the cloned repo and {version} is the version of the OPC Foundation code that was cloned):

{repo_location}/UA-Java/target/opc-ua-stack-{version}-SNAPSHOT.jar

Place that file into the following directory (where {repo_location} is the location where the nifi-opcua-bundle repo was cloned):

{repo_location}/nifi-opcua-bundle/opc-deploy-local/src/main/resources

Change directory into the root of the nifi-opcua-bundle codebase, located at:

{repo_location}/nifi-opcua-bundle

Execute a maven clean install

mvn clean install

A build success message should appear:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.384 s
[INFO] Finished at: 2017-08-01T08:27:00-05:00
[INFO] Final Memory: 31M/423M
[INFO] ------------------------------------------------------------------------

A NAR file should be located in the following directory

{repo_location}/nifi-opcua-bundle/nifi-opcua-bundle/nifi-opcua-bundle-nar/target

Copy this NAR file to NiFi's lib directory and restart (or start) NiFi.

Using the NiFi OPC-UA Bundle

Finding the Starting Node in the Address Space

Once NiFi has restarted, the processors can be added as normal by dragging a processor onto the NiFi canvas. You can filter the long list of processors by typing OPC in the filter box as shown below:

Add the GetOPCNodeList processor to the canvas and configure it by right-clicking on the processor and clicking Configure from the context menu. For now, go ahead and auto-terminate the failure relationship as shown below, since this is just a quick test (in production you will want to route this relationship somewhere useful).

Click on the SCHEDULING tab and set the run schedule to something like 10 seconds, as below; otherwise you will slam your OPC server with requests to enumerate the address space.

Next, configure the Properties by clicking on the PROPERTIES tab. Create a new controller service by clicking in the box to the right of the OPC UA Service property and clicking on Create a new service from the drop down as shown below.

This will bring up the Add Controller service modal box as shown below. Leave all as default and click Create.

Don’t worry about configuring the controller service just yet; let’s focus on the other properties of GetOPCNodeList. In this step we are going to configure the processor to let us visualize the list of tags within the OPC UA server, so let’s go ahead and set the following properties:

When you are done configuring the processor as per the table above, click on the little arrow to the right of the controller service (shown below). This will allow you to configure the controller service. NiFi will ask you to save changes to the processor before continuing, click Yes.

You should be presented with a list of controller services; on a fresh instance of NiFi you should only see the StandardOPCUAService controller service that we created above. Click on the pencil to the right of that controller service to open the Configure Controller Service modal box. Click on the PROPERTIES tab and configure it as per the image below, replacing the Endpoint URL with your own opc.tcp://{ipaddress}:{port} endpoint.

Click Apply to close the configuration modal. Back at the controller service list, enable the service by clicking the little lightning bolt next to it. When the Enable Controller Service modal appears, leave the scope as service only and click Enable. Once it is enabled, click Close to close the modal, then click the X at the top right corner of the modal box to return to the NiFi canvas. Go ahead and drop a GetOPCData processor on the canvas as we did with the GetOPCNodeList processor. Don’t worry about configuring it at this stage; just drop it on the canvas and create a relationship from GetOPCNodeList to GetOPCData: mouse over GetOPCNodeList, and when the arrow appears, drag it to the GetOPCData processor. When the Create Connection modal appears, check the Success box and click ADD. Your flow should look like the one below.

We are now ready to get the node list. Start the GetOPCNodeList processor by right-clicking it and selecting Start from the context menu. Refresh the canvas by right-clicking anywhere on the canvas and selecting Refresh from the context menu. You should see 1 flowfile queued in the Success relationship. At this point you can stop the GetOPCNodeList processor, as there is no reason to keep pulling the same data. Right-click on the Success relationship between the GetOPCNodeList and GetOPCData processors and choose List Queue from the context menu. This brings up a modal with the flowfiles currently in the queue. Click on the i icon in the first column next to the flowfile to bring up the content viewer window, then click the View button in the bottom right. This shows the contents of the flowfile; look for something that contains your data (most of the http://opcfoundation.org entries are system related). The data is shown in the image below.

This image has a bunch of content above and below the snip, but it shows the root of the data as ns=2;s=SimulatedChannel.SimulatedDevice. Record this as your starting node. In the next step we will use it to reconfigure the processor to return only the tag list and then configure the GetOPCData processor.
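
Node ids like the one above follow the standard OPC UA form of a namespace-index part and an identifier part separated by a semicolon. As a quick illustration (the tag names and file path here are made up), you can pull the two parts apart with awk:

```shell
# A couple of hypothetical node ids, one per line, in the style GetOPCNodeList
# reports them. (Tag names are invented for this example.)
cat <<'EOF' > /tmp/tags.txt
ns=2;s=SimulatedChannel.SimulatedDevice.Tag1
ns=2;s=SimulatedChannel.SimulatedDevice.Tag2
EOF

# Split each id into its namespace-index part (ns=2) and its string-identifier
# part (s=...), which is handy when inspecting an unfamiliar address space.
awk -F';' '{ print $1, "->", $2 }' /tmp/tags.txt
```
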

Getting the Data

Reconfigure the GetOPCNodeList processor

Now that we know what our starting node should be we are ready to reconfigure our processor. Right-click on the GetOPCNodeList processor and select Configure from the context menu. Click on the PROPERTIES tab. Enter the properties as below.

An explanation of these different properties is in the table below:

Click Apply. Now the processor is configured to simply return a tag list as shown below.

You are now ready to configure the GetOPCData processor.

Configuring the GetOPCData processor

Head back to the NiFi canvas now, and right-click on the GetOPCData processor and select Configure from the context menu to configure the processor. Go ahead and auto-terminate the failure relationship, as was done above, by checking the checkbox next to Failure. Click on the PROPERTIES tab and fill out the information as below.

NOTE: You will want to use the same controller service instance as created above for the GetOPCNodeList processor.

The description of the properties is in the table below:

For this example we will just write the data to a file using the PutFile processor. So grab that processor and add it to the canvas as before. Configure it by entering a valid path to a directory on your machine. Create a relationship by dragging your mouse from the GetOPCData processor to the PutFile processor and ticking the Success box.

Now your flow should look like the one below.

Some Helpful Hints

This is fine for a test; however, you will want to modify it for production use. Ideally you would have 2 flows: one that updates the tag list, and one that gets the data for the tags. The tag-list flow would run at a lower frequency. Additionally, depending on the number of tags, the queries should be split up so that they don’t overwhelm the server.
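
One simple way to split up the queries, sketched here with an invented 1,200-tag list, is to break the tag list into fixed-size batches and feed each batch to its own GetOPCData run:

```shell
# Generate a hypothetical tag list of 1200 node ids (names are made up).
seq -f 'ns=2;s=SimulatedChannel.SimulatedDevice.Tag%g' 1200 > /tmp/tags.txt

# Break it into 500-line batches (tag_batch_aa, tag_batch_ab, tag_batch_ac),
# each small enough to query without overwhelming the OPC UA server.
split -l 500 /tmp/tags.txt /tmp/tag_batch_
```

The batch size is a tuning knob: pick it based on how many values your server can comfortably return per request.
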

Should you Need OPC-DA Connectivity…

A quick mention that if your SCADA system or data collection environment uses OPC-DA as a connection protocol, we are fans of Kepware’s KepServerEX for the OPC-DA to OPC-UA wrapper which allows us to then connect the NiFi-OPCUA-Bundle directly to Kepware with minimal development time required.

It makes for a much easier to implement, less costly, and much more supportable/maintainable environment long-term for OPC-DA data sources — and it works great.

Get Started with a Quick 4 Week Accelerator

To make this solution easy to deploy, we have packaged a quick 4 week OPC-UA Analytics Accelerator Foundation consulting engagement that gets the foundation set up and working in your environment.

We will also continue to engineer and develop analytics applications that leverage real-time data collection and transmission for Hashmap’s Tempus Industrial IoT Accelerator.

I’d encourage you to visit GitHub and give us a call to start working with OPC-UA data and addressing specific use cases within your organization.

Feel free to share on other channels and be sure to keep up with all new content from Hashmap at https://medium.com/hashmapinc.

Chris Herrera is a Senior Enterprise Architect at Hashmap working across industries with a group of innovative technologists and domain experts accelerating the value of connected data for the open source community and customers. You can follow Chris on Twitter @cherrera2001 and connect with him on LinkedIn at linkedin.com/in/cherrera2001.
