WZL x GCX x IOTA — Status Report 02

Data(-base) specification and data preparation

--

Background: Feintool machine with coils. Image: © WZL | Christian May & Semjon Becker

Co-Authors: Semjon Becker and Felix Mönckemeyer

Previous Stories

Retrospective

This PoC is about an industrial fineblanking machine of type XFT 2500 speed from Feintool AG. It is used for the mass production of safety-critical components, such as brake carrier plates or belt straps. In the first article we managed to extract selected data from the machine, convert it from analog to digital, store it in a .json file and upload it to the official testnet. However, we did not optimize any of those steps; we just wanted to cover the whole distance once. The following video shows you the machine and the process, just in case you missed it in Status Report 01.

Fineblanking at the WZL of RWTH Aachen University using a Feintool XFT 2500 speed. Video: © WZL | Herman Voigts

Focus of Status Report 02

This article shows how we prepared the machine data, how we decided to attach the data packages to the Tangle, and which technology stack we chose. Check Status Report 01 for the PoC’s architecture.

Data package specification

For this PoC we selected only data that we could measure quickly. This enabled us to work through all task packages relatively efficiently. For a production scenario, it is certainly necessary to question which data is needed and which data should be used. With the completion of this PoC, however, this is merely a development decision and no longer a technological hurdle.

Data set

Nevertheless, we have tried to choose practical data. Our data package thus contains measured process data as well as metadata.

The metadata comprises:

- id (type: integer): a unique ID determined by digital image processing of the component surface and a generated hash (work in progress, not yet stable).
- material (type: string): the name of the material used, according to the international standard. Other metadata, such as the name of the product, the machine operator, the manufacturer or the customer, is conceivable at this point.

In addition, real machine data has been measured:

- punch_force (type: float, unit: kN): the maximum punch force of the ram, i.e. the reaction force resulting from the cutting contour and the material.
- punch_stroke (type: float, unit: mm): since the material properties and sheet thickness are not constant from part to part due to material fluctuations, punch force and punch stroke differ for every workpiece.
- die_roll (type: float, unit: mm): currently estimated on the basis of existing analytical models, as we still have to work on the measurement technology.
- timestamp (type: unsigned integer): a UNIX timestamp describing the time of production, not the time of upload.

Exemplary raw data set file

The final data package looks like the following lines. It is divided into three parts: first, the public key and the signature hash of the data package; second, the MAM channel on the Tangle; third, the data itself. The file format chosen is JSON.

# Data set
[
  {
    "pubKey": "-----PUBLIC KEY-----",
    "sign": "3fe...930"
  },
  {
    "mamAddr": "9QJ...TMX",
    "sidekey": "BTE...FEP",
    "root": "ABC...9QW"
  },
  {
    "id": "1c06b4ab6c7d3cdff34a2960",
    "material": "X210CrW12",
    "punch_force": 2492.5676,
    "punch_stroke": 15.2656,
    "die_roll": 2.5865,
    "timestamp": 1407390451216
  }
]

The machine-to-MAM-channel mapping takes place in our backend. This decouples the machine from IOTA as the carrier medium. The backend therefore contains a mapping from machine identification to the corresponding MAM channel address. For each machine identification, exactly one MAM channel exists.
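For illustration, such a lookup could be sketched as follows in Python, assuming the mapping lives in a DynamoDB table; the table and attribute names ("MachineChannels", "machine_id", "mam_addr") are placeholders, not our production schema.

# Sketch: resolve a machine identification to its MAM channel address.
# Table and attribute names are assumptions for illustration.
import boto3

channels = boto3.resource("dynamodb").Table("MachineChannels")

def mam_channel_for(machine_id: str) -> str:
    # Exactly one MAM channel exists per machine identification
    item = channels.get_item(Key={"machine_id": machine_id}).get("Item")
    if item is None:
        raise KeyError("No MAM channel registered for " + machine_id)
    return item["mam_addr"]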

Data preparation

Data preparation is mainly about data signing. We do not want to store the data itself on the Tangle, nor should it be freely accessible to anyone. Thus, it needs to be signed. As hashing algorithm we chose SHA-256 from the SHA-2 family, because we want to hash huge amounts of data efficiently and SHA-1 has proven weaknesses. Additionally, we decided to use RSA with PKCS#1 v1.5 padding as the signature scheme because of its broad support and the easy-to-use online validators available for it.

# Sign Data
def sign_data(self,
              PrivPublKeys="YourKeys.json",
              DataToBeHashed="YourData.json",
              DataPackageHashed="YourHashedData.json"):
    # Read the key pair from PrivPublKeys, hash DataToBeHashed with
    # SHA-256, sign the digest with RSA (PKCS#1 v1.5) and write the
    # signed package to DataPackageHashed.
    ...
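A minimal sketch of what this step could look like with the Python cryptography package; the PEM key file and the helper name are assumptions for illustration, not our exact implementation.

# Sketch: hash the data package with SHA-256 and sign the digest with
# RSA / PKCS#1 v1.5. File and helper names are assumptions.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def sign_file(key_path="private_key.pem", data_path="YourData.json"):
    # Load the RSA private key from a PEM file
    with open(key_path, "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    # Read the raw data package exactly as it will be verified later
    with open(data_path, "rb") as f:
        data = f.read()
    # Sign the SHA-256 digest of the data with PKCS#1 v1.5 padding
    signature = private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())
    return signature.hex()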

The signing yields a signature string that can later be verified against the real data and the signature stored on the Tangle. Hence we are able to identify corrupted or tampered data.
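For completeness, the verification counterpart could look like this sketch; the helper name is hypothetical.

# Sketch: verify a data package against the signature stored on the
# Tangle. verify() raises InvalidSignature on corrupted or tampered data.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def is_authentic(public_key, data: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False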

How to store our data packages

We decided to use AWS because open-source implementations of PoW for AWS Lambda already exist, which in turn simplifies the realization of this PoC. Our data is stored in DynamoDB so that we can deal with the huge amount of data per workpiece in upcoming modifications of our PoC. In the future, every workpiece should generate only one MAM transaction on-Tangle, while the data itself will be stored off-Tangle.

Importing into DynamoDB. Image: © WZL | Felix Mönckemeyer & Semjon Becker
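As a sketch, persisting one workpiece record with boto3 could look as follows; the table name and key schema are assumptions for illustration, while the values are taken from the exemplary data set above.

# Sketch: store one workpiece data set off-Tangle in DynamoDB.
# The table name "WorkpieceData" and its key schema are assumptions.
from decimal import Decimal
import boto3

table = boto3.resource("dynamodb").Table("WorkpieceData")

table.put_item(Item={
    "id": "1c06b4ab6c7d3cdff34a2960",     # unique workpiece ID (partition key)
    "material": "X210CrW12",
    "punch_force": Decimal("2492.5676"),  # DynamoDB expects Decimal, not float
    "punch_stroke": Decimal("15.2656"),
    "die_roll": Decimal("2.5865"),
    "timestamp": 1407390451216,           # UNIX timestamp of production
})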

Stages of PoW

There are three possible solutions for PoW. We could perform the PoW directly on the machine (with an FPGA from Thomas Pototschnig (Microengineer)), we could use an edge server (on premises) located on the shop floor, or we could use a cloud service. For the PoC we chose the simplest solution and calculated the PoW in AWS. In a future enhancement we will move the calculation towards the machine, as an essential part of the tool. This will help to create a trustless environment without any third-party service that has the possibility to tamper with the data.

Stages of PoW. Image: © WZL | Felix Mönckemeyer & Semjon Becker

Using AWS Lambda to do the PoW

We modified the IOTA API to use curl.lib.js and therefore generate the PoW in a single AWS Lambda function. Currently we are using DynamoDB because it is easy to use and does the job at this point of the PoC. However, in order to parallelize and speed up PoW without losing the manufacturing order/history, an upcoming version will also use Amazon SQS in combination with DynamoDB to handle concurrent computation of PoW. Any number of AWS Lambda functions can read from the SQS queue in parallel and do the heavy workload. This should push our transactions to the desired 2-4 transactions per second (TPS). It should, however, be possible to increase the TPS to around 20, because the main bottleneck is the synchronously running AWS Lambda. This should enable us to handle even faster-running machines and therefore be future-proof.

Concurrent attachment of messages. Image: © WZL | Felix Mönckemeyer & Semjon Becker
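The producer side of this pipeline could look like the following sketch; the queue name is an assumption, and the indx attribute mirrors the consumer snippet in the next section.

# Sketch: enqueue one signed data package per workpiece for PoW.
# The queue name is assumed; "indx" matches the consumer code below.
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="workpiece-pow-queue")["QueueUrl"]

def enqueue_package(package: dict, index: int) -> None:
    package["indx"] = index  # position in the MAM stream, preserves the order
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(package))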

Attaching workpiece data to the Tangle

Attaching the signature to the Tangle is pretty straightforward. Each record is read from the queue and its index is extracted. The index determines the position in the MAM stream at which the data is pushed to the Tangle. Afterwards, the rest is handled by the MAM library.

// Get the data out of the SQS event; each record carries one message body
let data
event.Records.forEach((record) => {
  data = JSON.parse(record.body)
})

// Get the index at which position of the MAM stream it should be
// placed
const index = parseInt(data.indx, 10)

// Send the transaction via the MAM library and with local PoW
const root = await client.send(JSON.stringify(data), index - 1)

Coming up

The next step is to provide a web-based frontend that communicates with the Tangle/cloud and allows our clients to decrypt the data.

Sneak peek of the #WZL x #GCX x #IOTA Web-Frontend.

Acknowledgement

I would like to thank everyone involved for their support. Especially the team from grandcentrix GmbH: Sascha Wolf (Product Owner), Christoph Herbert (Scrum Master), Thomas Furman (Backend Developer), and all gcx reviewers and gcx advisers; the testnet node operators whose nodes we used intensively for the above transactions: iotaledger.net; the team from WZL: Julian Bauer (Service Innovator), Semjon Becker (Design Engineer and Product Developer), Dimitrios Begnis (Frontend Developer), Henric Breuer (Machine Learning Engineer, Full-Stack Developer), Niklas Dahl (Frontend Developer), Björn Fink (Supply Chain Engineer), Muzaffer Hizel (Supply Chain Engineer and Business Model Innovator), Sascha Kamps (Data Engineer, Data Acquisition in Data Engineering and Systems Engineering), Maren Kranenberg (Cognitive Scientist), Felix Mönckemeyer (Backend Developer), Philipp Niemietz (PhD student, Computer Scientist), David Outsandji (Student assistant), Tobias Springer (Frontend Developer), Joachim Stanke (PhD student, Full-Stack Developer), Timo Thun (Backend Developer), Justus Ungerechts (Backend Developer), and Trutz Wrobel (Backend Developer); and WZL’s IT.

Donations to the IILA

The IILA is a non-profit community. We appreciate every help we can get.

YAEIESUZBAQNKACLQOIHWZMWEGOCBGSYXSCMBUXXQOXNZUPU9QGEKZWMCMXKSTATAVS9EFHMLW9IRNYKDBXUAOK9DZ

Check our address on thetangle.org.

Get in contact

Do you have questions or want to join/contribute in any way? Ask for Daniel or write an e-mail | Follow me on Twitter | Or check WZL’s webpage.

Image: © WZL | Peter Winandy

--


Daniel Trauth
Industrial IOTA Lab Aachen @ WZL of RWTH Aachen University

danieltrauth.com works in digital transformation (senseering), tokenization of CO2 emissions (BlackFourier), & stands up for human rights (BraveBrew).