- Part 0: Overview of the Industrial IOTA Lab Aachen
- Part 1: WZL x GCX x IOTA — Status Report 01: About data acquisition and first transactions
- Part 2: WZL x GCX x IOTA — Status Report 02: Data(-base) specification and data preparation
Retrospective 01: Fineblanking
This PoC is about an industrial fineblanking machine, a Feintool AG XFT 2500 speed, used for the mass production of safety-critical components such as brake carrier plates or belt straps. In the first article we managed to extract selected data from the machine, convert it from analog to digital, store it in a .json file and upload it to the official testnet. The following video shows the machine and the process, in case you missed it in Status Report 01.
Retrospective 02: Premiere of the Frontend
Focus of Status Report 03
This article shows the current status of the WZL x GCX x IOTA Web-Frontend. The Frontend ensures access to the tangle and makes it possible to check data integrity for each workpiece data set.
The WZL x GCX x IOTA Web-Frontend
Let’s assume a machine economy in which there are reasons to believe that a machine, a B2B customer or a B2C customer wants access to the production data of one or more safety-critical components. Access to only one data record is conceivable, since IOTA allows feeless micropayments. Let us also assume that the manufacturer operates worldwide, i.e. has several locations and can produce different parts.
The requirements for the WZL x GCX x IOTA Web-Frontend were derived from this scenario.
- A diagram should show the utilization of the production. Even though this was not the original motivation behind IOTA, it is easily possible, provided the number of transactions per second (tps) is at least as high as the number of workpieces per second (wps).
- Another diagram is intended to show how production capacity utilization is distributed across different locations.
- Using a browser, it should be possible to find all components via a combination of the Tangle and AWS DynamoDB. An additional database such as AWS DynamoDB is foreseen because a database entry is expected to exceed 1.5 kB per workpiece in the future.
- For each Workpiece ID, the corresponding data record must be extracted from the tangle and the database. The integrity of the data must be ensured by a signature which has to be stored in the tamper-proof tangle.
- Based on this, it should be possible to view the component information only by making a micropayment (work in progress).
- Based on a Finite Element Analysis, it should also be possible to create a digital twin for each component's information. For this purpose, an additional M2M connection to a finite element (FE) server is established automatically when the user requests and pays for it (work in progress).
Apart from the payment and FEA part, the current status of the Frontend gives a good overview. See for yourself:
The Frontend requirements are discussed in detail below.
The Dashboard Diagram
The dashboard works like a common KPI-based tool. For a selected time frame, the successfully completed transactions can be used to infer the number of parts produced per second, see Fig. 1.
The Feintool XFT 2500 speed fineblanking press can theoretically perform 140 strokes per minute, which corresponds to approx. 2.3 strokes per second. In industrial practice it is common to manufacture several workpieces per stroke, e.g. 2 or 4 per stroke. If we calculated with 4 workpieces per stroke at 140 strokes per minute, we would theoretically have to store 9.3 workpieces per second in the tangle. For production reasons, however, we only run at 60 strokes per minute with 2 workpieces per stroke, since the quality of the workpieces is more important than a high stroke rate and the best workpiece quality is expected at 60 strokes per minute.
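This back-of-the-envelope calculation can be sketched in a few lines (the function name is ours, not part of the production code):

```python
def workpieces_per_second(strokes_per_minute: int, parts_per_stroke: int) -> float:
    """Convert a press stroke rate into workpieces per second."""
    return strokes_per_minute / 60 * parts_per_stroke

# Theoretical maximum of the XFT 2500 speed: 140 strokes/min, 4 parts/stroke
print(round(workpieces_per_second(140, 4), 1))  # 9.3
# Actual production setting: 60 strokes/min, 2 parts/stroke
print(workpieces_per_second(60, 2))             # 2.0
```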
Regardless of this, we have shown that our PoW implementation is already capable of up to 10 tps. Higher tps should be easily possible due to the scaling options of AWS:
The Dashboard World Map
The Dashboard World Map allows one to quickly and easily compare production utilization across national borders. This visualization is already possible, and access can be granted if a customer requests this feature.
With the help of the browser, the AWS DynamoDB can be quickly and easily searched for location, part, time period and free text via filters, see Fig. 3. The browser gets a part list from AWS, but not the corresponding details. How the fineblanking press, AWS and the IOTA Tangle work together can be read in the WZL x GCX x IOTA — Status Report 02, in case you missed it.
Workpiece ID Naming Convention
In order to reduce the effort of scanning the table partitions and shorten the lookup time, the following naming convention is proposed:
- AAA [string(3)] = Factory/machine ID, a string of up to 3 characters.
- BBBB [string(4)] = Product/item ID, a string of up to 4 characters.
- CCC [int(3)] = Product/item version or revision, a 3-digit integer.
- DDDDDD [int(6)] = Production date in day-month-year format (ddmmyy).
- EE [int(2)] = Rollover number, incremented when the daily counter has reached its limit (the FFFFF limit); default = 0.
- FFFFF [int(5)] = Counter that is incremented for each part the machine produces on that day: the first part of the day gets 1, the second gets 2, and so on. When it reaches 100000, EE is incremented by 1; default = 1.
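Assembled as fixed-width fields, the convention might look like the following minimal sketch. The field widths for the rollover number and the daily counter are our reading of the 100000 rollover limit above, and the example values are made up:

```python
from datetime import date

def build_workpiece_id(factory: str, product: str, version: int,
                       day: date, rollover: int, counter: int) -> str:
    """Compose a workpiece ID following the proposed naming convention:
    AAA factory/machine, BBBB product/item, CCC version,
    DDDDDD date as ddmmyy, EE rollover, FFFFF daily counter."""
    return (f"{factory:>3.3}{product:>4.4}{version:03d}"
            f"{day:%d%m%y}{rollover:02d}{counter:05d}")

# Hypothetical factory "AC1", product "BRKE", version 2, first part of the day
wid = build_workpiece_id("AC1", "BRKE", 2, date(2019, 7, 1), 0, 1)
print(wid)  # AC1BRKE0020107190000001
```

Because every field has a fixed width, the ID can also be decomposed again by simple slicing, which keeps DynamoDB lookups by prefix cheap.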
In the Viewer, the data measured at the machine is displayed directly in plain text, and an integrity check is performed automatically. The Viewer computes the signature itself and compares it with the one computed by AWS. If the two match, the Signature matches message is shown. There is no connection to the Tangle on load. The Viewer shows the following data, see Fig. 4:
- Classification Data: The data associated with the workpiece, like the timestamp or the manufacturing settings.
- Channel Reference: The address of the public MAM channel where the data of the workpiece is stored.
- Verify Tangle Entry: This requests a message from the MAM channel address. Currently, it is expected that there is exactly one message stored, containing a JSON object with the ID and the signature. If this is the case and the signature matches, the Verification successful message is shown.
- Serialized Workpiece Information: All relevant data of the workpiece which we would like to hash and whose integrity should be verified. This is serialized as a string so we can sign it.
- Workpiece Information Hash: This is the SHA-256 hash of the serialized workpiece information.
- Public Key: The public key of the key pair which was used to sign the data.
- Signature: The computed signature, based on the data we got from the backend.
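A hash-only version of this integrity check can be sketched as follows. The field names are made up for illustration, and the real Viewer additionally verifies the asymmetric signature against the public key:

```python
import hashlib
import json

def serialize_workpiece(info: dict) -> str:
    """Serialize the workpiece information deterministically, so that
    hashing (and signing) always operates on the same byte sequence."""
    return json.dumps(info, sort_keys=True, separators=(",", ":"))

def workpiece_hash(info: dict) -> str:
    """SHA-256 hash of the serialized workpiece information."""
    return hashlib.sha256(serialize_workpiece(info).encode("utf-8")).hexdigest()

def integrity_ok(info: dict, stored_hash: str) -> bool:
    """Recompute the hash locally and compare it with the stored value."""
    return workpiece_hash(info) == stored_hash

# Hypothetical workpiece record
record = {"workpiece_id": "AC1BRKE0020107190000001", "press_force_kN": 2500}
stored = workpiece_hash(record)
print(integrity_ok(record, stored))                              # True
print(integrity_ok({**record, "press_force_kN": 9999}, stored))  # False
```

Deterministic serialization (sorted keys, fixed separators) matters here: two JSON encodings of the same data would otherwise produce different hashes and break the comparison.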
Verify the Tangle entry
Even if the integrity check (Signature matches) is valid, it is not yet guaranteed that the signature stored in the database for each data record, or made available to the customer, is also the signature that was written into the tangle at the time of production. Therefore a Verificator is required, which checks (here triggered manually) whether the provided signature also exists in the tangle. The Tangle entry is only verified on request because the lookup takes some time. In the Viewer, the workpiece data, both hashed and in plain text, as well as the public key and the signature are specified, see Fig. 5.
Alright, it worked!
Because the signature is persisted in the tangle, it is tamper-proof. It follows that the database entry is also tamper-proof: had it been modified, no match would be possible between the tangle signature, the database signature and the integrity check, see Fig. 6.
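The chain of trust boils down to a three-way comparison, sketched here with a hypothetical helper:

```python
def tangle_verified(tangle_signature: str, database_signature: str,
                    recomputed_signature: str) -> bool:
    """The database entry is only trusted if all three values agree:
    the signature anchored in the tangle at production time, the one
    served from DynamoDB, and the one recomputed from the raw data."""
    return tangle_signature == database_signature == recomputed_signature

print(tangle_verified("sig", "sig", "sig"))     # True
print(tangle_verified("sig", "forged", "sig"))  # False
```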
Someone forged Data or Signature
If someone tries to forge the data or the signature, the verification process outputs an error message in the Frontend, see Fig. 7.
Behind the Frontend
The heart of the current implementation is DynamoDB, which serves as a transition point for the machine, the tangle and the Frontend, see Fig. 8.
Steps 0 to 3, from machine to DynamoDB, and steps 3 to 6 + 10, from DynamoDB to Tangle, have already been explained in the WZL x GCX x IOTA — Status Report 02 about the backend. The frontend steps are basically the same: one function extracts the workpiece IDs for the browser, the other extracts the workpiece details for the viewer. Both are realized with AWS Lambda and the API Gateway.
Two major developments are coming up soon: micro-payments and Finite Element Analysis.
We are planning to enable a micro-payment system utilizing IOTA for each part produced. The micro-payment is not charged for the physical workpiece itself, but for the digital data belonging to this specific workpiece. This enables new scenarios like pay-per-production, machine to machine markets, or machine to third-party services like Finite Element Analysis.
Finite Element Analysis
A Finite Element Analysis uses the finite element method, a numerical approach for solving engineering problems, e.g. calculating temperatures, stresses, and strains during fineblanking that cannot be measured in real life, see Fig. 9.
Now imagine another scenario in which a customer wants to know how much dissipated heat has been generated for each component. Temperature is important because it directly influences the structural properties of the material and can cause metallographic transformations at high temperatures, reducing its strength.
With the help of IOTA it is possible to request and pay for a digital FEA twin for each physical component. Seasonal and daily boundary conditions can be taken into account, such as weather influences in summer or winter, fluctuations in the machine hydraulics, which means that blankholders and counterholders provide less force, etc. The individual setup can be easily created by parameterized discretization.
I would like to thank everyone involved for their support. Especially the team from grandcentrix GmbH: Sascha Wolf (Product Owner), Christoph Herbert (Scrum Master), Thomas Furman (Backend Developer), and all gcx-reviewers and gcx-advisers; some testnet-nodes-operators, who were intensively used for above transactions: iotaledger.net; the team from WZL: Julian Bauer (Service Innovator), Semjon Becker (Design Engineer and Product Developer), Dimitrios Begnis (Frontend Developer), Henric Breuer (Machine Learning Engineer, Full-Stack Developer), Niklas Dahl (Frontend Developer), Björn Fink (Supply Chain Engineer), Muzaffer Hizel (Supply Chain Engineer and Business Model Innovator), Sascha Kamps (Data Engineer, Data Acquisition in Data Engineering and Systems Engineering), Maren Kranenberg (Cognitive Scientist), Felix Mönckemeyer (Backend Developer), Philipp Niemietz (PhD student, Computer Scientist), David Outsandji (Student assistant), Tobias Springer (Frontend Developer), Joachim Stanke (PhD student, Full-Stack Developer), Timo Thun (Backend Developer), Justus Ungerechts (Backend Developer), and Trutz Wrobel (Backend Developer), and WZL’s IT.
Donations to the IILA
The IILA is a non-profit community. We appreciate every help we can get.
Check our address on thetangle.org.