My idea is to integrate SAP products with this powerful machine learning tool.
The first thing to do is deploy a model on it. AWS provides several pre-trained models; the one I deployed was the face detection model. It detects human faces and reports a confidence score on a 0.0 to 1.0 scale.
The architecture is pretty simple: after DeepLens detects an object, it sends JSON-formatted messages with the detected object types and their confidence scores to an AWS IoT MQTT topic.
By default, each message contains only the object name and confidence, and each face is sent individually. What I want to store in the database is a single record containing the DateTime, face name, and confidence for all detected faces. (For example, I want one JSON file including the data for three faces rather than three separate JSON files.)
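As a sketch, the combined message I'm aiming for could look like this (the field names are my own choice, not the DeepLens defaults):

```json
{
  "datetime": "2019-01-01 12:00:00",
  "faces": [
    {"face": "face", "confidence": 0.98},
    {"face": "face", "confidence": 0.91},
    {"face": "face", "confidence": 0.87}
  ]
}
```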
To get that, I needed to edit some code in this pre-trained model. You can edit the code in its AWS Lambda function. (Each time you edit it, you need to publish it as a new version and deploy it to DeepLens again.)
After editing the Lambda function code, the output JSON message now contains the data for several faces at once. I have exactly the data I want :)
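The aggregation step inside the Lambda can be sketched roughly as below. This is a minimal illustration, not the actual deployed code: the function name, the shape of `detections`, and the confidence threshold are all my own assumptions.

```python
import json
import time

def build_face_message(detections, threshold=0.5):
    """Combine all face detections from one frame into a single JSON message.

    `detections` is assumed to be a list of (label, confidence) pairs,
    roughly what the DeepLens inference loop produces per frame.
    """
    faces = [
        {"face": label, "confidence": round(conf, 3)}
        for label, conf in detections
        if conf >= threshold
    ]
    return json.dumps({
        "datetime": time.strftime("%Y-%m-%d %H:%M:%S"),
        "faces": faces,
    })

# In the real DeepLens Lambda this string would be published to the IoT
# topic with greengrasssdk, e.g.:
#   client.publish(topic=iot_topic, payload=build_face_message(detections))
```

The point of the change is simply to buffer all faces from one frame and publish one message, instead of publishing each face as it is detected.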
You can route messages from the same MQTT topic to another AWS service; here, I point them to an AWS S3 bucket.
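For reference, the rule query for an AWS IoT topic rule that forwards everything published on the DeepLens topic could look like the following (the topic name here is a placeholder; your device's inference topic will differ):

```sql
SELECT * FROM '$aws/things/deeplens_my_device/infer'
```

The rule's S3 action then specifies the target bucket and an object key pattern for each stored message.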
How to leverage these data with SAP Products?
So far, I have streaming data in AWS S3. How can I use it with SAP products? The straightforward idea is to store it in SAP HANA, so I crafted a Python script that reads the data from AWS S3 and inserts it into SAP HANA.
A Python script to download data from AWS S3 and upload it into SAP HANA: JeffChern/AWS-S3-SAP-HANA-tool
You also need to keep open-db-tunnel running while you insert data into HANA.
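The core of such a script can be sketched as follows, assuming the combined message format described above. The table name, bucket, key, and connection details are placeholders, not the actual values from my setup.

```python
import json

def rows_from_payload(payload):
    """Flatten one combined DeepLens JSON message into
    (datetime, face, confidence) rows ready for insertion."""
    doc = json.loads(payload)
    return [(doc["datetime"], f["face"], f["confidence"]) for f in doc["faces"]]

if __name__ == "__main__":
    import boto3                # AWS SDK for Python
    from hdbcli import dbapi    # SAP HANA Python client

    # Read one object from the S3 bucket the IoT rule writes to.
    body = boto3.client("s3").get_object(
        Bucket="my-deeplens-bucket", Key="faces/sample.json"
    )["Body"].read()

    # The tunnel opened by `cf open-db-tunnel` typically exposes HANA
    # on localhost; host, port, and credentials are placeholders.
    conn = dbapi.connect(address="localhost", port=30015,
                         user="MY_USER", password="MY_PASSWORD")
    cursor = conn.cursor()
    cursor.executemany(
        'INSERT INTO "DEEPLENS_FACES" VALUES (?, ?, ?)',  # hypothetical table
        rows_from_payload(body),
    )
    conn.commit()
    conn.close()
```

Keeping the parsing in a small pure function makes it easy to test without touching AWS or HANA; the network and database calls only run when the script is executed directly.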
Now I have the DeepLens data in SAP HANA. The next step is to analyze it in SAP Analytics Cloud.
SAP Analytics Cloud
I use a live connection to SAP HANA, which updates my data visualizations and stories with new data in real time. Now I can filter the data by time, confidence, or face range, however I want.