What is dynARTwork?
DynARTwork is an IoT system that can be used in museums and galleries to bring innovation and technology to the artistic sector.
The system collects environmental data from the room and correlates it with the work of art.
Thanks to this mechanism, the work of art will continuously change according to the environmental conditions.
In this way, visitors proactively participate in the construction of the work and, because the work is dynamic, they can come back day after day and still find something new.
The idea is to give dynamicity to the artworks through the IoT: users can upload an image of their choice, which will be processed in the cloud. A Grid-EYE thermal camera collects data (in the form of an 8x8 matrix) and sends it to the cloud; there the raw data are processed and transformed into a heatmap, which is subsequently stored in a Google Cloud Storage bucket.
The image uploaded by the user will be merged with this heatmap, producing a final artwork.
The dynamicity relies on the fact that the Grid-EYE collects data continuously, changing the heatmap according to the number of people in front of the artwork or in the room. Every X seconds it sends new telemetry with raw data and the flow begins again, making the artwork change in real time.
Examples of dynARTwork
Our architecture is composed of 4 main parts:
- Sensors (IoT element): the input of our dynARTwork algorithm. We use an IoT device running RIOT-OS to collect information in the museum through a Panasonic Grid-EYE sensor.
- Cloud components: We will use the Google Cloud Platform (IoT Core, Firestore, Hosting, Pub/Sub, Cloud Vision API and Storage) to collect and manipulate data.
- Artists’ WebApp (end-user component): this part will be used by the artists. It has a simple UI/UX that hides all technical details, so artists can build their dynARTwork without worrying about them.
We use Angular + Material + Firebase to easily create a PWA. It is responsive and immediately ready for Android, iOS and the web.
- Actuators (IoT elements): an IoT device (Raspberry Pi Zero) that pulls the dynARTwork from the cloud and redirects this stream to the HDMI output, to which the museum projector is connected.
Why a cloud-based architecture?
As you can see from the figure, our architecture is strongly based on the Cloud.
The cloud has guaranteed us:
- Strong modularity.
- Loosely coupled architecture.
- Parallel development, thanks to component mocking.
In the first part, the IoT device installed in the museum runs RIOT-OS and sends messages over the MQTT-SN protocol; a Mosquitto RSMB broker converts the MQTT-SN packets into MQTT; finally, a custom transparent gateway based on the Paho MQTT library connects the Mosquitto RSMB broker directly to Google IoT Core and forwards the messages to GCP.
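A minimal sketch of such a transparent gateway is shown below. The project, region, registry, device and local topic names are hypothetical placeholders, and JWT creation is omitted; what is fixed by Google IoT Core is the client-id format, the `/devices/<id>/events` telemetry topic, and the use of a JWT as MQTT password over TLS:

```python
"""Sketch of a transparent gateway bridging a local Mosquitto RSMB broker
to Google IoT Core (requires the paho-mqtt package on the gateway host)."""
import ssl

# Hypothetical identifiers for illustration only.
PROJECT, REGION = "dynartwork", "europe-west1"
REGISTRY, DEVICE = "sensors", "esp32-grideye"

def iot_core_client_id(project, region, registry, device):
    # IoT Core requires this exact client-id format on the MQTT bridge.
    return (f"projects/{project}/locations/{region}"
            f"/registries/{registry}/devices/{device}")

def telemetry_topic(device):
    # Telemetry must be published on /devices/<device-id>/events.
    return f"/devices/{device}/events"

def make_jwt():
    # Not shown here: sign a short-lived JWT with the device's private key;
    # IoT Core expects the JWT as the MQTT password (the username is ignored).
    raise NotImplementedError

def relay():
    # Call this on the gateway host: it subscribes to the local RSMB broker
    # and republishes every payload to Google IoT Core.
    import paho.mqtt.client as mqtt
    cloud = mqtt.Client(
        client_id=iot_core_client_id(PROJECT, REGION, REGISTRY, DEVICE))
    cloud.username_pw_set("unused", password=make_jwt())
    cloud.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)
    cloud.connect("mqtt.googleapis.com", 8883)

    local = mqtt.Client()
    local.on_message = lambda c, u, msg: cloud.publish(
        telemetry_topic(DEVICE), msg.payload, qos=1)
    local.connect("localhost", 1883)
    local.subscribe("grideye/telemetry")  # hypothetical local topic
    local.loop_forever()
```

Keeping the gateway "transparent" means it never inspects or rewrites the payload: the raw 8x8 telemetry produced on the device arrives unchanged in the cloud.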
Google Cloud Platform
Once a message reaches the Google Cloud Platform, it is managed by Google IoT Core, configured with a single registry and a Pub/Sub topic for telemetry data.
This Pub/Sub topic has a single subscription of type Push, which calls the Cloud Run instance.
The Cloud Run instance, called “sensors-service”, is a Docker container that runs a Flask application with Gunicorn, a Python WSGI HTTP server for UNIX.
We expose a single API: “POST” on “/”.
It receives a Pub/Sub message in which the IoT telemetry is wrapped, and manipulates the raw telemetry data using Matplotlib and Wand.
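Unwrapping the push request is a small, self-contained step worth showing. Pub/Sub push delivery wraps the original message in a JSON envelope whose payload is base64-encoded under `message.data`; the telemetry structure below (a `temps` field holding the 8x8 frame) is our assumption for illustration:

```python
"""Sketch of how sensors-service can unwrap a Pub/Sub push request."""
import base64
import json

def decode_push(envelope: dict) -> dict:
    # Pub/Sub push requests wrap the original message under "message",
    # with the payload base64-encoded in its "data" field.
    payload = base64.b64decode(envelope["message"]["data"])
    return json.loads(payload.decode("utf-8"))

# Build a fake push envelope carrying a hypothetical 8x8 telemetry frame.
telemetry = {"device": "esp32-grideye", "temps": [[22.5] * 8 for _ in range(8)]}
envelope = {
    "message": {
        "data": base64.b64encode(json.dumps(telemetry).encode()).decode(),
        "attributes": {"deviceId": "esp32-grideye"},
    },
    "subscription": "projects/dynartwork/subscriptions/sensors-sub",
}
decoded = decode_push(envelope)  # yields the original telemetry dict again
```

In the real service this decoding happens inside the Flask handler for `POST /`, which then hands the 8x8 matrix to the image-processing steps described below.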
Wand is a ctypes-based ImageMagick binding for Python implementing all the functionality of the MagickWand API; we use it to merge the image uploaded by the artist with the heatmap produced from the sensor data.
It performs these two steps sequentially:
- Create a heatmap from the raw telemetry data using Matplotlib.
- Fetch the original image uploaded by the artist (correlated to the telemetry) and merge it with the heatmap.
This operation creates a new image and uploads it to a second bucket, “processed_artworks”, which has a notification event associated with the Pub/Sub topic “processed-topic”.
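The heatmap step itself is conceptually simple. The service uses Matplotlib for rendering, but the underlying idea can be sketched without any dependency: normalize the 8x8 Grid-EYE temperatures into [0, 1] and map each value onto a cold-to-hot color gradient (the gradient and threshold choices here are illustrative, not the project's exact colormap):

```python
"""Dependency-free sketch of turning an 8x8 Grid-EYE frame into heatmap pixels."""

def normalize(frame):
    # Scale all readings into [0, 1] relative to the frame's own range.
    flat = [t for row in frame for t in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on a uniform frame
    return [[(t - lo) / span for t in row] for row in frame]

def to_rgb(value):
    # Simple blue (cold) -> red (hot) gradient, like a thermal colormap.
    return (int(255 * value), 0, int(255 * (1 - value)))

frame = [[20.0] * 8 for _ in range(8)]
frame[3][4] = 30.0  # a warm spot, e.g. a visitor in front of the artwork
pixels = [[to_rgb(v) for v in row] for row in normalize(frame)]
```

In the real pipeline, Matplotlib additionally interpolates the 8x8 grid up to the artwork's resolution, and the merge with the artist's image is then done with Wand's `composite` operation.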
An artist who uses our service interfaces with our WebApp to insert their artwork, which is then managed by our algorithm.
The WebApp allows uploading an image of the work of art, which is then stored in Google Cloud Storage through Firebase.
Finally, a Python script on the Raspberry Pi (pull type) is subscribed to “processed-topic”: when the “processed_artworks” bucket triggers the notification event, the script downloads the new image and shows it on the HDMI monitor.
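The Raspberry Pi side can be sketched as a Pub/Sub callback. The actual script uses the google-cloud-pubsub and google-cloud-storage clients; the pure logic below assumes the standard Cloud Storage notification attributes (`eventType`, `bucketId`, `objectId`) and keeps the download and display calls as comments:

```python
"""Sketch of the Raspberry Pi pull subscriber for "processed-topic"."""

def object_to_download(attributes: dict):
    # Cloud Storage notifications carry the bucket and object name in the
    # message attributes; only newly finalized objects are of interest.
    if attributes.get("eventType") != "OBJECT_FINALIZE":
        return None
    return attributes["bucketId"], attributes["objectId"]

def handle(message):
    # Intended as the callback passed to the pull subscriber, e.g.
    # subscriber.subscribe(subscription_path, callback=handle).
    target = object_to_download(dict(message.attributes))
    if target is not None:
        bucket, name = target
        # storage_client.bucket(bucket).blob(name).download_to_filename(...)
        # ...then hand the file to the process driving the HDMI output.
        pass
    message.ack()  # acknowledge so Pub/Sub does not redeliver
```

Filtering on `OBJECT_FINALIZE` ensures the Pi reacts only when a new dynARTwork finishes uploading, not on deletions or metadata changes.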
Technical details on the IoT elements
ESP32: a low-cost, low-power system-on-a-chip (SoC) series with Wi-Fi and dual-mode Bluetooth capabilities.
Grid-EYE is able to provide thermal images by measuring actual temperature and temperature gradients. Grid-EYE enables the detection of multiple persons, identification of positions, and direction of movement, almost independent of ambient light conditions without disturbing privacy as with conventional cameras.
The Grid-EYE sensor is connected directly to the ESP32 through the I2C interface. We bought a Grid-EYE with a Maxim package that adds a 1-Wire interface, so the sensor can be connected even over long distances using a 6-wire RJ11 cable.
RIOT-OS: RIOT is a real-time multi-threading operating system that supports a range of devices that are typically found on the Internet of Things (IoT): 8-bit, 16-bit, and 32-bit microcontrollers.
In our architecture, we use RIOT-OS to allow the IoT devices to communicate and generate packets containing sensor data.
RIOT-OS has a simple interface to read/write data on I2C registers. This is a stub of code that handles the Grid-EYE telemetry:
I2C stub of code
Google IoT Core: Cloud IoT Core is a fully managed service that allows you to easily and securely connect, manage, and ingest data from millions of globally dispersed devices. Cloud IoT Core, in combination with other services on Cloud IoT platform, provides a complete solution for collecting, processing, analyzing, and visualizing IoT data in real-time to support improved operational efficiency.
Google Pub/Sub: Pub/Sub is the Google service that allows you to interconnect various services such as IoT Core, Storage and Cloud Run.
This pattern provides greater network scalability and a more dynamic network topology, with a resulting decreased flexibility to modify the publisher and the structure of the published data.
A subscription can use either the pull or push mechanism for message delivery. You can change or configure the mechanism at any time.
Pull subscription: in pull delivery, the subscriber application sends requests to the Pub/Sub server to retrieve messages; the server responds with the message and an ack ID, which the subscriber uses to acknowledge receipt.
Push subscription: in push delivery, Pub/Sub initiates requests to your subscriber application to deliver messages at a pre-configured endpoint which acknowledges the message by returning an HTTP success status code.
For more details check the Architecture document in our repository.
In our architecture we use different topics:
- telemetry topic: type of delivery: PUSH.
- “processed-topic”: type of delivery: PULL.
Google Cloud Vision: allows developers to easily integrate vision detection features within applications, including image labeling, face, and landmark detection, optical character recognition (OCR), and tagging of explicit content. In our architecture, we use SafeSearch Detection that detects explicit content such as adult content or violent content within an image. This feature uses five categories (adult, spoof, medical, violence, and racy) and returns the likelihood that each is present in a given image. See the SafeSearchAnnotation page for details on these fields.
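A SafeSearch check boils down to comparing likelihood levels. The likelihood names below are the ones returned by the Vision API; the rejection threshold (reject at LIKELY or above) is our own assumption, not a value prescribed by Google:

```python
"""Sketch of the SafeSearch gate applied to uploaded artworks."""

# Likelihood levels as defined by the Vision API, in increasing order.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
               "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def is_acceptable(annotation: dict, threshold: str = "LIKELY") -> bool:
    # `annotation` maps each SafeSearch category (adult, spoof, medical,
    # violence, racy) to its likelihood name, mirroring SafeSearchAnnotation.
    limit = LIKELIHOODS.index(threshold)
    return all(LIKELIHOODS.index(v) < limit for v in annotation.values())

# Example verdict on a hypothetical annotation returned for an upload.
verdict = is_acceptable({"adult": "VERY_UNLIKELY", "spoof": "UNLIKELY",
                         "medical": "VERY_UNLIKELY", "violence": "POSSIBLE",
                         "racy": "UNLIKELY"})
```

Running this gate at upload time means an image is never merged with the heatmap, or projected in the museum, unless every category stays below the chosen threshold.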
Google Storage: The Buckets resource represents a bucket in Google Cloud Storage. There is a single global namespace shared by all buckets.
Buckets contain objects which can be accessed by their own methods. In addition to the acl property, buckets contain bucketAccessControls, for use in fine-grained manipulation of an existing bucket’s access controls.
A bucket is always owned by the project team owners group.
Google Cloud Run: Cloud Run is a managed computing platform that enables you to run stateless containers that are invocable via web requests or Pub/Sub events. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.
RASPBERRY-PI: The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer monitor or TV, and uses a standard keyboard and mouse. It is a capable little device that enables people of all ages to explore computing and to learn how to program in languages like Scratch and Python.
At first, our idea was to use an STM32MP157C-DK2 which, having an HDMI interface, would allow projecting the processed images. However, although Python 3 was available on its OS, the Python packages for Google Pub/Sub and Google Storage were not available precompiled for that board, so we decided to migrate from the STM32 to the Raspberry Pi thanks to Piwheels, a repository that provides pip packages precompiled for the Raspberry Pi architecture.
For more technical description visit our Github Repository.
Video demonstration of dynARTwork
Progressive web apps are web applications built with technologies that make them behave like native apps. A benefit of progressive web apps is the ability to work smoothly when network coverage is unreliable. Also, unlike native apps, no installation is required, and they are faster than typical web apps.
The web app will be used by artists to upload images of their artwork and set the values of the control parameters. It will also be used as a showcase and tool to determine the type of average user interested in our service.
To use our service, the artist needs to register using the form or the Google sign-in button.
Once logged in, the artist can access the page to create a new dynARTwork and upload the image of their work of art.
How our solution was evaluated
Once the project was finished, we performed some tests to evaluate both the technical quality of the work and the user experience.
We initially decided to use the STM32MP1 but then, for several reasons, we replaced it with two different boards: the ESP32 to take the data from the Grid-EYE sensor and the Raspberry Pi to send the works of art to the projector through the HDMI port.
We made this choice for several reasons:
- The physical distance between the Grid-EYE and the projector, which would not have allowed optimal coverage of the room.
- Reuse of the RIOT-OS code from the previous tutorials.
- Ease of development on the Raspberry Pi compared to STM's proprietary OS.
- Reduction of costs: at full scale, the combined cost of these two boards is significantly lower than that of the STM32MP1.
At the start of the implementation phase, we focused on an edge deployment. We tried to do the image processing on our Raspberry Pi, but the results were awful: because of its low computational power, an entire flow of data, from the Grid-EYE to the displayed image, took about 10 minutes, an unacceptable time. We tried different images, hoping the time depended on their size, but the results were bad again. So we moved to a cloud approach; here we give a sketch of our tests. We tested different images with both approaches, and the table shows the execution times.
We first ran private tests (among the members of the group) in two ways: initially “black box” tests on small parts of the architecture, to monitor individual tasks of the project (in some cases simulating the parts not under test), and then tests with the entire architecture running.
The results are good: despite its complexity, the architecture performs well in all the tasks we designed, and the flow of data is linear. An entire run, from data collection to the final result, takes just a few seconds, which is a good result considering how many different nodes the data passes through.
Moreover, with the current configuration, we are not incurring any expenses for traffic consumption; these would increase if the image processing were modified.
Regarding the user experience, we gave the web application to about 20 people, who loaded an image of their choice, and we collected opinions on the difficulty of the actions and on the satisfaction with the final result produced by our architecture. All of them were happy with how easy it is to approach the application, and most were also satisfied with the final artwork, saying it was exactly the kind of result they expected.