Code the City 11

This weekend at Robert Gordon University, Aberdeen, Code the City 12 will take place. The civic hacking initiative started back in 2014 and has tackled topics such as health and the environment over the years. This time the two-day event is focusing on tourism. Coders, designers and anyone else with an interest in improving a tourist’s experience in the Granite City will brainstorm ideas, form teams, build a prototype and present it to the other teams.

A couple of months ago at Aberdeen University, the theme for Code the City 11 was more open-ended: fun, just have fun. From a plant monitoring system to an electronically controlled theatre for a screenplay about a robot, there was a diverse set of projects. Some of us from Codify — Mark G, Mikko Vuorinen, Sam and I — were there taking part too.

A brief brainstorming session on the Friday before the event generated some ideas, mainly centred around Mikko’s new Raspberry Pi and camera module. We had also been paying close attention to Microsoft’s incredible Cognitive Services and decided this was the perfect opportunity to try some of them out.

Santa Surveillance System

On the Saturday morning, we arrived at the university with our laptops, Raspberry Pis and snacks to keep us going. We batted around some more ideas and decided, with Christmas just around the corner, we would build a surveillance system which would alert people when Santa Claus was spotted. This prototype would demonstrate the capabilities of the Raspberry Pi cameras and some of Microsoft Azure’s services.

Capturing the Images

Sam took on the task of writing a program for the Raspberry Pis that captured images from their attached cameras. This was achieved using a great Python library called python-picamera. With just a few lines of code, we were capturing an image every couple of seconds. These images were then posted to an endpoint hosted in Azure (more on that in a minute).
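
For a rough idea of what that looks like, here’s a minimal sketch of the capture loop. The endpoint URL is a placeholder and the resolution and timing are assumptions; the script we actually ran differed in the details.

    import io
    import time

    import picamera  # the python-picamera library (Raspberry Pi only)
    import requests

    # Placeholder for the endpoint hosted in Azure
    ENDPOINT_URL = "https://example.azurewebsites.net/api/capture"

    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        while True:
            stream = io.BytesIO()
            camera.capture(stream, format="jpeg")  # grab a frame into memory
            stream.seek(0)
            # Post the JPEG bytes to the endpoint hosted in Azure
            requests.post(ENDPOINT_URL, data=stream.read(),
                          headers={"Content-Type": "image/jpeg"})
            time.sleep(2)  # roughly one image every couple of seconds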

Between us we had three Raspberry Pis and four cameras. A few modifications to the script were made to handle dual cameras before it was copied onto all the Raspberry Pis. Our four cameras were now in place, ready to catch Santa.

Detecting Santa

Computer vision has been made remarkably accessible by Microsoft’s Cognitive Services. There’s a Face API for identifying faces in images, an Emotion API for recognising people’s emotions, a Computer Vision API for analysing the contents of an image and a Custom Vision service for creating your own specialised computer vision model.

Codify’s Azure expert, Mikko, set about using the latter service. He uploaded some training images to customvision.ai and tagged the ones containing our Santa hat with a ‘santa’ tag. That’s all it takes to create your own model. It may not be very accurate to begin with, but with more training it will improve. Then you can use the Custom Vision API to send your model a new image and find out which tags it thinks apply.

Training our custom computer vision model.
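
Querying the model is then a single REST call against the prediction URL shown in the customvision.ai portal. The sketch below is purely illustrative: the URL, key and the 0.5 threshold are placeholders, and the exact path and response field names depend on the API version you’re given.

    import requests

    # Placeholders: copy the real values from the Prediction URL shown in the
    # customvision.ai portal for your project.
    PREDICTION_URL = ("https://<region>.api.cognitive.microsoft.com/"
                      "customvision/<version>/Prediction/<project-id>/url")
    PREDICTION_KEY = "<your-prediction-key>"

    def tag_scores(image_url):
        """Ask the model how likely each tag is for the image at image_url."""
        response = requests.post(
            PREDICTION_URL,
            headers={"Prediction-Key": PREDICTION_KEY,
                     "Content-Type": "application/json"},
            json={"Url": image_url},
        )
        response.raise_for_status()
        # Each prediction pairs a tag name with a probability between 0 and 1
        # (field names vary slightly between API versions)
        return {p["tagName"]: p["probability"]
                for p in response.json()["predictions"]}

    scores = tag_scores("https://<storage-account>.blob.core.windows.net/images/camera1.jpg")
    print("Santa spotted!" if scores.get("santa", 0) > 0.5 else "No Santa yet.")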

Next we had to link the Raspberry Pis to our model. To do this, we used another Azure service: Logic Apps. The Logic Apps designer lets you hook up Azure services to create a workflow.

Our logic app.

Here’s a brief summary of what our logic app is doing:

  • The first step specifies that the app is triggered by an HTTP request.
  • Then we’re storing the image from the request in blob storage.
  • At this point we return the blob’s metadata to the caller, while also continuing down a second branch.
  • The URL for the saved image is then passed to our custom vision model.
  • We check the tags returned by the model to see whether it thinks this is an image of the Santa hat, setting a flag if it does.
  • Lastly we create and save a JSON object with the image URL and the flag (sketched below).
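
That final JSON object is tiny; something along these lines, although the field names here are illustrative rather than the ones we actually used.

    import json

    # Illustrative shape of the status object the logic app saves per camera;
    # the real field names may have differed.
    status = {
        "imageUrl": "https://<storage-account>.blob.core.windows.net/images/camera1.jpg",
        "santaDetected": True,  # set when the model returns the 'santa' tag
    }
    print(json.dumps(status, indent=2))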

Monitoring the Feeds

Now we just had to do something with the output of our logic app. Mark and I worked on two other components: an Azure function to fetch that output and a WPF desktop application to present it.

Azure Functions is a serverless computing service. It lets you host some code in Azure and trigger its execution in response to HTTP requests, timers, logic apps and many more events. There’s no need to set up your own server — Azure takes care of that and you only pay for the time spent executing your code.

Our function is triggered by an HTTP request. Using the API for the Blob Storage service, it fetches the JSON file (the logic app’s output) for a specified camera. The image the file points to is downloaded, Base64 encoded and sent back to the caller along with the flag indicating whether Santa was detected.
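
We wrote the function itself in the Azure portal (more on that below), so the snippet here is only a sketch of the equivalent steps in Python, using placeholder URLs and the illustrative field names from the logic app section.

    import base64
    import json

    import requests

    # Placeholder: the container where the logic app writes its status blobs
    STATUS_BASE_URL = "https://<storage-account>.blob.core.windows.net/status"

    def get_camera_status(camera_id):
        # Fetch the JSON file the logic app wrote for this camera
        status = requests.get(f"{STATUS_BASE_URL}/{camera_id}.json").json()

        # Download the image it points to and Base64 encode it for the caller
        image_bytes = requests.get(status["imageUrl"]).content
        return json.dumps({
            "image": base64.b64encode(image_bytes).decode("ascii"),
            "santaDetected": status["santaDetected"],
        })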

Azure Functions allows you to develop your function directly in the browser, which we did. However, not having IntelliSense was a frustrating experience. Fortunately, functions can be developed and tested in Visual Studio before being published to Azure. We will definitely work this way the next time we use Azure Functions.

Back on the desktop, we created a simple WPF application that polls the function’s endpoint and presents the images from each of the cameras. If Santa is in any of the images, we play a festive jingle.
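
The real client is WPF, but stripped of the UI its polling loop boils down to something like the Python sketch below, again using the placeholder endpoint and field names from earlier.

    import base64
    import time

    import requests

    # Placeholder URL for the Azure function's HTTP endpoint
    FUNCTION_URL = "https://<function-app>.azurewebsites.net/api/camera-status"
    CAMERAS = ["camera1", "camera2", "camera3", "camera4"]

    while True:
        for camera_id in CAMERAS:
            status = requests.get(FUNCTION_URL, params={"camera": camera_id}).json()
            frame = base64.b64decode(status["image"])  # the latest frame to display
            if status["santaDetected"]:
                print(f"{camera_id}: Santa spotted! Cue the festive jingle.")
        time.sleep(5)  # poll every few seconds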

Presenting to Santa.

Presentations

It was great to see what everyone had achieved in such a short space of time. Much to our surprise, the judge turned out to be Santa. The pressure was on! Despite very little training, our model didn’t let us down. Santa was impressed (perhaps a bit concerned too) and we won first place.

Thank you

Thanks to everyone at Code the City for organising a great event. We had a lot of fun and will be back another time. We’re looking forward to seeing what the teams come up with at Code the City 12 this weekend.