Published in Reese Innovate

Health Greeter Kiosk Q&A with Daniel Sanchez and Max Hudnell

Pictured: Fans interact with one of Reese Innovation Lab’s Health Greeter Kiosks, which were deployed at entrances to Kenan Stadium for UNC’s football game against Virginia Tech on October 10th.

To provide a deeper look into our work on the Health Greeter Kiosk project, I sat down with project leads Daniel Sanchez and Max Hudnell. The two were instrumental in rolling out this novel application of AI technology, a mobile solution that encourages proper mask wearing and social distancing and helps keep our community safe! Read on to learn about their involvement with the project, their most important takeaways, and the feedback they received from our partners at Lenovo.

Daniel Sanchez, Reese Innovation Lab VR Developer

Q: What was your role with the Health Greeter Kiosk project? What were you responsible for?

DS: I helped manage the deployment and monitoring of the kiosks. Specifically, I designed and implemented a system that allowed us to efficiently make new kiosks, push updates to deployed kiosks, and monitor the status of the deployed kiosks.

Q: What did you learn from working on this project? Any takeaways that you’ll apply to your work moving forward?

DS: I learned how to look at the whole objective of a project and use that to inform technical decisions and implementations. It is important to consider every aspect of a project when making technical choices. For example, the need to quickly deploy the kiosks around campus led to decisions that prioritized ease of setup and of transporting units between locations. This project gave me a concrete example of how business goals shape technical decisions.

Q: What kind of feedback did you receive from the Lenovo team?

DS: The Lenovo team felt that the ease of set up and monitoring made the kiosks a device that could be sold on a large scale to consumers. They were pleased overall with the designs and implementations.

Max Hudnell, Reese Innovation Lab Developer

Q: What was your role with the Health Greeter Kiosk project? What were you responsible for?

MH: I was the lead researcher & developer for the computer vision application that runs on the kiosks! We wanted to perform mask detection and social-distance detection, but we didn’t know at the outset how we were going to do that. So my role included exploring & testing different networks for face-mask detection, along with researching potential techniques for distance estimation and trying them out to see what worked best for us. Once we settled on an approach, my job was to build a system around those techniques capable of collecting various statistics.

Q: What did you learn from working on this project? Any takeaways that you’ll apply to your work moving forward?

MH: I learned a lot! On the computer vision side of things, I got to learn a good deal about how modern researchers approach human pose estimation, which is the task of estimating the positions of key points on the human body (eyes, shoulders, hip joints, etc.) from a 2D image. After reading up on this technique, I applied it to compute our social-distance metric.
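To give a rough sense of how pose keypoints can yield a distance metric, here is a minimal sketch of one simple approach. This is illustrative only, not the lab’s actual implementation: it assumes keypoints have already been detected by a pose-estimation model, and the torso length constant, function names, and averaging scheme are all assumptions made for this example.

```python
import math

# Assumed average shoulder-to-hip (torso) length in meters; a rough
# anthropometric constant chosen only for this illustration.
TORSO_LENGTH_M = 0.5
DISTANCE_THRESHOLD_M = 1.83  # roughly 6 feet

def position_and_scale(shoulder, hip):
    """Given one person's shoulder and hip keypoints as (x, y) pixel
    coordinates, return their hip position and an estimated
    pixels-per-meter scale derived from their torso length on screen."""
    torso_px = math.dist(shoulder, hip)
    return hip, torso_px / TORSO_LENGTH_M

def estimate_distance_m(person_a, person_b):
    """Each person is a (shoulder_xy, hip_xy) pair of keypoints.
    Returns the estimated real-world distance between them in meters."""
    pos_a, scale_a = position_and_scale(*person_a)
    pos_b, scale_b = position_and_scale(*person_b)
    # Average the two per-person scales to convert pixels to meters;
    # people at different depths appear at different scales.
    avg_scale = (scale_a + scale_b) / 2
    return math.dist(pos_a, pos_b) / avg_scale

def too_close(person_a, person_b):
    """Flag a pair standing closer than the distancing threshold."""
    return estimate_distance_m(person_a, person_b) < DISTANCE_THRESHOLD_M
```

For example, two people whose torsos each span 50 pixels (so about 100 pixels per meter) with hips 300 pixels apart would be estimated at 3 meters, comfortably outside the threshold. Real systems would need to handle camera perspective and missing keypoints far more carefully than this sketch does.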

I also learned a good bit about general application development. Apart from the computer vision components, we needed a whole application capable of streaming video to a web browser while also collecting statistics and sending them to our backend server. We originally wrote everything in Python, and I later rewrote the application in C++. Translating code from one language to another was a good learning experience. I even started keeping a C++ note sheet where I’d write down what I was learning about C++ language concepts and coding techniques. I hope to keep using it, and adding to it, as I continue programming in C++ throughout my career.
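The statistics-collection side of such an application can be sketched in a few lines. The following is a hypothetical Python illustration, not the kiosk’s actual code: the endpoint URL, field names, and `StatsCollector` class are all invented for this example, and only the general pattern (accumulate per-frame counts, then flush them to a backend in batches) reflects what the interview describes.

```python
import json
import time
import urllib.request

# Hypothetical endpoint; the real kiosk backend URL is not public.
BACKEND_URL = "http://example.com/api/kiosk-stats"

class StatsCollector:
    """Accumulates per-frame detection counts and flushes them in batches."""

    def __init__(self, kiosk_id):
        self.kiosk_id = kiosk_id
        self.reset()

    def reset(self):
        self.faces = 0
        self.masked = 0
        self.violations = 0

    def record_frame(self, faces, masked, distance_violations):
        """Add one video frame's detection counts to the running totals."""
        self.faces += faces
        self.masked += masked
        self.violations += distance_violations

    def flush(self):
        """Return the accumulated counts as a payload and reset the totals."""
        payload = {
            "kiosk_id": self.kiosk_id,
            "timestamp": time.time(),
            "faces": self.faces,
            "masked": self.masked,
            "distance_violations": self.violations,
        }
        self.reset()
        return payload

def send(payload):
    """POST one flushed payload to the backend as JSON (illustrative)."""
    req = urllib.request.Request(
        BACKEND_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

Batching like this keeps network traffic light on an edge device: the detection loop calls `record_frame` on every frame, while `flush` and `send` run only on a timer.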

Q: What kind of feedback did you receive from the Lenovo team?

MH: Lenovo helped steer us in the right direction when it came to the hardware for the application. One requirement was that it needed to be fast enough to run on an “edge device,” which in our case was a small, low-resource computer connected to one of our cameras. To achieve this, they provided some of their computers and also recommended we use Intel’s Neural Compute Stick and Intel’s OpenVINO framework. These turned out to be essential in getting our application fast enough to run on a low-resource edge device.