How Does Supervaisor Think About Your Data
Supervaisor’s mission is to improve traffic safety. We do this by analyzing videos from a variety of sources, including user-uploaded content from phones and dashboard cameras. It is important for us to be clear about our stance on data privacy and how we think about protecting the rights of the people whose lives we touch.
Both private companies and governments have long held information that, when misused, could have dire consequences: cell phone location data, healthcare data, tax data, private communications, and so on. In the case of private companies, it is usually their economic interests that limit their ability to misuse the data, as losing the trust of their users and customers tends to hurt revenue. In some ways it is a self-regulating system.
With governments, however, the situation is more complex. In the ideal case a government would benefit from perfect information to optimize the well-being of society, but at the other extreme a government’s interests might not always be aligned with those of its citizens. And that’s where it gets tricky. Governments vary in how transparent they are about questionable uses of technology, but by no means can we assume that any government is completely free of wrongdoing. Although we are based in Estonia (a country with some of the highest trust in government in the world), our team’s memories of Soviet times inspire us to design systems in ways that don’t assume endless trust.
Let’s get specific!
As we capture videos that might contain traffic offences, the scenes often include bystanders and pedestrians who had nothing to do with the event at hand. Without giving it much thought, we could simply ignore their clearly recognizable faces and continue to focus on our offence use cases. However, we all know where face-recognition technology is heading, and one day we could have a government knocking on our door asking us to either hand over all the data or run face searches on the footage we have captured. And THAT does not sit well with us.
For reasons like these, it gives me great pleasure to announce some of the things we already do in our early days to limit potential misuse:
- All the video footage we capture with the help of our community goes through an anonymization layer that blurs all recognizable faces, preventing the use of face-recognition applications.
- We turn off both dashcam and mobile phone audio for our recordings, preventing any use for wiretapping.
- We only capture location data for reports, ruling out general location tracking of our community members.
- We let our community decide when and what to capture. No all-encompassing video recording happens; we only receive footage through the active intent of a citizen.
- We take active steps to protect the rights of data subjects, through mechanisms for claiming one’s data as well as human-in-the-loop solutions for any future use cases that could impact people’s lives.
- We build technology that benefits society and depends on citizens; by design, there is little room to stray from citizens’ values.
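To make the anonymization idea concrete, here is a minimal sketch of the kind of transformation such a layer could apply. It assumes face bounding boxes have already been produced upstream by some detector, and simply pixelates each boxed region so that only averaged blocks remain. The `anonymize` function, its parameters, and the block-averaging approach are illustrative assumptions, not a description of our production pipeline.

```python
import numpy as np

def pixelate(region: np.ndarray, block: int = 8) -> None:
    """Replace each block x block tile of `region` with its mean color, in place.

    Averaging is lossy: the original pixel values cannot be recovered
    from the result, which is what makes the blur irreversible.
    """
    h, w = region.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            tile[:] = tile.mean(axis=(0, 1))

def anonymize(frame: np.ndarray, face_boxes, block: int = 8) -> np.ndarray:
    """Return a copy of `frame` with every (x, y, w, h) face box pixelated.

    `face_boxes` would come from a face detector; only the boxed
    regions are altered, the rest of the frame stays untouched.
    """
    out = frame.copy()
    for x, y, w, h in face_boxes:
        pixelate(out[y:y + h, x:x + w], block)
    return out
```

The key property is that the step runs before footage is stored: what leaves the anonymization layer no longer contains the detail a face-recognition system needs.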
This, of course, is just the beginning, and we have more in the pipeline to address the privacy aspects of our technology, both within and beyond its core purpose.
The topic of ethics in the context of AI systems is a complex one, but luckily others have done some amazing work that we can take inspiration from. More specifically, our inspiration comes from:
- The Asilomar AI Principles (which just so happen to be co-authored by one of our investors)
- The Montreal Declaration for a Responsible Development of Artificial Intelligence
- Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights, by the Council of Europe Commissioner for Human Rights
- EU Ethics Guidelines for Trustworthy AI (also co-authored by one of our investors)
It is with these values in mind that we build our technology and seek ways to provide value to society together with our community.
As @benedictevans writes, “You yourself can decide that you don’t want to build X or Y, but that really has no bearing on whether it will get built.” It is for this reason that we feel a responsibility to combine existing inventions in ways that add value to society while aligning with human values. In doing so, we arm the world with examples of how invented technologies can be used in better ways and with less potential for misuse.
Looking forward to building something amazing together with you!