Learn Adapt Repeat
The cover looks straight out of a sci-fi movie, doesn’t it? Comuna 13 is similar to the favelas in Rio de Janeiro; it is beautiful, mystic, and dangerous. Look! There is even an outdoor escalator locals use to go up the hill. You might be tempted to use it until you find out that gang members often wander around the area. But what if the police could guarantee your safety? What if you knew the police were using Artificial Intelligence (AI) to be at the right place at the right time, before a crime is committed? Would you give it a chance and visit this magical place?
I know what you are thinking… “That sounds like the movie Minority Report.” You are right, but a real solution should not rely on psychics, and it especially should not keep them against their will; that is called kidnapping where I come from. The real solution we need is something different, something like Machine Learning (ML).
Our journey so far toward an AI startup that helps law enforcement agencies prevent crimes before they happen in a city like Medellín has been full of obstacles and frustrations. Let me start where it all began:
During lunchtime, a co-worker showed us a joke. We all worked for a mobile app development company, and we were laughing about ÑerApp, an “App” that could translate what a ñero is saying. Ñero is an urban word for a tough guy or criminal. The app also featured the ability to summon a force field that ñeros could not pass. That would be quite useful, if you ask me.
But the joke became a serious conversation in the group. We always wanted to create our own initiative and security was always a hot topic in the city we all called home: Medellín, once known as the murder capital of the world.
After some research, I found these three interesting initiatives:
- Crime and Place is a consumer-facing product that helps users in the U.S. find out whether they are in a danger zone. Apparently, the data comes from the annual FBI report on crime statistics.
- Predpol and Hunch Lab are services offered to law enforcement agencies in the U.S. that help them predict crime, which sounds like something out of a movie (like Minority Report).
- Palantir, which offers products and services for human-driven analysis of real-world data. This means they can amplify the data an organization has to a new level of intelligence. If I’m not mistaken, the U.S. Army is working with these guys.
My first thought was “maybe I can do something similar”. I was aware of the challenges: how to get the data, how to monetize the service, and so on. But in order to get this thing going, we ended up building what we knew we could build best: an App. This sounds like a mistake, and it probably is, but if we wanted to grab the attention of possible investors, we needed something to show. Sometimes it is better to learn by action rather than analysis, and Colombia’s tech space works quite differently from Silicon Valley’s.
Our team started with multiple disciplines:
- Manuel Urrego — Founder / UX Designer
- Juan Alvarez — Futurist / Coach
- Natalia — Financial Officer
- Juan Carlos — Backend Developer
- Alexander — iOS Developer
- Daniela — Android Developer
- Felipe — Web Developer
- Juliana — UI Designer
- Luis — Tester
After months of user research, discovery phases, multiple design iterations, and the back and forth that software development involves, we came up with the following solution:
It took us quite some time to get to that level of simplicity, and we are still far from a real user experience product we can put in the government’s hands. The UI needs to adapt to the information generated by our robot after it is fed all the relevant data. We are calling our robot LAR.
In Max Tegmark’s Life 3.0, the main discussion is about the responsibility each one of us has to society when creating AI solutions, and the nature of what we are trying to solve makes me think about the corruption a country like Colombia faces on a constant basis. What if our solution is used for something bad? How catastrophic would the result be? Should I have a backdoor where I can “unplug” LAR if it turns evil? Only time will tell.
We are aware of the ethical challenges and criticism that come with artificial intelligence, some of the pain points discussed at this year’s World Economic Forum (weforum) https://bit.ly/2HeYUPO. Let’s try to analyze them with criminology in mind:
The bias problem
In Chicago and Los Angeles, where AI algorithms have been used for crime prediction, there have been cases where law enforcement agents are more prone to arrest African American people than people of other races. The problem could be in the data, where (and this is an assumption) white people report crimes more often than other races do. If the dataset only contains crimes perpetrated by African Americans, then the algorithm will target them.
Colombia is no stranger to racism, but it is not as visible as in other, first-world countries. LAR will face challenges similar to those in Chicago and Los Angeles, but around stratification rather than race. Colombia classifies its people by income into six levels, 1 being the poorest and 6 the wealthiest. The country also faces a problem of distrust in the legal system among its citizens. This means that not many people report the crimes that happen to them, due to high levels of impunity. We could say (and this is also an assumption) that the higher a citizen’s stratum, the more likely they are to report a crime. LAR could focus its efforts on people with low stratification, and that is discriminatory. Colombia has white-collar criminals, and they should be targeted by the algorithm as well.
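One rough sanity check for this concern would be to measure how the system’s patrol suggestions distribute across strata; if low strata dominate far beyond their actual share of crime, the reporting data is likely skewed. A minimal sketch (the numbers and the idea of tagging each suggestion with a stratum are my own assumptions, not LAR’s actual design):

```python
from collections import Counter

# Hypothetical patrol suggestions, each tagged with the stratum (1-6)
# of the neighborhood it targets. All values are made up.
suggestions = [1, 1, 2, 1, 3, 2, 1, 2, 1, 4]

def stratum_shares(suggestions):
    """Fraction of patrol suggestions aimed at each stratum, 1 through 6."""
    counts = Counter(suggestions)
    total = len(suggestions)
    return {s: counts.get(s, 0) / total for s in range(1, 7)}

shares = stratum_shares(suggestions)
# Here stratum 1 receives half of all suggestions while strata 5 and 6
# receive none, which would be a red flag worth auditing against
# actual, independently measured crime rates.
print(shares)
```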
Now, as suggested by the weforum https://bit.ly/2VRxq5R, there are ways to prevent these discriminatory practices, but I’m a true believer that the answer is in the type of data a robot consumes and in how that robot presents its information for our interpretation. Crime datasets should only contain the type, location, and date-time of the crime. We shouldn’t care who committed the crime; we should just care whether there is a pattern of recurrence in a specific place, at a specific time and day, where law enforcement agents are not present. The AI should not be the one making the decisions; it should be the cops who decide whether what the AI suggests is worth considering. The information displayed to these agents should be direct, easy to digest, and convincing enough for them to make the best decision on how to proceed. This is where the fears I mentioned earlier in this article take shape: corrupt cops won’t follow what the AI recommends, since it will not suit their personal interests; or even worse, what if a corrupt cop hands this technology to a criminal organization? 😧
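The dataset idea above can be sketched in a few lines: records hold only type, place, and date-time, and a recurring hotspot is simply a (place, weekday, hour) bucket that repeats. The field names, sample records, and threshold below are my own illustrative assumptions, not LAR’s actual implementation:

```python
from collections import Counter
from datetime import datetime

# Each record holds only what the text argues is needed:
# crime type, location, and date-time. No personal data at all.
records = [
    {"type": "robbery", "place": "Comuna 13", "when": datetime(2019, 3, 1, 21, 30)},
    {"type": "robbery", "place": "Comuna 13", "when": datetime(2019, 3, 8, 21, 10)},
    {"type": "robbery", "place": "Comuna 13", "when": datetime(2019, 3, 15, 21, 45)},
    {"type": "theft", "place": "El Poblado", "when": datetime(2019, 3, 2, 14, 0)},
]

def recurring_hotspots(records, min_count=3):
    """Count (place, weekday, hour) buckets and keep the recurrent ones."""
    buckets = Counter(
        (r["place"], r["when"].strftime("%A"), r["when"].hour) for r in records
    )
    return {key: n for key, n in buckets.items() if n >= min_count}

print(recurring_hotspots(records))
# → {('Comuna 13', 'Friday', 21): 3}
```

The output is exactly what the agents would need: a place, a day, and an hour where crime keeps repeating, with no hint of who was involved.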
I want to end this pain point by thinking beyond the idea of placing cops at the right place at the right time. Although I said we don’t need to know who committed a crime to make the AI work, what if we knew where a criminal comes from? Where does he live? Where has he lived? Placing cops where they need to be is just part of the solution; we also need to think about how to eradicate the problem from its roots. If we know which neighborhoods are being targeted by criminal organizations, and if the government invests in their social and security development, there could be a chance that fewer criminals end up on the streets. Let’s get wilder with this idea and consider the climate variable. If we know that a criminal is someone who lost his home to the effects of climate change and whose only option was to take his family to a city where the only job he could find was joining a criminal organization, the government could invest in preventive actions where these endangered communities are, so their people can stay and live in the land they call home.
The transparency problem
As one of the founders of LAR, I know that being transparent is key to the success of this initiative. It wouldn’t make any sense to try to trick a government into believing the AI reliably prevents crimes if the data used by the robot is corrupted. How long would I be able to fake the best possible scenario before being caught? If we take into consideration what happened with the Fyre Festival, my guess is not for long.
Yes, AI is trendy right now, but I also think it is in its early stages. The information we get from LAR should be considered experimental to begin with. Lots of tests will need to be run. LAR will need to evolve, and the law enforcement agents who use it will need to evolve as well. They need to trust each other, because if they don’t, LAR will never be useful.
Every AI startup should be prepared for failure. The more transparent we are with ourselves and with the people we try to impact, the more trust will be built with the systems that will run our future.
The accountability problem
I’m not going to lie: LAR could F up if we are not careful, and in case that happens, we should be held accountable. This pain point is tied to transparency, in that we tell the truth about what LAR can deliver. I couldn’t say how effective this AI is going to be at preventing crimes, especially knowing all the challenges previously discussed in this article, but I would say it is worth trying to find out how effective it can be.
The privacy problem
LAR should never know sensitive information about the people who committed a crime. Data like names, phone numbers, and home addresses should be off the table. It is our moral responsibility to protect people’s data even if they have made mistakes in their lives. We don’t have the right to judge them, and life has taught me that every single one of us deserves a second chance https://youtu.be/gJtYRxH5G2k.
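This rule can be enforced mechanically: strip every identifying field before a record ever reaches the model. A minimal sketch, assuming hypothetical field names of my own (the source does not describe LAR’s real schema):

```python
# Field names here are illustrative assumptions; the point is that
# anything identifying a person never reaches the model.
SENSITIVE_FIELDS = {"name", "phone", "home_address", "national_id"}

def sanitize(record):
    """Return a copy of the record with all personal data removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

raw = {
    "type": "theft",
    "place": "Laureles",
    "when": "2019-03-02T14:00",
    "name": "John Doe",           # must never reach LAR
    "phone": "+57 300 000 0000",  # must never reach LAR
}

print(sanitize(raw))
# → {'type': 'theft', 'place': 'Laureles', 'when': '2019-03-02T14:00'}
```

Making the drop list explicit also makes it auditable: anyone can verify which fields the robot is allowed to see.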
We are all human beings after all.
I hope for the day when all these futuristic, dystopian landscapes that a city like Medellín has will be safe to visit at any time, thanks to an intelligent solution that helps our law enforcement officers 👮 keep us safe; but that can only be achieved if we learn from our mistakes, adapt to become better versions of ourselves, and constantly repeat this process.