HOW I FAILED AT CREATING A STARTUP AROUND A CRIME-FIGHTING AI FOR MEDELLÍN

Manuel from Colombia
Apr 7 · 8 min read
Art by Sandra Posada

What if the police could guarantee your safety in one of the most dangerous cities in the world? What if you knew that the police are using Artificial Intelligence (AI) to be at the right place at the right time, before a crime is committed? Would you give the city a chance and visit it?

It looks straight out of a sci-fi movie.
This is the España Library | Photo: Carlos Mora.

I know what you are thinking… “That sounds like the movie Minority Report.” You are correct, but a real solution should not rely on psychics, and it certainly should not detain them against their will; where I come from, that is called kidnapping. The real solution we need is something different, something like Machine Learning (ML).

Our goal was to build an AI startup that helps law enforcement agencies prevent crimes before they happen, but in a city like Medellín, the journey toward that goal is full of obstacles and frustrations.

It all started when a co-worker showed us a joke. I used to work for a mobile app development company, and we were laughing about ÑerApp, an “app” that could translate what a ñero is saying. Ñero is an urban word for a tough guy or criminal. The app also featured the ability to summon a force field that ñeros would not be able to pass. That would be quite useful, if you ask me.

But the joke became a serious conversation in the group. We had always wanted to create our own initiative, and security was always a hot topic in the city we all called home: Medellín, once known as the murder capital of the world.

After some research, I found three interesting initiatives:

  1. Crime and Place is a consumer-facing product that helps users in the U.S. know whether they are in a danger zone. Its data comes from the annual FBI report on crime statistics.
  2. Predpol and Hunch Lab are services offered to law enforcement agencies in the U.S. that help them predict crime, something that sounds straight out of a movie (like Minority Report).
  3. Palantir, which offers products and services for human-driven analysis of real-world data. This means it can amplify the data an organization has to a new level of intelligence. If I’m not mistaken, the U.S. Army is working with these guys.

My first thought was “maybe I can do something similar.” I was aware of the challenges (how to get the data, how to monetize the service, and so on), but to get this thing going we ended up building what we knew we could build best: an app. This sounds like a mistake, and it probably was, but if we wanted to grab the attention of possible investors, we needed something to show. Sometimes it is better to learn by action rather than by analysis, and Colombia’s tech space works quite differently from Silicon Valley’s.

Our team covered multiple disciplines:

  • Manuel Urrego — Founder / UX Designer
  • Juan Alvarez — Futurist / Coach
  • Natalia — Financial Officer
  • Juan Carlos — Backend Developer
  • Alexander — iOS Developer
  • Daniela — Android Developer
  • Felipe — Web Developer
  • Juliana — UI Designer
  • Luis — Tester

After months of user research, discovery phases, multiple design iterations, and the back and forth that software development involves, we came up with the following solution:

The app’s main screen uses the user’s location to pinpoint all the criminal activity in the zone for the current day and time of day. The filter screen lets the user see the criminal activity for a specific day and time of day, and choose how far back the results should go.
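As a rough sketch of what that filtering boils down to (the names and the simplified record shape here are hypothetical; our real backend differed in the details), the query narrows anonymized crime records by distance, weekday, hour, and lookback window:

from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class CrimeRecord:
    crime_type: str
    lat: float
    lon: float
    occurred_at: datetime

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_activity(records, lat, lon, weekday, hour, lookback_days, radius_km=1.0):
    # Keep records inside the radius that match the requested weekday and
    # hour of day, going back no further than the lookback window.
    cutoff = datetime.now() - timedelta(days=lookback_days)
    return [
        r for r in records
        if r.occurred_at >= cutoff
        and r.occurred_at.weekday() == weekday
        and r.occurred_at.hour == hour
        and haversine_km(lat, lon, r.lat, r.lon) <= radius_km
    ]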

It took us quite some time to get to that level of simplicity, and we were still far from a production-ready product that we could put into the government’s hands. The UI needs to adapt to the information generated by our robot after it is fed all the relevant data. We called our robot LAR (Learn, Adapt, Repeat).

In the following public Bitbucket project you will find every piece of code involved in creating the Informantes app: https://bitbucket.org/account/user/informantes/projects/IN

In Max Tegmark’s Life 3.0, the main discussion is about the responsibility each one of us has to society when creating AI solutions, and the nature of what we were trying to solve made me think about the corruption a country like Colombia faces on a constant basis. What if our solution is used for something bad? How catastrophic would the result be? Should I have a backdoor so I can “unplug” LAR if it turns evil? Only time will tell.

If you like Wynwood, you’ll love Comuna 13 or Aranjuez.

We were aware of the ethical challenges and criticism that come with artificial intelligence, some of the main points discussed at this year’s World Economic Forum (weforum): https://bit.ly/2HeYUPO. We analyzed them with criminology in mind:

The bias problem

In Chicago and Los Angeles, where AI algorithms have been used for crime prediction, there have been cases where law enforcement agents were more prone to arrest African American people than people of other races. The problem could be in the data, where (and this is an assumption) white people report crimes more often than people of other races. If the dataset contains mostly crimes attributed to African Americans, then the algorithm will target them.

Colombia is no stranger to racism, but it is not as visible as in other, first-world countries. LAR would face challenges similar to Chicago’s and Los Angeles’, but around stratification rather than race. Colombia classifies its people by income into six levels, with 1 being the poorest and 6 the wealthiest. The country also faces a problem of citizens’ distrust in the legal system: because of high levels of impunity, many people do not report the crimes that happen to them. We could say (and this is also an assumption) that the higher a citizen’s stratum, the more likely they are to report a crime. LAR could end up focusing its efforts on people of low strata, and that is discriminatory. Colombia has white-collar criminals, and they should be targeted by the algorithm as well.
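That reporting-rate assumption is at least testable. Here is a minimal sketch (the column names and numbers are made up for illustration) comparing the share of reports per stratum against an external estimate of where crime actually happens, e.g. from a victimization survey:

import pandas as pd

# Each reported crime carries the stratum (1-6) of the neighborhood
# where it happened. Made-up sample data.
reports = pd.DataFrame({"stratum": [1, 1, 2, 3, 5, 6, 6, 6]})

# Estimated share of actual crime per stratum, e.g. from a
# victimization survey (also made up).
true_share = pd.Series({1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.08, 6: 0.07})

reported_share = reports["stratum"].value_counts(normalize=True).sort_index()

# Ratios far from 1 mean reporting rates differ by stratum, so a model
# trained on reports alone inherits that skew. (NaN: no reports at all.)
print((reported_share / true_share).round(2))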

Now, as suggested by the weforum (https://bit.ly/2VRxq5R), there are ways to prevent these discriminatory practices, but I am a true believer that the answer lies in the type of data a robot consumes and in how that robot presents its information for our interpretation. Crime datasets should only contain the type, location, and date and time of the crime. We shouldn’t care who committed the crime; we should only care whether there is a pattern of recurrence in a specific place at a specific time and day where law enforcement agents are not present. The AI should not be the one making decisions; it should be the cops who decide whether what the AI suggests is worth considering. The information displayed to these agents should be direct, easy to digest, and convincing enough for them to make the best decision on how to proceed. This is where the fears I mentioned earlier in this article take hold: corrupt cops won’t follow what the AI recommends if it does not suit their interests; or even worse, what if a corrupt cop hands this technology to a criminal organization? 😧
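To make “pattern of recurrence” concrete, here is a minimal sketch (hypothetical names, reusing the record shape from the earlier sketch, with an arbitrary grid size): bucket anonymized records into coarse location cells and weekday/hour slots, then count the repeats.

from collections import Counter

def recurrence_hotspots(records, cell_deg=0.005, min_count=3):
    # Bucket each record into a coarse lat/lon cell (0.005 degrees is
    # roughly 500 m) and a (weekday, hour) slot. Only type, location,
    # and date-time are used; identities never enter the picture.
    counts = Counter(
        (round(r.lat / cell_deg), round(r.lon / cell_deg),
         r.occurred_at.weekday(), r.occurred_at.hour)
        for r in records
    )
    # A place/time bucket that keeps repeating is a candidate patrol
    # suggestion; the decision to act stays with the officers.
    return {bucket: n for bucket, n in counts.items() if n >= min_count}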

I want to end this point by thinking beyond just the idea of placing cops at the right place at the right time. Even though I said we don’t need to know who committed a crime for the AI to work, what if we knew where a criminal comes from? Where he lives? Where he has lived? Placing cops where they need to be is just part of the solution; we also need to think about how to eradicate the problem at its roots. If we know which neighborhoods are being targeted by criminal organizations, and the government invests in their social and security development, there is a chance that fewer criminals end up on the streets. Let’s get wilder with this idea and consider the climate variable. If we know that a criminal is someone who lost his home to the effects of climate change, and whose only option was to take his family to a city where the only job he could find was joining a criminal organization, then the government could invest in preventive actions in those endangered communities so their people can stay and live in the land they call home.

The transparency problem

As one of the founders of LAR, I believe being transparent is key to the success of this initiative. It wouldn’t make any sense to trick a government into thinking the AI reliably prevents crimes if the data the robot uses is corrupted. And how long could I keep faking the best possible scenario before being caught? If we take into consideration what happened with the Fyre Festival, my guess is: not long.

Yes, AI is trendy right now, but I also think it is in its early stages. The information we get from LAR should be considered experimental to begin with. Lots of tests will need to be run. LAR will need to evolve, and the law enforcement agents who use it will need to evolve as well. They need to trust each other, because if they don’t, LAR will never be useful.

Every AI startup should be prepared for failure. The more transparent we are with ourselves and with the people we try to impact, the more trust will be built in the systems that will run our future.

The accountability problem

I’m not going to lie: LAR could F up if we are not careful, and if that happens, we should be held accountable. This point is tied to transparency, to telling the truth about what LAR can deliver. I can’t say how effective this AI would be at preventing crimes, especially knowing all the challenges discussed earlier in this article, but I would say it is worth trying to find out.

The privacy problem

LAR should never know sensitive information about the people who committed a crime. Data like names, phone numbers, and home addresses should be off the table. It is our moral responsibility to protect people’s data, even if they have made mistakes in their lives. We don’t have the right to judge them, and life has taught me that every single one of us deserves a second chance: https://youtu.be/gJtYRxH5G2k
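In practice that means scrubbing personally identifiable fields before a record ever reaches LAR. A minimal sketch (the field names are hypothetical):

def sanitize(raw):
    # Whitelist rather than blacklist: only the fields LAR is allowed
    # to see survive; names, phones, and addresses never get through.
    allowed = ("crime_type", "lat", "lon", "occurred_at")
    return {key: raw[key] for key in allowed if key in raw}

record = sanitize({
    "crime_type": "robbery",
    "lat": 6.2442, "lon": -75.5812,       # Medellín
    "occurred_at": "2019-04-07T21:30:00",
    "name": "...", "phone": "...",        # dropped on the floor
})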

We are all human beings after all.


Despite our effort and good intentions, it became impossible to get a consistent flow of data from the city of Medellín. When the new mayor came to power, he cut off our access to the data. Each new leader distrusts what the previous administration was doing, and it seems all they want to do is hide the real situation of a city that hasn’t stopped being violent.

We heard that a new team in Bogotá (Colombia’s capital city) is attempting to bring a similar solution to life. We wish them all the luck.

I hope for the day when a city like Medellín is safe to visit at any time thanks to an intelligent solution that helps our law enforcement officers 👮 keep us safe; but that can only be achieved if we learn from our mistakes, adapt to become a better version of ourselves, and constantly repeat the process.

Manuel from Colombia

Written by

I’m a computer science guy who fell in love with design, currently living in Los Angeles, California.
