What if AI (Watson) could change democracy as you know it?
Disclaimer: the ideas on this post are my own and don’t necessarily represent IBM’s positions, strategies or opinions.
The definition of Democracy
It is odd that when democracy comes up, people quickly reduce it to the process of choosing our representatives, when in truth it goes far beyond that.
Teachers at Stanford University define democracy by four key elements:
1. A political system for choosing and replacing the government through free and fair elections.
2. The active participation of the people, as citizens, in politics and civic life.
3. Protection of the human rights of all citizens.
4. A rule of law, in which the laws and procedures apply equally to all citizens.
We remember only the first one and forget our duties outside election day. Yeah, it is quite hard to keep up with every decision and truly understand everything while living our lives, so the question to ask is:
Can we use new technologies to bring government and democracy closer to the day-to-day life of a citizen?
We could use Chatbots but are they enough?
I won’t be writing about the rise of chatbots. There are so many good posts on the web about it that it makes no sense to write another one.
But what you need to keep in mind is that their usage is growing rapidly, and you may find yourself using a chatbot instead of an app sooner than you think.
And it makes sense! Why would I require my users to download and install an app when messaging platforms (like Facebook Messenger and WhatsApp) reach a billion people and can do the same?
The simple solution: an initial bot
Yes, we can start our solution with a bot. The first idea that comes to mind when we combine chatbots and democracy is a bot that answers the most frequent questions about public services. It could handle simple questions like "what is democracy?" and many others. This would certainly help educate citizens and reduce the volume of requests on official channels, leaving them free for more complete and complex answers.
Then you could evolve the bot toward more complex questions like "what was the budget for the city of São Paulo in 2013?". This would require database integrations and other more complex plumbing, but it would create a real feeling of transparency and educate our citizens even further.
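To make the FAQ stage concrete, here is a minimal sketch. The answer table, function name and fallback message are my own illustrative assumptions; a real deployment would back this with a trained Watson instance and live open-data integrations.

```python
# Minimal FAQ bot sketch. The tiny hand-written answer table is a
# placeholder for what a trained Watson instance plus database
# integrations would provide in a real deployment.

FAQ = {
    "what is democracy?":
        "A system with free and fair elections, active citizen "
        "participation, protection of human rights and equal rule of law.",
    "what was the budget for the city of são paulo in 2013?":
        "(this answer would come from an integration with the city's "
        "open budget data)",
}

def answer(question: str) -> str:
    """Look up a canned answer, falling back to an official channel."""
    key = question.strip().lower()
    return FAQ.get(key, "I don't know that yet; forwarding you to an "
                        "official channel.")

print(answer("What is democracy?"))
```

The fallback branch matters: questions the bot cannot answer should still end up on an official channel instead of dead-ending the citizen.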
The problem is that Q&A is a one-way approach: at best it educates citizens about sensitive matters that need explaining. We need something more; we need people to use the bot and genuinely feel they are helping.
So how can we increase the potential and attractiveness of these applications?
The bot wants to know what you think: stop the answer-only interaction
To make your application more attractive, you need to make it two-way. At least, that was the first and most obvious answer I found while watching a video about an amazing project that IBM Research did for UNICEF Uganda.
You could start using the bot to reach citizens and gather their opinions on important matters. Do they agree with something? Have they seen something? You could even receive open-ended input, like what a citizen would do about their own situation or their neighborhood. With this you get insights from the real experts: the people living that reality.
Where would Watson fit into this solution? Watson would handle the interactions. It could do disambiguation (is an address needed for this report?) and, more importantly, classification and routing. When the application receives a message like "there is a broken tree stopping traffic on street X due to heavy rain", it could classify it as "traffic problems" and route it to the right agency so the problem gets resolved.
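The classify-and-route step can be sketched as follows. The categories, keywords, and agency names below are hypothetical placeholders: a real system would replace the toy keyword matcher with a trained Watson classifier and the routing table with actual government routing rules.

```python
# Toy classify-and-route sketch. The keyword matcher stands in for a
# trained Watson classifier; ROUTES stands in for real routing rules.

ROUTES = {
    "traffic problems": "transit-agency",
    "sanitation": "sanitation-agency",
    "other": "general-ombudsman",
}

KEYWORDS = {
    "traffic problems": ["traffic", "broken tree", "street blocked"],
    "sanitation": ["trash", "garbage", "sewage"],
}

def classify(message: str) -> str:
    """Return the most likely report category for a citizen message."""
    text = message.lower()
    for category, words in KEYWORDS.items():
        if any(word in text for word in words):
            return category
    return "other"

def route(message: str) -> str:
    """Return the agency that should receive this citizen report."""
    return ROUTES[classify(message)]

print(route("there is a broken tree stopping traffic on street X"))
# -> transit-agency
```

The key design point is the separation: classification can improve independently (better training data, better models) without touching the routing rules, and vice versa.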
All of this could be done on different channels using the same intelligence: the same trained Watson instance could answer texts originating from Facebook, SMS, Telegram, and so on.
The complete solution — a bot that can see, talk and hear citizens.
Yes, internet bandwidth is still a problem in countries like Brazil, but in major cities at least we can rely on 3G/4G and file compression. So why not use multimedia to increase the quality of user input?
The first use would be geolocation. In the broken-tree scenario above, it would be much easier (and less time-consuming) to ask users for their GPS location than for a written address.
With cognitive computing we could go further. A user could simply say "I'd like to report a problem" and send an image of trash bins that weren't collected. The system would understand the intent to make a complaint, classify the image as a sanitary problem through visual recognition, and request the citizen's GPS location. That would probably take less than 30 seconds of the person's time and could give them a feeling of having completed their civic duty. All of this with no additional app or hardware beyond a smartphone and a messaging app (like Messenger, Telegram or Twitter).
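The multimodal flow above can be sketched as a single handler. The intent detector and image classifier here are stubs I made up for illustration; in a real system they would be calls to Watson's conversation and visual recognition services.

```python
# Sketch of the multimodal report flow. detect_intent and classify_image
# are illustrative stubs standing in for Watson service calls.

def detect_intent(text: str) -> str:
    # Stub: a trained conversational model would infer the intent.
    return "report_problem" if "report" in text.lower() else "unknown"

def classify_image(image_name: str) -> str:
    # Stub: visual recognition would label the photo; here we fake it.
    labels = {"uncollected-trash.jpg": "sanitary problem"}
    return labels.get(image_name, "unclassified")

def handle_report(text: str, image_name: str, gps=None) -> str:
    """Combine intent, image label and location into one citizen report."""
    if detect_intent(text) != "report_problem":
        return "Sorry, I did not understand."
    category = classify_image(image_name)
    if gps is None:
        return "Please share your GPS location."
    return f"Registered a '{category}' report at {gps}."
```

A first message without coordinates triggers the GPS request; once the location arrives, the report is registered, so the whole exchange stays within two or three chat turns.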
Audio could make it even smarter. Don't like texting? With Speech to Text, users could say what they are complaining about and the system would convert it to text in order to continue with the normal flow. The answer could even be returned as audio using Text to Speech.
It could go on…
e-Democracy initiatives have been covering some of the scenarios mentioned above for a while now. Projects like Poplus, MySociety, Ciudadano Inteligente and the Brazilian Onde Fui Roubado have nice applications for these problems; they could serve as inspiration for what comes next.
We just need to put each of those initiatives inside every citizen's phone, and it is my belief that the way to do it is with cognitive apps.
The backend, maps with statistics, analytical dashboards, and applications that connect representatives to citizens could also benefit from cognitive features, but that is a talk for another time.
The framework and technological overview
If you would like a more technical approach to this problem and/or want to start building something similar to what has been described here, just contact me. I have started a framework covering all the features mentioned (conversational agents, speech to text, text to speech, visual recognition, etc.) using IBM Watson inside Facebook Messenger. It is still a work in progress, built mostly on weekends, but with some extra hands maybe we could get somewhere faster.