Democratize Artificial Intelligence today for a better future tomorrow

Sukant Khurana · Published in The Startup · 9 min read · Jan 21, 2018

by Aadhar Sharma, Raamesh Gowri Raghavan, and Sukant Khurana

Artificial Intelligence is a great help to humanity. Unfortunately, the scope for misuse is also huge. Given the tendency of corporate and state entities to seek dominance, we analyze the ethical concerns and argue for democratizing AI.

Keywords: Data, Artificial Intelligence, AI, Democracy, Social-Control, Rights, Privacy, Transparency, Equality, Data-protection

Figure 1: United Nations General Assembly (Wikimedia Commons)

“It is not enough to be electors only. It is necessary to be law-makers; otherwise those who can be law-makers will be the masters of those who can only be electors.” ― B.R. Ambedkar

Consider the following scenario: You are late for your dream job interview. You grab your messenger bag and dash out of your apartment without worrying about the lights or the door lock: they sense your absence and act accordingly. While you're anxiously fidgeting in the elevator, your smartphone, aware of the appointment, calls a driverless taxi. Leaping into the vehicle, you open a music streaming service, and it tunes you into a Brandenburg Concerto, having decided you need calming down: the sweat on your palm tells the device a lot. On arrival, you dash straight into the building because your digital wallet automatically pays for the trip. Finishing the interview, you feel confident about your chances, but later find out you've been rejected because of your closeted sexual orientation, detected by that strange-looking camera on the panel's desk that kept staring you down.

How would you feel?

This isn't even futuristic: Artificial Intelligence (AI) permeates our lives today. From finding Tinder dates on the weekend to discovering new planets out in the universe, just about every field employs it. AI is a great leap in technology, but it is also continually shrouded in controversy. On one hand it helps geologists predict earthquakes; on the other, it is still naive enough to be trained into practicing racial discrimination. While any openness or transparency in the technology is commendable, the secrecy behind most products raises concerns about potential exploitation. In its current state, AI appears custom-made for the tech titans (and perhaps convenient for authoritarians too); while organizations defend their investments, we humble consumers are obliged to discern their true intentions and fight for our rights.

Data is not just data

Pedro Domingos, a professor of Machine Learning at the University of Washington and author of “The Master Algorithm,” says:

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.

Social media, search engines, the Internet of Things, and similar platforms produce enormous volumes of data today, which may provide invaluable insights into our world (they sure know when we need to re-order toilet paper). According to some estimates, there will be more than 3 billion active smartphones by 2020 and, within the next decade, more than 150 billion networked sensors (about 18 times the estimated human population). In other words, the volume of data produced will dwarf what we generate today. This appeal of Big Data is enticing companies, and governments, to invest and capitalize.

So, how would data change the dynamics of society? A data-controlled society would be one where data holds ascendancy in shaping defense, the economy, and all other governmental policy. Whether such a society will ever exist is debatable, yet we must acknowledge that we already live in a world where data influences the majority of decisions. AI complements the human art of decision making by analyzing data to discover interesting patterns and suggest candidate solutions to the problem at hand. A tool that aids decision making while economizing resources (time, workforce, capital) and satisfying explicit, customizable criteria (accuracy, speed, generality) undoubtedly has great potential in any industry.

For example, when you report an e-mail as spam, the AI within Gmail learns from it and updates its spam filter. Whenever a new spam-like email arrives, the AI throws it directly into the spam folder, so similar e-mails no longer reach your inbox. And if you believe it made a wrong decision (a mis-classification: a false positive), you can always un-spam the message and make the AI learn from that too.
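To make this concrete, here is a minimal sketch of such feedback-driven learning, assuming a simple word-count Naive Bayes model; Gmail's actual system is proprietary and far more sophisticated than this toy:

```python
from collections import Counter

class SpamFilter:
    """A toy Naive Bayes spam filter; a stand-in, not Gmail's real system."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.class_counts = {"spam": 1, "ham": 1}  # start at 1 to avoid zeros

    def learn(self, text, label):
        """Called when the user marks (or un-marks) a message as spam."""
        self.class_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def score(self, text):
        """Return a crude spam probability for a new message."""
        probs, total = {}, sum(self.class_counts.values())
        for label in ("spam", "ham"):
            p = self.class_counts[label] / total           # class prior
            n = sum(self.word_counts[label].values()) + 1
            for word in text.lower().split():
                p *= (self.word_counts[label][word] + 1) / n  # Laplace smoothing
            probs[label] = p
        return probs["spam"] / (probs["spam"] + probs["ham"])

spam_filter = SpamFilter()
spam_filter.learn("win a free prize now", "spam")      # user reported as spam
spam_filter.learn("meeting agenda for monday", "ham")  # user un-spammed a false positive
print(spam_filter.score("free prize inside"))          # high score: goes to spam
```

Each user report nudges the word counts, so the filter's decisions drift toward the user's own judgments over time.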

It is applications like these that earn AI enormous attention and interest among industries and governments alike. An AI that merely proposes solutions for human scrutiny is limited in its powers. The same cannot be said for an autonomous decision-making AI, which has the liberty to act without human approval. Autonomy does not by itself make an AI malevolent or benevolent; it is the application and the circumstances that truly govern its behavior.

Figure 2: Medical image segmentation: a brain tumor segmented in an MRI scan of the brain. The green blob is the segmented tissue.

At present, oncologists take more than four hours to identify and classify cancerous tissue. To target radiotherapy effectively, the scans must be analyzed swiftly and accurately. AI techniques such as image segmentation (is there a green blob?) and classification (is this green blob cancerous?) help detect tumors with astonishing speed and accuracy. Researchers at DeepMind Health have collaborated with UCL Hospitals to train AI models that perform the segmentation rapidly and elevate the quality of treatment offered to patients.
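As a rough illustration of those two steps, segmentation followed by classification, here is a toy sketch using simple thresholding and connected components; the real DeepMind models are deep neural networks, and everything below (the threshold, the blob-size rule, the synthetic scan) is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

def segment_bright_regions(scan, threshold=0.8):
    """Toy segmentation: mark pixels brighter than a threshold,
    then group adjacent marked pixels into labeled blobs."""
    mask = scan > threshold                   # is there a bright blob?
    labeled, num_blobs = ndimage.label(mask)  # connected components
    return labeled, num_blobs

def classify_blob(blob_pixels):
    """Toy classification stand-in: flag large blobs as suspicious.
    A real system would run a trained neural network here."""
    return "suspicious" if blob_pixels.sum() > 50 else "benign"

# Synthetic 2D "scan" with one injected bright region
scan = np.random.rand(128, 128) * 0.5
scan[40:60, 40:60] = 0.95
labeled, n = segment_bright_regions(scan)
for i in range(1, n + 1):
    print(i, classify_blob(labeled == i))
```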

Evidently, AI is immensely beneficial to humanity. Yet autonomy in AI can also become a massive roadblock to ethical practice. The medical industry suffers a shortage of transplant-ready organs, and doctors must decide quickly which candidate receives one. The process involves ethical dilemmas hard enough for doctors; when AI starts making those decisions, the boundaries become even fuzzier. For instance, researchers at Duke University have developed a decision-making system that picks the recipient of a kidney transplant. Asked to choose between a young alcoholic and an elderly cancer survivor, it picked the young fellow. But here is the dilemma: if the youngster returns to his binge-drinking lifestyle, he is likely to destroy the new kidney too, while the elder, with a new lease on life, might have done something more positive. This is a decision the most seasoned doctors find themselves unable to make, for it rests on scenarios that are nearly impossible to predict (what if the young man never drank again? What if the elder's cancer returned?). The algorithm stands accused of making a decision it may not even understand; how ethical is it, then, for such a system to decide?

Another way AI aids decision making is by countering cognitive biases. However, autonomy may also make systems vulnerable to security threats and unexpected incidents. Here's an example: on May 6, 2010, the US stock market collapsed (the infamous 'Flash Crash'), sweeping away billions of dollars from financial juggernauts. Why? Because the high-frequency anomaly detection system (an autonomous decision-making AI) failed to detect nearly 19,000 fake selling orders. Within minutes, prices fell, wiping out nearly a trillion dollars of value from the market. The criminal investigation later concluded that a trader had exploited previously unknown vulnerabilities in the anomaly detection system and the market structure, resulting in the debacle. Testing simple software is hard enough; such high-throughput AI is practically impossible to test exhaustively or monitor in real time. If the system violates a boundary condition in unprecedented ways, the whole machinery can succumb to chaos.
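For intuition only, here is a minimal sketch of the kind of statistical check such detectors build on, a rolling z-score over order volumes; the threshold and the data are invented, and the point is that orders crafted to stay just inside the learned threshold slip through unnoticed:

```python
import numpy as np

def detect_anomalies(order_volumes, window=50, z_threshold=4.0):
    """Flag order volumes that deviate strongly from the recent rolling mean.
    A toy stand-in for a high-frequency anomaly detector."""
    flags = []
    for t in range(window, len(order_volumes)):
        recent = order_volumes[t - window:t]
        mu, sigma = recent.mean(), recent.std() + 1e-9
        z = abs(order_volumes[t] - mu) / sigma
        flags.append(z > z_threshold)
    return np.array(flags)

rng = np.random.default_rng(0)
normal = rng.normal(100, 10, 500)             # ordinary order flow
# Spoofed orders sized to stay just under the detector's threshold:
spoofed = np.concatenate([normal, rng.normal(130, 5, 100)])
print(detect_anomalies(spoofed).sum(), "alarms raised")
```

Worse, as the spoofed orders fill the rolling window, the detector adapts to them and treats the manipulated flow as the new normal.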

Transparency in AI is another issue. The most advanced AI algorithms are essentially black boxes: no one knows entirely how they reach their decisions. Moreover, the fundamental designs of proprietary algorithms are often kept secret by companies. One program can predict a person's sexual orientation just by looking at facial images. Other facial recognition programs claim to predict people's emotional state, political alignment, and IQ. Some public studies already create ethical controversy, but the number of unethically deployed algorithms in industry or government may never be truly known.

“I think political systems will use [AI] to terrorize people.” — Geoffrey Hinton

No matter how much trust we put in them, these algorithms can inherit bias if we train them on data that are already biased. For instance, if one trains an AI model on a dataset of criminal records biased against African-American people, the algorithm will start out with that bias built in. Several US states issue defendants a 'risk assessment score' that predicts the likelihood of a future crime and assists the court in determining punishment. It is now believed to inject racial bias into the courts; there have been multiple instances where a black person, being prosecuted for the first time, got a significantly higher risk score than a notorious white criminal. The company that developed the system refuses to reveal the calculations it uses to assign the score.
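A small sketch on synthetic data (not the actual risk-score product, whose calculations are secret) shows how a model trained on biased historical labels reproduces that bias even when the true underlying risk is identical across groups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)      # 0 or 1: a protected attribute
true_risk = rng.random(n)          # identical distribution in both groups

# Biased historical labels: group 1 was flagged more often at the same risk
label = (true_risk + 0.3 * group + rng.normal(0, 0.1, n)) > 0.7

X = np.column_stack([true_risk, group])  # the model sees the group attribute
model = LogisticRegression().fit(X, label)

# Same individual risk, different group -> different predicted score
print(model.predict_proba([[0.5, 0]])[0, 1])  # lower score for group 0
print(model.predict_proba([[0.5, 1]])[0, 1])  # higher score for group 1
```

Because the historical labels were skewed, the model assigns different scores to identical risk profiles; the bias is baked in before the first prediction is ever made.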

A lack of democratic control further exacerbates the problem of transparency. It compels one to ponder: does data accidentally manifest bias in AI, or do companies and governments covertly steer AI to establish dominance or to discriminate on subjective criteria?

As we depend more and more on technologies such as search engines and social media, the potential for exploitation, by deliberately inducing bias into the data, grows too. Control over voter data could tempt governments into fiddling with elections to ensure they keep winning. Companies, too, can benefit financially by tweaking algorithms to promote a sister business over a competitor's. Governments or organizations that possess such powers may establish dominance over all others; this is a threat to social cohesion.

If the data set is clean but the system still genuinely performs ill, rather than being subjected to external manipulation, then something is wrong with the technology itself (no algorithm is perfect, and bugs are common in code), and researchers are obliged to analyze the underlying algorithms and cure them. However, most of these systems are strongly protected by intellectual property rights. Companies often oversell their products and publicize only vague details that seldom divulge meaningful information about the machinery. Guarding details such as study methods and algorithmic calculations is understandable in a competitive industry, but a substantial lack of public information (transparency), on top of controversial episodes, naturally sows suspicion.

In conclusion, a new form of dictatorship may emerge from a technocracy of social control built on artificially intelligent machines. To inhibit the emergence of top-down global control, we must democratize AI, which requires transparency and the protection of constitutional and civil rights. It also seems the right time to ask for additional rights that defend not only identity and agency but also social dynamics and diversity. The path to a better future with AI will have many highs and lows. At times we may find ourselves in unanticipated debacles, but it is in such situations that our social cohesion and governmental policies will help us prevail.

— —

About:

Aadhar Sharma was a researcher working with Dr. Sukant Khurana's group, focusing on the ethics of Artificial Intelligence.

Raamesh Gowri Raghavan is collaborating with Dr. Sukant Khurana on various projects, ranging from popular writing on AI to the influence of technology on art and mental health awareness.

Mr. Raamesh Gowri Raghavan is an award-winning poet, a well-known advertising professional, a historian, and a researcher exploring the interface of science and art. He is also championing a massive anti-depression and suicide-prevention effort with Dr. Khurana and Farooq Ali Khan.

You can know more about Raamesh at:

https://sites.google.com/view/raameshgowriraghavan/home and https://www.linkedin.com/in/raameshgowriraghavan/?ppe=1

Dr. Sukant Khurana runs an academic research lab and several tech companies. He is also a known artist, author, and speaker. You can learn more about Sukant at www.brainnart.com or www.dataisnotjustdata.com. If you wish to work on biomedical research, neuroscience, sustainable development, artificial intelligence, or data science projects for public good, you can contact him at skgroup.iiserk@gmail.com or reach out to him on LinkedIn: https://www.linkedin.com/in/sukant-khurana-755a2343/.

Here are two small documentaries on Sukant and a TEDx video on his citizen science effort.
