When Everyone Says Yes to Big Data and AI, I Say Yes to Human Intelligence!

Julia Liu
Published in ooOMedia
5 min read · Feb 18, 2020
Photo by Nick Hillier on Unsplash

Are tech companies going number-crazy?

Just look at how Google ranks search results, or how YouTube pushes content to you based on how many seconds you linger on a video, not to mention all the ratings for restaurants and drivers on platforms such as Uber, Airbnb, and Yelp.

Buzzwords like metrics, numbers, big data, algorithms, and AI are everywhere in our lives, acting as if they are going to take over the whole world. We may see how popular these trends have become, but we may not be aware of how invasive they are. They sit in our daily lives, and these numbers feed into decision-making processes without our consent. They appear to be backed by scientific precision and carry an authority and power that dictates every decision. Companies and organizations use AI for hiring, criminal charges, insurance rates, and even school applications, and people tend to treat the resulting numbers as unquestionable truth. In reality, more often than not, they are full of bias or just plain wrong.

Data Privacy?

Tech companies tirelessly make statements along the lines of "blah blah blah, we are sensitive to the privacy concerns of our users." Meanwhile, many of their business models are built around targeted advertising that relies on the continued collection of personal information. Tech companies snoop through every individual's daily life, and their behavior has grown more intrusive over the years. Your phone, email, computer, and smart home devices are all gateways that dozens of companies prey on: your shopping habits, your taste in music, and even your day-to-day conversations. Big companies like to say they collect data to invent better products and services and to improve their customers' experiences. In reality, they care mostly about monetizing the signals in that data, and they overlook privacy.

A good example is Amazon sending marketing emails promoting lubricants and "intimacy facilitators" to its customers. The emails were followed by angry replies from customers and a threat from the CEO to shut down the whole email marketing channel. This is a textbook case of a business blindly adopting vanity metrics without thinking of customers first. The campaign may well have hit all the email metrics and recommended the right product category, the things most marketers and companies care greatly about. But it ignored a customer's right to privacy. Human intelligence can make up for this failure: the email algorithm would serve customers better if Amazon incorporated human curation into the process, letting human judgment catch the nuances that AI cannot detect. Together, the two can strike the right balance, as the sketch below illustrates.
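To make that concrete, here is a minimal sketch of what a human-curation gate in an email pipeline might look like. Everything in it (the Campaign class, the SENSITIVE_CATEGORIES set, the queue names) is a hypothetical illustration, not Amazon's actual system:

```python
# A minimal sketch of a human-in-the-loop gate for marketing emails.
# Every name here is a hypothetical illustration, not Amazon's real pipeline.

from dataclasses import dataclass

# Categories a person should review before any automated send.
SENSITIVE_CATEGORIES = {"intimacy", "health", "finance"}

@dataclass
class Campaign:
    customer_id: str
    product_category: str
    subject: str

def requires_human_review(campaign: Campaign) -> bool:
    """Flag campaigns the algorithm alone should not approve."""
    return campaign.product_category in SENSITIVE_CATEGORIES

def dispatch(campaign: Campaign, review_queue: list, send_queue: list) -> None:
    # The recommender may still propose the campaign; a person decides
    # whether actually sending it respects the customer's privacy.
    if requires_human_review(campaign):
        review_queue.append(campaign)
    else:
        send_queue.append(campaign)

review_queue, send_queue = [], []
dispatch(Campaign("c42", "intimacy", "Weekend deals"), review_queue, send_queue)
dispatch(Campaign("c43", "books", "New releases"), review_queue, send_queue)
print(len(review_queue), "held for review,", len(send_queue), "sent automatically")
```

The design choice is simple: the recommender is still free to propose any campaign, but a person signs off before anything in a sensitive category reaches an inbox.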

“These algorithms have repeated the history of existing inequalities and biased policing practices on these marginalized groups that have historically been disproportionately targeted by law enforcement.”

The embarrassing story of Amazon's email marketing didn't cause major personal harm. But there are times when algorithms play a bigger role in decision making, such as in America's criminal justice system. Courts use AI tools that collect data about a defendant to estimate the likelihood that he or she will commit a crime in the future. These algorithms are trained on historical crime data and used to calculate a defendant's risk of reoffending, yet the assessment tools have never been properly validated by any judicial organization. What's worse, the scores are used in American courtrooms: judges consult them when setting bond amounts and even when deciding how severe a defendant's sentence will be. The record shows only 20 percent of the people predicted to commit violent crimes actually went on to do so. The predictions have proved remarkably unreliable and biased: "Black defendants were still 77 percent more likely to be pegged at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind" (ProPublica, "Machine Bias"). The assessment tools were designed by biased humans and fed dirty data that was never properly evaluated, so they are more likely to flag dark-skinned individuals and people from low-income and minority communities for jail. These algorithms have repeated the history of existing inequalities and biased policing practices on these marginalized groups that have historically been disproportionately targeted by law enforcement.
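To see how biased inputs produce biased scores, consider a toy simulation. It assumes, purely for illustration, two groups with the same true offense rate where one group is patrolled twice as heavily; the numbers are invented, and this is not the COMPAS model:

```python
# A toy simulation of how biased historical data skews a "risk score".
# All numbers are invented for illustration; this is not the COMPAS model.

import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.10                 # identical for both groups
PATROL_RATE = {"A": 0.3, "B": 0.6}       # group B is watched twice as heavily

def recorded_arrests(group: str, population: int) -> int:
    """An offense only enters the data if it happens AND is observed."""
    arrests = 0
    for _ in range(population):
        offended = random.random() < TRUE_OFFENSE_RATE
        observed = random.random() < PATROL_RATE[group]
        if offended and observed:
            arrests += 1
    return arrests

for group in ("A", "B"):
    # A naive score: arrest frequency in the historical record.
    naive_risk = recorded_arrests(group, 10_000) / 10_000
    print(f"group {group}: naive risk score = {naive_risk:.3f}")
```

Because an offense only enters the historical record when someone is watching, the heavily patrolled group ends up with roughly twice the "risk," even though the underlying behavior is identical.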

Photo by Alexander Sinn on Unsplash

“AI and human intelligence have a bread-and-butter relationship”

Machine learning systems model the world through old data; that is all they will ever know and all they can ever see. What matters now is understanding that these data are neither objective nor neutral. Bad data thrown into a system perpetuates bad patterns. These are not uncontested mathematical facts; they are human errors repeating themselves. It is important to know where these systems work well and where they do not, and data with this kind of bias baked in should not be deciding who goes to jail.
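A small thought experiment shows how old data keeps a system locked into its own past. In this invented example, two districts have identical true crime, but patrols are allocated from last year's records, so a historical 60/40 skew simply reproduces itself:

```python
# A toy feedback loop: patrols follow recorded crime, and recorded crime
# follows patrols. All parameters are invented for illustration.

true_crime = {"north": 100, "south": 100}    # identical underlying crime
patrol_share = {"north": 0.6, "south": 0.4}  # an old skew baked into the data

for year in range(5):
    # Crime is only recorded where officers are present to observe it.
    recorded = {d: true_crime[d] * patrol_share[d] for d in true_crime}
    total = sum(recorded.values())
    # Next year's patrols are allocated from this year's records.
    patrol_share = {d: recorded[d] / total for d in recorded}
    print(f"year {year}: north patrol share = {patrol_share['north']:.2f}")

# The 60/40 skew never corrects itself, even though both districts have
# exactly the same true crime: the system keeps repeating its own history.
```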

It's time to push back against the urge to quantify everything, and to reject the notion that machine-learning algorithms can replace human intelligence. Human intelligence is the wider lens through which we can examine faulty or incomplete data that encodes patterns of discrimination and risks automating the status quo. Government also needs to step in to restrain the influence of computers over daily life. These numbers are only helpful when you are willing to weigh the social, political, and ethical implications they carry for people. After all, fancy mathematical technologies achieve their most significant performance improvements only when humans and machines work together.

Automation and human intelligence don't have to be a binary choice, an either-or dilemma. The ideal solution is to embrace the efficiency AI brings to the world while remaining inclusive of human judgment, preferences, emotions, and agency. The future of AI systems needs to count humans as an essential factor: AI and human intelligence have a bread-and-butter relationship, each enhancing the other. AI systems are built to help humans, and AI needs human intelligence to make its progress meaningful and profound, beyond mere efficiency or correctness.
