What Companies Need to Consider in the Wake of the Algorithm Economy

Businesses are adopting AI at an ever-quickening pace, but they aren't auditing their systems as well as they should.

Eshita Nandini
10 min read · Aug 13, 2020

We are living in an era where algorithms drive the digital economy. To keep up, businesses are turning to machine learning tools on the backend to power revenue streams.

According to Gartner’s 2019 CIO Agenda survey:

Between 2018 and 2019, the percent of organizations adopting AI practices grew from 4% to 14%.

As data continues to be the focal point of decision-making, we will only see more businesses harnessing AI to supercharge certain business functions. With this trend, however, come growing concerns about the gap between decision makers and their ability to defend the inner workings of the algorithms their organizations put into practice.

Algorithms Simplified

By definition, an algorithm is a series of steps taken to complete a task or operation.

In the context of AI, an algorithm is a process that learns from data without being explicitly programmed, through a statistical methodology known as machine learning (ML). An ML algorithm can range in complexity from the few steps needed to fit a single-parameter regression, to a deep neural network in which millions of parameters connect layers of neurons to form relationships that are largely unexplainable with today's technology.

This latter type of model is known as a black-box model, and it is quite common in the practice of AI: the input yields a valuable output, with little to no knowledge of what took place inside the model or how the data in question was processed.
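To make the contrast concrete, here is a minimal sketch (synthetic data, illustrative hyperparameters, scikit-learn assumed) of the two extremes: a single-parameter regression whose one learned coefficient is the whole explanation, next to a small neural network whose thousands of weights explain nothing individually.

```python
# The two extremes of ML model complexity, on the same toy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(scale=1.0, size=200)  # y ≈ 3x + noise

# Interpretable extreme: a single-parameter regression.
# The learned coefficient IS the explanation.
linear = LinearRegression().fit(X, y)
print("slope:", linear.coef_[0])  # ~3.0, directly readable

# Opaque extreme: a neural network with thousands of weights.
# It may predict well, but no single weight explains a prediction.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
n_weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("number of learned parameters:", n_weights)
```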

The Lack of Algorithm Accountability

Traditionally, algorithm accountability has taken a backseat to profit. Researchers and business leaders haven’t needed to make public or defend the details of their proprietary models.

With no outside pressure, why would a company be motivated to pause and contemplate their data practices if their ML pipelines are healthy and customers are satisfied?

For starters, there is the interest in perfecting the product and strategy. Within the data pipeline, you should be able to vouch for quality and the absence of bias at every step. If you can't, you risk letting poor-quality data into the system or creating a heavily biased model that turns away customers.

A famous example came last fall, when Apple released the Apple Card (issued by Goldman Sachs) and customers noticed that female cardholders were being granted lower credit lines than their male counterparts. When questioned, Goldman Sachs was largely unable to explain the disparity and ended up facing no real consequences.

This seems to be the pattern for companies and organizations called out for discriminatory systems: initial exposure by the media, then the issue is swiftly brushed aside while the product stays in production with few meaningful changes on the company's end.

But, as AI becomes democratized, companies should be wary that regulation will follow suit.

Setting the AI Ethics Standard at Your Company

“Garbage in = garbage out” is an overused adage in machine learning that will definitely be thrown at you in an intro class.

There is a common misconception that to remedy model accuracy, you must simply pull in more data. This is not necessarily true: it is far more strategic to have good-quality data than to be able to mine a copious amount of it.

Without being thorough, you risk putting your company in a spot where the data team is struggling with an algorithm that isn't performing optimally, or is bogged down scrubbing messy data into a usable format. It is a running joke in the industry that data scientists spend more of their time cleaning data than on any of the glamorized tasks. While data scientists tend to be trained on clean toy datasets, in practice data rarely arrives clean.
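As a concrete starting point, here is a minimal sketch of the kind of data-quality audit worth running before any training job; the file name, column names, and thresholds are all hypothetical.

```python
# A minimal pre-training data-quality audit (hypothetical file and columns).
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical raw extract

# 1. Missingness: which fields are incomplete, and how badly?
missing = df.isna().mean().sort_values(ascending=False)
print(missing[missing > 0])

# 2. Duplicates: repeated records silently overweight some customers.
print("duplicate rows:", df.duplicated().sum())

# 3. Range sanity: values outside plausible bounds suggest entry errors.
if "age" in df.columns:
    bad_ages = df[(df["age"] < 18) | (df["age"] > 120)]
    print("implausible ages:", len(bad_ages))

# Fail loudly instead of training on a suspect extract.
assert missing.max() < 0.2, "a column is over 20% missing - investigate first"
```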

If you are building a company or running the department that works with data, it is your responsibility to find resources to create thoughtful data processes.

If you're an early-stage company, it is crucial to set this data-quality precedent as the company grows and centers itself around AI technology. If you come from a non-technical background and are starting up a company, it might help to consult experts in this area to ensure your business is running on a healthy pipeline, without being fully hands-off. There is plenty of contention over how much AI should be present in our lives, but it looks like it will become a reality sooner rather than later. We are past the struggle of getting usable information out of these systems, but we are still stuck on the complicated side effects that come with them.

How Does AI Democratization Play Into Ethical AI?

The previous section ties into the goals of democratizing AI: challenging the incorrect perception that AI is highly complicated and unapproachable for those not specifically trained in it. There are several no-code/low-code platforms that can help you build initial business intelligence and machine learning models. These platforms are ideal for early-stage to early growth-stage companies with specific needs like predicting churn rate or customer lifetime value.

However, no-code platforms with pre-built models have limitations: they offer little room for precise parameter tuning and may not scale well as the company and its data grow. That would likely be the time for your company to grow a data department to take care of custom pipelines, but using a platform like Obviously AI for the initial data work is quite effective.

The main challenge for startups has become how to build these machines thoughtfully and empathetically, and less whether it is possible to.

As a small to medium-sized business, you might not be able to allocate many resources to your data team, but you do have the ability to set up a data-powered company ethically and carefully very early on.

Still, there is a fair amount of apprehension about shifting from analog processes toward the AI economy, and what this could mean for privacy and avoiding discrimination. To protect your company and customers, it is important to think through the steps it takes to deploy a model.

Bias Is Possible at Every Step of Building Your AI System

Suppose you are implementing a pipeline for determining credit limits:

  • You frame the problem as “What credit limit should be granted to this customer based on personal and financial factors?” You are trying to optimize for maximum repayment.
  • The model is trained on previously collected data, because the card is unreleased and you have no user data specific to it. You have access to credit-issuing and repayment data on thousands of customers, so you decide to use this to train your model. This data was collected in a way that left a lot of missing data points.

Bias can already be introduced to your system if you didn’t stop to question a few things:

Has your previous data—which will be your training data—been checked for bias? Did you question why there’s missing data?

  • Now, we move on to processing the data. Here, you figure out what to do with the missing data and determine which variables to use in your model. For the instances with missing data points, you choose to drop them entirely from your dataset. You remove “gender” and “race” from the input because you definitely don't want a biased system. However, you did not consider the other variables that will implicitly group genders and races anyway, which happens quite often in the problematic AI systems that have been made public.
  • After some testing, you choose a proprietary model to output credit limits, which will maximize repayment for the company. The model is trained on the data you processed and cleaned in the previous step. You test for error rates, are satisfied, and the model is put into production as the credit card is released.

The model may now be able to discriminate on the basis of race and/or gender. The missing data points could have come from younger customers lacking information on their financial background, or from customers who had to cancel a previous credit card or, because of hardship, have gaps in repaying on time.

Because those data points were dropped and excluded from the model, the system never learned from these instances, forming an unseen bias against new customers with similar backgrounds.
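A toy sketch (synthetic data and hypothetical column names, using pandas and NumPy) can make both failure modes visible: dropping rows with missing values quietly changes who is represented in the training set, and a proxy column can stand in for a protected attribute you removed.

```python
# Two bias failure modes in a credit pipeline, on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
df = pd.DataFrame({"gender": rng.choice(["F", "M"], size=n),
                   "income": rng.normal(60_000, 15_000, size=n)})

# Failure mode 1: suppose credit history is missing far more often for one group.
missing_rate = np.where(df["gender"] == "F", 0.30, 0.05)
df["credit_history"] = np.where(rng.uniform(size=n) < missing_rate,
                                np.nan, rng.uniform(300, 850, size=n))

print(df["gender"].value_counts(normalize=True))           # roughly 50/50 before
print(df.dropna()["gender"].value_counts(normalize=True))  # skewed after dropping

# Failure mode 2: removing "gender" doesn't help if a proxy encodes it.
# Make zip_code correlate with gender 80% of the time, then check how well
# the proxy alone recovers the column you removed.
df["zip_code"] = np.where(rng.uniform(size=n) < 0.8,
                          np.where(df["gender"] == "F", "A", "B"),
                          rng.choice(["A", "B"], size=n))
recovered = np.where(df["zip_code"] == "A", "F", "M")
print(f"gender recoverable from zip_code alone: {(recovered == df['gender']).mean():.0%}")
```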

Machine intelligence emulates the societal prejudice that plagues us, even as it reduces human bias to a certain degree. There are plentiful examples of why AI can be better than leaving humans as the sole decision makers in cases such as policing or determining which high-risk patients should receive care, because human bias is exactly what AI-based recommendations set out to abolish. With this shift, however, we have introduced a different, more complex type of bias that is inherent to how AI functions.

The Call for Open Source and Explainable Models

Algorithms often become questionable when they enter spaces where data is sensitive: finance, healthcare, or the justice system.

Without our direct knowledge, these algorithms have entered systems that should demand more scrutiny, and we are finding that they introduce bias in a more complex manner than human decision making does. The adverse effects fall on parts of society including women and people of color.

The usual proposal for this type of recurring discrimination through AI is to make proprietary modeling obsolete. The most infamous example of a company conducting secretive data operations is Palantir, which filed for an IPO in 2020 and faced a lot of public scrutiny because of its government ties. There hasn't been a need for it to come forward and publicly disclose how it mines or uses data, likely because it works with organizations such as the CIA and the Pentagon.

Publishing your work in the open gives it a better chance of being checked for flaws or places where it could accumulate bias. Popular AI frameworks like TensorFlow and Keras are open source, and people use them and frequently point out bugs and deprecations. The more likely scenario is that your company gets off the ground with no-code/low-code tools, making this something to worry about later down the line when scaling. It is also likely you might not even need black-box AI tools to meet your business needs.

In this paper, Cynthia Rudin, professor of computer science at Duke University, argues for choosing interpretable models over black-box models, especially in sensitive spaces. Of course, there is a compromise in model accuracy when we opt for a simpler model with more explainability. However, a simpler ML model allows the researcher to understand how bias arises and to tune parameters as needed, while doing the same for a highly complex model might be impossible. In most cases, Rudin argues, an interpretable model works well enough for the problem being solved. AI is largely marketed as an unattainable feat, but you do not need deep neural networks to automate internal processes.
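As an illustration of that argument, here is a small sketch (synthetic data, made-up feature names, scikit-learn assumed) of an interpretable model whose entire decision logic can be printed and audited, rather than hidden inside a black box.

```python
# A shallow decision tree: every prediction traces to a few readable rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 5_000
income = rng.normal(60_000, 15_000, size=n)
utilization = rng.uniform(0, 1, size=n)
# Synthetic target: repayment is likelier with higher income, lower utilization.
repaid = ((income > 55_000) & (utilization < 0.6)).astype(int)

X = np.column_stack([income, utilization])
tree = DecisionTreeClassifier(max_depth=3).fit(X, repaid)

# The whole model, printed as human-auditable if/else rules.
print(export_text(tree, feature_names=["income", "utilization"]))
```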

Algorithms are prevalent in our digital day-to-day lives, and for the most part they improve them. It is inevitable that loosely defined problems, uncleaned data, and unintended bias will enter the system somehow, but these should be dealt with before the system is put into production. Otherwise, you end up with facial recognition technology that discriminates against Black people, or a healthcare bot that prioritizes men over women for medical care, and that is plain irresponsible.

Algorithmic Businesses Are Long Overdue for Regulation

Much of society is wholly skeptical of AI technology, and with good reason: the media repeatedly portrays AI as controlling and invasive. Without fully understanding the true capabilities and limitations of the technology we have so far, lawmakers have slowly started to call for regulation, but have fallen short:

  • In 2017, New York City put together an act to combat discriminatory AI systems. It received a lot of backlash because it would essentially have forced public agencies and tech companies to publish their code, raising huge concerns about diminished competitive advantage and heightened security risk. In this case, it was obvious that the drafters of the act hadn't considered all the nuance that comes with AI regulation.
  • In 2019, Congress proposed the Algorithmic Accountability Act, which would allow the Federal Trade Commission (FTC) to enforce impact assessments on companies suspected of deploying biased systems. This was considerably better fleshed out than the 2017 act, but there is still the question of how third-party involvement could affect such a sensitive investigation inside a company.

From these two initiatives, it is clear that the runway to more refined regulation is being laid as companies adopt and deploy AI at a faster rate each year. Although a lot still flies under the radar while AI regulation remains nebulous, we will see more companies facing more scrutiny.

AI Algorithms Affect Consumers in Many Ways

This article should encourage thoughtfulness about data processes: they not only serve as a revenue funnel, but can also affect people in many ways and accidentally jeopardize the company. In the case of Apple and Goldman Sachs, there was little accountability on their end aside from a short statement. As a business leader, this means you should be ready to defend any process or model that leaves customers or users feeling any sort of discrimination. There is no better way to prepare for this than by being involved in laying the groundwork for thoughtful AI.

🤖 Eshita Nandini is currently working on Option Impact at Shareworks by Morgan Stanley. Previously, Eshita studied applied math at UC Merced. Follow Eshita on Twitter here.

🤖 Don’t forget to follow Downsample on Twitter!

🤖 While you’re at it, follow Obviously AI!

🤖 If you have any ideas, submissions, insights you want to provide, email jack@obviously.ai.

🤖 For more reading, check out this 2019 AI Index published by Stanford.

🤖 Or the current controversy going on in the AI bias debate.

🤖 Lastly, don’t forget to follow us on Medium!
