An Analysis on the Ethics of Machine Learning on the Cloud

Introduction to Machine Learning (ML)

Artificial Intelligence (AI) is a powerful tool capable of performing some tasks more efficiently than humans and even traditional algorithms. ML is a subset of AI that enables an algorithm to learn from the data it is given, which in turn makes it more accurate as time goes on.
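To make "learning from data" concrete, here is a minimal sketch of one of the oldest ML algorithms, the perceptron. All names here are illustrative; the point is only that the model starts out knowing nothing and adjusts its weights each time a training example proves it wrong, becoming more accurate as it sees more data.

```python
# A tiny perceptron: weights start at zero and are nudged every time the
# model misclassifies a training example, so accuracy improves with data.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - pred          # nonzero only when the model is wrong
            w1 += lr * error * x1         # nudge weights toward the right answer
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The logical AND function, expressed purely as labeled data:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train_perceptron(data)
```

No rule for AND was ever written down; the model recovered it from examples alone, which is exactly the property that makes ML both powerful and dependent on the quality of its data.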

The Caveat of ML

These algorithms require lots of data and programming expertise to develop. Large companies such as Amazon and Google use ML in their voice recognition software and products (such as Amazon Alexa or Google's Speech-to-Text). YouTube and Facebook use ML in their recommendation algorithms. However, large companies are not the only ones that can derive great use from ML, which has many applications in security, malware detection, prediction-based programs, and more. Smaller companies and individuals could benefit greatly from these algorithms, but hiring software engineers, buying enough storage for data, and organizing that data is often too costly to be feasible. In Deloitte's 2020 State of AI study, 83% of surveyed organizations said they believe AI will be critical to the success of their business's efficiency and reach in the next couple of years. This is where ML on the cloud becomes very convenient.


Introduction to Cloud Computing

The cloud is a computing facility that allows for data storage and services over a network. To better explain the difference between a cloud service and a local one, let's compare Windows Notepad with Google Docs. If you make a note on your computer using Windows Notepad, it is stored and saved locally (on your computer's physical hard drive). If you make a note in Google Docs, that document is stored and saved remotely (on Google's hard drives) and accessed over the internet, so you can open it on any computer as long as you log in to your Google account. Making that Google Doc does not take up space on your computer, but rather on the cloud. Cloud services are any services that store data on the internet rather than a local hard drive. Examples include Google Drive, Amazon Web Services, Zoom, and more. (You can find more information about Cloud Computing here)

Difference between creating a document on a local drive vs. on the Cloud

ML Algorithms on the Cloud

Because the cloud lets individuals run data-heavy programs without storing the data on their own computers, and because it is accessible anywhere with an internet connection, many large companies like Google and Microsoft sell ML algorithms as cloud services. The benefit of ML on the cloud is its convenience: smaller businesses and developers can use these services to run complicated ML algorithms and to store and organize data for their programming goals.

The Downsides

But the convenience of these algorithms comes at a cost. The most obvious cost is money. More importantly, though, they magnify the preexisting problems with ML algorithms.

Technical Issues

Sometimes ML algorithms have technical flaws. For example, many doctors used an AI algorithm that examined biopsy images to detect whether a lesion was cancerous. ML algorithms learn from the data you give them: the algorithm changes and fine-tunes itself based on that data. When a lesion concerned a doctor, the doctor would place a ruler in the biopsy image to measure the tumor. As a result, the algorithm learned to interpret a tumor as dangerous only when a ruler appeared in its biopsy image. This is a serious problem for doctors who must determine whether their patients need cancer treatment. When ML algorithms are available on the cloud, they become much more widespread. In an industry where mistakes can be fatal, the increased spread of possibly faulty algorithms must be taken into account. (You can read more about this case here)
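The ruler failure is an instance of what researchers call shortcut learning. The hypothetical sketch below (not the actual medical system) boils each image down to two features, a genuine one and a spurious one, and uses a deliberately naive learner that picks whichever single feature best predicts the label. Because every malignant training example happens to include a ruler, the ruler wins.

```python
# Deliberately naive "learner": choose the one feature whose value alone
# best matches the labels in the training data.
def best_single_feature(rows, features):
    def accuracy(f):
        return sum(1 for r in rows if r[f] == r["malignant"]) / len(rows)
    return max(features, key=accuracy)

# Hypothetical training set: every malignant case was photographed with
# a ruler, so the ruler predicts the label perfectly (4/4), while the
# genuine medical feature predicts it imperfectly (2/4).
train = [
    {"irregular_border": 1, "ruler_present": 1, "malignant": 1},
    {"irregular_border": 0, "ruler_present": 1, "malignant": 1},
    {"irregular_border": 1, "ruler_present": 0, "malignant": 0},
    {"irregular_border": 0, "ruler_present": 0, "malignant": 0},
]

shortcut = best_single_feature(train, ["irregular_border", "ruler_present"])

# A malignant-looking lesion photographed WITHOUT a ruler is waved through:
new_case = {"irregular_border": 1, "ruler_present": 0}
prediction = new_case[shortcut]   # 0, i.e. "benign", despite the irregular border
```

The model is perfectly accurate on its training data and still clinically useless, which is why impressive benchmark numbers alone cannot certify a cloud-hosted algorithm as safe.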

Systemic Issues

Let's look at a specific example where ML algorithms can be problematic. ML algorithms used in crime-prediction settings such as threat detection on security cameras, predictive policing, and judicial decisions are riddled with negative bias against ethnic minorities. This stems from biased data produced by a systemically racist society. According to a 2016 ProPublica investigation, the data informing an AI system used by judicial courts to predict whether a convicted criminal is likely to reoffend was biased against minorities. ML makes decisions based on the data it is trained on, meaning any bias in the data will be reflected in the algorithm's conclusions. Taking this into account, users of ML algorithms need to be aware of an algorithm's bias and therefore should not make decisions based solely on its output. Another example is ML algorithms used in job application filtering. Many companies use these algorithms to screen job applications without going through them manually, and many of them filter out candidates with a criminal background, preventing corporations from taking a holistic look at all the candidates. With the development of ML on the Cloud, problematic ML algorithms are even more accessible than before, feeding biased conclusions to more users. Without proper knowledge and training regarding ML algorithms' bias, the existing negative effect these algorithms have on ethnic minorities will only multiply.
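The mechanism by which historical bias reaches new decisions can be shown in a few lines. The sketch below is entirely hypothetical: past reviewers rejected every applicant with a criminal record regardless of skill, and a simple model that imitates those historical decisions reproduces the same rejection automatically.

```python
# Naive "training": for applicants with a given feature value, return the
# label that past reviewers most often assigned them.
def majority_label(rows, feature, value):
    labels = [r["hired"] for r in rows if r[feature] == value]
    return max(set(labels), key=labels.count)

# Hypothetical hiring history: having a record meant rejection, even for
# a skilled applicant.
history = [
    {"skilled": 1, "record": 1, "hired": 0},  # rejected despite being skilled
    {"skilled": 0, "record": 1, "hired": 0},
    {"skilled": 1, "record": 0, "hired": 1},
    {"skilled": 0, "record": 0, "hired": 0},
]

# A filter trained on these decisions screens out every applicant with a
# record before a human ever sees the application.
decision = majority_label(history, "record", 1)   # 0 -> auto-reject
```

Nothing in the code mentions fairness or intent; the bias arrives entirely through the data, which is exactly why wider cloud distribution of such models spreads the bias along with them.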


Despite the numerous benefits of ML on the Cloud for small companies and developers, the negative effects are also significant. With ML on the Cloud, smaller organizations can keep up with the efficiency and computing power of large corporate powerhouses. But just as ML can be misused by large corporations, it can be misused by smaller ones. Some may argue that logical errors and bias already exist in all ML algorithms; however, availability on the Cloud amplifies those errors and biases through widespread use. At the same time, powerful tools that can change the course of a company should not be limited to large tech behemoths, and the Cloud can level the playing field between big and small companies. Many therefore suggest that although ML on the Cloud is useful, our society needs to do a better job of recognizing the errors and biases in those ML algorithms.

There is a world that understands the bias in these algorithms and acts accordingly; whether we live in that world is up for debate. Nonetheless, given the immense capabilities of ML algorithms, users should always consider their power to do harm before utilizing them.

More Resources:

ACM at UCLA Medium Articles: ACM at UCLA — Medium

Tech Ethics Workshops: AI Ethics Speaker Series | ACM Teach LA


Champion, Kerry. "Recognizing a Ruler Instead of a Cancer." Menlo ML, 12 Jan. 2020.

Mittal, Nitin. "Deloitte Survey: State of AI in the Enterprise, Third Edition — Press Release." Deloitte United States, 16 Oct. 2020.

"Time, Technology, Talent." Deloitte Insights.

Wykstra, Stephanie, et al. "To Reform Criminal Justice, Design a Racist Algorithm?" Undark Magazine, 30 Sept. 2019.




Ava Asmani

Undergraduate student at UCLA studying Electrical Engineering.