Will Artificial Intelligence Become the New “Plastic”?

Nifesimi Ademoye · Published in Analytics Vidhya · 7 min read · Jun 19, 2021


When you think about it, artificial intelligence and the plastic we use are both ubiquitous parts of modern human existence because of their usefulness to us.

I mean, can you even begin to imagine a world where they didn’t exist?

Now, to fully understand the parallel between these two things, let’s take a brief look at the history of plastic and how it came to be such a big part of the way we live our lives.

Photo by Museums Victoria on Unsplash

Brief History of Plastics

Since the introduction of plastic, production has skyrocketed, reaching 380 million tons in 2015; in fact, half of all the plastic ever made has been produced since 2005. Now, for all the hype we see about recycling, a lot less plastic winds up getting recycled than you might think: only about 8.7% of the plastic generated in the United States gets recycled, with the vast majority ending up in landfills or the environment.

The fact of the matter is that a lot of the plastic surrounding us is not recyclable, and what isn’t recycled usually just ends up in the environment, where it is eventually consumed by humans. Research shows that, globally, people are consuming about 5 grams of plastic every week, the equivalent of a credit card. But things weren’t always this bad. In fact, when plastic production began to grow in the 1950s, so did plastic waste, and with it came public backlash.

Photo by Gayatri Malhotra on Unsplash

By the ’60s and ’70s, organisations began drawing attention to all the packaging waste littering the landscape through ads like this one. The irony was that the organisations producing these ads, like “Keep America Beautiful”, were funded by plastic industry trade groups and leading packaging corporations. By now you are probably confused, asking yourself why the companies producing plastic waste in the first place would be spreading awareness about it, until you realise the reason behind the ads.

You see, the companies reasoned that the best way to get rid of the problem was to make the public think that plastic pollution was the public’s own fault. That way, the responsibility for disposing of the waste would seem to lie with the public and not the plastic manufacturers. This strategy still works today, and it is the major reason why we still think pollution is caused by us, the end users. If these trends continue, research suggests that by 2050 the ocean will contain more plastic than fish.

Photo by Nariman Mesharrafa on Unsplash

The Parallel Between Plastic and Artificial Intelligence

You might be asking yourself how this relates to artificial intelligence. Well, for as long as plastics have been around, there has been a question of how best to deal with the waste they generate after their use. More recently, a similar question has started to emerge about the high levels of carbon emissions it takes to train and deploy deep learning models. In a widely discussed 2019 study, a group of researchers led by Emma Strubell estimated that training a single deep learning model can generate up to 626,155 pounds of CO2 emissions, roughly equal to the total lifetime carbon footprint of five cars.

Illustration of five cars

As a point of comparison, the average American generates 36,156 pounds of CO2 emissions in a year.

Training a version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate, nearly the same as a round-trip flight between New York City and San Francisco for one person. These numbers should be viewed as minimums, the cost of training a model one time through. In practice, models are trained and retrained many times over during research and development.
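To put these estimates in perspective, the underlying arithmetic is simple: multiply the hardware’s power draw by the training time and the datacentre’s overhead to get the energy consumed, then convert that energy into CO2 using the grid’s carbon intensity. Below is a minimal back-of-envelope sketch in Python; the PUE and grid-intensity constants match the figures Strubell’s paper reports using, but the GPU wattage, GPU count, and training time in the example are illustrative assumptions, not numbers from any real training run.

```python
# Back-of-envelope estimate of the CO2 emitted by a training run,
# following the general shape of Strubell et al.'s methodology:
# energy = power draw x hours x datacentre overhead (PUE),
# emissions = energy x grid carbon intensity.

PUE = 1.58               # datacentre overhead; the US average cited in Strubell's paper
LBS_CO2_PER_KWH = 0.954  # US grid average, lbs CO2 per kWh; also from Strubell's paper

def training_co2_lbs(gpu_watts: float, num_gpus: int, hours: float) -> float:
    """Estimate pounds of CO2 for a single training run."""
    kwh = (gpu_watts * num_gpus / 1000) * hours * PUE
    return kwh * LBS_CO2_PER_KWH

# Hypothetical run: 8 GPUs drawing ~300 W each, training for two weeks.
print(f"{training_co2_lbs(300, 8, 24 * 14):,.0f} lbs of CO2")
```

This hypothetical configuration comes out to roughly 1,200 lbs of CO2, in the same ballpark as the BERT figure above, and that is for a single pass; the repeated retraining that happens during research and development multiplies it.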

If you are familiar with the paper cited above, then you might already be aware of Timnit Gebru, an ex-researcher at Google who is still a widely respected leader in AI ethics research, known for co-authoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color. She is also a co-founder of Black in AI, a community of Black researchers working in artificial intelligence.

Timnit Gebru

She and her co-authors wrote a paper that drew on Strubell’s work on the carbon emissions and financial costs of large language models, which found that their energy consumption and carbon footprint have been exploding since 2017 as these models are fed more and more data. The paper presented the history of natural language processing, an overview of four main risks of large language models, and suggestions for further research. It pointed out that the sheer resources required to build and sustain such large AI models mean they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” the paper states.

Why This Paper Matters

Timnit’s paper has six coauthors, four of whom are Google researchers, yet it wasn’t published by Google. According to Jeff Dean, the head of Google AI, in an internal email, the paper “didn’t meet the bar for publication” and ignored too much relevant research; specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias. However, the six collaborators drew on a wide breadth of scholarship: the paper’s citation list, with 128 references, is notably long. “It’s the sort of work that no individual or even pair of authors can pull off,” said Emily M. Bender, one of the co-authors. “It really required this collaboration.” The disturbing thing to me, though, is that the online version of the paper I read does speak to Google’s research efforts on reducing the size and computational costs of large language models, and on measuring the embedded bias of models, although it argues that those efforts have not been enough.

Shortly after the paper was written and submitted for publication, Timnit was given an ultimatum by Google exec Megan Kacholia, who ordered her to retract the paper or else remove her name from its list of authors, along with those of several other members of her team. She replied that she would do so if Google provided an account of who had reviewed the work and how, and established a more transparent review process for future research. If those conditions weren’t met, Timnit wrote, she would leave Google once she’d had time to make sure her team wouldn’t be too destabilized. Google’s response was to fire her.

What Does This All Mean?

Google pioneered much of the foundational research that has since led to the recent explosion in large language models. Google AI invented the Transformer architecture in 2017, which serves as the basis for the company’s later model BERT as well as OpenAI’s GPT-2 and GPT-3. BERT, as noted above, now also powers Google Search, the company’s cash cow. Google has a responsibility to work toward new paradigms in artificial intelligence that do not require exponentially growing datasets or outrageously vast energy expenditures. Emerging research areas like few-shot learning are promising avenues (see the sketch below). The responsibility lies with Google and the other large tech companies to find innovative, carbon-free ways to build better models, because they have the resources and talent to come up with a solution. Bender also worries that Google’s actions could create “a chilling effect” on future AI ethics research. Many of the top experts in AI ethics work at large tech companies because that is where the money is. “That has been beneficial in many ways,” she says. “But we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.”
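As a rough illustration of why few-shot learning is attractive from an energy standpoint: instead of retraining or fine-tuning a giant model for every new task, you reuse one pretrained model and steer it with a handful of examples at inference time. The sketch below only builds such a prompt; call_model is a hypothetical stand-in for whatever language model inference API you have access to, not a real library call.

```python
# A minimal few-shot prompt: a handful of labelled examples are prepended
# to the query so a pretrained model can pick up the task at inference
# time, with no retraining or fine-tuning at all.
examples = [
    ("The movie was a waste of two hours.", "negative"),
    ("An instant classic, I loved every minute.", "positive"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n".join(f"Review: {text}\nSentiment: {label}"
                      for text, label in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

prompt = few_shot_prompt("Great soundtrack, but the plot dragged.")
print(prompt)
# call_model(prompt) would go here; call_model is a hypothetical
# placeholder for a real LLM inference API.
```

The energy saving comes from what is absent: no gradient updates and no new copy of the model for each task, just one forward pass per query.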

Photo by Mika Baumeister on Unsplash

The point of this article is to highlight the destructive tendencies that can take hold when large corporations do not listen to the warning signs in the early days, and I see the same thing happening in the case of AI. We need to take a step back and acknowledge that simply building ever-larger neural networks is not the right path to generalized intelligence. From first principles, we need to push ourselves to discover more elegant, efficient ways to model intelligence in machines. Our ongoing battle with climate change, and thus the future of our planet, depends on it. Right now we are in a catch-up race to figure out the best way to save the environment from plastic pollution. We don’t have to make the same mistakes with artificial intelligence, and we have to work together as a community to make sure we do not go down the same road.

Photo by Ian Schneider on Unsplash

In a recent interview with Marques Brownlee (which you can find below), at timestamp 12:54, Marques asked Sundar Pichai (Google’s CEO) what he wants his legacy in the tech world to be fifty years into the future. One of his answers was to have driven artificial intelligence forward responsibly. I truly hope that in fifty years that aspiration comes true.
