Reading Reflection 4

Bxteng
Published in RAISE Seminar SP21
May 4, 2021

I had never thought to be concerned with the environmental cost of model training, so it was intriguing to see how much importance the paper ascribes to this particular area. Given that I rarely consider the direct environmental toll that computing takes, and that I have never had a conversation with anyone about it, I now see that this is an area of computer science that does not receive enough regard.

The researchers discuss carbon emissions and encourage us to strike a balance: enough data to train our models well, but not so much that training takes an unnecessary toll on the environment. This is interesting to think about because any standard we establish would need to be rebalanced every few years, since technology will undoubtedly keep advancing and more processing power will come from the same unit of energy. If we did create environmental regulations that cap how much data and compute a "good enough" language model may use, I wonder whether, once our hardware becomes more energy-efficient, we would grow complacent and see no need to rebalance training efficiency against energy usage again, simply because the extra headroom would let us train our models with more resources.
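To make that trade-off concrete, here is a back-of-envelope sketch of how training emissions are commonly estimated: energy is roughly accelerator count times average power draw times hours times the datacenter's power usage effectiveness (PUE), and emissions are that energy times the grid's carbon intensity. Every number below is an assumption chosen purely for illustration, not a measurement of any real training run.

```python
# Back-of-envelope estimate of training emissions.
# All values are illustrative assumptions, not measurements.

gpu_count = 64             # assumed number of accelerators
gpu_power_kw = 0.3         # assumed average draw per GPU, in kilowatts
training_hours = 24 * 14   # assumed two-week training run
pue = 1.5                  # assumed datacenter power usage effectiveness
grid_kg_co2_per_kwh = 0.4  # assumed carbon intensity of the local grid

# Total energy consumed, including datacenter overhead (PUE).
energy_kwh = gpu_count * gpu_power_kw * training_hours * pue

# Emissions follow directly from the grid's carbon intensity.
emissions_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Energy: {energy_kwh:,.0f} kWh")
print(f"Emissions: {emissions_kg:,.0f} kg CO2")
```

Under these assumed numbers the run comes out to a few tonnes of CO2, and halving `gpu_power_kw` (doubling hardware efficiency) halves the total. That is exactly the headroom I worry would invite complacency rather than a fresh rebalancing.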

The paper also touches on environmental racism: the language models we are developing do not directly benefit the people who pay the highest environmental cost for them. It is true that many marginalized communities bear the brunt of climate change, which is accelerated by the carbon emissions of many large corporations and government bodies. Still, I find it a little far-fetched to draw a direct line between, for example, "the residents of the Maldives (likely to be underwater by 2100)" and the emissions produced by training language models, and then to chastise ourselves for failing to ensure such communities receive a slice of the benefits of our language model research.

There are a plethora of reasons that English is one of the most researched languages in natural language processing, not least that it is the most widespread language, it has a short alphabet, and many of the largest research institutions in the world are in countries where English is the primary language. I am confident that focusing our efforts on English, for now, will let us accumulate enough knowledge to develop high-quality language models for other languages more effectively in the future. In other words, I see what we are doing now (training English language models whose carbon emissions disadvantage marginalized communities) as ultimately the best option for those communities in terms of receiving the benefits of our research.

That said, after reading the section of the paper that emphasizes the need for cultural and linguistic appropriateness in any language model, given its user base and social context, I want to note that the models we develop in English and then apply to other languages will need input from that language's cultural and linguistic experts, who may very well be native speakers. Perhaps such a professional role will become more common as natural language processing branches out into other languages. These experts could also help us curate our datasets and thereby be more energy-efficient.

Dreaming of what the future could bring, I wonder whether our language models could one day be so advanced that they personalize their communication to each person who uses them. The paper notes that humans "model each others' mental states as they communicate," using the other person's body language and what they know about that person's beliefs to convey an idea effectively and efficiently. Perhaps one day our computers will be able to read our facial expressions and scan our brainwaves so as to mimic the beautiful conversational ability of humans.
