Artificial intelligence and corporate social responsibility.

Albert Vilariño Alonso
Feb 12, 2018


Photo by Alex Knight on Unsplash

NOTE: This article was first published in Spanish and can be found here.

We often hear news of new technologies that will perform (or already perform) tasks previously carried out by humans more efficiently, while at the same time bringing a variety of debates and associated risks.

Most of the jobs that exist today could disappear within decades. As artificial intelligence surpasses humans at more and more tasks, it will replace them in more and more jobs.

If current trends and new inventions in technology, computing and knowledge are anything to go by, the horizon does not look very promising for employment, yet the majority of Spain's active population does not seem to feel threatened in this regard.

According to the Infojobs report "State of the labor market in Spain", published this May, 76% of the active population does not believe that automation and new technologies will endanger their jobs.

The result differs, however, depending on whether or not the respondent is working. Among the unemployed who answered the survey, 40% believe their future work is at risk because of automation, while among the employed the figure drops to 20%.

Whether the changes are profound or shallower, and whether they come about over a shorter or longer period, they will end up happening. This will have a very important impact not only on job losses but also on how companies manage that issue and the other problems that may come with the use of new technologies, problems that will affect their transparency, the trust and credibility they convey to their stakeholders, and therefore their sustainability.

The so-called Industry 4.0 has already arrived, and companies do not seem very concerned about how they should adapt their corporate social responsibility to face the challenges and debates that ought to be put on the table.

Without going any further, a Google search in Spanish for the keywords "artificial intelligence and corporate social responsibility", with or without adding "industry 4.0", returns no results that explore the subject in any depth, which I find quite worrying.

Artificial intelligence: can you be accountable for something whose inner workings nobody understands?

For those not well versed in these topics, the title of this section may sound absurd: how can it be that nobody understands how artificial intelligence works? Well, I regret to tell those who did not know that it is neither absurd nor something I have invented myself.

The development of systems based on artificial intelligence and so-called deep learning creates "black boxes" (named for their opacity, not to be confused with the flight recorders of aircraft and the like) that generate good results, but humans do not know very well how those results were calculated, as we can read in this article by Business Insider.

There are already, for example, algorithms based on this technology that decide whether to grant a mortgage or a loan, who is the right person in a recruiting process, or which stocks to invest in, and in what amounts, to obtain a good return on the stock market.

These algorithms are not at all simple. They start from the premise that, after the initial programming, the algorithm is given cases to solve. Humans evaluate and validate the results, and the information on whether it got them right or wrong is fed back into the algorithm, so that it learns and reprograms itself until, by the end of the learning process, it finds solutions even more accurate than human ones. But this turns the algorithm into something that cannot be scrutinized: we cannot know the exact reasons that led it to a given solution or result.

In summary, instead of writing code, we feed data to a generic algorithm, and it constructs its own logic from the data it receives.

It is more or less like teaching a person: at the end of the learning we get correct results, but we cannot scrutinize the person's brain to know how he or she reached a particular result.

We can ask a person why and how he or she arrived at a decision or result, and he or she can tell us; the machine cannot, and will simply be a black box that spits out results.
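To make that contrast concrete, here is a minimal sketch of the "feed data instead of writing code" idea, using scikit-learn's MLPClassifier as a small stand-in for a deep-learning system. All applicant features and numbers are invented for illustration:

```python
# A toy sketch of "feeding data instead of writing rules". All names and
# numbers below are invented; a small neural network stands in for a
# deep-learning system.
from sklearn.neural_network import MLPClassifier

# Each row is a loan applicant: [income (k EUR), debt (k EUR), years employed]
X = [[45, 10, 5], [30, 25, 1], [80, 5, 12], [22, 30, 0], [60, 15, 8], [28, 28, 2]]
y = [1, 0, 1, 0, 1, 0]  # past outcomes: 1 = repaid, 0 = defaulted

# Nobody writes an approval rule; the network derives its own from the data.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X, y)

# The model delivers a verdict, but no reasons: its "logic" is numeric weights.
print(model.predict([[40, 12, 4]]))  # 0 or 1, with no explanation attached
print(model.coefs_[0])               # a 3x16 weight matrix, opaque to a human
```

Even in this toy case, asking "why" means staring at weight matrices; in a real deep network there are millions of them.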

Among the problems that can arise from depending on decisions made by black boxes are those that Enrique Dans discusses on his blog.

One of them is the lack of transparency. The tasks that were previously human, and for which artificial intelligence algorithms are now being created, need to be transparent. We need to know why we were not granted a loan, so that we can improve our chances of being granted one; why a car's autopilot decided to crash into a tree; or the reasons behind any other decision that matters to us and was made by an artificial intelligence.

Another problem is the possible introduction of biases. We might think that algorithms will be more objective than people when deciding things, but they will be only if they have been fed objectively, both at the beginning and as the algorithm learns.
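A toy sketch of how that happens, with entirely invented hiring data: if the historical decisions used as training labels were biased, the model faithfully learns the bias and repeats it as if it were an objective rule.

```python
# All data below is invented. Suppose past hiring decisions (the labels)
# systematically rejected candidates from postcode "B": a model trained on
# those decisions reproduces the bias and presents it as an objective rule.
from sklearn.tree import DecisionTreeClassifier

# Features per candidate: [years of experience, postcode (0 = "A", 1 = "B")]
X = [[5, 0], [6, 0], [2, 0], [5, 1], [6, 1], [7, 1]]
y = [1, 1, 0, 0, 0, 0]  # historical outcome: only postcode-A candidates hired

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two candidates with identical experience, differing only in postcode:
print(model.predict([[6, 0], [6, 1]]))  # [1 0]: the learned "rule" is the bias
```

The algorithm has done nothing wrong in a technical sense; it has simply optimized against the data it was given, which is exactly the point.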

As we can see, this is a very complex issue. As of next summer, the European Union could require companies to be able to give users an explanation of decisions made by automated systems.

This could be impossible, even for systems that seem relatively simple on the surface, such as applications and websites that use deep learning to serve ads or recommend songs.

The computers that run those services have been programmed, and have learned, in ways we cannot understand. Even the engineers who build these applications cannot fully explain their behavior.

Are organizations prepared to act in a socially responsible way in the face of this new technological horizon?

Artificial intelligence raises a series of questions on social, economic, political, technological, legal, ethical and philosophical issues.

Although the aforementioned algorithms can be used by companies such as banks, car manufacturers and so on, this is not an issue that concerns only the private sector. It also affects the public sector: the State, administrations, armies, and any other entity in which they can be used (virtually anyone).

Beyond the public sector and its own idiosyncrasies, will companies be able to be accountable for these new technologies? How will they redefine their corporate social responsibility to deal with new paradigms like the one taking shape with the massive entrance of artificial intelligence into our lives? How will they manage the introduction of robots and the consequent displacement of their human workforce?

Many questions, for which there appear to be few answers, at least publicly; questions that go beyond the debate over whether companies should pay taxes on their robots or whether everyone out of work should receive a guaranteed basic income.

A near future awaits us that is as exciting as it is disturbing, and the fact that it is not far off should serve as a wake-up call for official bodies and organizations to regulate the transition to this new economic model in the best possible way.

Companies, too, should get down to work on changes to their CSR policies and to the way they relate to their stakeholders, because the areas involved can raise specific issues that, until now, nobody was considering.

Given the opaque workings of artificial intelligence and the large number of people who will lose their jobs because of it, it seems to me that companies have an arduous task ahead if they want to be truly socially responsible.

Finally, I recommend that the reader take a look at the nine main ethical questions posed by artificial intelligence and do a mental exercise on how organizations should tackle those issues that affect them.



Albert Vilariño Alonso

Consultant in Corporate Social Responsibility, Sustainability, Reputation and Corporate Communication, and integration of people with disabilities.