Balancing Trust with Innovation
A Framework for Ethical Technologies
This is a translated version of an article that was originally posted in French at Harvard Business Review France, covering the World Economic Forum’s Annual Meeting in Davos.
At Davos, one of the central themes, alongside climate change, was the role of technology in business and society. The implications of using these new tools are broad and expansive, touching everything from cybersecurity, privacy, and artificial intelligence to automation, re-skilling, and the future of work.
As governments debate how to best regulate these technologies on a policy level, business leaders are grappling with a major strategic consideration: the role of AI ethics within their own organizations and how that applies to the tools they are using or creating.
The “Decade of Trust”
In a panel entitled “Walking the Tech Tightrope: How to Balance Trust with Innovation,” the question of what responsibilities businesses have in defining ethical technology practices was a main point of discussion. Specifically, how can organizations balance unlocking new market opportunities while also protecting society from the more harmful applications of these tools?
Ginni Rometty, Chairman, President and CEO of IBM, said that it was the responsibility of business to prove to the public that new technologies could be used in service of the greater good, and that to do that, trust had to be restored, as many people are afraid of how these new tools are being used. “At the heart of this issue is that people are not sure there’s a better future here, when they take all this technology into consideration,” she said, declaring that this new decade had to be “the decade of trust.”
Companies Can No Longer be “Innocent Bystanders”
In order to do that, companies need to be very clear about their values: what they stand for and what they won’t compromise on. “They have to take responsibility for these things, set what their principles are and be willing to be audited against them. Many companies are the ones that are doing this,” Rometty continued. “They are the ones that have this data, collect this data, decide what to do with it and how to handle it.” She emphasized that this change has to be led by business leaders. “You can’t be an innocent bystander on this,” she added.
Joe Kaeser, President and CEO of Siemens AG, called AI ethics an important leadership topic. As one of the founding partners of the Charter of Trust for Cybersecurity, he encouraged business leaders to join the charter and commit to upholding ethical standards of protecting data and privacy. “This is the type of stuff the private sector can do,” he said. “How do we act together as partners in ecosystems? We are in the biggest transformation of all time. This cannot be a bipartisan issue. This is a global matter of responsible people on how to shape the world and explain it to the uneducated what’s in it for them going forward.” He added that the impact of disruptive tech extends beyond geopolitical borders, which is why companies have to work together to implement positive change.
What Every Leader Should Think About
One of the most important things today’s business leaders can do is start to think about the frameworks required to move towards ethical technology best practices. For Ginni Rometty, there are three important factors every business leader should be thinking about.
- Precision Regulation — companies should look at regulating the use of technology rather than regulating the technology itself. Rometty pointed to facial recognition as an example, saying that many consumers use Apple’s Face ID to unlock their devices and would be irritated to lose that functionality if the EU’s proposal to halt all facial recognition technology in public spaces for five years were accepted. According to a new study released by the IBM Policy Lab, 70% of Europeans favor a precision regulation approach for technology.
- Regulations Should be Risk Balanced — not all technology is used in the same way. Rometty compared a restaurant recommendation algorithm with a medical diagnosis algorithm. Leaders should think about the level of exposure and risk associated with the technologies they are creating or using.
- Be Transparent About Bias — as more technologies get released to the public, it is more important than ever to be very clear about how the technology was programmed. For example, knowing whether a medical diagnostic tool was trained on data from the anonymous internet or by five globally recognized medical institutions could be an important factor in influencing consumer choice. IBM recommends that technology be created and implemented in accordance with anti-discrimination laws and consumer protection and privacy statutes.
Without clear ethical guidelines, leaders risk facing severe internal backlash from employees who don’t agree with the organization’s choices. This was the case for Google in 2018, when employees protested the tech giant’s projects with the Pentagon (drone image analysis) and the Chinese government (a censored search engine).
Most importantly, both business leaders said that creating and adhering to cultural values around the ethical implementation of technology was going to be a key competitive advantage moving forward, and that executives should not be overly threatened by the scale of competing markets. “Scale is not the only way you win AI,” Rometty said. “It comes from having the right people with the right skillsets. It comes from creativity, it comes from the way you protect intellectual property. It comes from a whole ecosystem of things.”