A Tech Governance Lesson from Facebook’s “Uber” Moment

Culturati Team
Culturati: Magazine
May 31, 2018

By Andrea Bonime-Blanc, CEO and Founder, GEC Risk Advisory

Cyber Breaches…Cyber Security.

Data Theft…Data Protection.

Human Intelligence…Artificial Intelligence.

Digital Disruption…Digital Transformation.

Espionage…National Security.

Democracy…Authoritarianism.

Us…Them.

These are just a few of the key concepts we are dealing with in our turbulent new world order (or, as Richard Haass, author of the prescient A World in Disarray, might put it: world disorder). As the geopolitical risk analyst extraordinaire Ian Bremmer argues in his brilliant and disturbing new book Us vs. Them: The Failure of Globalism, we are living in a multifaceted “us v. them” world.

The Facebook/Cambridge Analytica story fits the narrative of all of the above themes; it is a story that reflects the risks, opportunities, dangers, pitfalls, values and evils of our time. Facebook is now having its “Uber” moment: a moment of reckoning on governance and culture occasioned by a scandal (or an accumulation of risks and scandals). Embedded in this story is a vital lesson. As the world becomes more enthralled with and dominated by artificial intelligence (AI), machine learning (ML) and related technologies in almost every aspect of our lives (often in ways that are seamless and invisible to most of us most of the time), it becomes ever more incumbent on all of us, especially the tech firms in partnership with the public sector, to develop technological governance guardrails and parameters. In other words, as we build these amazing new technological capabilities, we must simultaneously build commensurate and impactful governance and culture within and across organizations to manage this explosion of game-changing capabilities ethically, safely, socially responsibly and legally.

And these issues go to the heart of our democracies, politics and geopolitics. In recent weeks and months, we have watched several of the concepts listed above play out around possible interference in the 2016 U.S. election by foreign and domestic actors seeking to undermine American democracy. And the US is not alone: revelations over the past two months show that Cambridge Analytica (the data firm at the heart of the unfolding Facebook “weaponized” social media scandal) may have been illegally or unethically involved in gathering and manipulating data in dozens of political campaigns across numerous democracies (including the Brexit vote). At the time of this writing, Cambridge Analytica had declared bankruptcy, no doubt because of the existential reputation risk implosion it had experienced.

The sad truth is that some of the global tech darlings may have allowed, unwittingly or, more alarmingly, wittingly, situations such as the one Cambridge Analytica appears to present. Or they have blissfully ignored these issues, giving no consideration to the guardrails or governance that could prevent manipulation or outright abuse of data and data privacy (ultimately, perhaps, affecting fundamental democratic rights), all in the blind and unabashed pursuit of a strictly financial bottom line.

Facebook is now experiencing what I would call its “Uber Moment”: after months, even years, of ignoring or dismissing warnings, investigations, facts and figures, the situation has catalyzed into a moment of reckoning for its leadership. At Uber, the founding CEO, Travis Kalanick, was ultimately removed and replaced with more “grown-up” leadership in Dara Khosrowshahi, who appears to be doing a good job turning Uber’s culture around, although even he has had his challenges. It is not clear that the same will happen at Facebook, but enough is going on to create a governance and culture moment similar to, and even larger than, Uber’s (given Facebook’s size, publicly traded status and the ubiquity of its products), one that Facebook’s board and leadership would be wise to take deeply seriously and for the long haul.

As with most companies that get into trouble, Facebook’s new attention to governance, ethics and risk stems from the massive reputation risk spotlight now trained on the company, a spotlight that is also hitting its financial bottom line after a long accumulation of issues it should have managed proactively but didn’t. (For an in-depth practical analysis of the concept of “reputation risk” and its many aspects, see my 2014 book, The Reputation Risk Handbook: Surviving and Thriving in the Age of Hyper-Transparency.) And the backlash (or what some are calling “tech-lash”) is occurring not only in the US but even more so, and with greater enforcement teeth, in the EU. Improved, more systematic governance and culture must now become the central themes that Facebook (and other tech firms) manage, or they risk damaging their business models and maybe even their long-term viability.

Though it represents a limited snapshot of only one kind of financial impact, consider Facebook’s stock price performance over the past six months. While the stock has more or less recovered from plummeting almost 20% over a few days in March 2018, when the Cambridge Analytica revelations first came out, that drop is a sign worth noting, and it represents only one aspect of the reputational and financial damage that could reverberate over time.

Facebook has also taken non-financial (reputational) hits, as illustrated by a SurveyMonkey survey published in late March by Axios. The bottom line of that survey: Facebook suffered a far more vertiginous fall in favorability than any other major tech firm over the six months preceding the Cambridge Analytica scandal.

There are other financial and non-financial impacts we have yet to see from this story, not the least of which is the multi-million-dollar (and growing) global lobbying, governance, risk management and reputation salvaging operation that is currently underway and will last for the foreseeable future. For a summary of some of the key ESG issues in AI, see the piece I wrote for Ethical Boardroom’s Winter 2018 issue, “Artificial Intelligence and Reputation Risk: An ESG Perspective,” which lists key ESG issues relating to AI and related technologies and illustrates how the tech issues of our day are stretching simpler, older categories of issues, risks and opportunities.

In the midst of all this, Facebook also has a real reputation opportunity. If it manages the fallout from the Cambridge Analytica situation well, doing all the right things needed to build organizational resilience (expanding transparency, governance, ethics, risk management and trust vis-a-vis its major stakeholders: users, “data providers”, regulators, governments), it can yet overcome its current challenge and create sustainable, more resilient profitability.

The lesson from Facebook for all of us is stark and clear: we must take a collective deep breath and establish close, productive private/public partnerships to develop clear guidelines and guardrails around the design, gathering and use of data through AI, ML and other new technologies. This is a shared responsibility of all stakeholders: government, business, society, NGOs and the citizenry at large. Failing to seize this responsibility and deal with it immediately and head-on will invite continuous and dangerous attacks on, and degradation of, key pillars of democracy, the rule of law, and other foundations of social safety and security.

We are probably witnessing just the tip of the iceberg on these issues and risks. And here I would raise both the promise and the specter of the use, and possible abuse, of AI, ML, deep learning and other technologies (robotics, nanotechnology, biometrics, etc.) in all of this data gathering, organizing, development, delivery, learning and output. As experts have been saying for a few years now, these new technologies hold both some of the greatest dangers and some of the greatest promise of our time. Many of these governance, ethics and social issues are discussed in my new coauthored book, The Artificial Intelligence Imperative: A Practical Roadmap for Business.

Dr. Andrea Bonime-Blanc

Dr. Bonime-Blanc is founder and CEO of GEC Risk Advisory, a strategic governance, risk and ethics advisor, board member, and former senior executive at Bertelsmann, Verint and PSEG. She is the author of numerous books, including The Reputation Risk Handbook (2014) and Gloom to Boom: How Leaders Transform Risk into Resilience and Value (2019), and the coauthor of The Artificial Intelligence Imperative (2018). She serves as Ethics Advisor to the Financial Oversight and Management Board for Puerto Rico, start-up mentor at Plug & Play Tech Center, life member of the Council on Foreign Relations and faculty at the NACD, NYU, IEB (Spain) and IAE (Argentina). She tweets as @GlobalEthicist.

More information on The Artificial Intelligence Imperative.

More information on Cyber, Digital Governance and Risk Management.

Contact me at abonimeblanc@gecrisk.com

Visit our website at GECRisk.com

Follow me on Twitter @GlobalEthicist

Follow me on LinkedIn
