Are you safe?
Most Americans are waking up to the reality of Donald Trump as the 45th President of the United States. Speaker Paul Ryan called it the most incredible political feat in his lifetime, and it certainly shocked the country. The political analysis that inevitably follows any presidential election will be as curious and emotional as any before it, complete with unanswered questions and a fair dose of revisionist history.
In her gracious, powerful remarks, Secretary Clinton called for a peaceful transition of power and reminded us that constitutional democracy requires our participation. In that vein, innovators in any field would be wise to pay attention. The world is getting smaller, and technological advancement continues its insatiable, inevitable expansion. Globalization, connectivity, and the threat that disruption poses to incumbent industries present significant challenges. In response, policymakers must balance the public interest with public sentiment, while accounting for the public resentment expressed during this campaign.
Like the Industrial Revolution, the Internet Revolution is and will continue to be a transformational force, influencing our daily lives in ways that are unimaginable even today. However, the dramatic effect will also lead to social upheaval well before the potential is fully realized. For example, advances in self-driving cars, augmented reality, facial recognition, robotics, and enhanced artificial intelligence will significantly change society, necessitating a response that addresses privacy, digital security, trust, safety, and workforce automation, just to name a few.
To illustrate the point, below are three examples of predictable challenges that could easily lead to overreaction by policymakers as they respond to the passionate will of the people. Without an agile, proactive, and sustained public policy strategy, innovative businesses of all shapes and sizes could soon find themselves reacting to government mandates, public relations and brand management crises, fines, or worse.
The recent botnet attack that affected Netflix, Twitter, Spotify, Paypal, and more than 1,000 other websites may seem trivial in comparison to data breaches at Target, Home Depot, Anthem, and the Office of Personnel Management. But imagine the psychology of cyber attacks entering the home, and the resulting impact on cyber security policy and government regulations. When cyber attacks regularly breach our thermostats, refrigerators, and internet-connected televisions, the public outcry will spike. It will rise even further as hospitals, airplanes, and cars add internet-connected devices to our weekly routines.
Government departments and agencies currently lack broad principles, leading to a patchwork of authorities, laws, and standards. Technological innovation does not fit neatly into the oversight mechanisms of Washington, and that disconnect tends to create problems on Capitol Hill and throughout the Executive Branch.
As cyber attacks increase, today's delicate regulatory environment could tip toward enhanced, even mandatory, standards. This shift is already occurring slowly in more heavily regulated industries like banking and transportation. Banks repeatedly face new reporting requirements and creeping standards associated with ill-defined cyber events. The same can be said of autonomous vehicle technology, as agencies with limited experience in software and digital security feel compelled to fill the regulatory void.
American fears of terrorism are at the highest levels since September 11, 2001, due to the rise of self-radicalized or inspired extremism. Terrorist groups leverage the connectivity of social media just like any other enterprise. The difference, of course, is the convergence of online speech and physical violence. This reality forces government and industry to wrestle with new dilemmas. In particular, social media companies are facing increasingly sophisticated lawsuits, and the possibility of new laws that compel unprecedented and unwelcome business practices.
Earlier this year, families of the victims of unrelated terrorist attacks filed lawsuits against Google, Twitter, and Facebook, alleging that the platforms knowingly provided material support to the perpetrators. Although it is difficult to prove causation in such cases, the increased possibility of liability already threatens brand equity. If the United States suffers another major terrorist attack, policymakers in Washington will be forced to react to the predominant public sentiment, possibly creating new requirements for social media companies to detect, identify, apprehend, or sanction complicated forms of abuse — a costly and difficult engineering feat these companies know all too well.
By now, technology companies and policymakers are at least somewhat familiar with the European Union’s “right to be forgotten,” a reference to a data protection rule that enables Europeans to petition for removal of personally damaging information from the internet. Despite the profound implications of such a policy, the right to be forgotten may pale in comparison to the consequences of the newly proposed “right to explanation,” or clarification about algorithmic decision-making.
Taking effect in 2018, Europe's right to explanation could establish a prohibitive pacing mechanism for the digital frontier. No doubt, it's important to prohibit unlawful, intentional discrimination, and even unrecognized or unintentional bias deserves acknowledgment and careful resolution. But hasty regulations will lead to costly compliance issues and unintended consequences for innovation. Enhanced machine learning is dynamic and, by definition, self-learning. Therefore, requiring the auditability of enhanced algorithms, or certification of the integrity, accuracy, and neutrality of all data, may simply be unrealistic as neural networks and artificial intelligence push the boundaries of human understanding.
To be charitable, most politicians are not comfortable with advanced technology, and few grasp its inherent complexity. Complicating matters further, all elected officials are inundated with pleas for representation on every issue imaginable. After all, that is why we send them to Washington. But politicians simply do not have time to become experts in everything. Nevertheless, our system of government entrusts them with outsize influence over the activities of government: the debate or conflict among individuals or parties having or hoping to achieve power.
In order to protect existing gains and future investments, and to push the digital economy further, businesses must endeavor to inform government policymakers. They must engage so that policy decisions remain balanced and resist reflexive populist temptations. If not, the apparent winners of the Internet Revolution will be assailed, and the broader opportunities that technology presents will be held in abeyance for innovators, investors, and individuals alike. Technology companies will stagnate, as will incumbent industries seeking to leverage new technologies to keep pace with foreign competitors, shifting demographics, and modern user expectations.
The future holds considerable, consequential decisions about technology. Fortunately, the associated challenges are all human choices. That is not to say they are easy, or easily influenced, particularly in light of the rising populist sentiment that the recent presidential campaign brought to the mainstream. But we make choices as a society through elections and representative government, a process that relies on informed, persuasive advocates to advance the public interest. In the weeks ahead, private companies and their investors would be wise to ask if they feel safe in the current political environment. If not, it might be time for a public policy pivot.
Brian Miller is a strategic advisor to Public Sphere, a boutique consulting firm serving clients in Washington, Boston, and San Francisco.