No One Left Behind

Making sure that everyone can benefit from the Connected Intelligence Revolution is a complex, multifaceted challenge.

Russell McGuire
Dec 22, 2020

One of the online classes I took this year was “Robots in Society: Blessing or Curse” offered by Delft University of Technology. Last week I was informed that the final paper I submitted for the course had been selected for their “Hall of Fame”. Here is that paper.

The Connected Intelligence Revolution is the collision of advanced data processing technologies with massive amounts of available data, resulting in new ways that we see the world, anticipate the future, make decisions, and take action.

There are four core building blocks of this revolution:

  • The Internet of Internets: the rapidly growing interlinked webs of unstructured content, social relationships, connected devices, and structured data which provide timely and timeless information about virtually everything, from anywhere, and at any time.
  • Networked Computing Infrastructure: the global collection of digital hardware and connections that enable data collection, sharing, and processing.
  • Analytical Software: complex and evolving algorithms which consume structured and unstructured data, identify patterns and trends, predict likely future occurrences, evaluate potential options, and recommend specific actions.
  • Real World Interfaces: natural and intuitive human interfaces and software controlled machines that translate decisions into actions.
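
To make the interplay of these building blocks a little more concrete, here is a deliberately tiny, hypothetical sketch (the readings, names, and threshold are invented purely for illustration): simulated sensor readings stand in for the Internet of Internets, a trivial trend calculation stands in for the analytical software, a threshold rule stands in for a real-world interface, and the computer running the script plays the part of the networked computing infrastructure.

```python
# Illustrative toy only: a minimal "sense -> analyze -> decide -> act" loop.
# All names, data, and thresholds are hypothetical.
from statistics import mean

def collect_readings():
    """Stand-in for the Internet of Internets: data arriving from many connected sources."""
    return [21.5, 22.1, 23.8, 24.9, 26.3]  # hypothetical temperature readings

def analyze(readings):
    """Stand-in for analytical software: spot the trend and predict the next value."""
    step = (readings[-1] - readings[0]) / (len(readings) - 1)  # average change per reading
    return {"average": mean(readings), "predicted_next": readings[-1] + step}

def decide_and_act(insight, threshold=25.0):
    """Stand-in for a real-world interface: translate the decision into an action."""
    return "turn_cooling_on" if insight["predicted_next"] > threshold else "no_action"

insight = analyze(collect_readings())
print(insight)                  # {'average': 23.72, 'predicted_next': 27.5}
print(decide_and_act(insight))  # turn_cooling_on
```

Real systems replace each of these stand-ins with something vastly more capable, but the basic loop of collecting data, finding patterns, predicting, and acting is the same.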

These advances promise tremendous benefits to businesses, individuals, and society as a whole. But they also threaten to exacerbate existing societal issues and introduce new problems hurting many people not well positioned to participate in the revolution.

How can we ensure that no one gets left behind in the connected intelligence revolution?

I propose this framework:

[Figure: framework for ensuring no one is left behind in the Connected Intelligence Revolution]

At the top I identify our goal. At the bottom, I identify principles that more tangibly define what needs to be true to achieve that goal. In the middle are three domains in which I believe we need to develop plans to achieve the goal:

  • Society
  • Industry
  • Government

Let’s look at each of these elements in turn.

As in any technology revolution, there will be winners and losers in the Connected Intelligence Revolution. Companies that understand the changing landscape and adapt appropriately will be positioned to thrive, while those that ignore the warning signs likely will struggle and perhaps even fail. That is the nature of healthy competitive markets and not a general cause for concern.

However, these technology shifts, and the expected impact on how businesses operate, are also likely to impact individuals who by themselves are not in a position to adapt, and therefore are at risk of increasingly struggling to thrive (and in some cases even survive). Our goal should be to ensure that the Connected Intelligence Revolution creates opportunities for all to flourish and that none are left behind without opportunity to participate.

In their book Competing in the Age of AI¹, Marco Iansiti and Karim R. Lakhani look back to the Industrial Revolution and the resulting Luddite movement as a warning for how we must help everyone make the transition to the new economy. “Just as we are seeing now, the Industrial Revolution upset the status quo, driving the obsolescence of traditional capabilities and manufacturing strategies and creating new ethical dilemmas.”

The authors specifically identify ethical challenges in five categories:

  • Digital Amplification: the “echo chamber” effect which intensifies human bias, discord, and misinformation.
  • Algorithmic Bias: data selection bias, labeling bias, and bias resulting from the goal defined for an algorithm (e.g. maximize profits).
  • Cybersecurity: data breaches, and hijacking digital platforms for evil purposes.
  • Platform Control: as digital networks gain almost limitless power, intentional or unintended abuse of that power at the expense of subpopulations.
  • Fairness and Equity: the ability of those increasingly powerful platforms to define the winners and losers in this new economy.

Given our goal of making sure no one gets left behind, what are the foundational principles on which we should focus?

The technologies powering the Connected Intelligence Revolution promise to dramatically improve the world, but are also widely recognized as representing a threat to humans and society as a whole. Robots, especially, have captured people’s imagination as representing an existential threat to humanity. Science fiction writers and filmmakers have made the most of these fears, further fueling paranoia about this threat.

As early as 1950, Isaac Asimov proposed “Three Laws of Robotics”²:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Others have sought to improve on this framework. Most notably, in 2010, the UK’s Engineering and Physical Sciences Research Council (EPSRC) partnered with the Arts and Humanities Research Council (AHRC) to develop “Principles for designers, builders and users of robots”³:

  1. Robots should not be designed as weapons, except for national security reasons.
  2. Robots should be designed and operated to comply with existing law, including privacy.
  3. Robots are products: as with other products, they should be designed to be safe and secure.
  4. Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  5. It should be possible to find out who is responsible for any robot.

In reality, although the powerful physical presence of robots can be intimidating, I believe that artificial intelligence (A.I.) and machine learning (including the algorithms directing those powerful robots) represent a much greater risk to our well-being and the health of our communities. Microsoft’s CEO, Satya Nadella, in a 2016 article for Slate magazine⁴, outlined six principles and goals for artificial intelligence:

  1. A.I. must be designed to assist humanity: As we build more autonomous machines, we need to respect human autonomy.
  2. A.I. must be transparent: We should be aware of how the technology works and what its rules are.
  3. A.I. must maximize efficiencies without destroying the dignity of people: It should preserve cultural commitments, empowering diversity.
  4. A.I. must be designed for intelligent privacy — sophisticated protections that secure personal and group information in ways that earn trust.
  5. A.I. must have algorithmic accountability so that humans can undo unintended harm.
  6. A.I. must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate.

All of these laws and principles have their own merits. Taking a more holistic view of all of the technologies involved in the Connected Intelligence Revolution leads me to a set of foundational principles that draws from the above and perhaps goes a bit further:

  • Do No Harm: To People or the Planet. Connected Intelligence systems shouldn’t be used as offensive weapons. Neither should they be used to destructively exploit creation.
  • Lift Up, Don’t Push Down: Increase Dignity & Quality of Life. As Nadella noted, modern technologies have the potential to significantly enhance life. In his article, he used the example of a blind Microsoft engineer who developed technology to audibly describe the world around him, enabling him to more easily fully participate in life. Unfortunately, digital technology can similarly hurt people relationally and emotionally. Machines can replace workers, driving people into poverty and dependence on others. Connected intelligence systems should enhance lives, not damage them.
  • Embrace Fairness: Eliminate Bias & Unjust Discrimination. Almost by definition, intelligent systems will discriminate between good and bad choices, but those decisions should not be unfairly biased against people based on who they are, what they believe, where they live, or who they associate with.
  • Defend Privacy: Transparency, Control, Security. To me, one of the scariest aspects of the Connected Intelligence Revolution is how much systems know about me and how bad actors could potentially use that information to hurt me (identity theft, fraud, intimidation, physical safety). Systems should be designed to proactively and aggressively defend the privacy of those whose data they steward. People should be able to easily learn what data is being maintained about them, should be able to control what is done with that data (including destroying it), and should expect that the data is protected by robust security systems and protocols.
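
To illustrate what transparency and control could look like in practice, here is a minimal, hypothetical sketch (the class and method names are mine, not drawn from any real system): a person can ask what data is held about them and can have it destroyed. A real implementation would of course add authentication, audit logging, and encrypted storage to satisfy the security leg of the principle.

```python
# Hypothetical sketch of the "Defend Privacy" principle: transparency and control.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class PersonalDataStore:
    """Toy store honoring access (transparency) and erasure (control) requests."""
    records: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def access_request(self, person_id: str) -> Dict[str, Any]:
        """Transparency: return a copy of everything held about this person."""
        return dict(self.records.get(person_id, {}))

    def erasure_request(self, person_id: str) -> bool:
        """Control: destroy the person's data on request; True if anything was removed."""
        return self.records.pop(person_id, None) is not None

store = PersonalDataStore({"alice": {"email": "alice@example.com", "interests": ["robotics"]}})
print(store.access_request("alice"))   # the person can see what is held about them
print(store.erasure_request("alice"))  # ...and can have it destroyed
print(store.access_request("alice"))   # nothing remains -> {}
```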

Unfortunately, these principles won’t be achieved simply because people can nod and agree that they sound like a good idea. Systems are fragile and people are flawed. We all are tempted to passively or actively benefit from vulnerabilities in the architecture.

We need to develop proactive plans if we hope to see these principles embraced and our goal achieved.

All of us have a role to play in ensuring that no one gets left behind.

While none of us have the resources to individually bring about the Connected Intelligence Revolution, we will all be impacted by it. I believe that there are three areas where we need to take an active role in preparing for this revolution:

  • Awareness: Each of us needs to take personal responsibility to learn as much as we can about what is coming. I don’t mean that every citizen should learn to program in Python, configure autonomous systems, or deploy sensor networks. Of course, each of us should strive to gain skills that will enable us to contribute to and participate in the new economy, but each of us will have different interests, abilities, and opportunities that will shape what we pursue.
  • Appreciation: Part of that awareness should be an appreciation of how these technologies can improve our lives. We should know what we hope to gain as the Connected Intelligence Revolution advances. As we see the world around us changing, we should look for the good in those changes and seek to improve our own lives and the lives of others. Of course, as disconnected individuals it will be hard for each of us to gain awareness, build skills, and grow in our appreciation for these technologies. Society as a whole, including industry and government (as we’ll discuss), will need to act in a coordinated fashion to bring all segments of the population into this revolution.
  • Accountability: But perhaps the most important role that we, as individuals and as a society, can play in ensuring that no one gets left behind in this revolution is that of accountability. We need to hold industry and government accountable to embrace the principles I’ve outlined above and to actively participate as I’ve described below. Companies and governments should provide easy mechanisms for feedback from the public. Of course, social media and even peaceful protests provide opportunities to raise grievances if organizations aren’t responsive to concerns that have been raised, but independent watchdog organizations like the Algorithmic Justice League can also play a credible and influential role in holding industry accountable. Those in power need to remember that the world is watching.

The largest contributors to the development of the Connected Intelligence Revolution will be technology innovators and large corporations. These industrial players also have the greatest opportunity to exploit others for their own gain in this new world.

Milton Friedman famously wrote an article titled, “The Social Responsibility of Business is to Increase Its Profits” for the September 13, 1970 issue of the New York Times Magazine⁵. His words memorialized what many businessmen (and economists) had been thinking — that being in business is all about making money and nothing else. In fact, in the U.S. today, we have such a litigious society that corporate executives and board members have the Shareholder Wealth Maximization principle pounded into their heads⁶. The Connected Intelligence Revolution will likely provide plenty of opportunities for corporations to maximize profitability at the expense of other stakeholders.

Thankfully, the dominance of Friedman’s mindset is starting to fade. In August 2019, 181 CEOs, as part of the Business Roundtable, jointly signed a commitment to lead their companies for the benefit of all stakeholders, not just shareholders⁷. The concept of Corporate Social Responsibility started to take hold in many organizations in the 1990s, but has gained significant momentum in the past few years. In recent months, public pressure has compelled many companies to commit to making changes in support of social justice causes⁸. Companies are even holding each other accountable to maintain standards of integrity and fairness⁹.

Especially in free market economies, like the United States, companies are generally free to make decisions about how they operate. The advances in efficiency promised by Connected Intelligence technologies are very attractive to companies seeking to reduce their human labor costs while enjoying the other benefits of the technologies. The current coronavirus pandemic appears to be accelerating adoption of robotics¹⁰ and AI¹¹ amongst companies seeking to survive the economic changes wrought by shutdowns and lockdowns.

One of the biggest dangers as the revolution progresses is that the corporate equivalent of “unilateral disarmament” may not work. If one company in an industry commits to the ethical use of Connected Intelligence technology, there’s no guarantee that its rivals will follow suit, and the progressive company may therefore find itself at a significant competitive disadvantage. We face the risk of an “arms race” across industries as companies seek to gain an advantage, or at least maintain competitive parity, in the deployment of powerful new technologies, often at the expense of jobs, skills, fairness, and privacy.

Therefore it has been encouraging in recent weeks to see major technology competitors Amazon and Microsoft follow IBM’s lead in shutting down government use of their machine vision technology, given its potential for biased and perhaps deadly misuse¹². It has also been encouraging to see companies holding each other accountable and holding government accountable.

Specifically, I believe that it is essential that standard cross-industry commitments be developed that cover three essential areas:

  • Enablement: As York Exponential puts it on their website “We believe technology represents an incredible opportunity to increase human capabilities, so we are intentional about bringing everyone along for the ride.” This company epitomizes the mindset that I believe is necessary across industries. The headline on their homepage is “Collaborative Robots. Augmentation not just Automation.” Companies need to try to outdo each other in how well they deploy the new technologies to enhance human performance rather than replace humans in their business.
  • Education: At the same time, the current and coming workforce likely lacks the skills and knowledge needed to fully participate in this technology-enhanced workplace. Large companies like Amazon are already embracing this opportunity with their Amazon Future Engineer program, which brings computer science programs into elementary schools and high schools and specifically funds robotics programs in schools in underserved and underrepresented communities. Similarly, Microsoft has committed to provide free online training this year to 25 million people around the world to prepare them for 10 key positions in the new economy¹³.
  • Equity: Meanwhile, as companies increasingly allow machines to learn, make decisions, and take action, the risk of algorithmic bias doing serious harm to our communities becomes very real. Already, tech leaders like Facebook are facing significant pressure to deal with this issue. Two years ago, the company commissioned an independent audit of itself on civil rights issues. The findings from the audit acknowledge that Facebook is taking some of the right steps to deal with bias, but it’s not doing enough fast enough¹⁴. The authors of the report noted: “Facebook has an existing responsibility to ensure that the algorithms and machine learning models that can have important impacts on billions of people do not have unfair or adverse consequences. The Auditors think Facebook needs to approach these issues with a greater sense of urgency. There are steps it can take now … that would help reduce bias and discrimination concerns even before expert consensus is reached on the most challenging or emergent AI fairness questions.” In her book Weapons of Math Destruction, Cathy O’Neil shares many more examples of how AI is increasing inequality¹⁵. Across industries, companies are making decisions about hiring, scheduling, retaining, compensating, and promoting employees, presenting ads to website visitors, pricing services, serving customers, and offering them credit, all using algorithms that may be biased and unfair. Like Facebook, companies in all industries will need to embrace proactive steps to ensure that their systems don’t introduce or reflect biases that cause people to be treated unfairly.
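
As one concrete illustration of the kind of proactive step companies could take, the sketch below applies a simple, widely cited screening heuristic (the “four-fifths rule,” which compares selection rates across groups) to fabricated hiring-style data. It is not any particular company’s method, and passing such a check is not proof of fairness, but it shows how cheaply a basic bias audit can be wired into a decision pipeline.

```python
# Hedged, illustrative bias check: the "four-fifths rule" on fabricated data.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group rate divided by highest; values below ~0.8 warrant human review."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Fabricated outcomes: 100 candidates per group, different selection rates.
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40
             + [("group_b", True)] * 35 + [("group_b", False)] * 65)
ratio, rates = disparate_impact_ratio(decisions)
print(rates)            # {'group_a': 0.6, 'group_b': 0.35}
print(round(ratio, 2))  # 0.58 -> below 0.8, flag for review
```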

But it’s not just industry that needs to step up. Not only are governmental entities some of the heaviest users of Connected Intelligence technology (and therefore should make the same commitments as industry players), but citizens also expect their political leaders to be actively working to protect them from the dangers of this coming revolution.

Specifically, I think government has a role to play in three aspects of ensuring that no one gets left behind in the connected intelligence revolution:

  • Resources: Governments are well positioned to ensure that society is prepared for the changes being brought by Connected Intelligence. For example, in 2018 Finland launched a program to teach 1% of its population basic AI skills¹⁶. In fact, in 2017 the country established an 8-point strategic plan for “turning Finland into a leading country in the application of artificial intelligence.”¹⁷ In addition to education, the plan included providing small businesses with “innovation vouchers” to fund new capabilities to compete internationally in an increasingly digital economy, creating a data clearinghouse to enable turning data into value, establishing an “artificial intelligence accelerator” to help companies adopt AI, and positioning Finland as an attractive destination for global AI experts.
  • Restraint: While governments are aggressively moving to enable the positive benefits of the Connected Intelligence Revolution, they must exercise restraint in their own application of the technologies. As the saying goes, “with great power comes great responsibility,” and governments are among the most powerful actors in the world. There are already examples of court systems making biased predictions of future criminal behavior¹⁸, police using biased systems to predict where crimes would occur¹⁹, and police using facial recognition systems to make arrests, even though the system is deeply flawed²⁰. In fact, Clearview AI, a company that has scraped billions of photos of people from social media and other online sources without the permission of the online companies or their users, is used by over 600 law enforcement agencies around the world to identify persons of interest²¹. Governments need to err on the side of caution before allowing the dangers of Connected Intelligence technology to destroy the lives of their citizens.
  • Regulation: Finally, governments need to actively protect their citizens by judiciously enacting regulations. While industry should take specific actions as noted above, government oversight is appropriate to ensure broad adoption of such measures. Europe and California have both implemented laws to protect data and privacy (GDPR and CCPA respectively), while last year U.S. senators introduced the somewhat flawed Algorithmic Accountability Act²². Thoughtful and reasonable regulations should be developed and implemented to ensure that no one gets left behind in this revolution.

It’s important to note that, just as companies may be able to gain a competitive advantage by exploiting people through Connected Intelligence technology when their rivals exercise ethical restraint, countries and economies may similarly gain unethical advantages. For example, if China were to impose and enforce significant regulations on Chinese companies using Connected Intelligence technology while American companies remained unconstrained, Chinese industries may fall behind their American counterparts in terms of efficiency and effectiveness. Therefore, multinational commitments, conceptually similar to the Paris Agreement on Climate Change, should be pursued to drive consistent commitments to an ethical framework and actions for Connected Intelligence technologies around the globe.

In summary, the Connected Intelligence Revolution promises great benefits for companies, individuals, and society, but it also introduces great dangers. In a global economy, with rapidly changing technologies, ensuring that no one gets left behind is a complex, multifaceted challenge that requires action and commitment at the individual, company, industry, national, and international levels. I believe that the actions I have outlined in this paper are reasonable, realistic, and can result in a positive outcome for all stakeholders.

¹ Iansiti, M., & Lakhani, K.R. (2020). Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Boston, Mass: Harvard Business Review Press.

²Asimov, Isaac (1950). “Runaround”. I, Robot (The Isaac Asimov Collection ed.). New York City: Doubleday. p. 40.

³Engineering and Physical Sciences Research Council. (n.d.). Principles of robotics. Retrieved from https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ on 7 July 2020.

⁴Nadella, S. (2016, June 28). The Partnership of the Future. Slate. Retrieved from https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html on 7 July 2020.

⁵Friedman, M. (1970, September 13). The Social Responsibility of Business is to Increase its Profits. New York Times Magazine. Retrieved from http://www.umich.edu/~thecore/doc/Friedman.pdf on 8 July 2020.

⁶Yang, J.L. (2013, August 26). Maximizing shareholder value: The goal that changed corporate America. The Washington Post. Retrieved from https://www.washingtonpost.com/business/economy/maximizing-shareholder-value-the-goal-that-changed-corporate-america/2013/08/26/26e9ca8e-ed74-11e2-9008-61e94a7ea20d_story.html?utm_term=.e1b68b18b924 on 8 July 2020.

⁷Business Roundtable (2019, August 19). Business Roundtable Redefines the Purpose of a Corporation to Promote ‘An Economy That Serves All Americans’. Business Roundtable website. Retrieved from https://www.businessroundtable.org/business-roundtable-redefines-the-purpose-of-a-corporation-to-promote-an-economy-that-serves-all-americans on 8 July 2020.

⁸Hsu, T. (2020, May 31). Corporate Voices Get Behind ‘Black Lives Matter’ Cause. The New York Times. Retrieved from https://www.nytimes.com/2020/05/31/business/media/companies-marketing-black-lives-matter-george-floyd.html on 8 July 2020.

⁹Wong, Q. (2020, July 7). Facebook ad boycott: Why big brands ‘hit pause on hate’. CNET News. Retrieved from https://www.cnet.com/news/facebook-ad-boycott-campaign-calls-on-global-companies/ on 8 July 2020.

¹⁰Dialani, P. (2020, June 3). COVID-19 Accelerating the Adoption of Robotics in Supply Chains. Analytics Insight. Retrieved from https://www.analyticsinsight.net/covid-19-accelerating-the-adoption-of-robotics-in-supply-chains/ on 8 July 2020.

¹¹King, S. (2020, May 11). Why COVID-19 Is Accelerating The Adoption Of AI And Research Tech. Forbes. Retrieved from https://www.forbes.com/sites/steveking/2020/05/11/why-covid-19-is-accelerating-the-adoption-of-ai-and-research-tech/#19a302232140 on 8 July 2020.

¹²Magid, L. (2020, July 2). Facial recognition loses support as bias claims rise. East Bay Times. Retrieved from https://www.eastbaytimes.com/2020/07/02/magid-facial-recognition-accused-in-racial-bias-being-pulled-back-by-cities-and-companies/ on 8 July 2020.

¹³Weber, L. (2020, June 30). Microsoft Aims to Train 25 Million Workers Free in 2020. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/microsoft-aims-to-train-25-million-workers-free-in-2020-11593529201 on 8 July 2020.

¹⁴O’Brien, C. (2020, July 8). Facebook civil rights audit urges ‘mandatory’ algorithmic bias detection. VentureBeat. Retrieved from https://venturebeat.com/2020/07/08/facebook-civil-rights-audit-urges-mandatory-algorithmic-bias-detection/ on 9 July 2020.

¹⁵O’Neil, C. (2016). Weapons of math destruction. How big data increases inequality and threatens democracy. London: Penguin Books.

¹⁶Delcker, J. (2019, January 2). Finland’s grand AI experiment. Politico. Retrieved from https://www.politico.eu/article/finland-one-percent-ai-artificial-intelligence-courses-learning-training/ on 9 July 2020.

¹⁷Ministry of Economic Affairs and Employment. (2017). Finland’s Age of Artificial Intelligence. (Report №47/2017). Retrieved from http://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/160391/TEMrap_47_2017_verkkojulkaisu.pdf?sequence=1&isAllowed=y

¹⁸Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing on 9 July 2020.

¹⁹Hao, K. (2019, February 13). Police across the US are training crime-predicting AIs on falsified data. MIT Technology Review. Retrieved from https://www.technologyreview.com/2019/02/13/137444/predictive-policing-algorithms-ai-crime-dirty-data/ on 9 July 2020.

²⁰Gilbert, B. (2020, June 30). Facial-recognition software fails to correctly identify people ‘96% of the time,’ Detroit police chief says. Business Insider. Retrieved from https://www.businessinsider.com/facial-recognition-fails-96-of-the-time-detroit-police-chief-2020-6 on 9 July 2020.

²¹Woollacott, E. (2020, July 9). UK And Australia To Probe Clearview AI Facial Recognition. Forbes. Retrieved from https://www.forbes.com/sites/emmawoollacott/2020/07/09/uk-and-australia-to-probe-clearview-ai-facial-recognition/#66a862103674 on 9 July 2020.

²²New, J. (2019, September 23). How to Fix the Algorithmic Accountability Act. Center for Data Innovation. Retrieved from https://www.datainnovation.org/2019/09/how-to-fix-the-algorithmic-accountability-act/ on 9 July 2020.
