Ethical AI — A framework for AI reliance and AI scaling at large

Kory Farooquie
nextgenninja
10 min read · Oct 25, 2019

Can you trust your AI? Actually, the question that really should be asked is: “How do you know that the decision an AI algorithm makes is, in fact, the right one?” And furthermore, “Do you ever question it?”

Google Maps may find an alternate route for us based on the traffic ahead and suggest we take it in order to reach our destination faster. We don’t question the re-routing suggestion or the validity of the reported traffic incident; we simply follow it because we trust it.

Complex AI systems like Google Maps, powered by immense computing power, ingesting vast amounts of data, and providing just-in-time predictive analytics and prescriptive decisions, have become part of our everyday lives. As our reliance on these AI systems grows, so must our expectations of the accuracy and, more importantly, the fairness of their outcomes.

In the case of Google Maps, the risk may be low, but AI without ethics has the potential to cause significant damage across the wide variety of use cases where it is being implemented. COMPAS, an AI algorithm widely used in the US justice system to predict recidivism, the likelihood of recommitting a crime, was found to be unfairly biased against black defendants. What would happen if a Tesla’s vision recognition software were hacked to retrain the car to recognize a STOP sign as a 45 mph speed-limit sign? The risks associated with AI can be devastating, while not leveraging it can render a business irrelevant.

Increasing consumer demands and reliance on data-driven decisions to secure competitive advantage are becoming integral parts of business strategy. Global enterprise leaders are increasingly focused on leveraging AI and making it a key focal point for future spend. That said, the inability to exercise oversight over AI-driven outcomes poses the single biggest impediment to scaling such solutions across the enterprise. AI-based systems are mimicking, and even accelerating, human decision-making capabilities, but how accurate, and equally important, how ethical or fair are those decisions? In a recently released survey by the tax and advisory firm PwC, which asked around 250 senior business executives about AI, 80% of US CEOs thought that AI would significantly change the way they do business within the next five years. Ironically, only 38% of the same respondents felt that AI decisions were aligned with their corporate values, and only 25% had actually considered the ethical implications of their AI solutions.

The dilemma that stands before us, the untapped and seemingly limitless potential of AI against the backdrop of the ethical what-ifs, may be the single most critical issue businesses must address before rolling out AI in their ecosystems. Unlike their human counterparts, AI algorithms are not created with a moral code to adhere to. There is no list of commandments or idealistic examples to emulate. There is no way to simulate a conscience, and the inability to bake in every potential exception from the outset makes these algorithms quite inflexible.

So how does an enterprise ensure that the AI algorithms that help it make decisions, cut costs, increase profits, and secure competitive advantage stay true to the values it espouses? To do this, enterprise leaders must define a new process that governs the accuracy and ethics of their AI hypotheses, with the goal of earning and maintaining the user trust gained through the outcomes of those hypotheses.

“Trust takes years to build, seconds to break, and forever to repair.” — Unknown

Build with trust in mind

Ronald Reagan famously said to the Soviets, quoting one of their own proverbs, “Trust, but verify.” In this new age of AI, the speed to market of ever-newer AI systems is being fueled by unparalleled consumer demands and expectations. The potential of artificial intelligence to meet and exceed those demands is arguably the biggest asset enterprise leaders can leverage today. But this speed to market is dangerous in that it does not hold the AI algorithm accountable for creating a narrative that may not represent the whole truth. The hype surrounding the limitless possibilities of AI leads us to inherently trust the outcome of an AI decision and overlook the bias that decision may entail. The impact of this machine bias is causing enterprise leaders who lack a sound strategy to scale back their AI implementations. There need to be more checks and balances on the outcomes of AI decisions, more oversight, and a way to understand the why and how of an AI decision. Trust but verify; yet in this case, we are missing the ability to verify.

To address this and ensure an ethical outcome for their AI hypotheses, enterprise leaders must take a step back and create a strategy around the implementation of AI in their businesses. First and foremost, they must ensure that the data supplied represents the whole truth and that a sound governance framework exists to ensure accountability and adherence to the following 6 pillars of ethical AI:

1. Value Alignment — Does the intended outcome align with the values of the brand?

2. AI Risk Mitigation — What are the policies in place in the case of a wrong decision?

3. Compliance & Regulation — Is the AI algorithm aligned to meet compliance guidelines?

4. Security & Privacy — How robust is the security to ensure manipulation is prevented and data is protected?

5. Bias & Fairness — Are the decisions accounting for fairness and equality?

6. Decision Forensics & Traceability — Is there a way to audit how the algorithm arrived at a decision?

Holistic Data Foundation

The foundation of any good decision is the completeness of the knowledge required to reach it. This issue is paramount for an AI system because there is no way to bake an empathetic nature into the decisioning algorithm. The AI algorithm arrives at a conclusion based on the bias under which it was constructed, the data it is fed, and the intelligence it then builds through exceptions. If the construction of the algorithm does not align with the corporate values of the business, or the data does not account for all of the permutations the AI system may encounter, the intelligence it builds will be skewed towards the nature of the collected data. More data is not better data; organizations need to intentionally collect data ethically while minimizing bias before embarking on their AI initiatives. Such holistic data builds a strong foundation and avoids many machine-bias-related issues when it can be strategically applied to meet the precise needs of an AI hypothesis. Moreover, cross-pollinating data across silos to create data platforms will allow enterprises to solve complex problems in the future.
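One way to make this concrete is to audit collected data for representativeness before it feeds an AI hypothesis. The sketch below is a minimal, hypothetical illustration (the function name, record shape, and tolerance are assumptions, not from the article): it compares the observed share of a sensitive attribute against expected population shares and flags gaps that could skew the intelligence being built.

```python
from collections import Counter

def representation_gaps(records, attribute, expected_shares, tolerance=0.05):
    """Compare observed shares of a sensitive attribute against
    expected population shares; flag any group outside the tolerance."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical training set: 90% of samples come from one group
data = [{"group": "A"} for _ in range(90)] + [{"group": "B"} for _ in range(10)]
print(representation_gaps(data, "group", {"A": 0.5, "B": 0.5}))
```

A check like this would run as part of data collection and curation, before model training, so the skew is corrected at the foundation rather than discovered in the model's outcomes.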

With ethical data collection and management as its foundation, an ethical AI framework can help an organization take into account such factors as diversity, privacy, demographics, prosperity, and humanity in order to minimize risk and fully harness AI’s potential towards their intended use-cases.

AI Governance Framework

With a strong data foundation in place, enterprise leaders must then focus on an end to end enterprise AI governance strategy and build a framework that ensures the 6 pillars of ethical AI are adhered to. This governance framework will manage and monitor the pillars to ensure the following outcomes:

Accountability of an AI Decision

  • Every AI-based decision must have an owner accountable for its outcome
  • Enterprise leaders overseeing these AI implementations must also be held responsible for the decisions of these systems

Business and AI Value Alignment

  • Do the AI decisions align with the corporate values of the enterprise?
  • Do they represent the brand and what impact do the decisions have on the brand?

Continuous Improvement

  • How can a cycle of monitoring and improving outcomes be implemented to ensure the desired outcomes of AI implementations?

Checks & Balances

  • Is the AI decision auditable?
  • Can a human “pull the plug”?
  • Is AI being leveraged or has the enterprise lost control?

Security & Privacy

  • What measures are in place to ensure an AI decision cannot be manipulated with malicious intent (internal or external)?
  • What monitoring and correction mechanisms are in place to ensure the sanctity of an AI implementation?
  • What measures are in place to ensure that algorithms and AI systems are not a privacy risk, especially in areas of facial and voice recognition?
  • Are there safeguards in place in the systems that can lead to data exploitation such as social ranking and identity tracing?

Most organizations today may already have an enterprise governance strategy in place. This mechanism works in the background and has historically ensured enterprise accountability with respect to compliance and regulations and adherence to the intended outcomes.

As enterprises begin embarking on a new AI journey, a new and dedicated end-to-end AI governance framework must be built from the ground up with resources that understand the dynamic nature of data and AI and can appreciate the benefits of AI and the iterative process required to build its intelligence. This AI governance framework should ensure coverage over all people, processes, tools, and technologies that will be part of a business’s AI initiative.

6 Pillars of Ethical AI

Once a strong holistic data foundation and a sound governance framework are in place, enterprise leaders can embark on their AI transformation journeys. The holistic data foundation ensures the AI hypotheses being addressed are supplied with the whole truth, while an active and dedicated AI governance framework ensures that AI implementations execute on their intended goals and work for the human workforce, not vice versa.

1. Value Alignment

AI algorithms must replicate the values and culture of the enterprise in order to truly bring valuable outcomes. This value alignment must be addressed during the design and construction process of the decisioning algorithm(s).

A rush towards leveraging AI can miss this crucial point and create a customer-facing outcome that challenges the reputation and image of the brand it is intended to represent.

2. AI Risk Mitigation

Enterprise AI governance must account for and monitor specific risks associated with AI-based decisions. Examples of some of the associated risks include:

  • The risks associated with errors in the outcomes of AI decisions
  • A bias associated with an AI decision
  • The risk of failing to provide a decision, or of providing an incorrect one

3. Compliance & Regulation

Traditionally, the compliance function of an organization has audited an outcome after an action has already occurred and flagged it for correction. To tackle the large-scale implications of AI-based systems, the compliance checkpoint for an AI system needs to work proactively, not reactively, to ensure AI-based outcomes stay in line with established rules and regulations. This shift from reactive correction to active monitoring inherently changes the traditional nature of the compliance function within the governance framework.
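The proactive stance described above can be sketched as a gate that evaluates every rule before a decision is executed, rather than auditing it afterwards. The rules and the lending scenario below are hypothetical examples, not from the article:

```python
def check_compliance(decision, rules):
    """Evaluate every rule BEFORE the decision is executed;
    return the names of violated rules (empty list = compliant)."""
    return [name for name, rule in rules.items() if not rule(decision)]

# Hypothetical lending rules, purely for illustration
rules = {
    "no_protected_attributes_used": lambda d: "gender" not in d["features_used"],
    "rate_within_regulatory_cap": lambda d: d["interest_rate"] <= 0.36,
}

decision = {"features_used": ["income", "gender"], "interest_rate": 0.20}
violations = check_compliance(decision, rules)
if violations:
    print("Blocked before execution:", violations)
```

The design choice is the point: the check sits in the decision path and can block execution, whereas a traditional compliance audit would only flag the violation after the customer had already been affected.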

4. Security & Privacy

Just as exponentially advantageous as AI algorithms can be for an enterprise, they can be equally destructive in the face of malicious intent, whether posed from outside the organization or within. An enterprise governance framework must set the guidelines and constantly monitor the health and sanctity of AI systems against internal and external threats. This constant monitoring ensures that the development and training lifecycle of algorithms cannot be ‘tricked’ into learning paths that lead not only to business violations but to ethical ones as well.

Privacy is a human right, and today this right faces its greatest threat from businesses as well as governments. The rush to market of software and products that capture our data can lead to flawed designs and controls, or open gateways that allow that data to be extracted with malicious intent. AI systems can deliver a huge improvement in security and privacy if designed correctly, and inversely cause exponential havoc if they are not built within a governing framework. AI is also inherently adept at analyzing large data sets and is arguably the only way to process big data in a reasonable amount of time. This capability carries implications for privacy in terms of data exploitation, tracking and identification, social profiling, and reverse engineering.

5. Bias & Fairness

Bias and fairness are relative terms. This is probably the trickiest part of ethical AI, as it is intertwined not only with the enterprise’s customer-facing image but also with the sentiments of society and culture at large. When constructing an AI algorithm, enterprise leaders may choose to conform to the norms of their society, to pursue an idealistic outcome, or to accept one that is heavily skewed towards a certain bias. This tricky aspect of an ethical AI framework is, on one hand, the primary reason for the discussion of ethical AI and, on the other, the aspect of AI most open to manipulation.
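Even though fairness is relative, some aspects of it can be measured. One widely used check, shown here as a minimal sketch with made-up data, is the disparate impact ratio: the favorable-outcome rate of the worst-off group divided by that of the best-off group, where values below 0.8 are commonly treated as a red flag (the “four-fifths rule” from US employment guidelines).

```python
def disparate_impact_ratio(outcomes, groups, favorable=1):
    """Ratio of favorable-outcome rates between the least- and
    most-favored groups; below 0.8 is a common warning threshold."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = selected.count(favorable) / len(selected)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan approvals for two groups (1 = approved)
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(disparate_impact_ratio(outcomes, groups), 2))
```

A metric like this does not settle what is fair, which remains the judgment call the paragraph above describes, but it gives the governance framework a quantitative trigger for human review.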

6. Decision Forensics, Traceability & Control

An AI system whose human operators cannot determine the reasoning behind an outcome becomes a “black box”: there is no insight into how a decision was reached. As enterprises create AI algorithms they can leverage to their advantage, they must also create the ability to provide oversight and control over these self-learning systems. Business stakeholders must retain ultimate decisioning power over these AI systems rather than become victims of dystopian outcomes. Enterprise leaders must also address the concerns of audit functions within the enterprise by explaining AI outcomes and reining in control of these systems.
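A minimal prerequisite for this kind of auditability is recording every decision together with its inputs, model version, and rationale, so that an auditor can later reconstruct why an outcome occurred. The sketch below illustrates one possible shape for such a record; the field names and the credit-model scenario are assumptions for illustration only:

```python
import json
import time
import uuid

def log_decision(audit_log, model_version, inputs, output, rationale):
    """Append a replayable record of one AI decision to an audit log."""
    entry = {
        "id": str(uuid.uuid4()),          # unique handle for this decision
        "timestamp": time.time(),
        "model_version": model_version,   # which algorithm produced it
        "inputs": inputs,                 # exactly what the model saw
        "output": output,
        "rationale": rationale,           # e.g. top feature contributions
    }
    audit_log.append(json.dumps(entry, sort_keys=True))
    return entry

audit_log = []
log_decision(audit_log, "credit-model-v3",
             {"income": 52000, "tenure_months": 18},
             "approve",
             {"income": 0.41, "tenure_months": 0.12})
print(len(audit_log))
```

In a real deployment the log would go to append-only, tamper-evident storage; the point here is simply that traceability has to be designed in at decision time, since it cannot be reconstructed afterwards from a black box.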

Conclusion

Those enterprise leaders who are implementing, or at the very least experimenting with, AI represent the forward-thinking approach that will enable their businesses to survive. Those still contemplating the hows and whats of AI are today laggards among their industry counterparts. AI is now an integral part of any business’s corporate strategy. As such, it represents an untapped opportunity to gain competitive advantage, but also a huge risk if implemented without a proper strategy.

In order to fully capitalize on the potential that AI represents, enterprise leaders should build a holistic data foundation and an aggressively monitored, dedicated AI governance framework, and finally adhere to the 6 pillars of ethical AI in their implementations.

Kory Farooquie is a public speaker and host of #NextGenNinja, a talk show featuring Enterprise and Entrepreneurial leaders that are defining what the world looks like for future generations. Kory provides Innovation consulting and leadership development via the NextGen.Ninja (www.nextgen.ninja) platform.
