Don’t Let the Race to Embrace AI Overshadow the Need to Audit Your Advancements

Published in Nielsen Forward · Dec 13, 2019

By Kevin M. Alvero, CISA, CFE, SVP, Nielsen Internal Audit

Artificial intelligence (AI) represents a huge opportunity for businesses and organizations everywhere, and those that tap into it sooner rather than later will be much better positioned to address the challenges and opportunities of tomorrow.

At Nielsen, we continue to lean on AI more heavily to harness the power of data, but that’s just one way in which AI is helping our organization. As companies explore new ways to leverage AI, they need to ensure that they’re managing any associated risks along the way. That’s why our internal audit function has embraced the process of auditing AI. Organizations have to be vigilant to ensure that they are investing the right resources in the right places to capitalize on the opportunities AI represents, that they’re accurately assessing AI-related risks, and that AI projects remain consistent with the organization’s mission, values, culture and larger technology strategy.

Internal audit should be a valuable ally in this effort, but AI poses a number of challenges from an auditing perspective. First, the very definition of AI is somewhat ambiguous. Meanwhile, organizations are aggressively seeking to develop or acquire AI capabilities for a broad range of activities, and AI programs and processes can be highly complex and technical. Finally, there is a lot of hype out there about what AI is and what it can do.

The practice of auditing AI is still being developed. For advanced AI technologies, such as machine learning, there are no universal governing standards; the standards, along with frameworks to audit against them, are still being written. ISACA (the Information Systems Audit and Control Association) has issued guidance on applying its existing COBIT framework (used to govern and manage enterprise IT) to AI, and the Institute of Internal Auditors has proposed an approach as well. Meanwhile, an array of organizations, including NIST, ISO and ASTM, are drafting standards for AI. As the practice of auditing AI continues to evolve, however, it has become clear that internal audit needs to remain clearly focused on strategy and governance: ensuring that the organization has a sound AI strategy and a robust governance structure in place to support the execution of that strategy.

Defining AI

Without universal agreement about what AI is and which technologies should be considered artificial intelligence, it’s difficult to establish standards for auditing AI. Not only do opinions vary about what AI is; there are also differing beliefs about how long it has been around. Some take a broad view and insist that AI has existed far longer than most people think. Others define it more narrowly, which fuels their stance that AI doesn’t really exist yet and that so-called AI technologies are simply mislabeled.

Technologies commonly considered to fall into the realm of AI include deep learning, machine learning, image recognition, natural language processing, cognitive computing, intelligence amplification, cognitive augmentation, machine augmented intelligence, augmented intelligence, and more, but sources disagree about how to group and describe them. For example, some consider robotic process automation (RPA) to be a form of AI because it can execute highly complex algorithms across large and diverse data sets, seemingly simulating intelligence in the way it is programmed to make decisions. Others, including ISACA, disagree, stating that “AI does not operate based on a set of predetermined rules,” and that such rule-based functionality is “a hallmark of traditional software engineering.”

To take it a step further, AI itself can be grouped and sub-grouped by technology type. For example, when the Institute of Internal Auditors published its Global Perspectives and Insights on the subject of auditing AI in 2017, it cited the four-type categorization of AI (i.e., I-Reactive Machines, II-Limited Memory, III-Theory of Mind, IV-Self Awareness). Protiviti, meanwhile, chose to group machine learning, deep learning and natural language processing for a 2019 study and termed those technologies collectively as “advanced artificial intelligence.”

The topic of what, exactly, AI is might seem like something for sticklers to haggle over, but it is highly relevant to the internal audit of AI and to the audit function’s ability to deliver value and meet stakeholder expectations. When considering which technologies fall under the umbrella of AI for internal audit purposes, the most important factor is understanding how the organization defines AI for itself. By being proactive and initiating open conversations, internal audit can answer the question: What do we, as an organization, mean when we say “AI”? This alignment is clearly important for managing stakeholder expectations around the scope of AI audits. It may also give internal audit insight into whether the organization’s definition of AI is broad enough, or narrow enough, for it to perceive risk in the marketplace.

Auditing AI

Despite the varying notions of the nature of AI itself, when it comes to the practice of auditing AI, sources of prevailing guidance generally agree that internal audit should remain focused on strategy and governance.

Strategy

Without a clearly articulated and regularly reviewed strategy, investments in AI capabilities are likely to yield disappointing results and, worse, could cause financial and/or reputational damage to the organization. Internal audit should be interested in confirming the existence of a documented AI strategy and assessing its strength based on the following considerations:

  • Does it clearly express the intended result of AI activities?
  • Does it include plans to identify and address AI threats and opportunities?
  • Was it developed collaboratively between business and technology leaders?
  • Is it consistent/compatible with the mission, values and culture of the organization?
  • Is it reviewed and updated regularly?
  • Does it consider the supporting competencies needed to leverage AI?

Organizations need their internal audit departments to ask these types of questions not just once, but repeatedly. Research continues to show that organizations want their internal audit departments to be more forward-looking and to provide more value in assessing strategic risks. Regarding supporting competencies, it is worth noting that the top concern among board members and C-level leaders for 2019 was that their existing operations and infrastructure would be unable to adjust to meet the performance expectations set by “born digital” competitors. As such, internal auditors can and should address, on an ongoing basis, whether the organization’s AI strategy is appropriate and can realistically be carried out.

Governance

As with any other major system, proper governance structures need to be established for AI initiatives to ensure that there is proper control and accountability and to determine if AI projects perform as expected and accomplish their objectives.

Again, there is no tried-and-true template to follow to manage AI governance. “The playbook,” writes one expert, “has yet to be written.” Nevertheless, internal auditors should explore the care that business leaders take to develop a robust governance structure in support of AI applications, beginning with big data.

Big data forms the foundation of AI capability, which means that organizations should pay special attention to the governance structures for their data. Internal audit should understand how organizations are ensuring that their data infrastructure has the capacity to accommodate the size and complexity of AI activity set forth in the AI strategy. At the same time, internal audit should understand how the organization manages risks to data quality and consistency, including controls around data collection, access rights, retention, taxonomy (i.e., naming) and editing and processing rules. Internal audit should also consider security, cyber resilience and business continuity and assess the extent to which the organization is prepared to handle threats to the accuracy, completeness, and availability of data.
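Controls like these can be partially automated. The sketch below is a minimal, illustrative example (not a description of any actual Nielsen tool) of the kind of data-quality checks an audit team might script against a data extract: field completeness, taxonomy (allowed-value) conformance, and retention-window compliance. All field names and thresholds here are assumptions chosen for illustration.

```python
from datetime import date, timedelta

# Illustrative audit thresholds -- assumptions, not prescribed standards.
MAX_NULL_RATE = 0.02          # flag if more than 2% of records lack a value
ALLOWED_REGIONS = {"NA", "EMEA", "APAC", "LATAM"}  # hypothetical taxonomy
RETENTION_DAYS = 730          # hypothetical two-year retention window

def audit_records(records, today=None):
    """Return a list of audit findings for a batch of data records.

    Each record is a dict with (hypothetical) keys:
    'region', 'collected_on' (a date), and 'value'.
    """
    today = today or date.today()
    findings = []

    # 1. Completeness: how many records are missing the 'value' field?
    nulls = sum(1 for r in records if r.get("value") is None)
    if records and nulls / len(records) > MAX_NULL_RATE:
        findings.append(f"completeness: {nulls}/{len(records)} records missing 'value'")

    # 2. Taxonomy conformance: region codes outside the approved set
    bad_regions = {r.get("region") for r in records} - ALLOWED_REGIONS
    if bad_regions:
        findings.append(f"taxonomy: unapproved region codes {sorted(bad_regions)}")

    # 3. Retention: records held beyond the retention window
    cutoff = today - timedelta(days=RETENTION_DAYS)
    stale = sum(1 for r in records if r["collected_on"] < cutoff)
    if stale:
        findings.append(f"retention: {stale} records older than {RETENTION_DAYS} days")

    return findings

# Hypothetical sample batch: one clean record, one with multiple issues.
sample = [
    {"region": "NA", "collected_on": date(2019, 1, 1), "value": 1},
    {"region": "XX", "collected_on": date(2016, 1, 1), "value": None},
]
findings = audit_records(sample, today=date(2019, 12, 13))
```

The point is not the specific checks, which will differ by organization, but that documented data-governance rules can be expressed as repeatable tests rather than one-time reviews.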

AI value and performance are also dependent on the quality and accuracy of the algorithms that define the processes AI performs on big data. Documented methodologies for algorithm development, along with quality controls, must exist to ensure that these algorithms are written correctly, are free from bias and use data appropriately. Internal audit should also understand how AI system decisions are being validated and whether those decisions could be defended if challenged.
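One concrete bias check an auditor might ask about is a disparate-impact test on model decisions. The sketch below is a hypothetical illustration (not a method endorsed by any of the sources cited here) of the widely used “four-fifths rule”: compare favorable-outcome rates across groups and flag a ratio below 0.8 for investigation. The group labels and data are invented for the example.

```python
def disparate_impact(decisions):
    """Compute the favorable-outcome rate per group and the worst-case ratio.

    `decisions` is a list of (group, favorable: bool) tuples -- a
    hypothetical export of an AI system's decisions for review.
    """
    totals, favorable = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + (1 if ok else 0)

    rates = {g: favorable[g] / totals[g] for g in totals}
    # Ratio of the least-favored group's rate to the most-favored group's rate.
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Invented sample: group A approved 80% of the time, group B only 50%.
decisions = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 50 + [("B", False)] * 50
)
rates, ratio = disparate_impact(decisions)

# Four-fifths rule of thumb: a ratio below 0.8 warrants investigation.
flagged = ratio < 0.8
```

A check like this does not prove an algorithm is fair, but it gives internal audit documented, repeatable evidence of whether decisions were tested at all, which is exactly what a defensible validation trail requires.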

In addition to governance around data and AI algorithms, governance structures should also be examined to determine whether:

  • Accountability, responsibility and oversight are clearly established.
  • Policies and procedures are properly documented and followed.
  • Those with AI responsibilities have the necessary skills and expertise.
  • AI activities and AI-related decisions and actions are consistent with the organization’s values, and ethical, social and legal responsibilities.
  • Third-party risk management procedures are performed around any vendors.

The Audit Department

In order to audit AI effectively, internal audit functions must ensure that they have (or can acquire) sufficient resources, knowledge and skill to perform audit procedures around AI, even if they do not necessarily have expert-level knowledge on staff. This has proved to be a challenge. According to the 2018 North American Pulse of Internal Audit, 78% of chief audit executives indicated it was extremely or very difficult to recruit individuals with data mining and analytics competencies. Nevertheless, internal audit functions should work to steadily increase their AI expertise through training and talent recruitment, because organizational dependence on AI is only going to increase in the coming years.

Success in auditing AI does not depend primarily on technical expertise. It depends on assessing strategy, governance, risk and process quality, all of which internal audit excels at from an independent, cross-departmental point of view. These are the things internal audit should remain focused on when it comes to AI.

The Time to Focus on Auditing AI is Now

Despite a rapidly changing marketplace and the sometimes hazy nature of AI, internal auditors can provide valuable, fundamentally sound assurance that the organizations they serve are pointing their AI investments in the right direction, accounting for risks and opportunities, and executing on their business objectives with respect to AI. The sooner internal auditors do this, the better, simply because AI, in all its various forms, is not going away; it is gaining momentum. Internal audit functions should become as accustomed to auditing and delivering assurance around AI as they are with other business processes, because in the coming years it will become increasingly difficult to find an area of the business that does not leverage AI in some way. Focusing on strategy and governance will enable auditors to do this in spite of rapidly emerging technologies and risks.
