AI by Design


Edward H. Vandenberg, April 2019

It’s fun to talk about Artificial Intelligence, and there certainly is a lot to talk about. But strangely, an information ‘gap’ is being created in the current wave of interest (one of several since the 1950s) and in some implementations of AI applications. The gap is generated by our friendly consulting firms and software vendors, who have labeled whatever they’ve had on the shelf for years an AI application, and the people who sell it, AI experts. This is not helpful to learners and veterans alike who are working to build applications that have a flavor of what I will call ‘classic AI’. Classic AI was envisioned by the founders of this science, such as Marvin Minsky, Herbert Simon, Alan Turing, and others, starting with a workshop held at Dartmouth College in the summer of 1956. [i] Classic AI has since been formalized into coursework by Stuart Russell and Peter Norvig, based on their academic textbook (see end note). Classic AI existed largely intact before Watson, before the Cloud, before deep learning, and before many of the top consulting firms were even engaged in IT consulting. It is in the spirit of the AI of Minsky, Simon, and Turing that I share my comments on a design approach to AI applications.

Rational Agents are rational

As a starting point, it’s useful to replace our usage of the term AI with ‘Rational Agent’. There are certainly other approaches and philosophies drawn from linguistics, cognitive science, and related fields, but a Rational Agent is the most reasonable concept to use when designing practical applications for business settings. A Rational Agent also has a complete definition: rational means “the power of comprehending, inferring, or thinking especially in orderly ways”.[ii] Building a Rational Agent has a form and shape provided by the founders of AI and elaborated by Russell and Norvig. ‘AI by Design’ is the approach I present here to help multi-disciplinary teams develop rational agents.
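To make the term concrete, here is a minimal sketch (my own, not part of the formal definition) of a rational agent in the Russell and Norvig sense: something that maps what it has perceived so far to the action with the highest expected value of some performance measure. The class and function names are illustrative assumptions, not an established API.

```python
# Minimal sketch of a rational agent: it records percepts and chooses
# the action that maximizes an expected performance measure.
# All names here are illustrative, not part of the article's framework.

from typing import Any, Callable, List


class RationalAgent:
    def __init__(self, actions: List[Any],
                 expected_utility: Callable[[List[Any], Any], float]):
        self.actions = actions                    # actions the agent can take
        self.expected_utility = expected_utility  # scores (percept history, action)
        self.percepts: List[Any] = []             # what the agent has observed so far

    def perceive(self, percept: Any) -> None:
        """Record a new observation from the environment."""
        self.percepts.append(percept)

    def act(self) -> Any:
        """Choose the action with the highest expected utility given the percepts."""
        return max(self.actions, key=lambda a: self.expected_utility(self.percepts, a))
```

Rationality here simply means choosing the action expected to do best given what has been perceived; the real design work lies in how the actions and the utility function are defined for the business problem at hand.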

A design approach to rational agents aims to address the information gap by creating a practical methodology that can be consumed by novice practitioners and business sponsors, and understood by quantitative scientists, data engineers, and application developers. The methodology is described by an info-graphic that draws on the quantitative tools and automation algorithms that truly are the components of AI applications. In the context of AI, these tools support ‘Intelligence Augmentation’ (IA):

Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of. [iii]

Also included are ‘actuators’ that make rational decisions happen inside applications. Note that the framework shows only an illustrative sample of these tools; more are being developed as the focus on AI becomes more widespread. Most importantly, this method of design is agnostic to any specific technology, data type, or algorithm. The proposition here is that the meaning of AI is currently driven more by a particular vendor’s technology, an algorithm that has captured public attention, or the use of some exotic data that seems very intelligent than by any principled definition. AI is none of this and all of this. We must take a broader, systematic look at the components and how they aid human decision making in order to assemble a working application that is informed by data science, human inference, IT, and the business domains involved.

Ladder of Inference

The Ladder of Inference is at the core of the design template.[iv] It was developed by the late psychologist Chris Argyris, Ph.D., Professor Emeritus at Harvard Business School, known for his work on learning organizations and organizational development.[v] A study of Argyris’s work is very helpful in partially unwinding the analysis, data, and strategies that must be addressed when looking to replace or augment human decision making and actions.

The Ladder of Inference info-graphic, juxtaposed with techniques and data types (see Design Framework for AI), helps ground the analysis in what is to be addressed that is ‘human’ and how some limited steps on the ladder can be enhanced or replaced by what is ‘artificially human’: an ensemble of algorithms fed by harvested data, massaged and heavily processed, all assembled into a run-time application that has both an API and an ‘HPI’, or Human Programming Interface.
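To illustrate that juxtaposition, the sketch below lists the rungs of the ladder as an ordered pipeline and annotates which rungs a rational agent might plausibly augment and which remain with the knowledge worker. The rung names follow common summaries of Argyris’s ladder, and the machine/human split shown is an assumption for discussion, not a prescription from the framework.

```python
# Illustrative only: the Ladder of Inference rungs as an ordered pipeline,
# annotated with a hypothetical split between what the agent augments
# and what stays with the knowledge worker.

LADDER = [
    ("observable data",     "machine"),  # harvested transactions, documents, telemetry
    ("selected data",       "machine"),  # filtering, feature selection, entity resolution
    ("interpreted meaning", "machine"),  # model scores, classifications, extracted patterns
    ("assumptions",         "human"),    # domain judgment about what the scores imply
    ("conclusions",         "human"),    # the routine cognitive decision itself
    ("beliefs",             "human"),    # updated over time, ideally with feedback loops
    ("actions",             "shared"),   # actuators execute, humans oversee
]

for rung, owner in LADDER:
    print(f"{rung:<20} -> {owner}")
```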

Design Framework for AI

After working with clients in this space for several years, I assembled the components of this framework to help explain how machine learning and data could help knowledge workers make better and faster routine cognitive decisions in high-transaction environments. (At this level of AI, non-routine cognitive decisions are still made only by humans, though some applications approach that edge or can appear to cross it.) After researching decision-making for claims adjusters as an analytics leader at a large P&C insurance firm in 2014, I sketched out this info-graphic in a solution document. I later shared it with consulting clients and validated the general interest in this level of analysis. I first published this version with the International Institute of Analytics for a webinar in January 2019.[vi]

While simplistic and somewhat superficial, the picture is meant to be consumed by AI project team members with mixed levels of knowledge in a ‘Joint Application Development’ (JAD) session, where the aim is not an exhaustive treatment but a starting point for discussion and open dialogue in which all can reasonably participate.

The framework implies what Michael Jordan calls Intelligent Infrastructure (II).[iii] II, in my own words, is the joining of algorithms, each with its own inputs, transformations, and outputs, into an ‘ensemble’ that fits into an existing system supplied by transaction data and other sources. This implies an architecture that must be designed almost in parallel, to ensure that the rational agent can practically run within its intended business process. Notably, some of these tools do not have an obvious score-type output (Game Theory, for example), and the path to deploying those approaches is complex. II also includes the Human Programming Interface, where knowledge workers consume the output within a structured process and take some action. Alternatively, the action could be automatic, with human oversight, if only to curate the data and validate the statistical controls of automated rules and decisions. This is an especially complex aspect of AI design and beyond the scope of this paper. Note that in other treatments this ecosystem might be called a Cyber-Physical System.[vii]
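Here is a hedged sketch of that ‘ensemble plus HPI’ idea under my own assumptions: several component model scores are blended into a single recommendation, which is either executed automatically under statistical oversight or routed to a knowledge worker’s queue with its explanation attached. The model names, threshold, and routing rule are placeholders for illustration, not a reference implementation.

```python
# Sketch: blend component model outputs into one recommendation, then
# decide between automatic execution (with oversight) and the human queue.
# Thresholds, field names, and the example action are assumptions.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Recommendation:
    score: float                    # blended confidence that the proposed action is correct
    action: str                     # e.g. "fast_track_claim"
    explanation: Dict[str, float]   # per-model contributions, for the human reviewer


def recommend(transaction: dict,
              models: Dict[str, Callable[[dict], float]]) -> Recommendation:
    """Blend the component model scores into a single recommendation."""
    scores = {name: model(transaction) for name, model in models.items()}
    blended = sum(scores.values()) / len(scores)   # simple average as a placeholder
    return Recommendation(score=blended, action="fast_track_claim", explanation=scores)


def route(rec: Recommendation, auto_threshold: float = 0.9) -> str:
    """Automate only when confidence is high; otherwise send to the human queue (HPI)."""
    if rec.score >= auto_threshold:
        return "actuator: execute automatically (logged for statistical review)"
    return "HPI: place in the knowledge worker's queue with the explanation attached"
```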

Foundation of AI

Classic AI is a robust approach to creating practical applications. Going back to the roots of AI and its founders, this design template will help your AI team develop and support new and better routine cognitive decisions that delight your customers and take the burden of boring, repetitive work off your employees. That means more quality time doing what humans love to do and are better at than any machine that will be developed, at least in our lifetimes. For a good explanation of the difference between routine and non-routine cognitive tasks and other task types, I borrow from a study published in The Quarterly Journal of Economics, November 2003.[viii]

The Design Framework presented here is really a way to extract knowledge from diverse teams and give you traction in AI application development: a methodology that envisions an entirely new process with new tools, roles, and outcomes. Vendors, consultants, point solutions, and killer algorithms are certainly not excluded from this mix, but they are not the starting point for designing an AI system.

Lastly, I will leave the reader with a quote from Marvin Minsky, one of the principal founders of AI:

If you “understand” something in only one way, then you scarcely understand it at all — because when you get stuck, you’ll have nowhere to go. But if you represent something in several ways, then when you get frustrated enough, you can switch among different points of view, until you find one that works for you.[ix]

Why is this important?

We need a new, visual language for AI design that simplifies the components so that they are comprehensible to a broad team of ‘designers’ (see “Who are the designers?” in a separate article). AI is a mash-up of techniques, data, people, processes, and experts, each with their own language. A design schematic is a logical, simplifying approach that will accelerate the development and deployment of AI applications and keep your team engaged and contributing, regardless of the ‘native’ language of their particular role.

About the Author

Edward (Ted) Vandenberg is a consultant to the financial services industry, working in Machine Learning/AI since 2003. Find his posts on LinkedIn and at www.theanalyticexecutive.com.

[i] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Third Edition (Upper Saddle River, NJ: Pearson Education, 2010).

[ii] Sal P. Restivo, Science, Society, and Values: Toward a Sociology of Objectivity (Cranbury, NJ: Associated University Presses, 1994).

[iii] Michael Jordan, “Artificial Intelligence — The Revolution Hasn’t Happened Yet,” Medium, April 18, 2018. https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

[iv] The Ladder of Inference was first put forward by organizational psychologist Chris Argyris and used by Peter Senge in The Fifth Discipline: The Art and Practice of the Learning Organization.

[v] https://en.wikipedia.org/wiki/Chris_Argyris

[vi] “AI Transformation Journey: The Design of a Rational Agenda,” International Institute of Analytics webinar, January 2019. https://www.brighttalk.com/webcast/14379/348870

[vii] “The Rise of Intelligent Cyber-Physical Systems,” IEEE Computer Society, October 2018.

[viii] David H. Autor, Frank Levy, and Richard J. Murnane, “The Skill Content of Recent Technological Change: An Empirical Exploration,” The Quarterly Journal of Economics, November 2003.

[ix] Marvin Minsky, “Introduction,” The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind (New York, NY: Simon & Schuster, 2006), p. 6.


