Sydney Swaine-Simon
Mar 8, 2018 · 10 min read
Participants of the Dartmouth Summer Research Project on Artificial Intelligence. Some of the participants include Marvin Minsky, Claude Shannon, and Ray Solomonoff || Photo Source: Achievement.org

This article has been co-written with Abhishek Gupta, AI Ethics Researcher at District 3 and McGill University

District 3’s AI District has a mission to explore the impact of AI on society, provide recommendations on the ethical development of AI, identify ways the technology could be misused, and find ways to leverage novel technologies to enhance standards of living. The aim of this series of articles is to assess how AI will impact the financial sector. We continue our series with a brief history of how AI has been used in the financial sector.

Many of the current issues discussed in the field of artificial intelligence are the same ones that were brought up 50 years ago, and in certain cases close to 90. For example, if you look into the history of automation, there has always been a concern that it would lead to job loss. In fact, just before the Great Depression of the 1930s, The New York Times published an article speculating that automation could cause a recession.

In 1928, The New York Times published this article, which argued that automation was creating unemployment || Photo Source: The New York Times

There was also concern in the financial services field. In the 1950s, there was a lot of excitement about the potential of automation, as it could help accountants and bankers speed up their work by analyzing, calculating, and processing at an efficiency not possible for humans. The huge growth in paperwork after World War II created a growing need for systems that could automate some of the work [1]. However, researchers also recognized the risk that it could lead to job loss [2].

The financial sector was one of the first domains to drive interest in using artificial intelligence, even before high-powered computing machines were available. In the 1960s, a lot of research focused on Bayesian statistics, a method now used heavily in machine learning. Its use cases included stock market prediction and auditing. It wasn’t until the 1980s that the majority of commercialization opportunities were explored, largely through expert systems. During that time, over two-thirds of Fortune 1000 companies had at least one AI project in development [3].

For the scope of this article, we will focus on research done after the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which for many marks the official creation of the artificial intelligence field. We will also focus on the periods 1956–1974 and 1982–1994, when AI research funding was abundant and the subject was popular. If you are interested in learning more about the AI Winter, we have provided some links in the reference section. We will also limit ourselves to a small number of AI techniques, primarily those that were the most popular at the time. This can help us predict the expected response to future trending AI technologies (for example, there has recently been a lot of interest in Generative Adversarial Networks and capsule networks).

Statistics-based modelling and the rise of modern-day finance (Bayesian modelling)

Most of the AI research done in the 50s and 60s did not focus on the financial field. However, a lot of the mathematics now used in AI solutions originated in the early 1900s. The use of advanced mathematics in finance began with Louis Bachelier’s 1900 thesis Théorie de la Spéculation (Theory of Speculation), recognized as one of the first papers to explore mathematics as a method to evaluate stocks. With Bachelier’s work, the rise of statistical modelling marked the beginning of primitive AI in the financial world.

Louis Bachelier, the godfather of advanced financial mathematics || Photo taken from Images des Mathématiques

During the 1960s, the application of Bayesian statistics became quite popular due to the work of Robert Schlaifer. Schlaifer’s contribution to the field of statistics was in the domain of Bayesian decision theory: in other words, using Bayesian statistics to make informed decisions based on probabilities. Schlaifer’s background was not in mathematics; however, he was very good at applying the technique to different use cases. In 1959, he wrote a well-received book, Probability and Statistics for Business Decisions, which helped increase the popularity of the domain and led to more research exploring the potential of statistics in the business world [4].

After the release of Schlaifer’s book, the number of publications exploring Bayesian methods in different domains increased. For example, John A. Tracy explored how they could be used for auditing. His research, published in 1969, argued that auditors already used a Bayesian approach in their assessment of companies, and that it was therefore feasible to create a mathematical model to help auditors accurately assess the value of assets [5]. Tracy’s work was backed by another researcher of the time, James E. Sorensen, who also applied Bayesian analysis to auditing. His conclusions were similar to Tracy’s: the predictive power of Bayesian statistics should be another tool accountants use to help improve their judgments [6].
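The kind of Bayesian updating Tracy and Sorensen describe can be sketched in a few lines. This is a minimal illustration, not a reconstruction of their models: the function is plain Bayes’ rule, and every probability below is an invented example value.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical scenario: an auditor starts with a 5% prior belief that an
# account balance is materially misstated. A sample test then flags an
# exception. Assume (for illustration) such an exception appears in 60% of
# misstated accounts but only 5% of clean ones.
posterior = bayes_update(prior=0.05,
                         likelihood_if_true=0.60,
                         likelihood_if_false=0.05)
print(f"Posterior probability of misstatement: {posterior:.2f}")
```

Even a weak prior can shift substantially after one piece of evidence, which is exactly why these researchers saw the method as a useful supplement to, rather than a replacement for, auditor judgment.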

Bayesian statistics are still used today, often by quantitative analysts. However, from our perspective, they still play a role similar to the one suggested back in the 1960s: a tool to have, but not the only one used when doing an assessment.

The 80s and the rise of Expert Systems

The parallel inference machine was one of the outputs of the Fifth Generation Computer project. This system specialized in knowledge information processing || Image Source: IPSJ Computer Museum Website

During the 80s, the Japanese government created the Fifth Generation Computer Project. Japan wanted to prove that it could be a leader in the domain of computer technology, as it had historically had to play catch-up to the progress made in the UK and the US. Over a period of 10 years, Japan invested $400 million in the initiative. Many countries responded by creating their own funding programs, including the Alvey Programme in the UK, which invested £350 million over 5 years, and DARPA’s Strategic Computing Initiative, which spent $1 billion over 10 years.

Expert Systems in the Finance Field

During the 1980s, multiple techniques were used in building AI for finance, from artificial neural networks [7] to fuzzy systems [8]. However, a lot of the interest was in knowledge-based systems, or expert systems. Expert systems were created and popularized in the 1960s and 1970s through their use in DENDRAL, a system that helped determine the molecular structure of unknown compounds, as well as MYCIN, a system that could diagnose meningitis. However, it wasn’t until the 1980s that the majority of commercialization attempts were made in the financial field. For example, DuPont had built 100 expert systems which helped save the company close to $10 million a year [3].

One of the first programs hypothesized for market prediction was the Protrader expert system, designed by K.C. Chen of the School of Business at California State University and Ting-Peng Liang of the University of Illinois. Chen and Liang were able to predict the 87-point drop in the Dow Jones Industrial Average in 1986 using their system (although this could have been an overfitting error). The major functions of the system were to monitor premiums in the market, determine the optimum investment strategy, execute transactions when appropriate, and modify the knowledge base through a learning mechanism [9].
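The core idea behind an expert system of this era, encoding an expert’s knowledge as if-then rules fired against facts in working memory, can be sketched very simply. The rules and thresholds below are invented for illustration and are not Protrader’s actual knowledge base, which was far larger and more detailed.

```python
# Toy forward-chaining rule engine in the style of a 1980s trading
# expert system. Facts live in a dictionary; each rule is a condition
# over the facts plus a conclusion merged back into them.
RULES = [
    ("premium-high",
     lambda f: f["futures_price"] - f["fair_value"] > f["threshold"],
     {"signal": "sell-futures-buy-stocks"}),
    ("premium-low",
     lambda f: f["fair_value"] - f["futures_price"] > f["threshold"],
     {"signal": "buy-futures-sell-stocks"}),
]

def infer(facts):
    """Fire every rule whose condition holds and record what fired."""
    facts = dict(facts)
    for name, condition, conclusion in RULES:
        if condition(facts):
            facts.update(conclusion)
            facts.setdefault("fired", []).append(name)
    return facts

result = infer({"futures_price": 252.0, "fair_value": 250.0, "threshold": 1.5})
print(result.get("signal"))  # premium above threshold -> sell-futures-buy-stocks
```

A real system like Protrader layered many such rules with an inference engine, an explanation facility, and, as the article notes, a learning mechanism that modified the knowledge base over time.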

Image Source: Chen and Liang (1989), Protrader: An Expert System for Program Trading

Expert systems and decision support systems were seen to have different applications in auditing. One of them, EDP-XPERT, was developed in 1981 by Hansen and Messier. The system performed well on case studies but not during actual audits; its main limitation was the technical and upkeep costs associated with running it [10].

One of the biggest drivers of expert system development was providing tailored financial plans. One of the first such systems, PlanPower, created by Applied Expert Systems (APEX), was conceptualized in 1982 and commercially shipped in 1986; it provided tailored financial plans for individuals with incomes over $75,000. The second iteration of their product was a more robust version called the “Client Profiling System”. This product was used by companies in the insurance, banking, and brokerage sectors and was licensed for $100,000 per year. It could provide a financial plan for incomes between $25,000 and $200,000 [11].

Another system, developed by Chase Lincoln First Bank and Arthur D. Little Inc., was able to do investment planning, debt planning, retirement planning, education planning, life-insurance planning, budget recommendations, income tax planning, and savings planning for other major financial goals. If a client was interested in receiving a report, it would cost them $300 [11].

The possibility of using AI for financial fraud detection garnered a lot of interest in the 1990s. Due to the nature of the crime, it was hypothesized that an intelligent system would be able to sift through large amounts of data and identify discrepancies. One such system, sponsored by the U.S. Department of the Treasury, was the FinCEN Artificial Intelligence System (FAIS). The system, put into service in 1993, was used to detect incidents of money laundering [12].

The architecture of FAIS || Photo Source: AAAI FinCEN FAIS Report

The system used a combination of AI technologies, primarily rule-based and blackboard systems. The genius of the system was its capability to analyze large amounts of data quickly. At the time, all transaction reports were handwritten, so identifying patterns of money laundering required substantial human effort. FAIS was able to analyze these reports and identify discrepancies, which made it possible to review over 200,000 transactions per week. Over a period of 2 years, FAIS identified 400 potential cases of money laundering totaling $1 billion [12].
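The rule-based side of a screening system like FAIS can be pictured as scoring each large-cash-transaction report against simple suspicion heuristics and flagging high scorers for human review. This is a hedged sketch only: the field names, rules, weights, and thresholds below are all invented, not taken from the actual FAIS design.

```python
def suspicion_score(report):
    """Score one transaction report against illustrative rules."""
    score = 0
    # Structuring: amounts placed just under the $10,000 reporting threshold.
    if 9000 <= report["amount"] < 10000:
        score += 3
    # Unusually frequent large cash deposits by one subject.
    if report["deposits_this_month"] > 10:
        score += 2
    # Large cash is more suspicious for a non-cash-intensive business.
    if not report["cash_intensive_business"]:
        score += 1
    return score

reports = [
    {"amount": 9500, "deposits_this_month": 12, "cash_intensive_business": False},
    {"amount": 4000, "deposits_this_month": 2, "cash_intensive_business": True},
]
# Flag reports whose score crosses an (invented) review threshold.
flagged = [r for r in reports if suspicion_score(r) >= 4]
print(len(flagged))  # only the first report is flagged for review
```

The value of such a system is triage: machines apply the cheap rules to every report so that scarce investigators only see the small fraction worth a closer look.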

What led to the decline of Expert Systems in finance?

Many things caused the failure of expert system adoption in the industry, and you can see a lot of similarities with the use of AI today. The expectations of expert systems did not meet reality: they were either too complicated to use or did not match the client’s needs. Developers also assumed that AI could solve all problems without asking whether it was a good use case [3].

For example, General Electric created an expert system called the Commercial Loan Analysis Support System (CLASS) to assist in the evaluation of commercial loan applications. After testing the system, GE secured an agreement with two large New York firms to beta test it. Even though the system performed exactly as intended, there was no established person within the firms to manage it. It required the banks’ financial analysis departments to enter the necessary data, and unfortunately, that became a low priority for them. Efforts to restart CLASS failed and the project died [13].

Lessons learned on avoiding another AI winter

Although there likely won’t be another AI winter at the same scale as in the past, we need to make sure we don’t repeat the same mistakes || Photo found on 4ever.eu

Surprisingly, not much historical research exists on AI in the financial services sector. This shortage of information could cause us to make the same mistakes as our predecessors. What strategy can the present generation of companies take so that the technology meets expectations? Before deciding to use AI in your company, consider the following:

  • Are you applying AI assuming it can solve all your problems?
  • Is the AI technique you are using a good fit for your use case? For example, many companies are now exploring deep learning. Unfortunately, building a good model requires a lot of data, which you may not have.
  • Will your customers want to use your product? Does it offer a good user experience?

Hopefully, with this historical perspective, we can shed light on current trends, things to look out for, and what the future could hold.

The next article in this series will focus on the AI and finance ecosystem. This will be followed by an article discussing the integration barriers of using AI in the finance sector.


Thanks for reading and feel free to share your thoughts in the comments section!

For more information on the work that Abhishek Gupta does, please visit his website

If you’d like to follow Sydney Swaine-Simon, please visit his website


Additional Reading on the AI Winter:

References:

[1] Keenoy (1958), The Impact of Automation on the Field of Accounting

[2] Edwards (1959), The Effect of Automation on Accounting Jobs

[3] Durkin (2002), Expert Systems: History and Applications

[4] Green (1963), Bayesian Decision Theory in Pricing Strategy

[5] Tracy (1969), Bayesian Statistical Methods in Auditing

[6] Sorensen (1969), Bayesian Analysis in Auditing

[7] Department of Economics, University of California (1988), Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns

[8] Buckley (1987), The Fuzzy Mathematics of Finance

[9] Chen and Liang (1989), Protrader: An Expert System for Program Trading

[10] Messier and Hansen (1992), A Case Study and Field Evaluation of EDP-XPERT

[11] Brown et al. (1990), Expert Systems for Personal Financial Planning

[12] Goldberg et al. (1995), The FinCEN Artificial Intelligence System: Identifying Potential Money Laundering from Reports of Large Cash Transactions

[13] Duchessi and O’Keefe (1992), Contrasting Successful and Unsuccessful Expert Systems

District 3

Based at Concordia University, we work closely with the Montreal innovation ecosystem to ensure every innovator and entrepreneur can create an impact.


Sydney Swaine-Simon

Written by

Cofounder @ District 3, Cofounder @ NeuroTechX . Lover of Innovation, AI, NeuroTech, Entrepreneurship with the goal of improving human kind.

