Raising Artificial Intelligence (AI)

A series on building AI for Stock & Securities Trading, Part I.

Growing up, when we saw that first blinking cursor on our Apple ][ or our TRS-80 (or any other early-days computer), we were constantly told that “Artificial Intelligence” was “right around the corner” and “maybe 10 years, at the most, away from reality”.

Wikipedia has a thorough and thoughtful timeline about AI. There are several key points which, for the purposes of this article, must be discussed before we dig into the actual process of raising an AI.

A Brief History of AI


Between 1910 and 1913, Bertrand Russell and Alfred North Whitehead essentially created (although some snooty historians would say “revolutionized”) formal logic in their Principia Mathematica.

Seven years later, around 1920, Ludwig Wittgenstein, who had worked closely with Russell, built the philosophical constructs which laid the foundation for Alonzo Church to construct what — in my opinion — is the keystone of Artificial Intelligence: the Lambda Calculus.

In 1943, Warren Sturgis McCulloch and Walter Pitts postulated and provided something resembling code, built on Lambda Calculus, for the first artificial neural networks.

From there, it was a short hop to 1944, when John von Neumann and the economist Oskar Morgenstern formalized game theory (a framework AI would later lean on heavily) in Theory of Games and Economic Behavior.

In 1950, Alan Turing came up with the Turing Test, which clearly gets a lot of attention and which got people thinking about what it means to be an “AI”.

Then, in 1958, something miraculous happened: John McCarthy created Lisp, which — if you are not familiar — is the fundamental underpinning of everything functional and reliable in the world of AI programming and development.

Python’s best ideas? Lisp.

R’s incredible reductions and mapping? Lisp.

Lutz Mueller later created his own dialect of Lisp, NewLISP, which — to this day — remains my favorite version of Lisp ever created. I even maintain the (admittedly outdated) GitHub repository for Artful Code and have written many projects in NewLISP since I fell in love with it in 2004.

Any language expressive enough to create anything even resembling Artificial Intelligence either stole its mechanisms outright or borrowed them heavily from Lisp.

Programming in Lisp in 1960 would have been like owning a Tesla Model S in 1960: so far advanced beyond any other language of the time that it might as well have been sent from the future. Even today, the idea of code being a list, and of lists being mapped and reduced into whatever you need… it's magical.
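Python, fittingly, kept map and reduce from that Lisp lineage, so the flavor of the idea fits in a few lines. The prices below are made up purely for illustration:

    from functools import reduce

    prices = [101.5, 99.2, 103.8, 100.1]  # made-up closing prices

    # map: transform a list element-by-element (here, returns relative to the first price)
    returns = list(map(lambda p: (p - prices[0]) / prices[0], prices))

    # reduce: fold a list down to a single value (here, the sum of those returns)
    total = reduce(lambda acc, r: acc + r, returns, 0.0)

    print(returns, total)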

But, I digress…

I will leave it to the reader to go through the timeline on Wikipedia if they so choose, as it is vast and detailed, but I do want to point out three more events which made a giant impact on functional Artificial Intelligence:

  1. In 1981, just a year after the First National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford, Danny Hillis designed the “connection machine”, a massively parallel architecture later commercialized by his company, Thinking Machines Corporation.
  2. In 2002, the iRobot Roomba is born and, while it may seem insignificant, it ushers in a whole universe of people who can now tangibly see the possibility of AI.
  3. In 2011, Apple releases Siri, the same year IBM’s Watson beats two of the best human players at Jeopardy.

And, no, I did not mention Terminator… not because it isn’t a great movie, but because it did not catapult the world of AI into the next decade as the above three examples did.

AI Is Just So F’n Stupid…

It is now 2019. We know far too much about neural networks and about how AI should work.

We can train a system to recognize an obstacle and even avoid it. We can build a neural network which can answer basic questions (“What color is the sky?”) and even some complex ones (“Who is the drummer for the Beatles?”).

We can create lexical parsers, common-sense and rule-of-thumb knowledge bases (like Cyc, from Cycorp, founded by Douglas Lenat), complex systems with complex decision trees and so much more.

Yet, in the midst of all this, there are two incredibly elusive targets. Both are necessary and together sufficient to manifest what we would consider a functional, usable Artificial Intelligence. Both are also, quite oddly, very abstract to most programmers.

Heuristics

HackerEarth.com/csec-heuristics-0

Wikipedia defines heuristics as:

Any approach to problem solving or self-discovery that employs a practical method, not guaranteed to be optimal, perfect, logical, or rational, but instead sufficient for reaching an immediate goal. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples that employ heuristics include using a rule of thumb, an educated guess, an intuitive judgment, a guesstimate, profiling, or common sense.
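To make that concrete in the trading context this series is heading toward, here is a deliberately naive rule-of-thumb written as code. The thresholds, and the very idea of an “oversold” screen, are hypothetical; they exist only to show the shape of a heuristic, not anything our system actually uses:

    def looks_oversold(price_change_pct, avg_volume, today_volume):
        # Heuristic: a sharp drop on unusually heavy volume *might* mean panic selling.
        # Not optimal, not guaranteed, not even necessarily rational: just a fast,
        # "good enough" shortcut, exactly as the definition above describes.
        return price_change_pct < -5.0 and today_volume > 2 * avg_volume

    print(looks_oversold(-7.2, avg_volume=1_000_000, today_volume=2_500_000))  # True
    print(looks_oversold(-1.3, avg_volume=1_000_000, today_volume=900_000))    # False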

Meaning

Meaning from Nomensa

When I say meaning, I am referring to it in the ontological sense of the word. I am talking about what it means to “be” and the idea of existing.

This is incredibly important because being is about history, about experience, about context and about the relationship between the object of being (the AI in this case) and every other object or concept around that being.

Heuristics & Meaning: Two sides of the AI sword


Take a moment to consider this possibility.

We, as individuals, are not only the sum of all our experiences, but we exist completely in relationship to everything around us.

Everything.

We are not atomic or isolated. Our very existence is in relationship to the rest of the universe. Our joy is derived from external factors. Our pain from external sources.

And, while we may manifest said joy and pain internally, through our own thoughts, we are a reactive engine based on our heuristics and our meaning.

Why, then, is it so unbelievably common for everyone creating an “AI” to start with an empty shell (i.e. a neural network) and then “train” that system on the branches (i.e. decisions) made, usually by humans, as information is processed into that empty shell?

Meaning: the shell starts empty, then becomes a compilation of human experiences, trained by others’ heuristics, and it derives its meaning from the world it is trained against.
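A minimal sketch makes the point. Below is the classic “empty shell” recipe in miniature: a single perceptron with random weights, nudged toward whatever labels a human supplied. The features (call them momentum and a volume spike) and the buy/don’t-buy labels are invented for illustration:

    import random

    # The "empty shell": one artificial neuron with random weights and zero experience.
    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = 0.0

    # Human-labeled decisions the shell is trained against (features, label).
    training_data = [
        ([0.9, 0.8], 1),
        ([0.1, 0.2], 0),
        ([0.8, 0.3], 1),
        ([0.2, 0.9], 0),
    ]

    def predict(x):
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if activation > 0 else 0

    # Perceptron learning rule: every "experience" the shell ends up with is really
    # a correction supplied by the human-provided labels.
    for _ in range(20):
        for x, label in training_data:
            error = label - predict(x)
            weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
            bias += 0.1 * error

    print([predict(x) for x, _ in training_data])  # matches the human labels: [1, 0, 1, 0]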

Building AI that actually works…


If you believe the above, the logical conclusion is conceptually simple but endlessly difficult to implement. The Artificial Intelligence would have to be born with no knowledge other than a basic understanding of self (i.e. breathing without thinking about it and perceiving without knowledge of what perception means), and then it would have to acquire all knowledge — from heuristics to meaning — as it grows from nothing into a realized being.

Implementing this is difficult, if not impossible, without massive financial resources, incredible computational resources, enormous development resources and overwhelming amounts of information.

Yet, this is the only path to building true Artificial Intelligence rather than brittle systems which break when lexical problems and ontological inconsistencies are passed through their “neural network” understanding of the world around them.

© That’s Really AI

This means the AI must, from the beginning, have no knowledge of a language.

Like us… the AI must learn a language. It must learn good and bad and right and wrong and loving and hateful and everything we learned as organic engines along the way.

Ironically, without a physical self, there are huge shortcuts in this learning process, but there are also huge gaps, as the AI cannot really know what it means to “walk” or grasp other concrete, physical concepts. For those concepts, the AI must approximate the knowledge as best it can.

But, again I digress…

AI and Trading Stocks


I’ve spent my entire career building what was once called “expert systems” and then “agents” and then “machine learning” and then “big data” and then “artificial intelligence”. Everything I’ve ever touched in my career has been, at the center of it all, about creating autonomous software which could learn, react, function and perform without human intervention. From the first network security company to the natural language question-answering system to the autonomous stock market trading platform, it’s all predicated around the idea of removing the human from repetitive tasks and letting the technology perform those tasks.

In other words, I — like almost all programmers — am fundamentally lazy and want the computer to do the “boring, repetitive stuff” for me.

The stock market, however, functions in a far more complex way than simply “stopping hackers” or “answering questions”. The stock market, with all its intricate details, is a complex, multi-layered system which behaves like high school social circles, a worldwide game of telephone, a high-speed auction house and every season of Gossip Girl combined.

Trading stocks is a complex business:

  1. Elements of an auction house and a spread between buy and sell prices.
  2. Elements of fear and greed.
  3. Elements of superstition.
  4. Elements of pure emotion.
  5. Elements of history and historical changes.
  6. Elements of fundamentals for every company and sector.
  7. Elements of mathematics and charts and graphs.
  8. Elements of global politics and news.

And none of this includes the technology-related problems of capturing quotes and other information, storing them and actually placing trades through a brokerage.
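Even that “plumbing” layer has a definite shape: capture a quote, store it, decide, place the order. The sketch below shows only that skeleton; every value, threshold and function here is a placeholder, not our actual stack:

    import time
    from dataclasses import dataclass

    @dataclass
    class Quote:
        symbol: str
        bid: float        # best price a buyer will currently pay
        ask: float        # best price a seller will currently accept
        timestamp: float

    def fetch_quote(symbol):
        # Placeholder for a real market-data feed; the numbers are fabricated.
        return Quote(symbol=symbol, bid=100.00, ask=100.05, timestamp=time.time())

    def store_quote(quote, db):
        # Placeholder for a real time-series store; here, just an in-memory list.
        db.append(quote)

    def maybe_place_order(quote, orders):
        # Placeholder decision plus "brokerage" call; a real system sits behind risk checks.
        spread = quote.ask - quote.bid
        if spread < 0.10:  # hypothetical liquidity threshold
            orders.append(("BUY", quote.symbol, quote.ask))

    db, orders = [], []
    q = fetch_quote("LAD")
    store_quote(q, db)
    maybe_place_order(q, orders)
    print(len(db), orders)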

Given all this, it is no wonder that:

  • most new traders lose, on average, 38% of their portfolio in the first year
  • most hedge funds fail in the first 18 months
  • most hedge funds take 4 years before they are profitable
  • most quant funds lost 14% in 2018
  • more hedge funds were disbanded last year than were started in the previous 2 years

Trading stocks is complex. It is not a computer problem. It is a very human problem, where one person can track — at most — approximately 20 companies, and the average success rate on those 20 is about 2 out of 5.

Yet, there are around 5,500 stocks in the US Stock Market (across all exchanges) which can be traded each day, not including commodities, currencies and over-the-counter (OTC) securities.

AI and Trading Stocks on “Emotion”

How, then, is it even possible that one could build an AI which could trade stocks?

I am reminded of the old joke about starting a company in Silicon Valley (or a Winery, or many other things, as the joke varies by industry).

Question:

How do you make a small fortune in a startup in Silicon Valley?

Answer:

Start with a large fortune.

I should also start by saying, “I cheated.” My partner is someone with several degrees and a strong background in psychology and child development. She is, quite literally, trained to understand how the human mind works as it assimilates information and what can go wrong during the assimilation or processing of that information.

She also calls our AI “Hal”, and I can never tell whether she’s making a positive comment about how advanced it is compared to other AI systems or warning me that it will, inevitably, someday kill me by sending me out of an airlock and into space.

But, I digress…

Hal 9000 “Eye” from FissionMetroid101

When we first started creating the system, we knew we could not just “trade stocks” (see above); we would have to build a system not only with a priori knowledge of everything, but also with a posteriori knowledge.

Meaning: the AI would not only need the concepts of the world available to it, based on the numbered list above (e.g. fear and greed), but would also need a historical or heuristic understanding of those concepts, as well as an understanding of what those concepts mean.
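One way (of many) to picture that requirement is a structure that keeps the bare, a priori concept separate from the heuristics and relationships it accumulates a posteriori. This is purely illustrative, not the representation our system actually uses, and both entries added below are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        name: str
        definition: str = ""                             # a priori: the bare concept
        heuristics: list = field(default_factory=list)   # a posteriori: lessons from history
        relations: dict = field(default_factory=dict)    # meaning: ties to everything else

    greed = Concept("greed", definition="wanting more than is needed")
    # Hypothetical entries, added only to show where such knowledge would live.
    greed.heuristics.append("parabolic price rises on rising volume often precede reversals")
    greed.relations["fear"] = "opposite pole of the same sentiment axis"

    print(greed.name, len(greed.heuristics), list(greed.relations))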

After all, what does greed even mean to an AI?

I would argue the answer is unknowable to us as humans.

We do not and can never process information the same way an AI does. So why, then, is it always the case that we try so hard to jam our thinking into the AI’s perspective of the world around it? Why do we “create in our own image” instead of allowing the AI to figure it out for itself?

As a teenager in the 1980s, I can assure you, my parents would let me and my friends wander aimlessly for hours each summer. If we were hurt, we’d be far from home and “figure it out”. And I personally believe I’m better for having gone through that.

This is why our AI started empty, without any knowledge of anything, including stocks and trading.

This is why our AI has every piece of recorded and digitized information from 1989 to right now, where “right now” is defined as microseconds ago, not days ago.

This is why our Amazon AWS bill averaged hundreds of thousands of dollars per month for far too many months when the AI was “learning”.

This is also why, from our perspective, we saw over 18% returns in January 2019, far outpacing the S&P 500 during its best January in over 30 years.

I believe there are no shortcuts in creating functional AI.

I also believe the world can benefit from our experience in many ways (one of which is financial and for the investors in our private Hedge Fund).


P&L For a Single Trade of “LAD” on February 13, 2019

I can’t wait to tell you more and share other examples.

