Automating High-Level Economic Thinking using Deep Learning

Carlos E. Perez
Published in Intuition Machine
Dec 16, 2017



with Saurabh Mishra

The explosion in big data and Deep Learning (DL) has delivered many successes across domains such as autonomous driving, image recognition, natural language processing (NLP), handwriting recognition, and scientific exploration, including the study of galaxies, experimental high-energy physics, and molecular drug design. However, these technologies have so far seen only limited application to some of the most persistent social and economic challenges of our times. How do we bridge the gap between economic decision-making and the state-of-the-art data analytics found in the field of Deep Learning? This article explores that question from the perspective of policy and investment decision-makers.

Economic decision-making requires a highly non-linear, complex, and dynamic thought process. Take the case of facial recognition in vision problems: a person's face changes very little over time. Economic systems, on the contrary, are embedded with a colossal number of dynamic components, including agents, institutions, and constructs, that vary over time. Many of these components are unaccounted for in statistics, and there are dependencies among them that are not well understood. For example, a political crisis in one country can trigger an economic crisis in another. The global financial crisis of 2008, which stemmed from a single component of the system, complex credit default swaps in the US mortgage industry, brought down the global economy. The failure of one "too big to fail" entity, Lehman Brothers, triggered a cascade of failures that wiped out more than 7 million jobs and $22 trillion from the global economy, impacts the world is still dealing with today, a decade later.

The positive attribute of deep neural networks is that they learn highly non-linear approximation functions between the input and output layers, which is exactly what highly complex tasks demand. The uncertainty in the DL approach is that we have little insight into what the connections between nodes in the hidden layers represent. Simple linear regression, on the one hand, has very clear interpretability but terrible accuracy in such instances; DL, in its current form, provides state-of-the-art accuracy but terrible interpretability. These trade-offs make decision-makers skeptical of adopting DL in their domains.
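To make the trade-off concrete, here is a minimal sketch using scikit-learn on synthetic data (all names, numbers, and data are illustrative assumptions, not from any cited study): a linear model keeps readable coefficients but misses a non-linear relationship that a small neural network captures.

```python
# Minimal sketch of the accuracy/interpretability trade-off.
# Synthetic, made-up data; results are indicative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=2000)  # non-linear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)  # interpretable coefficients
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("linear R^2:", r2_score(y_te, linear.predict(X_te)))  # near zero here
print("MLP R^2:   ", r2_score(y_te, mlp.predict(X_te)))     # far higher
print("linear coefficients:", linear.coef_)  # a readable story
# The MLP's weight matrices offer no comparably readable story.
```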

Automation of Economic Modelling using Deep Learning

DL may offer opportunities to enhance economic modeling capabilities. Standard economic models can be grouped broadly into two categories. The first comprises models based on equilibrium conditions, such as Dynamic Stochastic General Equilibrium (DSGE) and Computable General Equilibrium (CGE) models. These models make considerable assumptions, including market-clearing conditions; in reality, a complex, non-linear system such as the world economy does not operate at an equilibrium where supply and demand meet. The second approach analysts take is data-driven, traditionally via Gaussian state space and time-series models such as Kalman filters, VAR, GARCH, ARMA, etc. Such approaches are limited to an n-th-order Markov process with short- or long-term time-dependent feedbacks. These methods are so widely used that adopting any new method will require a shift in organizational culture and perception.
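For a flavor of that traditional toolkit, the sketch below fits a low-order ARMA model with statsmodels to a synthetic AR(1) series (an invented stand-in; in practice the series might be GDP growth or inflation):

```python
# A minimal sketch of the traditional data-driven approach:
# fit ARMA(1,1) to a synthetic short-memory (Markov) series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):                    # simulate an AR(1) process:
    y[t] = 0.7 * y[t - 1] + rng.normal()   # today depends only on yesterday

results = ARIMA(y, order=(1, 0, 1)).fit()  # ARMA(1,1), no differencing
print(results.summary())                   # interpretable coefficients
print(results.forecast(steps=4))           # linear-Gaussian extrapolation
```

The model is transparent but, by construction, can only extrapolate the linear, short-memory dynamics it assumes.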

These traditional statistical approaches have other limitations too: (a) they cannot exploit high-dimensional, non-linear data, and such data often does not even exist for these instances; (b) heavy parameterization based on prior beliefs makes them not very scalable and leaves them dependent on our belief system; and (c) last but not least, all of these models have largely failed to accommodate the complexity of the real world. These modeling approaches have failed to predict important economic events, such as the global financial crisis of 2008 and many large-scale structural shifts in economies, and have failed to accurately represent mass movements in nations or financial markets. For example, a recent Economist article ("A mean feat") showed that a random number generator performed marginally better than the IMF's DSGE model forecasts of country-level GDP growth.

Levels of Automation

In a previous article about Embodied Learning, we mentioned Judea Pearl's classification of the different capabilities of causal analysis. Most of machine learning is stuck at Level 1, where there is a static presentation of data and learning is motivated by the notion of curve fitting and optimization. It is just ridiculously primitive, yet a majority of data science and statistics is based on this primitive notion. However, with Deep Learning, we can move to a more advanced form of data analytics. If you look at the levels, you will see that Judea Pearl's Level 3 classification is exactly what a scientist does to analyze data. In Deep Learning, we are essentially automating this Gedanken experiment. Of course, we are still a long way from reaching the general intelligence of the human mind. However, this prescription with regard to machine learning can give you an idea of where Deep Learning presently stands in relation to the more traditional levels.

It is also instructive to understand that there exists a spectrum of automation, and it is illuminating to distinguish its different varieties. For this, we can learn from SAE International (formerly the Society of Automotive Engineers). SAE maintains an international standard that defines six levels of driving automation (SAE J3016), which can be useful for classifying the levels of automation in domains other than self-driving cars. A broader prescription is as follows:

Level 0 (Manual Process)

The absence of any automation.

Level 1 (Attended Process)

Users are aware of the initiation and completion of each automated task. The user may undo a task in the event of incorrect execution. Users, however, are responsible for the correct sequencing of tasks.

Level 2 (Attended Multiple Processes)

Users are aware of the initiation and completion of a composite of tasks; the user, however, is not responsible for the correct sequencing of those tasks. An example would be booking a hotel, car, and flight together, where the exact ordering of the bookings may not be a concern of the user. Failure of such a composite task, however, may require more extensive manual remedial actions. An unfortunate example of a failed remedial action is United Airlines' "re-accommodation" of a paying customer.

Level 3 (Unattended Process)

Users are notified only in exceptional situations and are required to do the work in those conditions. An example is a system that continuously monitors the security of a network; practitioners take action depending on the severity of the event.

Level 4 (Intelligent Process)

Users are responsible for defining the end goals of the automation; however, all aspects of process execution, as well as the handling of in-flight exceptional conditions, are handled by the automation. The automation is capable of performing appropriate compensating actions in the event of in-flight failure. The user, however, is still responsible for identifying the specific contexts in which the automation can safely be applied.

Level 5 (Fully Automated Process)

This is a final, future state in which human involvement is no longer required in the process. This, of course, may not truly be the final level, because it does not assume that the process is capable of optimizing itself to make improvements.

Level 6 (Self Optimizing Process)

This is automation that requires no human involvement and is also capable of improving itself over time. This level goes beyond the SAE requirements but may be required in certain high-performance competitive environments such as robocar races and stock trading.

In general, we can apply Deep Learning technologies at different levels of automation of the process of economic analysis. Each level requires greater sophistication to achieve; however, the levels above are a good map for finding where we can introduce automation into our own workflow processes, as the sketch below illustrates.
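As a purely illustrative encoding (the AutomationLevel class and helper below are hypothetical, not part of any standard), the levels can be made machine-readable so that each step of an analysis pipeline is tagged with its degree of automation:

```python
# Hypothetical encoding of the automation levels described above.
from enum import IntEnum

class AutomationLevel(IntEnum):
    MANUAL = 0              # no automation
    ATTENDED = 1            # user sequences and supervises each task
    ATTENDED_MULTIPLE = 2   # composite tasks; system handles sequencing
    UNATTENDED = 3          # user intervenes only on exceptions
    INTELLIGENT = 4         # user sets goals; system handles exceptions
    FULLY_AUTOMATED = 5     # no human involvement
    SELF_OPTIMIZING = 6     # improves itself over time

def needs_routine_attention(level: AutomationLevel) -> bool:
    """Levels 0-2 keep the user in the loop during normal operation."""
    return level <= AutomationLevel.ATTENDED_MULTIPLE

print(needs_routine_attention(AutomationLevel.UNATTENDED))  # False
```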

Deep Learning automation can also be classified in terms of providing assistive or generative capabilities to its users. A good example of assistive automation is the auto-focus capability found in today's cameras. An example of generative automation is an artistic style-transfer app, like the Prisma app on your smartphone. So, in the context of economic modeling, Deep Learning can assist in the exploration of data as well as provide more accurate model simulations using its generative capabilities. Thus, evaluating Deep Learning automation involves multiple dimensions.

Decision-Making and Deep Resource Allocation

In current practice, DL has been applied to real-world economic cases from perhaps this "shallow learning" perspective of Level 1 or Level 2 autonomy. It has been used, for example, in (a) regression problems: predictions for financial asset classes or broader macroeconomic outcomes; (b) classification and labelling problems: recent studies of mortgage risk (DL for Mortgage Risk); and (c) proxy indicators: satellite imagery has been used to map local-area poverty (Stanford Study), night-lights to assess power outages, and Twitter data to estimate traditional economic variables such as the unemployment rate or to track the contagion of viral flu epidemics.
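As a hedged illustration of category (b), the sketch below trains a small classifier on entirely synthetic loan features; it is a toy stand-in for, not a reproduction of, the cited mortgage-risk work:

```python
# Toy default-risk classifier on synthetic, made-up loan features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0.5, 1.2, n),    # loan-to-value ratio
    rng.uniform(500, 850, n),    # credit score
    rng.uniform(0.02, 0.08, n),  # interest rate
])
# Synthetic default labels driven mostly by LTV and credit score.
logit = 4 * (X[:, 0] - 0.9) - 0.01 * (X[:, 1] - 650)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0),
).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```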

Resource Allocation

Recent research at Google's DeepMind has explored the problem of resource allocation using Deep Learning methods. Economics, at its very essence, presupposes scarcity and is thus about the optimization of resource allocation.

There are many pressing resource-allocation challenges that decision-makers care about. For example, a policy-maker who wants to mitigate the risks of climate change wonders which locations, technologies, and communities to invest in to make the country resilient to climate impacts. Investors holding multi-billion-dollar assets in locations around the world want to monitor how those assets might be affected by potential economic, financial, or political risks. Banks want an early-warning signal of an impending financial overheating so as to hedge or find arbitrage opportunities.
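At its simplest, such an allocation problem can be posed as constrained optimization. The toy sketch below (every figure is invented) splits a fixed budget across three hypothetical climate-resilience projects using SciPy's linear-programming solver; the DeepMind line of work tackles far richer, learned versions of the same underlying problem:

```python
# Toy budget allocation as a linear program; all numbers are made up.
from scipy.optimize import linprog

benefit = [-3.0, -2.0, -4.5]  # expected benefit per unit spent (negated,
                              # because linprog minimizes its objective)
A_ub = [[1, 1, 1],            # total spend across projects <= 100
        [0, 0, 1]]            # project 3 capped at 30
b_ub = [100, 30]
bounds = [(0, None)] * 3      # allocations must be non-negative

res = linprog(benefit, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("allocations:", res.x)          # spend on each project
print("expected benefit:", -res.fun)  # undo the negation
```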

More importantly, economically minded decision-makers are concerned with accounting for asymmetric information (certain agents have more information about the market than others), causality and endogeneity (which factors influence the output from within the workings of the model system), and multicollinearity (dependencies among the multivariate independent variables that can lead to spurious correlations and unreliable model forecasts). For DL to reach the broader audience of decision-makers who care about such important economic challenges, many links are still missing. First, no sufficiently large training data sets are available for real-world economic cases. Second, labels are not clearly defined in economic cases, unlike in image recognition tasks, where labels can be human-annotated. Third, the DL framework in its current form does not address concerns of causality, endogeneity, multicollinearity, and so on.
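As one example of the diagnostics economists expect but vanilla DL does not supply, the sketch below computes variance inflation factors with statsmodels on synthetic data (the variables and threshold are illustrative) to flag multicollinearity:

```python
# Flag multicollinearity with variance inflation factors (VIFs).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)  # nearly a copy of x1
x3 = rng.normal(size=n)              # genuinely independent

X = sm.add_constant(np.column_stack([x1, x2, x3]))
for i, name in enumerate(["const", "x1", "x2", "x3"]):
    print(name, variance_inflation_factor(X, i))
# x1 and x2 show VIFs far above the usual rule-of-thumb cutoff (~10),
# a warning that their individual coefficients are unreliable.
```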

There is a nascent yet growing body of literature exploring this new area of causal analysis within the confines of deep neural networks. The Functional Causal Model (FCM), which is a generative model, as well as structured variational approximators parameterized by recurrent neural networks for nonlinear state space models, are among the new approaches that open a window onto such tasks. Such model innovations have significance for high-level decision-making, which is even less well understood. It has been shown that human decision-making is limited by computational complexity, and given the resource constraints imposed on decision-makers, such new modeling approaches will require new theories of decision-making that do not currently exist.
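To give a feel for the second family of approaches, here is a toy PyTorch sketch, a drastic simplification of the published models with every dimension and hyperparameter invented, in which a recurrent network parameterizes an approximate Gaussian posterior over the latent states of a nonlinear state space model:

```python
# Toy recurrent variational approximator for a nonlinear state space model.
import torch
import torch.nn as nn

class RecurrentStateApproximator(nn.Module):
    """A GRU reads the observation sequence and, at each step, emits the
    mean and log-variance of a Gaussian over the latent state."""
    def __init__(self, obs_dim=1, latent_dim=2, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Linear(latent_dim, obs_dim)  # emission model

    def forward(self, y):  # y: (batch, time, obs_dim)
        h, _ = self.rnn(y)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(z), mu, logvar

# One training step on a synthetic sequence: reconstruction plus KL penalty.
model = RecurrentStateApproximator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
y = torch.sin(torch.linspace(0, 8, 100)).reshape(1, 100, 1)

opt.zero_grad()
recon, mu, logvar = model(y)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
loss = ((recon - y) ** 2).sum() + 1e-3 * kl
loss.backward()
opt.step()
print("loss:", float(loss))
```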

Causal Analysis and Decision Support

Given these limitations of DL, there is still hope that DL can address these challenges by providing a completely new paradigm through which to view some of the problems society faces. Economic theory suggests that AI will raise the value of human judgement: as predictions become cheaper, human beings will have the opportunity to weigh costs and benefits more carefully and make better judgements (see Aggarwal et al., 2017). These structural changes in computational capacity, large-scale data, and DL will not exclude human beings, but may instead empower high-level decision-makers, strengthening pre-existing norms and organizational culture.

DL will transform monitoring technologies for decision-makers interested in various economic resource-allocation problems. But the real question is: will DL resources end up further polarizing the "haves" and "have-nots", or will democratizing AI actually provide a paradigm shift that works to bridge such inequalities?

Watch for our next articles, in which we will share some of our ongoing R&D and ground-breaking results.

Exploit Deep Learning: The Deep Learning AI Playbook
