Compassionate Sentient Econometrics

Patrick Dugan
12 min read · Nov 19, 2017


The history of econometrics:

From

  • Gosplan

to

  • Greenspan

Well, first I’ll give a few relevant econometrics:

Money I am getting paid for writing this: $0

Avg. monetary value of my time (factoring in some needed upkeep for sleep and relaxation to maintain mental health): bling bling >$0

So sadly I will not be delving into a storied summary of this history the way I did with financial bubbles. At the end you’ll get to imagine a world where crowds of humans and AIs dynamically optimize the economy for the abolition of suffering; for 20th-century grasps at this, here are some resources:

So you’ve got the left-center publication tearing apart Allende’s econometrics for being too left-authoritarian and Mises.org tearing apart Greenspan’s econometrics for being too right-authoritarian; that should do it.

Before we discuss econometrics from a forward perspective, let’s consider cosmic horror. Did you know that the human brain has a limited ability to conceive data? The sensory information our brains take as input each second is filtered down to a meta-data map of conscious and semi-conscious recognitions; probably it’d be like a fractal map, where generative sub-meta-data is held at lesser degrees of fidelity — this is a pure memory hack. We lack the RAM, you see.

Likewise we have a processing hack that prioritizes our social relationships and attenuates them over time: we care less about people we don’t keep in touch with. We have a core group of 30 or fewer people we consider close friends and family (i.e. family you stay friends with), and another four groups of 30 to cluster our work, recreational, and other etc. sets of people. Let’s say, in the modern day, for a lot of people the weak links they kinda sorta have a virtual platonic relationship with constitute one more bucket. Hence, human compassion is necessarily limited and generally myopic.
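If you want the attenuation hack as a toy model, here’s one (purely my own illustration; the half-life and bucket names are invented, not from any cognitive-science source): a relationship’s salience decays with time since last contact, and the buckets are the ~30-person circles from above.

```python
# Toy sketch of the attenuation hack (my illustration; the half-life and
# bucket names are invented, not from any cognitive-science source).
def salience(days_since_contact: float, half_life_days: float = 90.0) -> float:
    """Attention weight in [0, 1] that halves every half_life_days."""
    return 0.5 ** (days_since_contact / half_life_days)

buckets = {
    "close_family_and_friends": 3,    # days since last contact
    "work_cluster": 12,
    "recreational_cluster": 45,
    "other_cluster": 180,
    "virtual_weak_links": 400,
}

for name, days in buckets.items():
    print(f"{name:26s} salience ~ {salience(days):.3f}")
```

Run it and the weak links come out at a few percent of a close friend’s weight: that’s the myopia, quantified badly on purpose.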

Human beings aren’t generally good or evil; we have to go deeper: some people have enough money to be able to help others, and everyone has access to limited sets of information. When people who can afford to help others get access to information, they can either decide to never help anyone, or they rationalize and form benefactor relationships with people in their circles. A smaller group of people give money to relative strangers, specific ones. Only very few people try to mass-engineer statistically-scaled solutions for the various problems that befall individuals; they don’t bother helping individuals, they just try to get rid of polio or whatever. When they do, they get accused of being racist eugenicists plotting with Monsanto to hoard the world’s heirloom seeds — you just can’t please people. Other people have tried to engage in massive campaigns to eradicate poverty the way modern philanthropists try to eliminate malnutrition; they’re called communists, and their results so far have been… sub-optimal.

For instance, in the Soviet Union the academic elite were conscripted to serve in Gosplan, a massive data-crunching operation along the lines of the RAND Corporation, the Federal Reserve, and all of Wall St. mixed together; it was ultimately unsuccessful at getting people to stop pretending to work for payment of pretend money… until people stopped pretending to work, which was also a dud.

Regardless of the efficacy of various grandiose schemes to solve problems at a statistically-scaled level, the impetus is the same: a realization at a profound level that something is very wrong and unless you do something that really fixes it, you will have to live with this painful awareness. That’s the cosmic horror moment. It keeps haunting you until you feel you’ve optimized your options for effecting a better result. Or you make a choice: no, I do not deserve to be burdened by this effort. I assume a lot of people make that choice.

It’s not just about people, it’s about any injustice. For instance, vegans recognize that industrial animal production/consumption is basically an Eli Roth movie on a mass scale every day, and when they see, like, a big ham hock cured and glazed on a rack, they don’t see a beautifully treated artisanal product, they see The Texas Chain Saw Massacre. If you ever encounter one online and they seem a little too intense, just remember they have GONE MAD BY THE REVELATION like a character in an H.P. Lovecraft story.

Marxists tend to experience the same thing and begin to see money and products as being like Soylent Green in that movie: made of people, ergo, horrifying. Yet forming a relationship with an abstraction of people in your mind can preempt other relationships with other abstractions of people, leading to ideological blind spots. E.g. Marxists who can’t admit when their own side does bad things, or vegans prioritizing animal suffering over human suffering (not saying this is true of all, or even many; maybe these opinions are just over-expressed relative to the population that holds them). Ideological blind spots are a whole other essay, which I won’t write because it sounds like flame bait.

Ideologies have been with us like Pokémon Go since perhaps hundreds of thousands of years ago, when rocks were gods or whatever. But the apprehension of *data*, which for our whole lives was such a sheltered shadow on our minds’ surfaces, is different: suddenly we conceptualize it starkly, and it shatters ideology and forces a re-org. Two billion cows become one person you care for intimately. The plight of billions of people becomes a focus of your concern. More than this, you briefly conceive of the *scale* of the horror that is plaguing your beloved set. Even this is too much, so you put it away while retaining the sigil of your cause like a flame in your heart, a hot pepper in your brain, bugging you for solutions.

I have some experience giving money to various people around the globe, mostly in Africa, plus people near me; I used to give to SE Asians but got scammed too hard. I generally like to groom peeps to the level of focus where they’re able to make money on their own, and to do any further help as an equity investment, but it’s harder than it sounds. I’ve gifted and forgiven a lot of loans. When I see a picture of a kid dying of liver cirrhosis because there’s no Hep A shot in his country, or a young single mother dying of breast cancer, or now that Facebook’s algos are spamming me with these emergency medical cases in India where the parents are there with the kid and they just need $300, I go for the lateral solution.

Actually, you help one person: you tip in some fraction of the $300 for an emergency surgery with an OK prognosis, and it’s a positive-expectancy way to do good. But I’m burned out; I’ve been doing that long enough that I get frustrated at sub-optimal deployment of resources. What I can’t stop thinking about is not throwing money at the current victims for expensive operations with a weak prognosis; I think about getting a Heptovit herb plantation going, which would actually solve the problem. And now we have a new data-set for econometrics: the ag-internals, the soil inputs, sourcing the seed stock; then there’s a supply chain, there’s sales data, there’s pricing logic around making it cheap for the people, and there’s how to scale production to get that economy of scale where all the people suffering from Hepatitis A and B can scrape together money for this treatment. Then it becomes an industry. Econometrics emerge.
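Here’s a minimal sketch of that economy-of-scale logic (every number is hypothetical, I am not quoting real Heptovit costs): fixed costs amortize over volume, and the game is finding the scale where the price per course falls below what a family can scrape together.

```python
# Minimal sketch of the economy-of-scale pricing logic (every number hypothetical).
def unit_cost(volume: int, fixed_costs: float, variable_cost: float) -> float:
    """Cost per treatment course: fixed costs amortized over volume, plus marginal cost."""
    return fixed_costs / volume + variable_cost

FIXED = 250_000.0   # plantation setup, seed stock, supply chain (made up)
VARIABLE = 8.0      # per-course growing/processing/distribution (made up)
AFFORDABLE = 15.0   # what a family can scrape together (made up)

for volume in (1_000, 10_000, 50_000, 100_000):
    cost = unit_cost(volume, FIXED, VARIABLE)
    verdict = "affordable" if cost <= AFFORDABLE else "too expensive"
    print(f"{volume:>7,} courses/yr -> ${cost:,.2f}/course ({verdict})")
```

With these made-up numbers the price crosses the affordability line somewhere between 10,000 and 50,000 courses a year, which is exactly the kind of threshold the new econometrics would exist to find.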

The centralized AI cooking up Pareto-optimal levels for all the variables that could occur in an economy… that was a great dream of Allende’s. Chile is self-sufficient enough in resources, and back then had a much lower population density; the dream seemed attainable. But people were starving. The Cubans were smuggling in guns to arm the many communists who did not cop to Allende’s pacifism, meanwhile the other half of the country was arming itself, the rich were hoarding food, and the currency lost 99%+ of its value in three years. The mainframe program did not manage to get out of beta before the tanks rolled in, and had they not, it would have been like Venezuela but with more militancy, if you can picture it: a full-on civil war.

“Hey, what if they have some supercomputer that can fix the economy?” “Shut up, man.”

Allende wasn’t like Chávez or Maduro, who keep the military close and the gangs at elbow’s reach — the secret to longevity in a Socialist Republic is to help the left-wing gangs exterminate the right-wing gangs and then use the left-wing gangs to murder people under plausibly deniable circumstances if there is ever protest. This is why the FARC could never fully control southern Colombia: those dang anarcho-capitalists out of Cali got beef over a kidnapping, never got over the love of killing each other, and checked regional hegemony for either side. Compared to these guys, I’ll take the well-intentioned nerds screwing everything up any day of the week.

FARC — now with Ice Cream (yes, those are minors)

Greenspan’s dream was somewhat more decentralized than Allende’s. Boards of directors of corporations would gloss over data; Greenspan would march in there with his brain and try to squeeze some data-science discipline out of the meetings. Then he took his show to the Federal Express corporation to help optimize their logistics through novel applications of graph theory… no wait, my bad, he went to The Federal Reserve corporation to help optimize their interest-rate manipulation through novel applications of statistical theory. It worked until it didn’t.

Listen to the Harry Potter professor ask him if he was wrong, and he says in his talcum-powdered voice, “Partially, but let’s break the problem into component parts” — classic Greenspan! It’s like living history.

Moving along: we conceptualize data differently based on our personalities. There are a lot of interesting correlations between political position and personal co-factors. Lacan talked about the four discourses, which map nicely onto the political X/Y map:

Analyst is Anarcho-Capitalist/Rick and Morty Libertarian.

University is Gosplan.

Master is the God Emperor grand narrative, let’s kick some butts, make the Imperium great again.

Hysteric is, those YouTube videos of political altercations on college campuses.

Lacan used the cash-money symbol “$” to stand in for the self, because in his psychological conception the ego is but a token of currency, commodified perhaps. Idk, I’m not really a Lacan expert. *a* is the Other, and S1 and S2 are the prioritized signifiers.

Let’s have a loose, pop interpretation of this, let someone else write the academically rigorous essay, I promise I’m going somewhere with this, and this is the express train.

Each discourse is going to interpret data differently.

The Freedom/Left quadrant is going to focus on data points like the litany Bernie dumps on Greenspan in 2003:

Note the guy behind him trying to control his face.

A Free/Right quadrant Analyst discourse might focus on weird stuff like swap spreads to try and squeeze an edge out of financial markets for profit, or publish cool analysis like that as a signal to prospective investors that you’re very smart and they should place assets with you.

Like Jeff Snider, the best analyst in the business; Wall St. couldn’t hire him, he’s too principled. Shadowstats.com would be another good one.

An Authority/Left quadrant University discourse might try to establish first principles such as: there should be adequate food, water, shelter and medicine, then we gulag. Or whatever the principles are, it would seek to create a model based on assumptions around what can be controlled (presumably, most of it).

Suge Knight probably deserves that spot.

An Authority/Right quadrant Master discourse would focus on, maybe, keeping gas prices low! Maybe GDP, job creation, banking profits. It used to be enemy casualties or victory tallies. It depends on the Master narrative used by the discourse. Right now that narrative is jobs-centric in US politics, so great, there’s plenty of data about that. Enough to snow anyone who questions the narrative, maybe. Bridgewater (even though Ray often questions Trump; he’s a questioning guy after all) fits perhaps more in this discourse than a smaller fund that has to stand out through top analysis, because their scale represents the winner-take-all power-curve skew that tends to accompany Mastery, and they often allocate based on this kind of econometrics.
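To condense the last few paragraphs into one structure (my own loose summary; nothing canonical about which metrics land where):

```python
# The four discourses as data-set priorities (my loose summary; nothing
# canonical about which metrics land where).
DISCOURSES = {
    "Hysteric":   ("Free/Left",       ["the Bernie litany: inequality, wages, who gets hurt"]),
    "Analyst":    ("Free/Right",      ["swap spreads", "weird market-microstructure edges"]),
    "University": ("Authority/Left",  ["adequate food, water, shelter, medicine", "plan fulfillment"]),
    "Master":     ("Authority/Right", ["gas prices", "GDP", "job creation", "banking profits"]),
}

for name, (quadrant, watches) in DISCOURSES.items():
    print(f"{name:10s} [{quadrant:15s}] -> {'; '.join(watches)}")
```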

So having established that different temperaments/discourses incline a mind to perceive and prioritize different data-sets, obviously all econometric models are skewed with ideology. Therefore, model-risk is paramount, even more than models are at risk at Paramount. But you could have figured that out in the opening. Now we add AI to this panoply of pan-economic panopticons.

It seems likely to me that Lacan’s ideas about the decentered nature of identity and so on are going to figure into the design of decentralized blockchain identity as it relates to persons both natural and otherwise. It also seems likely to me that, given the current tax plans in the US, corporate persons are going to become *more* popular as a financial tool. Finally, AIs are going to incorporate and become ‘automative persons’ who may not feel feelings, but they do go about doing things, so let’s give them basic rights like corporations have and tax them. This last part is achievable without any change in law: just incorporate the AI and make it its own property in the bylaws. And Lacan’s ideas will be useful for understanding AGI cognitive models of self-perception.

Moving on down the line, we can conceptualize human organizations on the political map as having more extreme scale and centralization on the Authoritarian side of the discourses, and more diverse, more numerous, but lower-average-resourced organizations on the moderately regulated or more lefty Freedom side, except for deep lower-right anarcho-capitalism, where power-curve skews blow out.

Put machine learning together with big data in an organizational setting along one of those discourses, and you get dynamically adaptable econometric models with model-risk tucked into second-order assumptions around the heuristics used to train the ML algos (escaping ideology is hard), but also, maybe, more efficacy. That supercomputer in 1973 was probably not going to do such a great job optimizing the Chilean economy, but maybe the state of the art today can do a better job at the scale of… what? A central bank? A government? A large corporation? A medium corporation? A fund? A co-op? A family? Etc. As we scale down, the odds of success intuitively strike me as going up.
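One crude way to make that second-order model-risk visible (a sketch of my own, not anybody’s production system; the “ideologies” and the fake data are mine): train the same predictor under a few different ideological feature selections and treat the disagreement between their forecasts as the model-risk signal.

```python
# Sketch: ensemble disagreement as a crude model-risk gauge (illustrative only;
# the "ideologies" and the fake data are mine).
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))  # four economic indicators
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=n)

# Each "ideology" only believes in a subset of the indicators.
feature_priors = {
    "analyst":    [0, 1, 2, 3],   # watches everything
    "university": [0, 1],         # plans around the first two
    "master":     [2, 3],         # narrative metrics only
}

def fit_predict(cols, X, y, x_new):
    """Least-squares fit on the chosen columns, then forecast one new point."""
    coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return float(x_new[cols] @ coef)

x_new = rng.normal(size=4)
preds = {k: fit_predict(cols, X, y, x_new) for k, cols in feature_priors.items()}
spread = float(np.std(list(preds.values())))
print(preds)
print(f"model-risk proxy (forecast spread): {spread:.3f}")
```

The bigger the spread, the less any single model should be trusted to run a country.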

From Sid Meier’s Alpha Centauri (a Brian Reynolds design) — 1998

When you combine sentient econometrics systems with cognitively self-perceiving AGI, you could theoretically get sentient econometrics systems that are able to factor in their model-risk by being “aware,” in a reasonably high-data-resolution sense, of their own biases. Theoretically, this could result in the compassionate econometrics driving, say, the Bill and Melinda Gates Foundation becoming both hyper-proliferated and also scaled down to the power and resource levels of smaller organizations that can’t do so much damage. Where this goes beyond the philanthropic mental powers of Bill is that the *compassion* drives the econometrics toward learning how to weigh the ethical calculus of known unknowns, and to re-evaluate itself when unknown unknowns become known unknowns. Hence, maybe the benefit is much greater than the hubristic damages. Maybe it’ll even teach the AGIs that optimizing suffering minimization at the local maxima of volition is the way to go, or something. Seems like a better idea than training them on kill-lists.
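In miniature, “factoring in its own model-risk” might look like this toy objective (my numbers, my knob, nobody’s AGI): score each intervention by expected suffering reduced, discounted by the uncertainty of that very estimate, so the hubristic-damage term is priced in.

```python
# Toy objective: uncertainty-discounted do-gooding (all numbers invented).
# Each intervention: (expected suffering-units reduced per $1k, std of that estimate).
interventions = {
    "hep_treatment_plantation": (9.0, 2.0),
    "emergency_surgeries":      (7.0, 5.0),  # decent mean, weak prognosis = fat error bars
    "cash_transfers":           (5.0, 1.0),
}

K = 1.5  # pessimism knob: how many stds of model-risk to price in

scores = {name: mu - K * sd for name, (mu, sd) in interventions.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    mu, sd = interventions[name]
    print(f"{name:26s} E={mu:4.1f}  sd={sd:3.1f}  -> discounted {score:5.2f}")
```

Note the flip: emergency surgeries have a decent mean but huge variance, so the plantation wins, which is roughly the lateral-solution instinct from earlier, formalized.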

Ok that’s it, express train has arrived. Everyone off, please pick up your belongings before exiting the train, unless you don’t believe in the concept of property, in which case make the decision based on what you think is the ethical calculus and probability of someone else finding it and getting more benefit. But whatever you do, please exit the train, justice awaits.
