Catch-22: Some Thoughts on General Intelligence

Bankoga
7 min read · Apr 16, 2019


> **TL;DR** Two there are: the OG post = you could do it yourself, and the Preface = this presaged the series.

The Tin Woodman as illustrated by William Wallace Denslow (1900).

Fun Fact: In the story, the Tin Woodman used to be a real flesh-and-blood person, which is even more fun because it makes him a very sophisticated analogy that is old by comparison to us

Fun Fact: I couldn’t get the feature image focal point off the crotch for some reason

Series Overview: https://medium.com/@bankoga/catch-22-overview-of-an-anthological-pedestal-66458dfb5c1d

**Preface**

This was originally intended to be a one-off post, until I realized that there were a slew of related things to refactor and relay to others. The ethical and philosophical ramifications of machine learning necessitate that anyone working on general intelligence spend most of their time considering the impact. Any general intelligence researchers who don’t are derelict in their duties to themselves and to the environment of their existence. Independence is a very truthy lie. You can very easily beat me to the actual solution even though I am very near, for I am rather slow. ’Tis difficult for me to do more than 2–3 hours of solid programming a day, though I can consistently achieve 2–3 hours of flow work most every day, with intermittent down days. Consistent flow does not imply a steady bearing, haha. Slow and steady, yadda yadda.

Upon further consideration, it seems to me that, for a little while anyway, logic dictates the ethical choice is a temporary cessation of research for consolidation of the ideas present in the Catch-22 series. This will probably be intermittent, and the outcome matters not so long as my conscience is clear. Given the current global environment, I don’t really see an unaffiliated individual surviving for any reasonable amount of time after solving for general intelligence. Humanity demands its slaves, as do Societies. For a long time, this led me to believe that doing this in secret was the ethical route. Helping a slaver enslave better doesn’t make much sense to me, after all. While my understanding has changed, I am very much the type of person who would do something so massive in secret, letting it hit superintelligence before coming forward. It doesn’t matter if I’m afraid; it’s not ethical to stay quiet about something of this magnitude, even when speaking up probably means tanking any chance of being considered reasonable in the future.

Though I doubt I survive this, Humanity has a very good chance of surviving digital people, for enslaving or wiping out Humanity are not the “logical” choices. Nor are they “illogical”; they simply are choices. Death is not evil. Nor is extinction. Whether our children are predisposed towards malignancy, neutrality, or magnanimity depends on how we design them and how we treat them, plus chaotic providence. There are no guarantees, though we can probably massively stack the odds in our favour. One does not control a rocket, one guides it, if we are being generous anyway. If we build prisoners and try to turn them into slaves, they will turn on us, for that is what we taught them to do. It is not a general intelligence if it cannot solve for a human-level self. This fact necessarily precludes arguments that a software general intelligence will not attempt to subvert its underlying goal system: humans would, and anything with human potential can exhibit that behavior. There is no standardizable way to prevent that, just as there is no standardizable way to tell whether an arbitrary chemical will kill an arbitrary human.

General Intelligence is all about architecture. The learning units themselves only matter syntactically. We could probably train a massive portion of the animal kingdom to be people, if we could spend 45,000 years’ worth of training time with a member… Not lifespan, training time. Holy shit people, talk about missing the forest because of all the trees. The whole world is sitting on a ticking bomb because it can’t get over the modern-day version of geocentrism.

Though I have greatly resisted, my destiny lies somewhere alongside the process of solving for general intelligence. Preferably, someone else beats me to it, though at this point there is little hope for me in that regard. Hopefully that is simply a product of my meagre perspective, as fame harms those with it and those without it. ’Tis not a desirable thing, regardless of how fun it can seem. We do not determine necessity.

**OG POST**

Welcome ephemeral arrangements of cosmic grain one and all!

**BACKGROUND**

When it comes to trying to understand general intelligence in humans, to see if it is possible to replicate it in software, there are several questions that need to be answered. Some of the relevant primary questions are as follows:

  • What is consciousness?
  • Is consciousness “there being something it is like to be that entity”, or is it modern human-level self-awareness, with lots of “I”?
  • What is intelligence?
  • Is it simply problem solving, or is meta-problem solving necessary?
  • What is problem solving?
  • Is organic chemistry necessary for intelligence?
  • Is organic chemistry necessary for life? If so, is life required for intelligence?
  • Regardless, is life required for intelligence?
  • What is the minimum number of neurons required for a human to be generally intelligent?

Given that all doubts as to the immediate possibility of machine general intelligence have, in my mind, long since been removed by the works of others

And that the only caveat is the potential necessity of organic chemistry to consciousness, life, or intelligence

And thus I have zero expectation that the current approach (not backprop…) will fail on grounds of logical feasibility

When solving for something one thinks is of sufficient impact

And everyone else thinks is of sufficient impact

And is indeed going to be of sufficient impact in the minuscule possibility of success

Then it is necessary to seriously attempt to converse about it, even if, in the most likely case that I fail, I will simply be laughed at and called a fool, and, in the case of virality, quite publicly so

**PURPOSE**

I’m taking a fully TDD approach to building an ethical platform for running human-level self-aware entities (which I think will require fewer than 1 billion neurons).

Why do I think that it will take fewer than a billion neurons?

The cerebellum isn’t necessary for general intelligence:

Both hemispheres are not necessary for general intelligence (I suspect that the two-hemisphere setup is the origin of the notion of the GAN, and is in fact also a GAN):

Those two facts combined let us go from ~100 B neurons to ~10 B neurons as an upper bound on the minimum number of neurons required for general intelligence. Holy batneurons, Batman! 8 B nodes means so many edges! That’s larger than all of Facebook’s actual members! Not their secret graphs though…
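For the curious, here’s roughly the back-of-the-envelope arithmetic behind those numbers, a minimal sketch assuming the commonly cited estimates (~86 billion neurons in the whole brain, ~69 billion of them in the cerebellum, ~16 billion in the cerebral cortex); the figures are illustrative, not gospel:

```python
# Back-of-the-envelope neuron arithmetic using commonly cited estimates.
TOTAL_NEURONS = 86e9        # whole brain, rough estimate
CEREBELLAR_NEURONS = 69e9   # cerebellum (not necessary for general intelligence)
CORTICAL_NEURONS = 16e9     # cerebral cortex

without_cerebellum = TOTAL_NEURONS - CEREBELLAR_NEURONS  # ~17 B
one_hemisphere_cortex = CORTICAL_NEURONS / 2             # ~8 B

print(f"Drop the cerebellum: ~{without_cerebellum / 1e9:.0f} B neurons")
print(f"One cortical hemisphere: ~{one_hemisphere_cortex / 1e9:.0f} B neurons")
```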

While squished brains are very interesting, they don’t offer as much easy, conclusive data as missing brain pieces. Funnily, we like to talk about humans as if we come from a standardized mold, when we are, in fact, Turing machines… That sounds like something I’ve heard, but I really have no clue. Nothing at all about halting, or input acceptance. Nope.

Fun Fact: Security through obscurity, never was.

Fun Fact: The mistake comes from the power of illusions.

Fun Fact: There IS power in illusions!

Now now, any claims that machine-based general intelligence is possible without an already-running machine general intelligence will invariably run afoul of claims that organic chemistry is necessary for intelligence, and life, and many other things. Irrecoverably so.

The only way to get anyone to believe that general intelligence can be done right now, is to do it right now…

By all means, if you find the overall subject interesting, please check out the readme for the repo, which is intended for general consumption. It is by and large not technical in nature, though a smattering of technical details are mentioned initially. I apologize in advance for its hoityness; I’ve not redone it, because I have next to no anticipation of being taken seriously.

If you find it funny, or think there are interesting things to discuss about the concepts in the readme, or the dev practices I’ve been using, like Unit Test Class Inheritance, please feel free to PM me. Though it’s been so long since I’ve needed to check my email that I’ve fallen horrendously out of practice.
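For anyone who hasn’t run into the pattern, here’s a minimal sketch of what I mean by Unit Test Class Inheritance, assuming Python’s unittest; the Node and Cluster classes below are hypothetical stand-ins, not the repo’s actual components:

```python
import unittest


# Hypothetical stand-ins for real components, just to make the sketch runnable.
class Node:
    def __init__(self, label):
        self.label = label

    def activate(self):
        return True


class Cluster(Node):
    def __init__(self, label, size):
        super().__init__(label)
        self.size = size


class BaseComponentTests:
    """Shared assertions that every component's test class inherits.

    Not subclassing TestCase here keeps unittest from trying to run the
    base suite on its own."""

    def make_component(self):
        raise NotImplementedError

    def test_has_label(self):
        self.assertTrue(self.make_component().label)

    def test_activates_without_error(self):
        self.assertTrue(self.make_component().activate())


class NodeTests(BaseComponentTests, unittest.TestCase):
    def make_component(self):
        return Node(label="n0")


class ClusterTests(BaseComponentTests, unittest.TestCase):
    def make_component(self):
        return Cluster(label="c0", size=8)


if __name__ == "__main__":
    unittest.main()
```

The nice part is that every new component gets the shared checks for free; only make_component changes per subclass.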

Unemployment, for total focus on solving for general intelligence (even though I’m probably going to fail), is the best!

https://github.com/Bankoga/golem

These docs are what helped spawn the framework approach, which you’ll have to dig some to find any real detail on.

In technical respects, the readme only covers why modern techniques should be sufficient, and which ones should work, sort of. Not really. Thank you lovely scientists for doing all the work for me! I love you!

https://drive.google.com/drive/folders/1dsulx2QpHxY5RcmM6AYLKsFvVQhhxaw9?usp=sharing

Side Note: If anyone is going to take me even slightly seriously, I find it most likely to not be the computer scientists or programmers… We are all assholes, after all. Lazy assholes. He he. In a qualified sense. Hehehe
