How Our AI Can Enhance Human Cognitive Potential

We were recently talking to Bryan Johnson of the OS Fund about developments in AI that he describes in his Medium articles and elsewhere.

True AI/human integration is coming: but in what form?

In many of these articles, Johnson seems to be describing a sort of joint cognitive human/AI “singularity”. Whilst we are in broad agreement that this will happen (AIs and humans merging into some form of complementary super-intelligence), the big, unanswered question is how. Depending on who you talk to, it might be biological (enhanced or evolutionary), technological, or a combination of both. No one seems to be sure, and nothing very specific has been offered on how (let alone when) it may occur. (Even Johnson defaults to broad assumptions based more on analogical futurist predictions than on R&D fact.)

Moreover, this raises other provocative questions. Who will reach greater-than-human-level intelligence first: humans or machines? Could that distinction even be drawn at the point of true human/AI cognitive integration? And does it matter?

Our radically new tech, a DApp called DECENTR, together with our general development philosophy, has revealed a novel way forward: engineering improved cognitive functioning in AI in conjunction with humans, with humans learning from AI and AI learning from humans. By striving to realise this new paradigm, many of the imponderables in both brain and AI research can be circumvented. The fundamental problem with predicting cognitive development is that no one really knows how “thoughts” are produced. It cannot simply be that sets of neurons sit waiting to receive complex ideas and then fire at random to assemble new, associated thoughts from the source data; there has to be some other system, especially for forming meanings on the fly, and it has to be incredibly flexible, rapid and precise. This is an essential feature of human intelligence that we are only just beginning to understand.

But why wait to understand? Our radical idea is that developing true AI in a way that will in turn enhance human cognitive potential does not require a full understanding of the brain’s functioning, activity patterns and neural algorithms: we need to take it back to first-principles thinking. We, as human beings, do not “learn” how to think by developing an understanding of our own and each other’s neural processes; we learn how to think as a by-product of the brain’s amazing capacity for neuroplasticity. In other words, “learning” (in humans) is the response our brain’s neural network makes to the free flow of information it receives: intellectual data that has in turn been developed and drawn from an (ostensibly) democratic, real-world social environment. (The brain’s neural network is also a purely democratic system; this analogy becomes important in a moment.) True, generations have grown up under systems that were not even ostensibly democratic; however, in all such cases cognitive processing was degraded rather than enhanced, sometimes mendaciously so. As history and cultural-historical psychology attest, each degrades the other in turn.
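To make the analogy concrete, here is a toy sketch of plasticity-driven learning in the Hebbian style: connections strengthen wherever input and output activity co-occur, so the network adapts purely in response to the data flowing through it, with no model of its own internals. The update rule, sizes and normalisation are our assumptions for the example, not a description of DECENTR’s internals.

```python
import numpy as np

# Toy Hebbian plasticity: connections strengthen when input and output
# activity co-occur, so the network "learns" purely from the data flowing
# through it. Sizes, learning rate and the normalisation step are
# illustrative assumptions, not DECENTR's implementation.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 8, 4
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))  # synaptic weights

def plasticity_step(W, x, lr=0.01):
    """One update driven only by incoming data, with no self-model."""
    y = np.tanh(W @ x)                       # post-synaptic activity
    W = W + lr * np.outer(y, x)              # Hebbian: fire together, wire together
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W, y

for _ in range(1000):                        # free-flowing, unlabeled input stream
    W, _ = plasticity_step(W, rng.normal(size=n_inputs))
```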

Intelligent machines need data and real-world algorithms

Regardless, developing intelligence (for humans and machines) does require data and real-world algorithms: there is no getting round these preconditions. That these preconditions do not currently exist (certainly not the latter) explains the slow rate of progress towards artificial general intelligence (AGI). Most R&D in this field, constrained as it is by commercial funding requirements to develop applications for vertical markets, continues to pursue narrow AI in siloed conditions, and hence delivers (however useful in isolated applications) only a form of machine autism. “Evolving” human-like AI requires that an AI can “learn to learn” by mimicking the processes of human intelligence in conjunction with its human counterparts. If done on a vast enough scale, say two billion or more interconnected human brains, an AI within such an environment can theoretically learn human attributes (including ethics) through the analysis and adoption of human democratic and decentralised reasoning approaches (one plausible sketch follows below). Further, within such a system, an AI can in turn help to model and reinforce positive attributes in human users, while organically enhancing human reasoning, and hence intelligence, through AI/human cognitive collaboration. (For the purposes of this post, we are discussing data/digital content applications, not brain-computer interfaces and the like, though our work hints at many sensory, IoT and other applications as well.)
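The post above does not specify an algorithm, but one hedged reading of “democratic and decentralised reasoning” is federated-averaging-style learning, where every participant’s locally computed update carries equal weight in the shared model. The node count, the toy least-squares task and all names below are our assumptions for illustration only.

```python
import numpy as np

# Minimal federated-averaging sketch: many independent "human" nodes each
# nudge a local model using only their own data, and the shared model is
# the democratic (equal-weight) average of all local updates. Scale and
# task are illustrative stand-ins, not DECENTR's stated algorithm.

rng = np.random.default_rng(1)
n_nodes, dim = 100, 16              # stand-in for billions of connected users
global_model = np.zeros(dim)

def local_update(model, data, lr=0.1):
    """One gradient step on a node's private data (toy least squares)."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

for _ in range(20):                 # communication rounds
    local_models = []
    for _ in range(n_nodes):
        X = rng.normal(size=(32, dim))
        y = X @ np.ones(dim) + rng.normal(scale=0.1, size=32)  # hidden target
        local_models.append(local_update(global_model.copy(), (X, y)))
    # "Democratic" aggregation: every node's update carries equal weight.
    global_model = np.mean(local_models, axis=0)
```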

The seemingly insurmountable problem, we were told many times, is of course that, even if you accept this line of reasoning, someone has to build a platform, interface or DApp that can interlink two billion or more human brains in what we term democratically evolving “social neural networks” (SNNs), of the type an AI can understand in dynamical and topological terms and hence interact with. The DECENTR DApp achieves this. As we continue with our partners to develop our AI-enhanced technology, it looks very much as if the future of cognitive development will be a cross-transfer, model-mimic relationship between humans and AI, each developing, strengthening and progressing in the areas they are good at, in conjunction with each other and with every connected individual.
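As a hedged illustration of what “understanding an SNN in dynamical and topological terms” could mean, the sketch below treats a social network as a directed graph: topology is read off the adjacency structure (for instance, who the hubs are), and dynamics are modelled as information diffusing along the edges. The graph model and diffusion rule are assumptions for the example; DECENTR’s actual data model is not described here.

```python
import numpy as np

# A "social neural network" as a graph an AI could analyse: topology from
# the adjacency structure, dynamics from information diffusing over edges.
# Graph size, density and the diffusion rule are illustrative assumptions.

rng = np.random.default_rng(2)
n_users = 50
A = (rng.random((n_users, n_users)) < 0.1).astype(float)  # random follow graph
np.fill_diagonal(A, 0)

degree = A.sum(axis=1)                 # topology: connections per user
hubs = np.argsort(degree)[-5:]         # the most-connected participants

state = rng.random(n_users)            # dynamics: each user's current "signal"
P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalised diffusion
for _ in range(10):
    state = 0.5 * state + 0.5 * P @ state  # information spreads along edges

print("top hubs:", hubs, "signal variance after diffusion:", state.var())
```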

That is how we see the future: AIs becoming more human-like and humans becoming more like these “humanising” AIs, each taking on the desirable attributes of the other. After all, isn’t that the entire history of human cognitive evolution? All we are doing is continuing that process by adding machines to the mix.

Feel free to get in touch with me for more DECENTR Project details at Decentrproject@gmail.com.