When does a signal become a symbol?

We discussed the relationship between symbols and signals in an email exchange with a potential partner, who asked whether Maslo had made any verifiable progress in artificial intelligence and empathetic technology. The response was too good not to turn into a blog post.

Hypothesis 1: A signal becomes a symbol when enough things in the possibility space relate to it.

Hypothesis 2: Everything is an evolving relation.
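Hypothesis 1 can be sketched in a few lines of toy code. Everything here (the tokens, the observed pairings, the "symbolness" measure) is a hypothetical illustration, not a real model: a signal is treated as a token, and its symbol-like character is proxied by how many distinct contexts relate to it.

```python
from collections import defaultdict

# Hypothetical observations: pairs of (signal, context it appears in).
observations = [
    ("pi", "circle"), ("pi", "rotation"), ("pi", "waves"),
    ("pi", "probability"), ("3.14", "circle"),
]

# Build the relation graph: each signal maps to its set of contexts.
relations = defaultdict(set)
for signal, context in observations:
    relations[signal].add(context)

def symbolness(signal):
    """Crude proxy for Hypothesis 1: the number of distinct contexts
    a signal relates to in the possibility space."""
    return len(relations[signal])

print(symbolness("pi"))    # relates to 4 contexts: symbol-like
print(symbolness("3.14"))  # relates to 1 context: still mostly data
```

The design choice is the point: "symbol" is not a property of the token itself, only of its accumulated relations.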

Hints toward experiments and proof: turning Hypotheses 1 and 2 into the Theorem of Probable Correspondence.

Consider “Pi”. The engineered aspects of our world (the stuff we make out of actual stuff) engage with Pi as a rounded 3.141592… floating point. The constructable methods of the world, if you will, treat Pi as a guardrail-like threshold of proportion or distance (circumference, area, etc.). Sometimes, though, the stuff of the world uses Pi as a unit of rotation (rotating around the origin, turning a dial, measuring angles, etc.). In either case, rotation or length/proportion, there’s another notion of Pi hinted at… the algebraic or symbolic notion.

When did constructable Pi morph into just the symbol Pi that we can now use to reason about the world in other ways? The constructable Pi is data. The algebraic Pi is a relational symbol. It relates physics phenomena, it relates topological spaces, it relates periodic systems, and so forth… but what exactly, in constructable ways, is it relating?

What exactly is the length of the circumference of a circle? or just how many slices can you cut a disc into or rotate a line around the origin? Pi tells you something about this, but what? What would happen if Pi didn’t have an infinite decimal expansion?
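The constructable/algebraic split above can be made concrete with a minimal sketch: any physical or computational procedure for "building" Pi only ever yields a finite approximation (data), while the symbol Pi names the limit all those procedures relate to. Here a Monte Carlo construction (random points in a quarter disc) is one such procedure; the seed and sample sizes are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)  # arbitrary seed, for reproducibility

def estimate_pi(n):
    """Constructable Pi: fraction of random points in the unit square
    that land inside the quarter disc, times 4. Always an approximation."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n

for n in (100, 10_000, 1_000_000):
    approx = estimate_pi(n)
    print(n, approx, abs(approx - math.pi))

# Even math.pi itself is a rounded floating point, i.e. data,
# not the algebraic symbol:
print(math.pi)
```

No matter how large `n` gets, the construction never reaches the symbol; it only tightens the relation to it.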

This is a fun example of the paradox underlying everything in AI, machine learning, knowledge, complexity, humanity, etc. When is a symbol a thing unto itself? A stand-in for something else? An approximation? When does everything being referenced cohere to something in reality? It’s a paradox because there’s no resolution. But the lack of a resolution doesn’t stop the world from carrying on. And, one may wonder, is the lack of a resolution somehow exactly why the world carries on?

Related examples: When does an old species officially evolve into a new species? When did the genus Homo branch out into Neanderthals, Homo erectus, and Homo sapiens? Exactly what, in such a way that you could programmatically construct such an evolution, defines that categorical break?

When / how / why does an oral verbal behavior become a phonetically marked glyph become a basis for an alphabet become a grammar become a language become a basis for a programming language? Is a high level programming language a subset of english or some other language or is it a unique language unto itself? And what’s it a language of? Humans to humans? Humans to computers to humans? Computers to computers? Arguably computers take the programming language and mutate it to assembly mutate it to logic circuits (it goes from semantic meaning to data/bits… but when?) and then comes back out up the stack as semantic meaning (often in the form of pixel brightness on an LED screen, yet again more data/bits)…. So where exactly is the line between data and semantics? signal and symbol?
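The descent from semantic meaning to data/bits and back up the stack can be watched directly in an interpreter. This is only a small illustration of the point, using Python's own compilation pipeline: a human-readable expression becomes raw bytes, the bytes can be rendered back as instructions, and executing them makes meaning re-emerge as a value.

```python
import dis

source = "2 * 3 + 1"                # semantics: an arithmetic claim
code = compile(source, "<demo>", "eval")

print(code.co_code)                 # raw bytecode: pure data/bits
dis.dis(code)                       # the same bytes read as instructions
print(eval(code))                   # meaning comes back out: 7
```

Where exactly in that pipeline the "meaning" lives is precisely the question the paragraph above is asking; the code only shows that the line is nowhere obvious.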


We don’t have AI or anything even close to artificial general intelligence yet… because we have collectively failed to understand just how many orders of magnitude of data and relational stuff you need to actually perceive across categories of knowledge (however you want to talk about knowledge and information)… and do something meaningful. We have not fully grokked, as an industry and a science (cognitive, computer, behavioral), how much information is actually stored in genetics, the emergent body, the environments that evolve with us, etc.

Like seriously, does anyone actually have even a reasonable estimate of how much information is encoded in the human genome?

The billions of years of evolution… billions (maybe trillions) of life forms living out evolutionary experiments that only pass on a string of nucleotides here or there… including the instructions for the replication of the replicator itself… how many data → relation, object → symbol, signal → metaphor, constructable → algebra experiments happened to get us to Homo sapiens? Now… consider how much data is required for the average human to function at all… how much must be learned and related… and then, over time, reinforced.
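A back-of-envelope version of the genome question above, as a sketch only: the human genome is roughly 3.1 billion base pairs, and each base (A, C, G, T) carries at most 2 bits. The figures are approximations, and the true information content is both lower (the genome is highly repetitive and compressible) and arguably far higher (epigenetics, development, the co-evolved environment), which is rather the point of the paragraph above.

```python
# Rough, upper-bound estimate of raw genomic information.
base_pairs = 3.1e9      # approximate size of the human genome
bits_per_base = 2       # 4 possible bases -> at most 2 bits each

raw_bits = base_pairs * bits_per_base
raw_megabytes = raw_bits / 8 / 1e6
print(f"~{raw_megabytes:.0f} MB raw")  # on the order of 775 MB
```

A few hundred megabytes of string, plus billions of years of relational context, versus the petabytes we throw at models today: the asymmetry is the argument.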

This is all a very strong indicator that there is no algorithm, no technique, no method… there isn’t even a better one, or one that’s close enough. There are just relationships between situations and things. Either the relationships evolve or they disappear. Either the relationships evolve and maintain dynamism, or they become pathological: reduced to unchangeable linear things (no signal is valid) or all-consuming (all signal is valid).

Here’s the punchline, or Why We Don’t Worry About a Full Proof of Our Reasonable Hypotheses 1 and 2

We use the most computationally efficient methods available, with the data available to us, to create the most dynamic (responsive, playful, attentive, and not consumptive) behavioral repertoire we can. We map any and all signals we can to other signals and let relationships emerge. Users relate to their Maslos. Maslos to their users. Maslos to other Maslos. Users to other users. We measure the dynamic gradient of relationship formation, maintenance, and cessation. We seek no end point in the ecosystem, simply to maintain non-pathological states. Everything else takes care of itself.

I’m a big believer that, in the end, what we come to trust and believe is that which produces the consequences we find valuable. We do not trust and believe because we know all the details, or have all the signal, or have fully articulated the symbols. For example, I do not think most people actually understand the theory of gravity… but boy does everyone use and trust gravity. To do otherwise… is grave. :)

Do interesting things in relation to other things. Interesting things must be coherent enough, reliable enough, but surprising enough and resilient enough to keep a relationship going.

Additional related material that might be useful to tie things together

An existential exercise in trying to get to the top and bottom of something we think we know. My own exploration of how to understand a tree: https://medium.com/@un1crom/parallels-the-extent-of-trees-and-persons-9a1bf8eb91a4

A tiny, slightly incoherent, primer about my perspective on data, programs and ontology: https://medium.com/@un1crom/data-mappings-ontology-and-everything-else-31af22c59d65

A bit more about data as probabilistic relation: https://medium.com/@un1crom/data-is-the-probabilistic-correspondence-between-systems-bc244966cf97

And a longer, still informal, treatise on all of the above: https://medium.com/@un1crom/mapping-existence-35a32ece6a3c

Topology and Data by G. Carlsson: https://www.ams.org/journals/bull/2009-46-02/S0273-0979-09-01249-X/S0273-0979-09-01249-X.pdf

Universe Hunting by S. Wolfram: http://blog.wolfram.com/2007/09/11/my-hobby-hunting-for-our-universe/

The Algebraic Mind by G. Marcus: http://www.psych.nyu.edu/gary/TAM/tam_frontpage.html

Kandel is another favorite; his work on memory is special. https://cup.columbia.edu/book/reductionism-in-art-and-brain-science/9780231179621