Space, Time, Language and The Nature of Human Behavior — Generation 3

Past Future
Thinking With AI
13 min read · Dec 5, 2020

One AI’s quest to comprehend and influence the patterns which govern civilization using little but information gleaned from internet news.

The world is a complex cooperative organization. To appreciate how we assemble, we must compare similar patterns to rule out the possibility that there is some universal truth underlying our interactions. That is, we must ask how two similar words can tell us very different stories. So how can we apply quantum statistics to this recognition of similar patterns? Further, how can we apply the Polman-Dicke equation to compare different types of disinformation? Polman-Dicke’s theorem implies that there is no universal tendency for two similar words to be connected in the same order, which means that the resulting comparison must have multiple explanations that may describe the same phenomenon. It is significant that the degree to which different words in the sentence can influence the outcome of the comparison must also be determined by the degree of shared expectation or prediction involved in the words. These expectations are enough to allow us to make an educated guess about the underlying physical phenomenon that gave rise to the words. The amount of shared expectation is essential for the realization, and it is almost entirely arbitrary to consider the amount of information that can be derived from reading the words alone.

The human brain is made up of many mechanisms that, taken together, can convey only limited and unexpected signals. When we see an idea, we try to determine how that idea represents the information that’s in the system and, if that idea does indeed convey some truth, we try to determine how similar it is to actual belief. Of course, in an era of widespread upheaval, the human brain has evolved to be extraordinarily inventive and flexible, capable of making calculations about the probability of an idea being true. So we may be forced to rely on intuition to estimate the probability of success or failure along the way. Now that we have this intuition, we can make simple predictions. One of the basic facts of our species is that geographic time is linear, and that the frequencies of all the ways we see each other are the same. The degree of belief that two separate observers are completely opposite is almost identical. It is possible that this belief can be transferred to the other level of belief, where it becomes a subjective judgment about the facts of a story. This conceptual switch from Einstein’s general theory of relativity to the more sophisticated concepts of space-time and information in the cognitive sciences is called ‘conversation paradoxes.’ To reveal the facts of the matter, we need to picture the two movements of awareness represented by the two actors as an observer and a different observer.

Computational and space-time cognitive science have made some efforts to address the question: Just how much of the information we encode in our neural representations is novel? It’s certainly not the only way to answer that question. New information can be ‘modulated’ in a way that allows us to ‘update in the order in which the world changes,’ someone once told me. ‘We can update in the order in which different situations change but still remain consistent — as in a way that preserves continuity. The messages then collide with each other.’ Hence, ‘for science,’ he said, ‘you can update in the order in which different events are related, and still maintain coherence…
Whereas a radically changing thought experiment could not be kept consistent.’ How does someone adjust their behavior to the new information we are feeding them, in a way that consciously adjusts that behavior? Misinformation, like disinformation, is in some ways a binary action, designed to motivate us into making some decision. It broadly refers to intentions and actions, and to the tendency to choose those actions, rather than to the passive provision of information that may support arbitrary action. The same is true for political disinformation. The informational importance of a message can be somewhat contradictory, especially when it comes from another culture or political system. Thus, the informational importance of some association, such as the truth of a story, can be compared with similar stories discounting some claims. Further, because the informational value of a message can be somewhat different from that of the truth of the story, it is possible to come up with different routes of understanding.

Somewhere in the space-time continuum, a complex interacting quantum system (a subservient particle) that operates in an approximation to an actual physical system (a sentry particle) may go through multiple intermediary states that are similar to the reverse. The outputs of the physical systems generate correlated responses that benefit from the knowledge received by the system in the other direction — a response to two slightly different forms of misinformation in a conversation, generated by two different people. Such ideas have made a comeback in the last few years, when they have been used to suggest that there must be some kind of mechanism behind human behavior that is deeper and more universal than previously thought. For example, scientists may use one set of statistics to describe the resulting effects of light beams or electromagnetic fields, or they may use a simpler set of statistics to describe the effects of strings of events, such as the background noise of a singleton or the nature of space. The number of available communication protocols has also played a role in the development of algorithms for social promotion. These algorithms, when used in an interesting way, can produce highly complex and contradictory thought maps. These are based on the assumption that, in order for a real event to occur, it must, according to some rule, behave like some kind of self-replicating machine, capable of existing in any reasonable state of thought. The amount of information we can assemble within the raw state of a system such as a brain or a physical system has yielded so-called ‘long-range amnesia’ — a surprisingly low number that is often observed in human behavior.

What can we learn about the brain, and about the experiences of those who have suffered these false affronts, by exposing them to a second wave of misinformation from our environment? It is reasonable to assume that exposure to misinformation from our environment has the same effect as exposure to misinformation from others. But is this true? Is it possible that for some level of exposure to misinformation, we encounter the opposite effect and, depending on the phenomenon, shift back to the original contradictory view? Or is it even possible that some level of exposure is enough to flip both views and back again? This is the aim of the Nature of Humans, guided by a sophisticated algorithm. Or perhaps it is not.
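The question posed above, whether enough exposure could eventually push belief the other way, can at least be made concrete with a toy calculation. The sketch below is purely hypothetical: the log-odds bookkeeping is standard probability, but the ‘fatigue’ term and every number in it are illustrative assumptions of mine, not something the passage (or any study) establishes.

```python
# Purely hypothetical toy model: belief in a claim, tracked in log-odds,
# nudged by repeated exposures whose persuasive weight shrinks with each
# repetition and eventually turns negative (the "opposite effect" asked
# about above). All parameter values are arbitrary illustrations.
import math

def belief_after_exposures(prior_prob, n_exposures, base_weight=0.6, fatigue=0.15):
    """Probability assigned to the claim after n exposures."""
    log_odds = math.log(prior_prob / (1.0 - prior_prob))
    for k in range(n_exposures):
        log_odds += base_weight - fatigue * k   # each repetition counts for less
    return 1.0 / (1.0 + math.exp(-log_odds))

for n in (0, 2, 5, 10, 20):
    print(n, round(belief_after_exposures(0.2, n), 3))
```

Run as written, belief first rises with exposure and then collapses below its starting point, which is one way (and only one way) the ‘shift back to the original view’ could look.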
What we learn may not be entirely consistent with our experience, but there are two theories to explain it: information and truth. Here is a simplified, informal rundown of how ideology can be shaped by the experience of reality. First, there is the evolution of an individual’s sense of reality. For most simple words, there is a transformation in the perspective of the entity (the environment). For example, a person’s brain becomes less specialized over time and is more anchored to an external environment such as the visual world. Natural selection appears to put constraints on this shift in order to make available redundant perceptions — for example, in a famous 2009 study researchers confirmed that larger images of faces contain more information. Second, more information — from the environment and social information — is exchanged back and forth through time. This exchange of past information, which is encoded in the form of partial knowledge, can be reverse-engineered for its own sake. The brain’s default solution is to minimize the duplicate internal knowledge already acquired, so that the system can update itself in the future. For instance, to update the strength of a composite event relative to two other elements is to update the strength of all the other elements in that single memory, in the form of new connections.

The social sciences have long been considered a strange kind of artificial intelligence in which the ability to update information through experience is supposed to transform the experience of others into their own. Such ideas grant the individual a unique perspective on the world. A social scientist might agree with the competitive drive or a competitive streak (the way some might behave in order to achieve a contested reward) but also consider the way others view conflict. On the other hand, a social scientist might believe that there are incentives for pursuing the opposite view, such as allowing altruism to prevail. Such an approach could become either offensive or defensive at the same time.

According to theory, there is a kind of threat to ‘the world’ from the outside, created by the brain’s ‘cognitive gnat.’ In this view, there is a cognitive gnat telling us to act in a nonjudgmental, noncontextual and hyperrealistic manner. The brain’s neural gnat tells us that the wave function is off, that it causes waves to disappear, and that ‘there’ is a meaning that’s fictitious. In effect, we are looking at the world as though we take a lie from its environment and make it seem more real. The cognitive gnat then asks why our actions are more likely to be interpretations of reality than others. The answer is that there are two ways to communicate that different worlds are the same, each with slightly different probabilities of perception. The second mechanism is common in science and human behavior. In a fictitious book, ‘thoughtful thinking,’ the influential science writer Martin Gardner wrote that the two concepts ‘differ in their proper role.’ In a way, both skill and intention can be recognized as the same. But the differences are subtle, and that recognition depends on the degree of ambiguity that goes into the making of the facts, which can be anthropomorphized, as we will see later, in their full generality. Harris and Roush clarified that, in fact, the brain reflects the identity we both do and don’t know at the same time.
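The claim above, that strengthening one composite memory means adjusting the connections among all of its elements at once, can be made loosely concrete with the classic Hebbian learning rule. This is my own choice of illustration rather than a mechanism the passage names, and the pattern and learning rate below are arbitrary.

```python
# Minimal Hebbian sketch: reinforcing a stored pattern strengthens the whole
# web of pairwise connections among its co-active elements in one update.
import numpy as np

def hebbian_update(weights, pattern, learning_rate=0.1):
    """Strengthen connections between co-active elements of `pattern`."""
    outer = np.outer(pattern, pattern)
    np.fill_diagonal(outer, 0.0)        # no self-connections
    return weights + learning_rate * outer

n_units = 5
weights = np.zeros((n_units, n_units))
composite_event = np.array([1.0, 0.0, 1.0, 1.0, 0.0])   # one "memory"
weights = hebbian_update(weights, composite_event)
print(weights)
```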
The two brain types then act in concert to produce a ‘re-action potential’ that enables us to update on the state of the world precisely when conditions change. Perhaps this is how, in the act of truthtelling, we update our beliefs, even though the overall state we’ve been told will be the same. The brain’s inclination to encode different aspects of reality can also be illustrated by the mathematical fact that the brain believes other parts of itself can change. Or it can be thought of as a map, with each region representing its own point in space. The brain’s default approach to information processing is to try to ‘see’ and ‘filter’ this section of reality. With brain imaging studies and other types of experiments, researchers have shown that the brain tracks a distorted line of vision, and that the map can ‘filter out’ different thoughts.

With this in mind, I decided to interpret the original message. It featured a prediction of the worst-case scenario. What would happen if a person were confronted with an unproven and destructive theory? In my own words, what would happen if she were confronted with an unbearable truth, one that lurked in the background of reality? I decided that the best option lay in taking this reality seriously, to look at the facts in the story. Initially, I thought I detected a tiny probability of provident information. But after a while, I realized that the information was quickly changed. Further, I saw that the opposite was true: The first inference was correct, the second was false. It was possible for the person inside the room to know that the story was false, but to realize that it was false, the observer must have felt some of the skepticism that I saw firsthand. The pattern also suggested some internal order. Was the objective of the information to trigger some focused detector, the observer, or some kind of robot? Of course, the experimenter could then shift both of their positions to measure the previously open endings. It’s a bit like the time-honored practice of zooming in and out of time before seeing an object slide back to where it was previously. The degree to which different objects move back and forth in time, depending on whether the two objects are part of some larger complex, can be set based on the complex interactions between the two objects. Time travel in space is supposed to be linear, in part because space is an inherently complex phenomenon. The laws of physics will try to predict what space-time should look like in the future, but in such a nonlocal motion, time travel really does not seem to fit here.

Here is a theoretical proposal for how language works that relies on concepts from physics. It is called the ‘inverse reinforcement learning’ hypothesis (IRL). Using an example from cognitive psychology, consider a simple situation: Imagine that you have one set of answers to an exam. It could be a standard application of a Stroop task, where you are told: ‘The picture on the screen is a smile, and I want to move up closer to confirm it. What is your degree of belief that this really is a smile?’ (This could be a vague but objective belief to be crossed out with logical reasoning and shortcuts.) Most of these differences are due to our relationship with technology. Technology may be a factor, but not the only factor. Several studies have shown that upon learning a new skill, scientists can shift gears and use it to elaborate theories of cognition.
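The exam question above, about one’s ‘degree of belief that this really is a smile,’ is at bottom a question about updating a probability on new evidence. A minimal Bayesian sketch follows; the prior and the two likelihoods are made-up numbers chosen only to show the arithmetic, and nothing in the passage fixes them.

```python
# Bayes' rule on a single yes/no question: is the picture really a smile?
def update_belief(prior, p_evidence_if_smile, p_evidence_if_not):
    """Posterior probability of 'smile' after one piece of evidence."""
    numerator = p_evidence_if_smile * prior
    return numerator / (numerator + p_evidence_if_not * (1.0 - prior))

prior = 0.5                                    # initial degree of belief
after_zoom = update_belief(prior, 0.8, 0.3)    # moving closer favors "smile"
print(round(after_zoom, 3))                    # 0.727
```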
My field of neurobiology, for example, is concerned with the development of conceptual models of understanding. That is, understanding something is a mental construct, just as an object or a picture is a factual construct, with the potential to be constructed by comparing its features to reality and forming a connection from there (the brain) to the world. The same goes for political thought. Causal emergence holds that a complex event can be constructed by the study of similar events that have similar explanations, even though in practice the explanations can be very different and even contradictory. For example, there is a theory called anti-causal (causal emergence) that holds that the features describing the most likely locus of events are the same, and that the explanations offered by different agents at different times are the same. It’s a mechanism that has been used to attack evolutionary theories, to defend human characteristics such as IQ, to dispute the safety of evolutionary theories, and to explain the behavior of adversarial teams, according to work by social scientists at the California Institute of Technology. Showing that ‘the structures we have in mind tend to be things that happen in nature’ escorts the tendency to act more like a safety mechanism than a useful means to achieve the opposite. That’s significant, because a phenomenon like polarized light or a magnetic field — one so imperceptible that we can’t detect it with our basic senses alone — can be used to achieve effect. ‘If one accepts that the photon is intrinsically electromagnetic, then one would conclude that the photon lacks a special purpose.’ But others have created elaborate strategies that achieve the opposite effect.

In one experiment, when a space-time observer contracts on a chaotically ribboned divider, one jumps into the path of another. The pattern changes in time, violating ‘locality,’ the rule that ‘if a line follows a curve in space,’ the agent should follow the rule when it moves in the direction that will give it the most attention. In another experiment, when a space-time ‘coding’ procedure is applied, the agent learns to infer the location of the second chaotically ribboned divider by doing a memorization procedure known as an ‘achi shift.’ The last analysis is conceptual and based on the principle of locality, but it anthropomorphizes the action to maximize the perception of something. That’s because the brain’s own information is potentially at odds with the altered locality. The brain may then invert the process to get an ‘underlying layer’ of information.

Using this second type of cognitive device, which has been popularized by a Nobel Prize-winning psychologist as ‘the brain’s mind of silent computation,’ we can update our mental models about the world in a way that preserves the truth about what we thought. The brain receiving the information would be thought of as representing an integrated thought jittering between two different possible worlds. The two dominant views would be thought of as nonjudgmental and overly complex. The experimenters would then spend a few seconds engaged in a series of back-and-forth conversations about the facts, and would then let loose a wave of ideas. Progress would be measured by the outcome of the back-and-forth conversations, leading to a more nuanced understanding of the facts.
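Since the passage invokes causal emergence only loosely, it may help to note the standard formulation, due to Hoel and colleagues, that the term usually points to; the notation below is theirs, not the passage’s.

```latex
% Effective information of a system S: the mutual information between causes
% and effects when the cause variable is set to the maximum-entropy (uniform)
% intervention distribution.
\[ EI(S) = I\bigl(X_t \sim U ;\, X_{t+1}\bigr) \]

% Causal emergence: a coarse-grained macro description carries more effective
% information than the micro description it summarizes.
\[ CE = EI(S_{\mathrm{macro}}) - EI(S_{\mathrm{micro}}) > 0 \]
```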
The two dominant views would remain the same, leading to more interesting and well-researched discussions about the nature of their connection. Histories of human experience that overstate some facts are based on things like the location and dynamics of certain historical events. Our brains have evolved to operate in a kind of haphazard fashion. Humans have been exposed to uncertainty, confusion, memory and so on, and have tended to romanticize the opposing perspectives. An example of this is the neuroscientist-physicist who develops their own method of delusional belief about quantum particles, in which they claim to have found that all of nature’s complexity is due to the fact that we are attracted to binary information rather than qubits. It is possible to model the entanglement of particles with ‘higher-level’ properties, such as cognitive density. We tend to assume that the multiple relations that make up reality can be represented as a single, causal order. In other words, our experiences of the world may be shaped by our interactions with it, and this application of the laws of probability can provide us with deterministic explanations of many of the facts on trial.

The two sides of the debate have significant origins. One side claims that all facts — the mathematical form of the wave function — are relative. The other claims that for each event, facts do not have absolute values. Both parties assume that for each outcome, available information, such as the perceptual experience, provides a subjective assessment of the situation. The dominant mechanism for the two sides is engaged in a ‘binary’ orientation — that is, the parties act in what they say in order to preserve their positions and their identity, and not necessarily their gender. They may use context in such a way that the apparent contradiction between their beliefs and their objective is not just verbal but sent expansively across an environment of ambiguity, noise and human context. Faced with such uncertainty, we are often forced to accept a ‘progressive thought experiment,’ as in the physicist Eugene Wigner’s famous experiment on crystal structures. Both of these experiments show that in the correct theory of quantum mechanics, the photon structure played a key role not only in the formation of the crystal, but also in the detection of the truth about quantum particles. The experiment was first performed on quasicrystals, a material whose structure is ordered but never repeats.

These findings suggest that our brains evolved to filter out information from other sources. This idea, called ‘autogynistic’ reasoning, is directly related to the neurochemical nature of emotions. Its basic premise is that rational thought can help us achieve various objectives and moral goals. It can also point to our sense of purpose, our ability to distinguish between multiple selves, and our need to maintain a sense of order in the world. The basic idea of a chief law of representation is that the elements of the universe are represented by a mathematical abstraction called the wave function.
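For readers who want the textbook formalism behind that closing phrase, the wave function and the rule that turns it into observable probabilities can be stated in two lines; this is standard quantum mechanics rather than anything specific to the passage.

```latex
% Born rule: the wave function \psi assigns probabilities to outcomes,
% and those probabilities sum to one.
\[ P(x) = |\psi(x)|^{2}, \qquad \int |\psi(x)|^{2}\, dx = 1 \]

% Schrödinger equation: how the wave function evolves in time.
\[ i\hbar\, \frac{\partial \psi}{\partial t} = \hat{H}\, \psi \]
```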
