“Why the CTMU as I See It Is the Greatest Theory Ever Conceived: Eternal, Irreplaceable, and How It Specifically Applies to Physics”

Luke Skywalker
Jan 29, 2018


The Cognitive-Theoretic Model of the Universe as the Nexus of Spirituality and Cosmology:

An Interdisciplinary Approach to Reality

Christopher Michael Langan

1. General Introduction

Since the dawn of our species, human beings have been asking difficult questions about themselves, the universe and the nature of existence, but have lacked a unified conceptual framework strong and broad enough to yield the answers. Enter the Cognitive-Theoretic Model of the Universe (CTMU).

This will be laid out in two parts. Part 1 covers the philosophy and the logic by which the theory is developed, which should change how we view and use science forever. Part 2 is strictly physics: what the CTMU says about physics, at least in the basics, such as resolving the quantum non-locality paradox and offering a new interpretation of quantum physics built on a conceptualization more fundamental than the atom. The ultimate conclusions and solutions, such as non-locality, come at the end, with some hinted at throughout. All of that is preceded by some preliminary, rudimentary physics on things such as time and how physics relates to the expression of our minds, as Langan lays out and as I brief here, so that little is assumed. This is still just an overview, summarized the best I could while reaching some grand conclusions specific to physics without getting into too much detail. So again: Part 1, the logic of the CTMU; Part 2, the CTMU as applied to physics, with a new model of spacetime.

Before I elaborate on the physics, I would like to try again, in one sentence or phrase (as I agreed to for clarification), to state what the theory even is, because there is another, perhaps clearer way to put it than I did before: “The CTMU is a theory that says all of the physical outcomes from the original formation of the universe, and all its content, physical or non-physical, are outcomes of logical structure. Everything in existence, by the very nature of its existence in being created, conforms on some level to these absolute rules of logic. The CTMU lays these rules out as logical axioms that can be used to solve any problem in science, including in the realm of quantum physics, and to solve anything we haven’t or can’t directly observe, bypassing empirical induction; the predictions that flow from such solutions are then subject to empirical confirmation later on.”

In the CTMU, everything in existence embedded in reality conforms to logical structure of some kind, and the CTMU proposes to get to the bottom of the absolute rules of logic. These rules can even be applied to quantum mechanics; not even the Heisenberg Uncertainty Principle (HUP) violates them. Getting to the bottom of things with logic means reaching an absolute, invariant level that applies to all reality and all physical structure. Even an electron that cannot be tracked to one specific location with perfectly known momentum at one moment in time cannot violate the fundamental rule of logic on which truth is based, called the tautology principle:

For example, take the sentential tautology “X v ~X” (X OR NOT-X). Applied to perception, this means that when something is seen or observed, it is not seen in conjunction with its absence(Such as an electron); if it were, then two contradictory perceptions would coincide, resulting in a “splitting off” of perceptual realities. In effect, either the consciousness of the perceiver would split into two separate cognitive realities in a case of chain-reactive dissociation, or the perceiver himself would physically split along with physical reality. When “X v ~X” is composed with other tautologies (or itself) by substitution, the stakes are exactly the same; any violation of the compound tautology would split perceptual and cognitive reality with disastrous implications for its integrity.28
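To make this concrete, here is a minimal sketch of my own (not from Langan’s paper) that checks by brute-force truth-table enumeration that “X v ~X” holds under every assignment, and that substituting one tautology into another yields another tautology:

```python
from itertools import product

def is_tautology(formula, variables):
    """Evaluate a two-valued sentential formula under every truth assignment."""
    return all(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

# The excluded middle, "X v ~X": true under every assignment.
excluded_middle = lambda v: v["X"] or not v["X"]
print(is_tautology(excluded_middle, ["X"]))   # True

# Substituting the tautology (Y v ~Y) for X gives "(Y v ~Y) v ~(Y v ~Y)",
# which is again a tautology: composition by substitution preserves truth.
compound = lambda v: (v["Y"] or not v["Y"]) or not (v["Y"] or not v["Y"])
print(is_tautology(compound, ["Y"]))          # True
```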

Also: Even so-called “nonstandard” logics, e.g. modal, fuzzy and many-valued logics, must be expressed in terms of fundamental two-valued logic to make sense. In short, two-valued logic is something without which reality could not exist. If it were eliminated, then true and false, real and unreal, and existence and nonexistence could not be distinguished, and the merest act of perception or cognition would be utterly impossible.

The CTMU says these rules of logic it lays out can be used to investigate the substance of science, and that no electron, if it exists in spacetime, can violate this principle, which is a good start toward solving everything.

The only problem arises when things get really complex and informational complexity increases in the scientific phenomena we are studying, whether it is cell organelles in cell biology, the complex interaction of electrons, making sense of quantum gravity and trying to unify it with relativity, or any number of things such as trying to find the cause of cancer. This is what Langan means when he says:

“As complexity rises and predicates become theories, tautology and truth become harder to recognize. Because universality and specificity are at odds in practice if not in principle, they are subject to a kind of “logical decoherence” associated with relational stratification. Because predicates are not always tautological, they are subject to various kinds of ambiguity; as they become increasingly specific and complex, it becomes harder to locally monitor the heritability of consistency and locally keep track of the truth property in the course of attribution (such as the HUP principle in QM) (or even after the fact). Undecidability,26 LSAT intractability and NP-completeness, predicate ambiguity and the Lowenheim-Skolem theorem, observational ambiguity and the Duhem-Quine thesis27…these are some of the problems that emerge once the truth predicate “decoheres” with respect to complex attributive mappings. It is for reasons like these that the philosophy of science has fallen back on falsificationist doctrine, giving up on the tautological basis of logic, effectively demoting truth to provisional status, and discouraging full appreciation of the tautological-syntactic level of scientific inquiry even in logic and philosophy themselves. In fact, the validity of scientific theories and of science as a whole absolutely depends on the existence of a fundamental reality-theoretic framework spanning all of science…a fundamental syntax from which all scientific and mathematical languages, and the extended cognitive language of perception itself, can be grammatically unfolded, cross-related and validated. Tautology, the theoretical basis of truth as embodied in sentential logic, is obviously the core of this syntax. Accordingly, reality theory must be developed through amplification of this tautological syntax.” Here is an example of Langan explaining how tautologies and these absolute philosophical rules of logic can be used to resolve scientific paradoxes in various theoretical contexts and settings, for any kind of scientific phenomenon, whether under a microscope or through a telescope, whatever you’re looking at:

“By converting tautologies into other tautologies, the rules of inference of sentential logic convert cognitive-perceptual invariants into other such invariants. To pursue this agenda in reality theory, we must identify principles that describe how the looping structure of logical tautology is manifest in various reality-theoretic settings and contexts on various levels of description and interpretation; that way, we can verify its preservation under the operations of theoretic reduction and extension. I.e., we must adjoin generalized principles of loop structure to logical syntax in such a way that more and more of reality is thereby explained and comprehensiveness is achieved.”

To amplify this and drive home how the CTMU proposes to use logic to study science we haven’t even observed, including metaphysics, and to solve anything in science, along with the idea that everything physical conforms to this logical structure, consider his first answer in the CTMU Q&A about what the CTMU even is and how it is constructed. This one response should show how the CTMU changes science as we know it; it may even change your entire outlook on studying science, as it did mine:

Q: Chris, I’m not a mathematician or physicist by any stretch, but I am a curious person and would like to know more about the CTMU (Cognitive-Theoretic Model of the Universe). I am particularly interested in the theological aspects. Can you please explain what the CTMU is all about in language that even I can understand?

A: Thanks for your interest, but the truth is the CTMU isn’t all that difficult for even a layperson to understand. So sit back, relax, kick off your shoes and open your mind…

Scientific theories are mental constructs that have objective reality as their content. According to the scientific method, science puts objective content first, letting theories be determined by observation. But the phrase “a theory of reality” contains two key nouns, theory and reality, and science is really about both. Because all theories have certain necessary logical properties that are abstract and mathematical, and therefore independent of observation — it is these very properties that let us recognize and understand our world in conceptual terms — we could just as well start with these properties and see what they might tell us about objective reality. Just as scientific observation makes demands on theories, the logic of theories makes demands on scientific observation, and these demands tell us in a general way what we may observe about the universe.

In other words, a comprehensive theory of reality is not just about observation, but about theories and their logical requirements. Since theories are mental constructs, and mental means “of the mind”, this can be rephrased as follows: mind and reality are linked in mutual dependence at the most basic level of understanding. This linkage of mind and reality is what a TOE (Theory of Everything) is really about. The CTMU is such a theory; instead of being a mathematical description of specific observations (like all established scientific theories), it is a “metatheory” about the general relationship between theories and observations…i.e., about science or knowledge itself. Thus, it can credibly lay claim to the title of TOE.

Mind and reality — the abstract and the concrete, the subjective and the objective, the internal and the external — are linked together in a certain way, and this linkage is the real substance of “reality theory”. Just as scientific observation determines theories, the logical requirements of theories to some extent determine scientific observation. Since reality always has the ability to surprise us, the task of scientific observation can never be completed with absolute certainty, and this means that a comprehensive theory of reality cannot be based on scientific observation alone. Instead, it must be based on the process of making scientific observations in general, and this process is based on the relationship of mind and reality. So the CTMU is essentially a theory of the relationship between mind and reality.

In explaining this relationship, the CTMU shows that reality possesses a complex property akin to self-awareness. That is, just as the mind is real, reality is in some respects like a mind. But when we attempt to answer the obvious question “whose mind?”, the answer turns out to be a mathematical and scientific definition of God. This implies that we all exist in what can be called “the Mind of God”, and that our individual minds are parts of God’s Mind. They are not as powerful as God’s Mind, for they are only parts thereof; yet, they are directly connected to the greatest source of knowledge and power that exists. This connection of our minds to the Mind of God, which is like the connection of parts to a whole, is what we sometimes call the soul or spirit, and it is the most crucial and essential part of being human.

Thus, the attempt to formulate a comprehensive theory of reality, the CTMU, finally leads to spiritual understanding, producing a basis for the unification of science and theology. The traditional Cartesian divider between body and mind, science and spirituality, is penetrated by logical reasoning of a higher order than ordinary scientific reasoning, but no less scientific than any other kind of mathematical truth. Accordingly, it serves as the long-awaited gateway between science and humanism, a bridge of reason over what has long seemed an impassable gulf.

It’s worth taking a close look at this, if not from the author’s perspective then from someone else’s, to see what I mean when I say this answer, and the CTMU, is capable of changing humanity’s entire outlook on science and knowledge itself. As I said, I will interject my own thoughts periodically, as the CTMU hit me and as I have come to understand it, to make sense of what the CTMU is trying to tell us, as if from another dimension:

Scientific theories are mental constructs that have objective reality as their content. I don’t think this is debatable: a theory is something we come up with from our own minds and hypothesize on our own. Consider Einstein’s thought experiments, though any theory is this way. And the theory conjectures about objective reality, the things we can observe, touch, feel, see, and experimentally test in the laboratory using technology, microscopes, and so on. We theorize and form a mental construct about something we observe.

According to the scientific method, science puts objective content first, letting theories be determined by observation. The order in which science is done is: observe phenomena, then think about them abstractly in deep thought, trying to make sense of them and forming a theory. In doing this, science says our theories are determined by what we can observe, by the data we test for. From this data the scientist modifies the theory to accommodate what is observed, and if the data invalidate the theory, the theory is thrown out. That is what this is saying: our theories are determined by the observed data we test for, and we let that determine the truth content and validity of our theories. And there is nothing wrong with that, however…

But the phrase “a theory of reality” contains two key nouns, theory and reality, and science is really about both. Science isn’t just about what we can observe but about theorizing about it. Our theories are our actual acquired body of knowledge. Science, if it is a process and means anything at all, isn’t just “go look at stuff”, for we can all see the universe outside of us. It is indisputable that science is about both what we can observe and what we can theorize, for that is where mind and intelligence come in, along with our historical respect for brilliance. We have all observed gravity, motion, and acceleration, but it wasn’t until Newton came along and actually theorized about what he observed, and formed equations about it, that it mattered. No one can just look at light and see the theory of relativity; that took Einstein’s theory and equations. So science is about both theory (our mental constructs) and reality (what we observe, the empirical data).

Because all theories have certain necessary logical properties that are abstract and mathematical, and therefore independent of observation — it is these very properties that let us recognize and understand our world in conceptual terms. This is very true and insightful when one thinks about it. Newton’s theory of motion doesn’t exist without logical properties and abstract mathematical reasoning: F = ma. The theory doesn’t exist without that equation; that is the only way we can understand what Newton means. The same goes for Einstein’s E = mc². Every theory and established scientific law has necessary logical properties that require abstract thought and reasoning to figure out and put together. For Newton to come up with F = ma and for Einstein to arrive at E = mc² took brilliant minds, which is why intelligence matters in the scientific process, along with deep, abstract thought and advanced mental reasoning that is not so concrete and easily laid out. It takes a mind to do this kind of thinking; it is more than just observing and looking at things. These properties of our theories are independent of observation. The logical and mathematical properties of our theories and equations are completely separate from observation, for if they weren’t, we could all just observe motion and know this. And it is through these properties of our theories and logical reasoning that we can know anything at all and understand our world in conceptual terms. Now that we know this, here is the idea of how the CTMU can change science as we know it:

We could just as well start with these properties and see what they might tell us about objective reality. Just as scientific observation makes demands on theories, the logic of theories makes demands on scientific observation, and these demands tell us in a general way what we may observe about the universe. So here is the main idea. Instead of observing something first, thinking about it, and letting all our theories be determined by empirical data alone, we can go the other way. The empirical route runs into problems because empirical induction assumes the uniformity of nature, and there are things we cannot observe. There are also things we cannot make sense of by mere observation, even while observing them. So science has its limits. To circumvent this problem, Langan is saying that in Einstein’s case, for instance, we could have just started with E = mc² and used it to tell us what we would observe. Einstein had to observe light and think about it deeply to come up with the equation, and Newton likewise had to observe gravity and motion; but the claim is that Newton or Einstein, never having observed these things or tested for them (or, in relativity, for the bending of light), could have started with the logic and the math and, if it were based on correct, absolute rules of logic and mathematical reasoning, used that to tell us what we will observe before ever having observed it. We can flip the process of science around and use it to investigate reality: start by developing rules of logic and math without observing anything, and let those be the decider of what is true and of how the empirical data will go. The same holds for a mathematical proof of God: even though we don’t claim to have seen God, the idea is the same. We can start by developing the math and logical reasoning, never observing anything, then apply these rules to everything we observe; a theory developed this way is without flaw or error, or at the very least without assumptions. It just takes a brilliant mind to figure out how to do this. That is why Langan writes: the logic of theories makes demands on scientific observation, and these demands tell us in a general way what we may observe about the universe. With this idea, our knowledge and understanding are not limited to what we can directly observe. We can continue to develop the logic of theories further and further to deepen our explanatory scope, and the CTMU is the theory that says how to do this. Any number of scientists, mathematicians, or philosophers who get their hands on this kind of knowledge can use it to solve any number of problems they are working on that Langan does not solve in his lifetime, which is why Langan writes elsewhere on the CTMU Q&A page: “Unlike other TOEs, the CTMU does not purport to be a “complete” theory; there are too many physical details and undecidable mathematical theorems to be accounted for (enough to occupy whole future generations of mathematicians and scientists), and merely stating a hypothetical relationship among families of subatomic particles is only a small part of the explanatory task before us. Instead, the CTMU is merely designed to be consistent and comprehensive at a high level of generality, a level above that at which most other TOEs are prematurely aimed.

The good news is that a new model of physical spacetime, and thus a whole new context for addressing the usual round of quantum cosmological problems, has emerged from the CTMU’s direct attack on deeper philosophical issues.” The point is that the CTMU is eternal and irreplaceable because the rules of logic it uses to investigate reality can be used by any scientist and applied to any problem, solving problems in particle physics forever. That is what a truly comprehensive theory of everything would do; the only alternative would be to actually describe all of the informational data and everything science can ever observe in the universe, forever, and know everything at once, which is humanly impossible. This is why the CTMU defines reality as follows: “we can refine the definition of reality as follows: “Reality is the perceptual aggregate including (1) all scientific observations that ever were and ever will be, and (2) the entire abstract and/or cognitive explanatory infrastructure of perception” (where the abstract is a syntactic generalization of the concrete standing for ideas, concepts or cognitive structures distributing over physical instances which conform to them as content conforms to syntax).”-Pg.16.

One would have to be God to know everything there is at once, which is why the CTMU defines God as follows: “This implies that we all exist in what can be called “the Mind of God”, and that our individual minds are parts of God’s Mind. They are not as powerful as God’s Mind, for they are only parts thereof; yet, they are directly connected to the greatest source of knowledge and power that exists.” And also, about knowing everything: “Since reality always has the ability to surprise us, the task of scientific observation can never be completed with absolute certainty, and this means that a comprehensive theory of reality cannot be based on scientific observation alone.” So there we have God defined as the greatest source of power and knowledge that exists, which, if God exists, would make him the greatest being in existence there could be. Then we have the idea of why science will always be able to occupy us: the CTMU also shows that reality is in a constant state of self-creation, which includes free will. Take this from the 56-page paper: “Telic recursion is a fundamental process that tends to maximize a cosmic self-selection parameter, generalized utility, over a set of possible syntax-state relationships in light of the self-configurative freedom of the universe.”-Pg.35. In other words, information is always being added to the universe, which Langan has also said in a recorded interview: “The classical Laplacian-deterministic worldview is all but dead. As reality is affected by every possible kind of ambiguity, uncertainty, indeterminacy, and undecidability, no theory of reality can ever be complete. In principle, this makes any such theory a permanent work-in-progress in which a very great deal always remains to be done. Exploration must continue.

However, a theory of reality can still be comprehensive, classifying knowledge in the large rather than determining it exhaustively.” The point is that there is always more to learn, which again is why reality always has the ability to surprise us and the task of scientific observation can never be completed with absolute certainty. But back to the original point, that the logic of theories makes demands on scientific observation, and these demands tell us in a general way what we may observe about the universe. The idea of how the CTMU changes our outlook on science is this: Newton, by observing motion and acceleration, could quantify things such as the acceleration of gravity and how force accelerates motion. He observed first and then thought about it at length, as Newton himself is quoted as saying: “If I have done the public any service, it is due to my patient thought.”
(Source: https://www.brainyquote.com/quotes/isaac_newton_135550)

The main idea is this: since Newton figured out F = ma and it was shown to work by measurement and experimental data, we can say that the logic of theories makes demands on what we observe. F = ma was confirmed correct by experimental data. From this knowledge, and from any valid scientific theory confirmed by observation, we can just as well say that instead of letting our theories be determined by observation, Newton could have started with F = ma without observing anything and known he was right purely by mathematical and philosophical rules of reasoning and logic. This shows we can flip the process and start with logic first, and the CTMU shows just how to do this. The CTMU is saying Newton could have never observed motion, still come up with F = ma, and been correct; and that F = ma does make a demand on what we will observe. Through these logical requirements, if they are derived with correct logic (logic being the valid rules of thought), we can use such concepts to tell us what we will observe in the universe before we have observed it, know we are right, and use them to increase our knowledge of what we can observe. We can just develop the logic and math to do this. F = ma makes a demand on what we observe in the universe; the same goes for any logical principles derived according to absolute, invariant rules of logic. The central hypothesis of the theory, then, one could say, is that everything observable in the universe conforms to certain rules of logic, that nothing is random, and that it is through these rules of logic that we can classify all knowledge in the universe and successfully describe anything, because everything physical, by its existence, conforms to such rules.
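To put a number on the claim that F = ma “makes a demand on what we observe”, here is a trivial sketch of my own (not from the source): once mass and acceleration are fixed, the equation dictates in advance the force any experiment must report, and E = mc² does the same for rest energy.

```python
# Newton's second law and Einstein's mass-energy equivalence, evaluated
# for sample values to show the demand each equation places on observation.
m = 2.0          # mass in kilograms
a = 9.8          # acceleration in m/s^2 (roughly Earth's surface gravity)
c = 2.998e8      # speed of light in m/s

F = m * a        # F = ma: the force any measurement must report
E = m * c**2     # E = mc^2: the rest energy locked in the same mass

print(f"F = {F:.1f} N")    # F = 19.6 N
print(f"E = {E:.3e} J")    # E = 1.798e+17 J
```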

In other words, a comprehensive theory of reality is not just about observation, but about theories and their logical requirements. Since theories are mental constructs, and mental means “of the mind”, this can be rephrased as follows: mind and reality are linked in mutual dependence at the most basic level of understanding. This linkage of mind and reality is what a TOE (Theory of Everything) is really about. The CTMU is such a theory; instead of being a mathematical description of specific observations (like all established scientific theories), it is a “metatheory” about the general relationship between theories and observations…i.e., about science or knowledge itself. Thus, it can credibly lay claim to the title of TOE. This is saying how the CTMU was developed, what it means for it to be a theory of everything, and why it can credibly lay claim to such a title and be applied to anything science can get its hands on.

And so this is the idea of how it is developed: without assumptions, stripping away constraints, built backwards with respect to science, doing it the opposite way and flipping the process on its head as we were just describing. Instead of our theories being determined by observation, our theories determine observation: “This precludes a neat series of cumulative definitions, which is possible in any case only by taking for granted the content and wherewithal of theorization (unfortunately, one can take nothing for granted in reality theory). As we will see below, the recursive nature of the CTMU is unavoidable. Secondly, the CTMU is developed “backwards” with respect to the usual deductive theories of science and mathematics, by first peeling away constraints and only then using the results to deduce facts about content. Most theories begin with axioms, hypotheses and rules of inference, extract implications, logically or empirically test these implications, and then add or revise axioms, theorems or hypotheses. The CTMU does the opposite, stripping away assumptions and “rebuilding reality” while adding no assumptions back.”-Pg.15. So the CTMU is without assumption, developing inescapable rules of logic that can apply to describing anything in existence, qualifying it as a theory of everything. That is what most of the 56-page paper is about. This shows that science cannot be divorced from philosophy.

The CTMU and Physics (The physics here is obviously summarized, and is also edited for format from the same section on pages 17–18 of the 19-page document I printed off and put together: a hodgepodge of Chris’s different answers and responses about the CTMU in general, what it is, what it has to offer, and why it’s important or why anyone should care. What follows is strictly physics, in a little more depth than the messy copy-and-paste format of pages 17 and 18, with the new definitions of space, time, matter and motion.)

The avowed goal of physics is to produce what is sometimes called a “Theory of Everything” or TOE. As presently conceived, the TOE is thought to consist of one equation describing a single “superforce” unifying all the forces of nature (gravity, electromagnetism, and the strong and weak nuclear forces). But this is actually an oversimplification; every equation must be embedded in a theory, and theories require models for their proper interpretation. Unfortunately, the currently available theory and model lack three important properties: closure, consistency and comprehensivity. That is, they are not self-contained; they suffer from various intractable paradoxes; and they conspicuously exclude or neglect various crucial factors, including subjective ones like consciousness and emotion. Since the excluded factors fall as squarely under the heading “everything” as the included ones, a real TOE has no business omitting them. So as now envisioned by physicists, the TOE is misnamed as a “theory of everything”. The CTMU, on the other hand, is a TOE framework in which “everything” really means everything.

Whereas the currently-envisioned TOE emphasizes objective reality at the expense of its subjective counterpart (mind), the CTMU places mind on the agenda at the outset. It does this not by making assumptions, but by eliminating the erroneous scientific assumption that mind and objective reality can be even tentatively separated. To do this, it exploits not just what we know of objective reality (the so-called “everything” of the standard TOE) but also what we know of the first word in “TOE”, namely theory. In other words, it brings the logic of formalized theories to bear on reality theory. Although this is a mathematically obvious move, it has been almost completely overlooked in the physical and mathematical sciences. By correcting this error, the CTMU warrants description as a theory of the relationship between the mind of the theorist and the objective reality about which it theorizes, completing the program of subjective-objective unification already inherent in certain aspects of the formalisms of relativity and quantum mechanics. In the process, it also brings the quantum and classical realms of physics into the sort of intimate contact that can only be provided by a fundamentally new model of physical and metaphysical reality…a model truly worthy of being called a “new paradigm”. Fundamental to this new model are revisions of basic physical concepts including space, time, matter and motion. Space, once a featureless medium aimlessly proliferating through cosmic expansion, becomes a distributed syntactic structure iteratively reborn of matter and subject to conspansive evacuation and rescaling. Time, previously envisioned as a quasi-spatial linear dimension along which the cosmos hurtles like a runaway locomotive, becomes the means by which the universe self-configures…an SCSPL-grammatical symphony of logico-linguistic transformations played by the self-creating cosmos. Lumps of matter, no longer the inert pawns of external laws of physics, become SCSPL syntactic operators containing within themselves the syntactic rules by which they internally process each other to create new states of physical reality. And motion, once seen as the passage of material attribute-ensembles through adjacent infinitesimal cells of empty space displaying them as content, becomes an iterative, self-simulative sequence of endomorphic self-projections by moving bodies themselves.

SCSPL relates space, time and object by means of conspansive duality and conspansion, an SCSPL-grammatical process featuring an alternation between dual phases of existence associated with design and actualization and related to the familiar wave-particle duality of quantum mechanics.-Pg. 1 of Abstract.

By distributing the design phase of reality over the actualization phase, conspansive spacetime also provides a distributed mechanism for Intelligent Design, adjoining to the restrictive principle of natural selection a basic means of generating information and complexity. (This is how, in the theory, intelligent design and evolution go together: evolution, or natural selection, is added information, whether it be the human genome or what have you, into biological life; it is information added into the universe. Evolution is not random but adds information as it evolves, which it must, as things evolve organs and such; and it is also a design process, guided by cosmic evolution, which in the CTMU is called telic causation.)-also page 1.

Duality is a ubiquitous concept in mathematics, appearing in fields from logic and the theory of categories to geometry and analysis. The duality relation is symmetric; if dualizing proposition A yields proposition B, then dualizing B yields A.
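A small illustration of my own before the quote continues: in two-valued logic, De Morgan duality swaps AND with OR, and applying it twice returns the original proposition, so if dualizing A yields B, dualizing B yields A.

```python
# De Morgan duality: swap AND and OR throughout a propositional formula
# (negation handling omitted for brevity). Formulas are nested tuples,
# e.g. ("and", "X", ("or", "Y", "Z")). Dualization is an involution.

def dual(formula):
    if isinstance(formula, str):     # an atomic variable is self-dual
        return formula
    op, *args = formula
    swapped = {"and": "or", "or": "and"}[op]
    return (swapped, *(dual(a) for a in args))

A = ("and", "X", ("or", "Y", "Z"))
B = dual(A)              # ("or", "X", ("and", "Y", "Z"))
assert dual(B) == A      # dualizing B yields A: the relation is symmetric
print(B)
```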

Duality principles thus come in two common varieties, one transposing spatial relations and objects, and one transposing objects or spatial relations with mappings, functions, operations or processes. The first is called space-object (or S-O) duality; the second, time-space (or T-S/O) duality. In either case, the central feature is a transposition of element and a (spatial or temporal) relation of elements. Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures. From this, we may extract a third kind of duality: ST-O duality. In this kind of duality, associated with something called conspansive duality, objects can be “dualized” to spatiotemporal transducers, and the physical universe internally “simulated” by its material contents. (Where communication happens on all scales, the distinction between inside and outside is not so clear. The languages communicated among language users and processors, and the languages embodied by users and processors themselves, occupy an overall medium with a unified communicative syntax largely indifferent to the distinction. The laws that govern a system may be reposed in the space that contains its objects, or in the objects themselves. Classical physics reposes everything in space, applying spatial concepts like vectors and tensors to fields outside the objects. However, it is possible to apply a logical transformation which inverts this picture, turning it “outside-in”. This results in a “distributed subjectivization” in which everything occurs inside the objects; the objects are simply defined to consistently internalize their interactions, effectively putting every object “inside” every other one in a generalized way and thereby placing the contents of space on the same footing as that formerly occupied by the containing space itself. Vectors and tensors then become descriptors of the internal syntactic properties and states of objects. In effect, the universe becomes a “self-simulation” running inside its own contents. This view, which is complementary to the conventional geometric one, is called transductive algebra. The “dual” relationship between geometry and transductive algebra is called conspansive duality. In conjunction with other principles including hology and SCSPL infocognitive-telic reducibility, conspansive duality can afford fresh insight on the nature of reality and the physical world. One simply takes the conventional picture, turns it outside-in, puts the two pictures together, and extracts the implications.)-Pg.41.

We can now, through the CTMU, classify these four elements of physics (space, time, matter and motion) and all of reality and perception at large, which in the CTMU is contained in the HCS, the Human Cognitive-Perceptual Syntax, distributed over the entire universe. This contains everything physics studies and more:

“The primary transducers of the overall language of science are scientists, and their transductive syntax consists of the syntax of generalized scientific observation and theorization, i.e. perception and cognition. We may therefore partition or stratify this syntax according to the nature of the logical and nonlogical elements incorporated in syntactic rules. For example, we might develop four classes corresponding to the fundamental trio space, time and object, a class containing the rules of logic and mathematics, a class consisting of the perceptual qualia in terms of which we define and extract experience, meaning and utility from perceptual and cognitive reality, and a class accounting for more nebulous feelings and emotions integral to the determination of utility for qualic relationships.44 For now, we might as well call these classes STOS, LMS, QPS and ETS, respectively standing for space-time-object syntax, logico-mathematical syntax, qualio-perceptual syntax, and emo-telic syntax, along with a high-level interrelationship of these components to the structure of which all or some of them ultimately contribute. Together, these ingredients comprise the Human Cognitive-Perceptual Syntax or HCS.”-Pg.41. In the HCS would be contained all of the information content of the entire universe, including all of physics and what physics studies.

For it seems that just ahead on the intellectual horizon looms a new science of metaphysics…a logical framework in which the importance of humankind is unthreatened by reductionism. This framework yields a new understanding of space and time. The logical structure of spacetime provides the universe with the wherewithal of being, endowing it with self-creative freedom and permitting it to rise from a sea of undifferentiated ontological potential.

Thus, the universe “selects itself” from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general.

In this ostensibly inanimate, impersonal universe, a garden is a miracle. All the more so is a garden slug, an animal that can extract sufficient energy from the garden’s vegetable matter to move from place to place under its own power. When one is in the right mood, watching the shimmering spotted slug slide over the mulch evokes the miracle of biology in all its splendor; the creature’s pulsating aliveness is hypnotic. But then one recovers his bearings and realizes that this is only, after all, a garden slug, and that the ladder of biology goes much higher. The miracle of life has culminated in one’s own species, man. Unlike the slug, whose nervous system has barely enough complexity to let it interface with the environment, a man’s nervous system, nucleated by the adaptive and inventive human brain, can abstractly model its surroundings and project itself consciously and creatively through time.

To see an object change, one must recall its former state for comparison to its present state, and to do that, one must recall one’s former perception of it. Because perception is an interaction between self and environment (this makes a good transition from his most basic work to the definition of perception in the 56-page paper: “perception is a sensory intersect of mind and reality”-Pg.19), this amounts to bringing one’s former self into conjunction with one’s present self. That past and present selves can be brought into conjunction across a temporal interval implies that momentary selves remain sufficiently alike to be conjoined; that they can intersect at any given moment to compare content means that the intersection is changeless. So when self is generalized as the intersection of all momentary selves, it acquires a property called time invariance. It is the rock of perception, the unchanging observation post from which the net of temporal connections is cast and to which it remains anchored.

Through learning, mental models of time evolve in time. As the brain’s neural connections are modified and the strengths of existing connections are adjusted to account for new information regarding both self and environment — as it learns — its model of time changes as a function of time. In other words, the model changes with that which is modeled. If the brain is smart enough, then it can model itself in the process of being changed, and depict its own learning process as a higher level of time. But even as the self absorbs its educational history and deepens its reflexive understanding, it remains static at its core. Otherwise, it would lose temporal cohesion and fall apart. Since self is static, time too should possess a static description that does not change in the temporal flow it describes (if time were the water flowing in a river, then a static description of time would be analogous to the rocky banks that determine the river’s course).

Such a description arises by abstraction. As cognitive models become more sophisticated, cognition becomes increasingly abstract; concepts become increasingly independent of the particular objects they describe. Among the first things to be abstracted are space and time. The most general abstract system incorporating both is a language. Although the term “language” usually refers to a natural language like English, it is actually more general. (This is taken from an online chat with Langan describing what a language actually is: “Most people think that a language must be a natural, spoken or written language like English, French or German. But in mathematics and logic, the definition of language is far more general. Reality itself can be viewed as a language.”) Mathematically, a formal language consists of three ingredients: a set of elements to be combined as strings (e.g., symbols, memes), a set of structural rules governing their arrangement in space, and a set of grammatical rules governing their transformations in time. Together, the latter two ingredients form the syntax of the language. It follows that neural, cognitive-perceptual, and physical systems can be described as languages, and the laws which govern them as their syntaxes. On a subjective level, time itself can be abstractly characterized as the grammar of the joint language of cognition and perception. The rules of this grammar are the general ingredients of subjective time.
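Here is a minimal sketch of that three-ingredient definition in code (my own illustration; the names are hypothetical, not from the source): an alphabet of elements, a structural rule saying which arrangements are legal strings, and a grammatical rule transforming one legal string into the next, playing the role of time.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FormalLanguage:
    alphabet: set[str]                    # elements to be combined as strings
    well_formed: Callable[[str], bool]    # structural rule: arrangement in space
    transform: Callable[[str], str]       # grammatical rule: transformation in time

# Toy instance: even-length strings over {a, b}; each "tick" appends a pair.
toy = FormalLanguage(
    alphabet={"a", "b"},
    well_formed=lambda s: set(s) <= {"a", "b"} and len(s) % 2 == 0,
    transform=lambda s: s + "ab",
)

state = "ab"
for _ in range(3):                        # run the grammar as "time"
    assert toy.well_formed(state)
    state = toy.transform(state)
print(state)                              # abababab
```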

Because time is defined in terms of transformations among spatial arrangements of objects, it is conceptually entwined with space. Thus, it is actually part of a linguistic complex called spacetime. Spatiotemporal relations exist on many levels; if level one consists of simple relationships of objects in space and time, then level two consists of relationships of such relationships, and so on. Because logic is stratified in much the same way, one can say that time is stratified in a manner corresponding to predicate logic. This must be true in any case, since any meaningful description of time is logically formulated. Spatiotemporal stratification allows time to be viewed on various scales corresponding to ascending series of contexts: e.g., personal awareness, interpersonal relationships, social evolution, evolutionary biology, and so on. The histories of people, institutions, cultures, and species are nested like Chinese boxes, with the abstract principles of each history occupying a level of temporal grammar corresponding to an order of predicate logic.
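(A toy sketch of my own of that stratification: a level-one relation holds between objects, while a level-two relation holds between level-one relations themselves, just as second-order predicates apply to first-order ones.)

```python
# Level 1: relations among objects. Level 2: a relation among relations.
greets  = {("alice", "bob")}       # level-1 relation between two objects
replies = {("bob", "alice")}       # another level-1 relation

def answered(r1, r2):
    """Level-2 relation: r2 reverses some pair of r1."""
    return any((b, a) in r2 for (a, b) in r1)

print(answered(greets, replies))   # True
```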

Because of the relation between self-awareness and temporal awareness, temporal stratification induces a stratification of self. What we have already described as the static intersect of momentary selves becomes a stratified relationship…a terrace of temporal vantages conducing to long-term self-integration. As the self becomes stratified, the principles abstracted from higher orders of experience tend to be objectivized due to their generality, with science and philosophy among the results. Thus, the subjective and objective sides of reality — the self and the environment — tend to merge in a symmetric way. On one hand, the environment is absorbed by the self through experience, and the laws of nature are thereby abstracted; on the other hand, the self is projected onto the environment in such a way that it “selects” the laws of nature by analogy to its own internal laws. (Langan gives a very good example of this in the CTMU Q&A: “My question is this: If you could answer the question what is the mathematical difference between visible light and invisible light, i.e. ultraviolet rays, wouldn’t this answer the question concerning the importance of further study into what is defined as physical. After all how do you perceive ultraviolet rays — as a sunburn or plant growth. Therefore, although not visible there indeed may be other energy forms that coexist right where we are, having an impact on us, without our knowing its source. It is not visibly physical yet its effect on us is very physical.

A: Visible and UV light differ in frequency, or number of waves transmitted or received per second. Because light always travels at the same speed (c = ~300K km/sec), higher frequency means shorter waves:

lambda = c/frequency (where lambda = wavelength)

I.e., more energetic, higher-frequency light has a smaller wavelength than less energetic, lower-frequency light. Unfortunately, the tiny light sensors in our retinas, called rods and cones, cannot detect short-wavelength UV light.
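(A quick numeric check of the quoted relation, a sketch of my own with approximate frequencies:)

```python
# lambda = c / frequency: higher frequency means shorter wavelength.
c = 2.998e8                                   # speed of light in m/s

for name, f in [("red (visible)", 4.3e14),    # frequencies in Hz
                ("violet (visible)", 7.5e14),
                ("ultraviolet", 1.5e15)]:
    lam_nm = c / f * 1e9                      # wavelength in nanometers
    print(f"{name}: {lam_nm:.0f} nm")

# red ~697 nm, violet ~400 nm, UV ~200 nm; the UV wavelength falls below
# what the rods and cones in our retinas can detect.
```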

Your question seems to be this: how can we call UV light “physical” when we cannot directly detect it? The answer is twofold but simple: we can call it “physical” because of (1) its perceptible physical effects on animals, plants, minerals and detection devices, and (2) our need to acknowledge the full definitions and logical implications of our perceptions and concepts.

Answer (2) is why reality is not merely “physical” in the concrete or material sense. In order to exist as a self-consistent perceptible entity, reality must ultimately make logical sense; our perceptions of it must conform to a coherent cognitive syntax containing the rules of perception and cognition and incorporating logic. This syntax tells us that if light exists below the maximum visible frequency, then in the absence of any extra constraints, it can exist above it as well.

Specifically, having identified the physical cause of light to be photon emission by subatomic oscillators called electrons, we are compelled to recognize the existence of “light” at whatever frequencies such oscillators may exhibit, right up through X and gamma radiation. The logical component of our cognitive syntax ultimately forces us to define and cross-relate all of the concepts in terms of which we apprehend reality, including light, in a logically consistent way.” This, with the scientific example of UV rays and visible and invisible light, is saying that we can only know and understand reality through LOGIC, period, and that all physical substance of the universe conforms to categories of LOGICAL SYNTAX. All of the stuff of science has to exist by the rules of logic somehow.)

Either way, the core self tends to intersect with the environment as momentary selves are intersected within it. This brings the subjective and objective phases of reality — and time — into closer correspondence, blurring the distinction between them from an analytic standpoint.

As time grows more abstract, ways are sought to measure it, diagram it and analyze it numerically. This requires a universal depiction of space and time against which arbitrary processes can be differentially graphed and metered. Such a depiction was introduced by the Frenchman René Descartes in the first half of the 17th century. It was called analytic geometry, and it depicted time and the dimensions of space as straight, mutually perpendicular axes. In analytic geometry, any set of numerically-scaled space and time axes associated with any set of properties or attributes defines a coordinate system for assigning numbers to points, and simple processes appear as the graphs of algebraic functions. A few decades later, Newton and Leibniz independently discovered a new kind of mathematics, the infinitesimal calculus, by which to numerically quantify the rates of such processes. These innovations, which laid the foundations of modern science and engineering, suffice to this day in many practical contexts. Even though garden-variety analytic geometry was technically superseded by the Theory of Relativity — which was itself constructed on an analytic-geometric foundation — it gives a very close approximation of relativity in most situations.

Unfortunately, the conveniences of analytic geometry came at the price of mind-body dualism. This was Descartes’ idea that the self, or “mind”, was a nonphysical substance that could be left out of physical reasoning with impunity. For some purposes, this was true. But as we saw in the next-to-last paragraph, the relationship of mind to reality is not that simple. While the temporal grammar of physics determines the neural laws of cognition, cognitive grammar projects itself onto physical reality in such a way as to determine the form that physical grammar must assume. Because the form of physical grammar limits the content of physical grammar, this makes cognition a potential factor in determining the laws of nature. In principle, cognitive and physical grammars may influence each other symmetrically. The symmetric influence of cognitive and physical grammars implies a directional symmetry of time. Although time is usually seen as a one-way street, it need not be; the mere fact that a street is marked “one way” does not stop it from being easily traveled in the unauthorized direction.
Indeed, two-way time shows up in both quantum physics and relativity theory, the primary mainstays of modern physics. Thus, it is not physically warranted to say that cognition cannot influence the laws of physics because the laws of physics “precede cognition in time”. If we look at the situation from the other direction, we can as easily say that cognition “precedes” the laws of physics in reverse time…and point to the strange bidirectional laws of particle physics to justify our position. These laws are of such a nature that they can as well be called laws of perception as laws of physics.

Before we get to the final word on time, there is one more aspect of physical grammar that must be considered. Physical reasoning sometimes requires a distinction between two kinds of time: ordinary time and cosmic time. With respect to observations made at normal velocities, ordinary time behaves in a way described by Newtonian analytic geometry; at higher velocities, and in the presence of strong gravitational fields, it behaves according to Einstein’s Special and General Theories of Relativity. But not long after Einstein formulated his General Theory, it was discovered that the universe, AKA spacetime, was expanding. Because cosmic expansion seems to imply that the universe began as a dimensionless point, the universe must have been created, and the creation event must have occurred on a higher level of time: cosmic time. Whereas ordinary time accommodates changes occurring within the spacetime manifold, this is obviously not so for the kind of time in which the manifold itself changes.

Now for the fly in the cosmological ointment. As we have seen, it is the nature of the cognitive self to formulate models incorporating ever-higher levels of change (or time). Obviously, the highest level of change is that characterizing the creation of reality. Prior to the moment of creation, the universe was not there; afterwards, the universe was there. This represents a sizable change indeed! Unfortunately, it also constitutes a sizable paradox. If the creation of reality was a real event, and if this event occurred in cosmic time, then cosmic time itself is real. But then cosmic time is an aspect of reality and can only have been created with reality. This implies that cosmic time, and in fact reality, must have created themselves! The idea that the universe created itself (because there is nothing other than the universe to do it) goes along with how the CTMU characterizes the universe as an SCSPL, a Self-Configuring Self-Processing Language. That the universe created itself means it is “self-configuring”, and the idea that it has its own mind, that we live in a conscious and intelligently evolving universe, is at this point at least very plausible. In the CTMU this is how evolution and natural selection are consciously guided as an intelligent design process, “intelligent self-design”. This is how evolution happens anywhere in the universe: it designs itself intelligently. From the 56-page paper: “The CTMU has a meta-Darwinian message: the universe evolves by hological self-replication and self-selection. Furthermore, because the universe is natural, its self-selection amounts to a cosmic form of natural selection. But by the nature of this selection process, it also bears description as intelligent self-design”-Pg.50. Langan has also written on what an intelligent and conscious universe means and on whether this can be firmly proven (it can), in “Christopher Michael Langan’s HIQ & A”:




Q: Many people conceive of the universe as a “supreme being”. The interconnectedness is evident in the consistency of physical laws and the repetition of motif seen on many levels. What is interesting is understanding the sentience of that supreme being. A drive toward growth, or self-actualization, seems evident, but how can we prove a “will of God” that would go beyond a drive toward optimal actualization?

A: There are several ways that we can be logically certain that the being called “reality” or “the universe” is sentient.

1. *We’re* sentient. Because we live in the medium known as “reality”, and because any attribute supported by a medium exists throughout the medium in the form of potential (to be objectively actualized), sentience implicitly exists in reality.

2. Despite the principle of locality — the existence of separate locales and local systems within the universe — the universe is globally consistent. The aspect of a system which reflexively enforces global consistency is necessarily globally coherent, and that which is coherently reflexive (self-active, self-referential) is, in effect, “sentient”.

3. Because, by definition, there is nothing outside of reality that is sufficiently real to recognize the existence of reality, reality must distributively recognize its own existence; every time one object interacts with another within it, the objects “recognize” each other as things with which to interact. But that means that reality is distributively self-aware.

Now, given the absolute logical certainty that the universe is sentient (self-aware) — a certainty that nobody can possibly refute, as we see from the inevitability of 1–3 above — can we characterize its “will”?

Yes. First, what is will? That function of a sentient entity which forms intent prior to actualization. So by definition, the “will” of the universe is that function which determines how the universe will configure itself “in advance” of actualization. In cosmological terms, this function is just that which determines, among other things, the laws of mathematics and physics embodied by reality.

Such a function must, after all, exist. For without it, there would be no reason, from one moment to the next, why the laws of physics should not spontaneously change into one of the infinite number of other nomologies that might have arisen. Concisely, this function is defined as that reflexive mapping which effects the nomological character and stability of reality. The “will of the universe”, AKA the “will of God”, AKA teleology, is the name of this function, which we have just concretely defined.

Does the universe “feel” its volition as do we? Well, let’s see. What the universe feels properly includes what *we* feel, plus much more (because we are merely parts of it). The universe therefore “feels” teleology far more powerfully than a mere human being “feels” an act of human will. The mechanism of its “feeling”? Well, there are a lot of those, including every human being, every animal, every plant, and every alien microbe on every planet in every star system in every galaxy in the cosmos. As you can well imagine, the impressions that get channeled to the universe through all of these “sense receptors” add up to very powerful sensations indeed.

In fact, these are the sensations that feed back to teleology to tell the universe how to self-actualize in the “optimal” way…i.e., so that it ends up with the “best feeling” possible. They have already told the universe how to configure the laws of math and physics; for more specific elements of configuration, the universe relies on US. Every decision we make, including our every act of will, we make on behalf of the universe. That’s why we should always make the very best decisions we can.

As a self-creative mechanism for the universe is sought, it becomes apparent that cognition is the only process lending itself to plausible interpretation as a means of temporal feedback from present to past. Were cognition to play such a role, then in a literal sense, its most universal models of temporal reality would become identical to the reality being modeled. Time would become cognition, and space would become a system of geometric relations that evolves by distributed cognitive processing. Here comes the surprise: such a model exists. Appropriately enough, it is called the “Cognitive-Theoretic Model of the Universe”, or CTMU for short. A cross between John Archibald Wheeler’s Participatory Universe and the Stephen Hawking-James Hartle “imaginary time” theory of cosmology proposed in Hawking’s phenomenal book A Brief History of Time, the CTMU resolves many of the most intractable paradoxes known to physical science while explaining recent data which indicate that the universe is expanding at an accelerating rate. Better yet, it bestows on human consciousness a level of meaning that was previously approached only by religion and mysticism. If it passes the test of time — and there are many good reasons to think that it will — then it will be the greatest step that humanity has yet taken towards a real understanding of its most (or least?) timeless mystery.

And so the circle closes. Time becomes a cosmogonic loop whereby the universe creates itself. The origin of our time concept, the self, becomes the origin of time itself. Our cognitive models of time become a model of time-as-cognition. And the languages of cognition and physics become one self-configuring, self-processing language of which time is the unified grammar. Talk about “time out of mind”! And all this because of a little garden slug.

A New Interpretation of Quantum Mechanics

After standing for over two centuries as the last word in physics, the differential equations comprising the deterministic laws of Newtonian mechanics began to run into problems. One of these problems was called the Heisenberg Uncertainty Principle or HUP. The HUP has the effect of “blurring” space and time on very small scales by making it impossible to simultaneously measure with accuracy certain pairs of attributes of a particle of matter or packet of energy. Because of this blurring, Newton’s differential equations are insufficient to describe small-scale interactions of matter and energy. Therefore, in order to adapt the equations of classical mechanics to the nondeterministic, dualistic (wave-versus-particle) nature of matter and energy, the more or less ad hoc theory of quantum mechanics (QM) was hastily developed. QM identifies matter quanta with “probability waves” existing in ∞-dimensional complex Hilbert space, a Cartesian space defined over the field of complex numbers a+bi (where a and b are real numbers and i = √−1) instead of the pure real numbers, and replaces Hamilton’s classical equations of motion with Schrödinger’s wave equation. QM spelled the beginning of the end for Laplacian determinism, a philosophical outgrowth of Newtonianism which held that any temporal state of the universe could be fully predicted from a complete Cartesian description of any other. Not only uncertainty but freedom had reentered the physical arena.
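To make the formalism concrete, here is a minimal sketch in Python (standard textbook quantum mechanics, nothing CTMU-specific; the two-state space and the particular amplitudes are my own illustrative choices) of a state as a vector of complex amplitudes whose squared magnitudes give Born-rule probabilities:

```python
# A two-state "probability wave": a vector of complex amplitudes in a
# (here finite-dimensional) Hilbert space. Python's complex type supplies
# i = sqrt(-1), so each amplitude is literally of the form a + bi.
psi = [complex(1, 1) / 2, complex(1, -1) / 2]   # amplitudes for |0>, |1>

norm = sum(abs(amp) ** 2 for amp in psi)        # total probability, here 1.0
probs = [abs(amp) ** 2 / norm for amp in psi]   # Born rule: P = |amplitude|^2
print(probs)                                    # [0.5, 0.5]
```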

Unfortunately, the HUP was not the only quantum-mechanical problem for classical physics and its offshoots. Even worse was a phenomenon called EPR (Einstein-Podolsky-Rosen) nonlocality, according to which the conservation of certain physical quantities for pairs of correlated particles seems to require that information be instantaneously transmitted between them regardless of their distance from each other. The EPR paradox juxtaposes nonlocality to the conventional dynamical scenario in which anything transmitted between locations must move through a finite sequence of intervening positions in space and time. So basic is this scenario to the classical worldview that EPR nonlocality seems to hang over it like a Damoclean sword, poised to sunder it like a melon. Not only does no standard physical theory incorporating common notions of realism, induction and locality contain a resolution of this paradox — this much we know from a mathematical result called Bell’s theorem — but it seems that the very foundations of physical science must give way before a resolution can even be attempted!
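For readers who want to see what Bell’s theorem actually quantifies, here is a small numerical check (again ordinary quantum mechanics, not anything CTMU-specific; the analyzer angles are the standard CHSH-optimal settings): for spin-correlated particles in the singlet state, the correlation between analyzer angles a and b is E(a, b) = −cos(a − b), and the CHSH combination of four such correlations reaches 2√2, exceeding the bound of 2 that any local hidden-variable theory must obey.

```python
# Numerical check of the Bell/CHSH inequality for the singlet state.
import math

def E(a, b):
    # Singlet-state correlation for measurement angles a, b (radians).
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2           # Alice's two analyzer settings
b, b2 = math.pi / 4, -math.pi / 4  # Bob's two analyzer settings

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2) > 2: no local realistic model fits
```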

If we add this descriptive kind of ambiguity to ambiguities of measurement, e.g. the Heisenberg Uncertainty Principle that governs the subatomic scale of reality, and the internal theoretical ambiguity captured by undecidability, we see that ambiguity is an inescapable ingredient of our knowledge of the world. It seems that math and science are…well, inexact sciences.

How, then, can we ever form a true picture of reality? There may be a way. For example, we could begin with the premise that such a picture exists, if only as a “limit” of theorization (ignoring for now the matter of showing that such a limit exists). Then we could educe categorical relationships involving the logical properties of this limit to arrive at a description of reality in terms of reality itself. In other words, we could build a self-referential theory of reality whose variables represent reality itself, and whose relationships are logical tautologies. Then we could add an instructive twist. Since logic consists of the rules of thought, i.e. of mind, what we would really be doing is interpreting reality in a generic theory of mind based on logic. By definition, the result would be a cognitive-theoretic model of the universe.

For example, modern physics is bedeviled by paradoxes involving the origin and directionality of time, the collapse of the quantum wave function, quantum nonlocality, and the containment problem of cosmology. Were someone to present a simple, elegant theory resolving these paradoxes without sacrificing the benefits of existing theories, the resolutions would carry more weight than any number of predictions.

The New Interpretation of QM: Sum over Futures and the Extended Superposition Principle

The superposition principle highlights certain problems with quantum mechanics. One problem is that quantum mechanics lacks a cogent model in which to interpret things like “mixed states” (waves alone are not sufficient). Another problem is that according to the uncertainty principle, the last states of a pair of interacting particles are generally insufficient to fully determine their next states. This, of course, raises a question: how are their next states actually determined? What is the source of the extra tie-breaking measure of determinacy required to select their next events (“collapse their wave functions”)?

The answer is not, as some might suppose, “randomness”; randomness amounts to acausality, or alternatively, to informational incompressibility with respect to any distributed causal template or ingredient of causal syntax. Thus, it is either no explanation at all, or it implies the existence of a “cause” exceeding the representative capacity of distributed laws of causality. But the former is both absurd and unscientific, and the latter requires that some explicit allowance be made for higher orders of causation…more of an allowance than may readily be discerned in a simple, magical invocation of “randomness”. The superposition principle, like other aspects of quantum mechanics, is based on the assumption of physical Markovianism. It refers to mixed states between adjacent events, ignoring the possibility of nonrandom temporally-extensive relationships not wholly attributable to distributed laws. The remedy is to put temporally remote events in extended descriptive contact with each other: extended superposition “atemporally” distributes antecedent events over consequent events, thus putting spacetime in temporally-extended self-contact. In light of the Telic Principle (see below), this scenario involves a new interpretation of quantum theory, sum over futures, an atemporal generalization of “process”, telic recursion, through which the universe effects on-the-fly maximization of a global self-selection parameter, generalized utility. [In plainer terms: “atemporal” means existing or considered without relation to time; before determining the next state, reality takes all of the past and future into account and then chooses a particular state on the basis of its own values. “On-the-fly” means as time passes; the maximization is “global” in that it ranges over all of reality; the self-selection parameter is a measurement; and generalized utility is what the universal desire hopes to achieve.]
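As a deliberately loose illustration only: if one wanted to caricature “sum over futures” computationally, the next state would be selected not from the last state alone (as in a Markov process) but by scoring each candidate state over the futures it opens up, against some global utility. Everything named below (the toy utility, the candidate states, the futures table) is my own hypothetical stand-in, not Langan’s formalism:

```python
# Toy "sum over futures" selector: score candidates over past AND futures.
def generalized_utility(history):
    # Hypothetical stand-in for the global self-selection parameter.
    return sum(history)

def choose_next(past, candidates, futures_of):
    # Non-Markov selection: each candidate is judged by the best future
    # it makes reachable, not by the present state alone.
    def score(c):
        return max(generalized_utility(past + [c] + f) for f in futures_of(c))
    return max(candidates, key=score)

futures = {0: [[1, 2], [0, 0]], 1: [[3, 0], [1, 1]]}  # made-up futures
print(choose_next(past=[5], candidates=[0, 1], futures_of=futures.get))  # 1
```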

In extending the superposition concept to include nontrivial higher-order relationships, the Extended Superposition Principle opens the door to meaning and design. Because it also supports distribution relationships among states, events and syntactic strata, it makes cosmogony a distributed, coherent, ongoing event rather than a spent and discarded moment from the ancient history of the cosmos (This parallelism has powerful implications. When a human being dies, his entire history remains embedded in the timeless level of consciousness…the Deic level). Indeed, the usual justification for observer participation — that an observer in the present can perceptually collapse the wave functions of ancient (photon-emission) events — can be regarded as a consequence of this logical relationship.

Examined within the distinctive logical structure of this medium, time and causality turn out not to be confined to the familiar past-to-future direction. Instead, the future-to-past direction becomes important as well. The bidirectionality of time implies that the universe is in a state of extended spatiotemporal self-superposition, each of its serial configurations, as defined at each successive moment of time, in contact with all others. At first glance, this seems to imply that the universe is completely determined…that as Laplace believed, every state of the universe is implicit in any state. But not only is such an assumption unnecessary, it would ultimately lead to intractable inconsistencies. In fact, the logical structure of spacetime provides the universe with the wherewithal of being, endowing it with self-creative freedom and permitting it to rise from a sea of undifferentiated ontological potential.

By putting temporally remote events in extended descriptive contact with each other, the Extended Superposition Principle enables coherent cross-temporal telic feedback and thus plays a necessary role in cosmic self-configuration.

Why Physics Needs Metaphysics and Is Incomplete Without It, as They Directly Correlate

Although supersymmetry was eventually dropped because its 11-dimensional structure failed to explain subatomic chirality (whereby nature distinguishes between right- and left-handedness), its basic premises lived on in the form of 10-dimensional superstring theory. Again, the basic idea was to add additional dimensions to GR, slice and splice these extra dimensions in such a way that they manifest the basic features of quantum mechanics, and develop the implications in the context of a series of Big Bang phase transitions (“broken symmetries”) in which matter changes form as the hot early universe cools down (mathematically, these phase transitions are represented by the arrows in the series G → H → … → SU(3) x SU(2) x U(1) → SU(3) x U(1), where alphanumerics represent algebraic symmetry groups describing the behavioral regularities of different kinds of matter under the influence of different forces, and gravity is mysteriously missing).

Unfortunately, just as General Relativity did nothing to explain the origin of 4-D spacetime or its evident propensity to “expand” when there would seem to be nothing for it to expand into, string theory did nothing to explain the origin or meaning of the n-dimensional strings into which spacetime had evolved. Nor did it even uniquely or manageably characterize higher-dimensional spacetime structure; it required the same kind of nonstandard universe that was missing from GR in order to properly formulate quantum-scale dimensional curling, and eventually broke down into five (5) incompatible versions all relying on difficult and ill-connected kinds of mathematics that made even the simplest calculations, and extracting even the most basic physical predictions, virtually impossible. Worse yet, it was an unstratified low-order theory too weak to accommodate an explanation for quantum nonlocality or measurable cosmic expansion.

Recently, string theory has been absorbed by a jury-rigged patchwork called “membrane theory” or M-theory whose basic entity is a p-dimensional object called, one might almost suspect eponymically, a “p-brane” (no, this is not a joke). P-branes display mathematical properties called S- and T-duality which combine in a yet-higher-level duality called the Duality of Dualities (again, this is not a joke) that suggests a reciprocity between particle size and energy that could eventually link the largest and smallest scales of the universe, and thus realize the dream of uniting large-scale physics (GR) with small-scale physics (QM). In some respects, this is a promising insight; it applies broad logical properties of theories (e.g., duality) to what the theories “objectively” describe, thus linking reality in a deeper way to the mental process of theorization. At the same time, the “membranes” or “bubbles” that replace strings in this theory more readily lend themselves to certain constructive interpretations.

But in other ways, M-theory is just the same old lemon with a new coat of paint. Whether the basic objects of such theories are called strings, p-branes or bubble-branes, they lack sufficient structure and context to explain their own origins or cosmological implications, and are utterly helpless to resolve physical and cosmological paradoxes like quantum nonlocality and ex nihilo (something-from-nothing) cosmogony… paradoxes next to which the paradoxes of broken symmetry “resolved” by such theories resemble the unsightly warts on the nose of a charging rhinoceros. In short, such entities sometimes tend to look to those unschooled in their virtues like mathematical physics run wildly and expensively amok.

Alas, the truth is somewhat worse. Although physics has reached the point at which it can no longer credibly deny the importance of metaphysical criteria, it resists further metaphysical extension. Instead of acknowledging and dealing straightforwardly with its metaphysical dimension, it mislabels metaphysical issues as “scientific” issues and festoons them with increasingly arcane kinds of mathematics that hide its confusion regarding the underlying logic.

Now let us backtrack to the first part of this history, the part in which René Descartes physically objectivized Cartesian spaces in keeping with his thesis of mind-body dualism. Notice that all of the above models sustain the mind-body distinction to the extent that cognition is regarded as an incidental side effect or irrelevant epiphenomenon of objective laws; cognition is secondary even where space and time are considered non-independent. Yet not only is any theory meaningless in the absence of cognition, but the all-important theories of relativity and quantum mechanics, without benefit of explicit logical justification, both invoke higher-level constraints which determine the form or content of dynamical entities according to properties not of their own, but of entities that measure or interact with them. Because these higher-level constraints are cognitive in a generalized sense, GR and QM require a joint theoretical framework in which generalized cognition is a distributed feature of reality.

Let’s try to see this another way. In the standard objectivist view, the universe gives rise to a theorist who gives rise to a theory of the universe. Thus, while the universe creates the theory by way of a theorist, it is not beholden to the possibly mistaken theory that results. But while this is true as far as it goes, it cannot account for how the universe itself is created. To fill this gap, the CTMU Metaphysical Autology Principle or MAP states that because reality is an all-inclusive relation bound by a universal quantifier whose scope is unlimited up to relevance, there is nothing external to reality with sufficient relevance to have formed it; hence, the real universe must be self-configuring. And the Mind-Equals-Reality (M=R) Principle says that because the universe alone can provide the plan or syntax of its own self-creation, it is an “infocognitive” entity loosely analogous to a theorist in the process of introspective analysis. Unfortunately, since objectivist theories contain no room for these basic aspects of reality, they lack the expressive power to fully satisfy relativistic, cosmological or quantum-mechanical criteria.

In view of the vicious paradoxes to which this failing has led, it is only natural to ask whether there exists a generalization of spacetime that contains the missing self-referential dimension of physics. The answer, of course, is that one must exist, and any generalization that is comprehensive in an explanatory sense must explain why. In Noesis/ECE 139, the SCSPL paradigm of the CTMU was described to just this level of detail. Space and time were respectively identified as generalizations of information and cognition, and spacetime was described as a homogeneous self-referential medium called infocognition that evolves in a process called conspansion. Conspansive spacetime is defined to incorporate the fundamental concepts of GR and QM in a simple and direct way that effectively preempts the paradoxes left unresolved by either theory alone. Conspansive spacetime not only incorporates non-independent space and time axes, but logically absorbs the cognitive processes of the theorist regarding it. Since this includes any kind of theorist cognitively addressing any aspect of reality, scientific or otherwise, the CTMU offers an additional benefit of great promise to scientists and nonscientists alike: it naturally conduces to a unification of scientific and nonscientific (e.g. humanistic, artistic and religious) thought.

The Process of Conspansion and the Circles

(The fundamental concept or action that governs all motion of elementary particles)

A (Minkowski) spacetime diagram is a kind of “event lattice” in which nodes represent events and their connective worldlines represent the objects that interact in those events. The events occur at the foci of past and future light cones to which the worldlines are internal. If one could look down the time axis of such a diagram at a spacelike cross section, one would see something very much like a Venn diagram with circles corresponding to lightcone cross sections.


Conspansion consists of two complementary processes, requantization and inner expansion. Requantization downsizes the content of Planck’s constant by applying a quantized scaling factor to successive layers of space corresponding to levels of distributed parallel computation. This inverse scaling factor 1/R is just the reciprocal of the cosmological scaling factor R, the ratio of the current apparent size d_n(U) of the expanding universe to its original (Higgs condensation) size d_0(U) = 1. Meanwhile, inner expansion outwardly distributes the images of past events at the speed of light within progressively-requantized layers. As layers are rescaled, the rate of inner expansion, and the speed and wavelength of light, change with respect to d_0(U) so that relationships among basic physical processes do not change…i.e., so as to effect nomological covariance. The thrust is to relativize space and time measurements so that spatial relations have different diameters and rates of diametric change from different spacetime vantages. This merely continues a long tradition in physics; just as Galileo relativized motion and Einstein relativized distances and durations to explain gravity, this is a relativization for conspansive “antigravity”.
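On the simplest numerical reading of the passage above (my own toy interpretation, with made-up magnitudes and nothing more), requantization applies the inverse scaling factor 1/R uniformly within a layer, which is exactly why relationships among physical quantities within that layer are preserved:

```python
# Toy illustration of requantization by the inverse scaling factor 1/R,
# where R = d_n(U) / d_0(U) with d_0(U) = 1 (values below are arbitrary).
def requantize(quantity, R):
    """Rescale a layer-level quantity by the inverse scaling factor 1/R."""
    return quantity / R

R = 4.0                        # current apparent size, since d_0(U) = 1
wavelength, ruler = 2.0, 8.0   # two quantities in the same layer

w2, r2 = requantize(wavelength, R), requantize(ruler, R)
print(wavelength / ruler == w2 / r2)  # True: ratios (relationships) survive
```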


As previously described, if the conspanding universe were projected in an internal plane, its evolution would look like ripples (infocognitive events) spreading outward on the surface of a pond, with new ripples starting in the intersects of their immediate ancestors. Just as in the pond, old ripples continue to spread outward in ever-deeper layers, carrying their virtual 0 diameters along with them. This is why we can collapse the past history of a cosmic particle by observing it in the present, and why, as surely as Newcomb’s demon, we can determine the past through regressive metric layers corresponding to a rising sequence of NeST strata leading to the stratum corresponding to the particle’s last determinant event. The deeper and further back in time we regress, the higher and more comprehensive the level of NeST that we reach, until finally, like John Wheeler himself, we achieve “observer participation” in the highest, most parallelized level of NeST…the level corresponding to the very birth of reality.

Conspansive domains interpenetrate against the background of past events at the inner expansion rate c, defined as the maximum ratio of distance to duration by the current scaling, and recollapse through quantum interaction. Conspansion thus defines a kind of “absolute time” metering and safeguarding causality. Interpenetration of conspansive domains, which involves a special logical operation called unisection (distributed intersection) combining aspects of the set-theoretic operations union and intersection, creates an infocognitive relation of sufficiently high order to effect quantum collapse. Output is selectively determined by ESP interference and reinforcement within and among metrical layers.

Accordingly, the universe as a whole must be treated as a static domain whose self and contents cannot “expand”, but only seem to expand because they are undergoing internal rescaling as a function of SCSPL grammar. The universe is not actually expanding in any absolute, externally-measurable sense; rather, its contents are shrinking relative to it, and to maintain local geometric and dynamical consistency, it appears to expand relative to them. Already introduced as conspansion (contraction qua expansion), this process reduces physical change to a form of “grammatical substitution” in which the geometrodynamic state of a spatial relation is differentially expressed within an ambient cognitive image of its previous state. By running this scenario backwards and regressing through time, we eventually arrive at the source of geometrodynamic and quantum-theoretic reality: a primeval conspansive domain consisting of pure physical potential embodied in the self-distributed “infocognitive syntax” of the physical universe…i.e., the laws of physics, which in turn reside in the more general HCS.

The microscopic implications of conspansion are in remarkable accord with basic physical criteria. In a self-distributed (perfectly self-similar) universe, every event should mirror the event that creates the universe itself. In terms of an implosive inversion of the standard (Big Bang) model, this means that every event should to some extent mirror the primal event consisting of a condensation of Higgs energy distributing elementary particles and their quantum attributes, including mass and relative velocity, throughout the universe. To borrow from evolutionary biology, spacetime ontogeny recapitulates cosmic phylogeny; every part of the universe should repeat the formative process of the universe itself.

Conspansion is not just a physical operation, but a logical one as well. Because physical objects unambiguously maintain their identities and physical properties as spacetime evolves, spacetime must directly obey the rules of 2VL (2-valued logic distinguishing what is true from what is false). Spacetime evolution can thus be straightforwardly depicted by Venn diagrams in which the truth attribute, a high-order metapredicate of any physical predicate, corresponds to topological inclusion in a spatial domain (the domain of the circle, i.e. the Venn-sphere lightcone cross section) corresponding to specific physical attributes. I.e., to be true, an effect must be not only logically but topologically contained by the cause (inside the circle of the event); to inherit properties determined by an antecedent event, objects involved in consequent events must appear within its logical and spatiotemporal image (the circle, or lightcone cross section Venn-sphere). In short, logic equals spacetime topology.

(Why does the word “attribution” matter so much here? Scientific explanations and interpretations glue observations and equations together in a very poorly understood way. It often works like a charm…but why? One of the main purposes of reality theory, and of the CTMU, is to answer this question: why do observations and the equations that quantify their interpretations go together with such success, e.g. E = mc²? In other words, why does science work? Answering this would make the scientific process better understood, changing science as we now know it and vastly broadening its scope; that is what Langan is trying to say. The first thing to notice about the question is that it involves the process of attribution, and that the rules of attribution are set forth in stages by mathematical logic. The first stage is called sentential logic and contains the rules for ascribing the attributes true or false, respectively denoting inclusion or non-inclusion in arbitrary cognitive-perceptual systems. Reality theory is about the stage of attribution in which two predicates analogous to true and false, namely real and unreal, are ascribed to various statements about the real universe. The closed, single-predicate definition of the Reality Principle is actually a closed descriptive manifold of linked definitions containing the means of its own configuration, composition, attribution, recognition, processing and interpretation. Where sets contain their elements and attributes distributively describe their arguments, this implies a dual relationship between topological containment and descriptive attribution, as modeled through the Venn diagrams: attribution assigns to the circles (events) the properties that the circles topologically contain. This is why Langan uses the word “attribution” so often in describing what is true.)
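A toy rendering of “logic equals spacetime topology” as described above (my own sketch; the coordinates and radius are arbitrary): the truth of an effect is modeled as topological containment of the effect within the cause’s circle, its lightcone cross section.

```python
# Truth as topological containment: an effect is "true" for a cause iff
# it lies inside the cause's Venn circle (lightcone cross section).
import math

def contains(cause_center, cause_radius, effect_point):
    """True iff the effect lies inside the cause's circle."""
    dx = effect_point[0] - cause_center[0]
    dy = effect_point[1] - cause_center[1]
    return math.hypot(dx, dy) <= cause_radius

cause = ((0.0, 0.0), 3.0)            # event circle: center and radius
print(contains(*cause, (1.0, 1.0)))  # True: topologically contained
print(contains(*cause, (4.0, 0.0)))  # False: outside the cause's image
```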

In a conspansive universe, the spacetime metric undergoes constant rescaling. Whereas Einstein required a generalization of Cartesian space embodying higher-order geometric properties like spacetime curvature, conspansion requires a yet higher order of generalization in which even relativistic properties, e.g. spacetime curvature inhering in the gravitational field, can be progressively rescaled. Where physical fields of force control or program dynamical geometry, and programming is logically stratified as in NeST, fields become layered stacks of parallel distributive programming that decompose into field strata (conspansive layers) related by an intrinsic requantization function inhering in, and logically inherited from, the most primitive and connective layer of the stack. This “storage process” by which infocognitive spacetime records its logical history (This parallelism has powerful implications. When a human being dies, his entire history remains embedded in the timeless level of consciousness…the Deic level) is called metrical layering (note that since storage is effected by inner-expansive domains which are internally atemporal, this is to some extent a misnomer reflecting weaknesses in standard models of computation).

The metrical layering concept does not involve complicated reasoning. It suffices to note that distributed (as in “event images are outwardly distributed in layers of parallel computation by inner expansion”) effectively means “of 0 intrinsic diameter” with respect to the distributed attribute. If an attribute corresponding to a logical relation of any order is distributed over a mathematical or physical domain, then interior points of the domain are undifferentiated with respect to it, and it need not be transmitted among them. (This is really just a property of the circle: the domain is the lightcone cross section or Venn sphere, i.e. the inside of a circle, and because the interior points of the domain are undifferentiated with respect to the attribute, they are interconnected and homogeneous. So nothing is transmitted by skipping space, as information appears to do in nonlocality; nothing need be transmitted at all among the undifferentiated, interconnected points inside the circle.) Where space and time exist only with respect to logical distinctions among attributes, metrical differentiation can occur within inner-expansive domains (IEDs) only upon the introduction of consequent attributes relative to which position is redefined in an overlying metrical layer, and what we usually call “the metric” is a function of the total relationship among all layers.

The spacetime metric thus amounts to a Venn-diagrammatic conspansive history in which every conspansive domain (lightcone cross section, Venn sphere) has virtual 0 diameter with respect to distributed attributes, despite apparent nonzero diameter with respect to metrical relations among subsequent events. Nonlocal transmission of information can thus appear to occur. Nevertheless, the CTMU is a localistic theory in every sense of the word; information is never exchanged “faster than conspansion”, i.e. faster than light (the CTMU’s unique explanation of quantum nonlocality within a localistic model is what entitles it to call itself a consistent “extension” of relativity theory, to which the locality principle is fundamental).

Thus, one perceives the model’s evolution as a conspansive overlay of physically-parametrized Venn diagrams directly through the time (SCSPL grammar) axis.

To make things even simpler: the CTMU equates reality to logic, logic to mind, and (by transitivity of equality) reality to mind. (Because quantum-scale objects are seen to exist only when they are participating in observational events, i.e. under observer participation, reality is like a mind: it displays mental characteristics when a mind perceives it. Thus the laws of physics are as much an expression of our minds, or of our perception, as they are objectively physical. This is what is meant when Langan writes “reality to mind” above. Langan actually lays this out in his most basic yet wonderful work, The Art of Knowing; I won’t share the whole argument, but here is his conclusion: “So where will we go now, at the dawn of the New Millennium, for renewed confirmation of our “specialness” in the scheme of things? Fortunately, we are not yet facing an ideological dead end. For it seems that just ahead on the intellectual horizon looms a new science of metaphysics…a logical framework in which the importance of humankind is unthreatened by reductionism, and in which the significance of human feelings and emotions is uncompromised by their correlation with lowly biological processes. Rather than declaring us the abject slaves of natural laws beyond our control, this framework yields a new understanding of space and time in which the very laws of physics can be viewed as an expression of our minds. Granted, this framework remains hidden despite its portentous approach. But if its ongoing delay contributes to our collective store of humility, perhaps this is not entirely a bad thing.”-Pg.79.) Then it makes a big Venn diagram out of all three, assigns appropriate logical and mathematical functions to the diagram, and deduces implications in light of empirical data. A little reflection reveals that it would be hard to imagine a simpler or more logical theory of reality.

“The Complete Description of the Interacting Circles In Conspansion in the CTMU”


That is, each circle depicts the “entangled quantum wavefunctions” of the objects which interacted with each other to generate it. The small dots in the centers of the circles represent the initial events and objects from which the circles have arisen, while the twin dots where the circles overlap reflect the fact that any possible new event, or interaction between objects involved in the old events, must occur by mutual acquisition in the intersect. The outward growth (or by conspansive duality, mutual absorption) of the circles is called inner expansion, while the collapse of their objects in new events is called requantization. The circles themselves are called IEDs, short for inner expansive domains, and correspond to pairs of interactive syntactic operators involved in generalized-perceptual events (note the hological “evacuation” and mutual absorption of the operators). Spacetime can be illustrated in terms of a layering of such Venn diagrams, mutual contact among which is referred to as “extended superposition” (in the real world, the Venn diagrams are 3-dimensional rather than planar, the circles are spheres, and “layering” is defined accordingly).

Conspansive duality, the role of which in the CTMU is somewhat analogous to that of the Principle of Equivalence in General Relativity, is the only escape from an infinite ectomorphic “tower of turtles”. Were the perceptual geometry of reality to lack a conspansive dual representation, motion of any kind would require a fixed spatial array or ectomorphic “background space” requiring an explanation of its own, and so on down the tower. Conspansion permits the universe to self-configure through temporal feedback. Each conspanding circle represents an event-potential corresponding to a certain combination of law and state; even after one of these intrinsically atemporal circles has “inner-expanded” across vast reaches of space and time, its source event is still current for anything that interacts with it, e.g. an eye catching one of its photons. At the same time, conspansion gives the quantum wave function of objects a new home: inside the conspanding objects themselves. Without it, the wave function not only has no home, but fails to coincide with any logically evolving system of predicates or “laws of physics”. Eliminate conspansion, and reality becomes an inexplicable space full of deterministic worldlines and the weighty load of problems that can be expected when geometry is divorced from logic.

The areas inside the circles correspond to event potentials, and where events are governed by the laws of physics, to potential instantiations of physical law or “nomological syntax”. Where each circle corresponds to two or more objects, it comprises object potentials as well. That is, the circular boundaries of the Venn circles can be construed as those of “potentialized” objects in the process of absorbing their spatiotemporal neighborhoods. Since the event potentials and object potentials coincide, potential instantiations of law can be said to reside “inside” the objects, and can thus be regarded as functions of their internal rules or “object syntaxes”. Objects thus become syntactic operators, and events become intersections of nomological syntax in the common value of an observable state parameter, position. The circle corresponding to the new event represents an attribute consisting of all associated nomological relationships appropriate to the nature of the interaction including conserved aggregates, and forms a pointwise (statewise) “syntactic covering” for all subsequent potentials. Notice that in this scenario, spacetime evolves linguistically rather than geometrodynamically. Although each Venn circle seems to expand continuously, its content is unchanging; its associated attribute remains static pending subsequent events involving the objects that created it. Since nothing actually changes until a new event is “substituted” for the one previous, i.e. until a new circle appears within the old one by syntactic embedment, the circles are intrinsically undefined in duration and are thus intrinsically atemporal. Time arises strictly as an ordinal relationship among circles rather than within circles themselves. With respect to time-invariant elements of syntax active in any given state (circle), the distinction between zero and nonzero duration is intrinsically meaningless; such elements are heritable under substitution and become syntactic ingredients of subsequent states.
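To visualize the mechanics just described, here is a small toy model (my own sketch, not Langan’s formal treatment; the coordinates and unit speed are arbitrary): circles born at events grow at the inner expansion rate c, and a new event appears at the point where two circles first make contact, inside the images of both ancestors.

```python
# Toy inner-expanding circles ("IEDs") on a plane: each event spawns a
# circle growing at speed c; when two circles first touch, a new event
# (new circle) forms in the intersect, within the images of its ancestors.
import math

C = 1.0  # inner expansion rate ("speed of light" in toy units)

class IED:
    def __init__(self, x, y, birth):
        self.x, self.y, self.birth = x, y, birth
    def radius(self, t):
        return C * max(0.0, t - self.birth)

def first_contact(a, b):
    """Earliest time at which two inner-expanding circles touch."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    # Radii sum to d: C*(t - a.birth) + C*(t - b.birth) = d
    t = (d / C + a.birth + b.birth) / 2
    return max(t, a.birth, b.birth)

e1, e2 = IED(0.0, 0.0, 0.0), IED(4.0, 0.0, 0.0)  # two initial events
t_new = first_contact(e1, e2)        # 2.0: new event in the intersect
e3 = IED(2.0, 0.0, t_new)            # formed within the images of e1, e2
print(t_new)                         # time is ordinal: e3 comes after e1, e2
```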

“The Ultimate Answer to Quantum Nonlocality”

Because each circle is structurally self-distributed, nothing need be transmitted from one part of it to another. (Thus, just as the initial collapse of the quantum wavefunction (QWF) of the causally self-contained universe is internal to the universe, the requantizative occurrence of each subsequent event is topologically internal to that event, i.e. inside each circle in this case, and the cause spatially contains the effect. The implications regarding quantum nonlocality are clear.)

No longer must information propagate at superluminal velocity between spin-correlated particles; instead, the information required for (e.g.) spin conservation is distributed over their joint ancestral IED…the virtual 0-diameter spatiotemporal image of the event that spawned both particles…as a correlated ensemble. (“Topologically internal” means just that: inside the circle, the IED or inner-expanding domain; the small dots in the centers of the circles represent the initial events and objects from which the circles have arisen. In my own words: from the instant two entangled particles generated a circle, they have been sharing information throughout the circle, across the gap between them, the whole time. So when they seem to correlate instantaneously, it is because they have been connected from the beginning of the circle’s expansion; the circle expands, or in the CTMU conspands, at the speed of light, and the particles never exchange information across it faster than light.) The internal parallelism of this domain — the fact that neither distance nor duration can bind within it — short-circuits spatiotemporal transmission on a logical level. A kind of “logical superconductor”, the domain offers no resistance across the gap between correlated particles; in fact, the “gap” does not exist! Locality constraints arise only with respect to additional invariants differentially activated within circles that represent subsequent states and break the hological symmetry of their antecedents. Conspansion thus affords a certain amount of relief from problems associated with so-called “quantum nonlocality”.

The CTMU, on the other hand, is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity. This amounts to self-generative freedom, hologically providing reality with a “self-simulative scratchpad” on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes.

In a Venn diagram, the contents of circles reflect the structure of their boundaries; the boundaries are the primary descriptors. The interior of a circle is simply an “interiorization” or self-distribution of its syntactic “boundary constraint”. Thus, nested circles corresponding to identical objects display a descriptive form of containment corresponding to syntactic layering, with underlying levels corresponding to syntactic coverings. Through the principle of conspansive duality, ectomorphism is conjoined with endomorphism, whereby things are mapped, generated or replicated within themselves. Through conspansive endomorphism, syntactic objects are injectively mapped into their own hological interiors from their own syntactic boundaries.

Because quantum-scale objects are seen to exist only when they are participating in observational events, including their “generalized observations” of each other, their worldlines are merely assumed to exist between events and are in fact syntactically retrodicted, along with the continuum, from the last events in which they are known to have participated. This makes it possible to omit specific worldlines entirely, replacing them with series of Venn diagrams in which circles inner-expand, interpenetrate and “collapse to points” at each interactive generalized-observational event. This scenario is general, applying even to macroscopic objects consisting of many particles of matter.

SCSPL is logical in construction, has a loop-like dynamic, and creates information and syntax, including the laws of physics, through telic recursion generated by agent-level syntactic operators whose acts of observer-participation are essential to the self-configuration of the Participatory Universe (an “SCSPL-grammatical process featuring an alternation between dual phases of existence associated with design and actualization…, principle of natural selection a basic means of generating information and complexity.”-Pg.1 Abstract). These acts are linked by telic recursion to the generalized cognitive-perceptual interactions of quantum-level syntactic operators, the minimal events comprising the fabric of space-time (lumps of matter: syntactic operators containing within themselves the syntactic rules by which they internally process each other to create new states of physical reality).

You may ask: what in the world is a syntactic operator? What in the hell is it? It is best explained in the 56-page paper, where syntactic operators are the fundamental objects everything is made of, and the reason the CTMU can unify with and adopt string theory as well: “In conventional physical theory, the fundamental entities are point particles, waves and more recently, strings; each class of object has its problems and paradoxes. In the CTMU, the fundamental objects are “syntactic operators” (units of self-transducing information or infocognition) that are not only capable of emulating all of these objects and more, but of containing the syntactic structures to which they must inevitably conform and resolving their characteristic paradoxes in the bargain.”-Pg.20.

Attempts to explain reality entirely in terms of physics are paradoxical; reality contains not only the physical, but the abstract machinery of perception and cognition through which “the physical” is perceived and explained. Where this abstract machinery is what we mean by “the supraphysical”, reality has physical and supraphysical aspects. Physical and supraphysical reality are respectively “concrete” and “abstract”, i.e. material and mental in nature.

The question is, do we continue to try to objectivize the supraphysical component of reality as do the theories of physics, strings and membranes, thus regenerating the paradox? Or do we take the CTMU approach and resolve the paradox, admitting that the supraphysical aspect of reality is “mental” in a generalized sense and describing all components of reality in terms of SCSPL syntactic operators with subjective and objective aspects?

One of the biggest moments in the history of physics was the discovery that an atomic bomb could be built. Einstein’s equation implies that if an isotope can be split, it can sustain a chain reaction in which released neutrons ricochet into the next nearby atoms in line, splitting them in turn and causing a massive release of energy, the magnitude of which we know from Einstein’s equation. In the CTMU, atoms are made up of syntactic operators, or rather syntactic operators are still fundamental to the atom. I’m not sure whether the practicality extends to syntactic operators, i.e. that we could split one. But as far as physics goes, they play a fundamental role in all of the particles, strings and waves.
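For the record, the energy release alluded to above comes straight from Einstein’s equation; a one-line check (ordinary physics, with one gram of converted mass as an arbitrary example):

```python
# Back-of-envelope check of the energy-mass relation E = m * c^2.
c = 299_792_458.0     # speed of light, m/s
m = 0.001             # one gram of converted mass, in kg

E = m * c ** 2
print(f"{E:.3e} J")   # ~8.988e13 J released from one gram of mass
```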

