Bekenstein Bounds

Wes Hansen
Jun 29, 2023


This barn-burning I’ve been engaged in is intended to be an open-ended process, but for the benefit of Gregrchris, Lfsequeira, Tom Ritchford, Bibi Harim, Remarkl, Alan King, Cam Cairns, Bhupinder Singh Anand, Graham Pemberton, and Bruce McGraw, and anyone else whom I may have offended, I thought I would try to clear a bit of smoke, reduce the entropic content, and perhaps produce a coherent narrative. I think I can do it; I think so.

The original Bekenstein bound referred to black holes and led to the so-called Holographic Principle, but Bekenstein later generalized his bound to arbitrary spacetime volumes. This is why the Holographic Principle is typically expressed in terms of a dimensional reduction: the information content of a volume of spacetime is proportional to the area of the boundary of that volume. That just IS the Bekenstein Bound without the math. There’s a short introductory article on the Young Scientists Journal, The Holographic Principle, which gives a brief history and the motivation for the full Holographic Principle, by which I mean the principle as it relates to the AdS/CFT correspondence. There are actually a number of problems with the holography analogy itself; in fact, I would classify it as an analogical fallacy.
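For those who want the math, the standard textbook statements are simple enough (stated here purely for reference; they are not specific to any paper cited below). For a system of total energy E enclosed in a sphere of radius R, Bekenstein’s generalized bound reads

S ≤ 2πk_B RE/(ℏc)

and for a black hole, whose horizon saturates the bound, the Bekenstein–Hawking entropy is

S_{BH} = (1/4)(A/(ℓ_p)^2)k_B

where A is the horizon area and ℓ_p is the Planck length. In both cases the entropy, hence the information content, scales with the boundary area, not with the enclosed volume.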

Take Lasers and Holography: An Introduction to Coherent Optics [1] (pdf), last copyrighted in 1981, as a reference. Photographic holograms are created by diffracting coherent laser light off of three-dimensional objects and recording the diffraction patterns on a photographic plate. The photographic plate is coated with an emulsion which, owing to the extremely small wavelengths of visible light, can be several wavelengths thick. To quote the author:

Photographically made gratings, zone plates, and holograms must, therefore, be considered as recorded volume interference patterns.

So, what we actually have with a typical photographic hologram is a two-dimensional object (the surface of the three-dimensional object the light is diffracted off of) encoded in a three-dimensional, emulsion-treated photographic plate, i.e., a dimensional increase. Furthermore, one of the key characteristics of holography is its rather high information density, again due to the extremely small wavelengths of visible light. You can read a 2020 article about Microsoft developing holographic memory storage and retrieval for cloud operations. Now, you might think this fits well with the Holographic Principle, that the information in the volume, typically called the bulk, is compressed into a high density and stored on the boundary, but this isn’t how the Holographic Principle, as generally stated, works. From the Young Scientists Journal:

Foremostly, in calculating his equation for black hole entropy, Bekenstein worked out the maximum information that can be stored within the black hole, finding it proportional to the black hole surface area. Well, Bekenstein also extrapolated this principle into Bekenstein Bounds which measured the maximum information storage of any space, finding it again proportional to the surface area of the enclosed space.

Just as I stated above.

So, it’s quite simple to see the fallacy here. But the actual situation, at least as I see it, is much, much more interesting and insightful. That aspect of holography which truly captures what seems to be happening in our Universe is related to the essence of diffraction, which is information entanglement. This is what enables an image of the entire holographically recorded object to be produced from a fragment of the photographic plate: the whole (holos) is contained in the parts. This seems to suggest that, in some sense of the term “simpler,” the whole is simpler than the parts.

[G]ibbs had perceived that, when two systems interact, only the entropy of the whole is meaningful. Today we would say that the interaction induces correlations in their states which makes the entropy of the whole less than the sum of entropies of the parts; and it is the entropy of the whole that contains full thermodynamic information. This reminds us of Gibbs’ famous remark, made in a supposedly (but perhaps not really) different context: “The whole is simpler than its parts.” How could Gibbs have perceived this long before the days of quantum theory?

Edwin Jaynes, as quoted by Jeynes et al. [2]

Per the derivations by Michael Parker and Chris Jeynes in their paper, Entropic Uncertainty Principle, Partition Function, and Holographic Principle Derived from Liouville’s Theorem [3], and the empirical results of William Tiller, it appears the Holographic Principle, separated from String Theory and AdS/CFT and presented as a corollary to Tiller’s Macroscopic Information Entanglement (MIE), which is what enables “the whole (holos) to be simpler than its parts,” IS a completely general phenomenon, as Bekenstein intuited AND calculated. Theoretically, in the Quantitative Geometrical Thermodynamics (QGT) of Parker and Jeynes it “is a consequence of the holomorphism (and MaxEnt state) of the objects considered.” In Tiller’s dual-space formalism it is a direct consequence, as implied above, of MIE, facilitated by the relation between spacetime (Tiller’s Direct or D-space), the wave domain (Tiller’s Reciprocal or R-space), and the necessary deltron moiety, a variable coupling field connecting R-space to D-space and necessitated by the constraints of Relativity theory. This coupling is modeled via a deltron-modulated Fourier transform, and it leads to information entanglement with ALL of R-space, similar to a hologram but including the temporal dimension, facilitating the formation of “holomorphic” objects, in the case of Parker and Jeynes, and “holomorphic” events, in the case of Tiller et al. [4].
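Tiller’s formalism is not, to my knowledge, published in computable form, so what follows is a purely illustrative toy of the idea just described, not his actual mathematics: picture the R-space-to-D-space coupling as an inverse Fourier synthesis in which every frequency component is weighted by a “deltron” coupling density, so that a denser coupling lets more of the R-space (wave-domain) structure express itself in D-space. The function names and the Gaussian weight here are entirely my own assumptions:

```python
import numpy as np

# Purely illustrative toy (my own construction, NOT Tiller's published math):
# model the R-space -> D-space coupling as an inverse Fourier synthesis in
# which each frequency component is weighted by a "deltron" coupling density.

def deltron_modulated_synthesis(R_amplitudes, freqs, t, coupling=1.0):
    """Synthesize a D-space signal from R-space amplitudes, weighting each
    frequency by a hypothetical coupling density (a Gaussian in frequency,
    chosen arbitrarily for illustration)."""
    delta = np.exp(-(freqs / coupling) ** 2)  # toy deltron weight
    # superpose the weighted frequency components at the sample times t
    return np.real((delta * R_amplitudes) @ np.exp(2j * np.pi * np.outer(freqs, t)))

freqs = np.linspace(0.1, 5.0, 50)            # R-space frequencies
amps = np.ones_like(freqs, dtype=complex)    # flat R-space spectrum
t = np.linspace(0.0, 2.0, 200)

weak = deltron_modulated_synthesis(amps, freqs, t, coupling=0.5)
strong = deltron_modulated_synthesis(amps, freqs, t, coupling=5.0)
print(np.std(weak), np.std(strong))  # denser coupling -> more R-space structure in D-space
```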

Parker and Jeynes present their formalism in Maximum Entropy (Most Likely) Double Helical and Double Logarithmic Spiral Trajectories in Spacetime [5], but they summarize the process leading to it in [3] (the “most likely” here means “maximum entropy”, i.e., systems that have these geometries are MaxEnt systems):

PJ2019 construct the QGT formalism from a rigorous restatement (in their Appendix A) of Parker & Walker (2010) [6], who use the standard definition of entropy as differential information (following standard literature and Parker & Walker, 2004 [7]) to build a geometric entropy theoretical framework. Integrating the differential information in a contour integral across 4-space (that is, Minkowski spacetime) such that the line integral consistently follows a path in the positive time direction ensures that the geometric structures frequently observed in nature (such as the double-helix of DNA) explicitly obey the Second Law of Thermodynamics, even though the result of the integration is a static geometrical structure (independent of time).

It is necessary to appreciate that not one but two conceptual steps are made in this move. The first step is to show that the information emerges from a contour integral over time in 4-space, which therefore reduces the dimensionality of the structures under consideration to three and is the basis for the static description required for Maximum Entropy objects. The second step is then to explicitly derive expressions for the entropy of certain holomorphic geometries, which must have C2 symmetry in physical systems.

They mean “at least” C2 symmetry, of course, and you see that their fundamental assumption (axiom) is the Second Law of Thermodynamics. The result, summarized in Table 1 of [5], is a thorough treatment isomorphic to Lagrangian/Hamiltonian kinematics but with “action” replaced by “exertion” (I find that term highly suggestive). This enables, in [3], the derivation of an entropic Liouville Theorem, which yields “the general entropy for a geometric system featuring 2N degrees of freedom with a quantized cell size whose scale is undetermined”:

S = k_B ln(M).

Applied to MaxEnt systems this becomes:

S = (Ω_s/ΔpΔq)k_B

where Ω_s is the total system phase space volume, ΔpΔq is an incremental area of phase space sub-volume, and k_B is the Boltzmann constant. For the double helix Δp = k_B R^2ΔR, where R is the radius, and Δq = RκΔx, where κ = 2π/λ, with λ the pitch and x = R exp(iκx_3). This leads immediately to:

S = (1/4)(A/ΔRΔx)k_B

where A = 4πR^2 is the area of a sphere of radius R and ΔRΔx determines the granularity of the system. Letting ΔR = Δx = ℓ_p, the Planck length, with R = r_s, the Schwarzschild radius, leads immediately to

S_{BH} = (1/4)(4π(r_s)^2/(ℓ_p)^2)k_B

the entropy of a black hole (a quick numerical check follows the quote below). They also derive the Holographic Principle (its mathematical form) for DNA and for an idealized model of our Milky Way galaxy. They discuss additional results in Halo Properties in Helium Nuclei from the Perspective of Geometrical Thermodynamics [8] and summarize in [2] (emphasis in the original):

In our present context, the point about holography is precisely that each part represents the whole, that is, it carries the implication of non-locality. It is of course well known that “individual” electrons in an atom, or “individual” nucleons in a nucleus are strictly indistinguishable in a proper quantum treatment: this implies that in a holographic system all the “individual entities” are actually somehow mutually entangled.

Entanglement at the microscopic scale is currently well understood. But the galactic scale also appears to us to have some properties which seem similar. It is clear that our idealised spiral galaxy, expressed as a (holomorphic) double-logarithmic spiral, is treated by the QGT formalism as an object whose entropy is given holographically, just like the entropy of its central supermassive black hole. But then, should the galaxy not also be considered as entangled, just as are quantum things like atoms and atomic nuclei? After all, entanglement represents another way to speak of non-local influence, and what could be more non-local than the symmetry of well-formed spiral galaxies, which are common in the Universe?
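Before moving on, the numerical check promised above. This is just the quoted QGT/Bekenstein–Hawking formula evaluated for a one-solar-mass black hole in SI units; the constants are the only inputs, and the sketch assumes nothing beyond the formula itself:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

r_s = 2 * G * M_sun / c**2            # Schwarzschild radius
l_p = math.sqrt(hbar * G / c**3)      # Planck length

# The form quoted in the text: S = (1/4)(4*pi*r_s^2 / l_p^2) * k_B,
# i.e. one quarter of the horizon area measured in Planck units.
A = 4 * math.pi * r_s**2
S = 0.25 * (A / l_p**2) * k_B
print(f"r_s = {r_s:.0f} m, S_BH = {S:.3e} J/K")  # ~1.5e54 J/K for one solar mass
```

The familiar result: on the order of 10^54 J/K, or roughly 10^77 in units of k_B.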

Essentially, the Holographic Principle, as intended here, implies Tiller’s MIE on galactic if not cosmic scales, and the relation to spacetime geometry, via QGT, would seem to indicate a link to gravity via Einstein’s General theory. Before discussing that, I would like to introduce some related curiosities involving the foundational works of Kevin Knuth, which are related to David Hestenes’ Zitter model of fermions and also to the foundational works of Ulf Klein.

Knuth is a highly regarded physicist who specializes in Bayesian Model Selection and MaxEnt methods. He has played a central role in creating what might be called Inference Theory, formally initiated by Edwin Jaynes, with his formulation of MaxEnt as a variational principle, and Richard Cox, with his derivation of Probability Theory as a calculus generalizing an algebra of implication. Knuth and his co-conspirators have extended these methods of Jaynes and Cox to general algebras, in many cases relevant to the foundations of physics. A good introduction to Knuth’s work in this vein is his paper, Information-Based Physics: An Observer-Centric Foundation [9]. It was Knuth who introduced me to Hestenes’ Zitter model of fermions, and in The Problem of Motion: The Statistical Mechanics of Zitterbewegung [10] (a more involved treatment is in [11]) he explores the consequences of Schrödinger’s Zitterbewegung, which arises because the velocity eigenvalues of the Dirac equation are ±c, the speed of light. Knuth derives the relativistic velocity addition rule in 1 + 1 dimensions, developing a statistical mechanics of motion in the process. This leads to an entropy measure based on Helicity and Shannon Entropy:

S = −Pr(R) log Pr(R) − Pr(L) log Pr(L)

where Pr(X) denotes the probability of coming from direction X (Helicity). This can be represented in relativistic terms:

S = log(2γ) − βlog(1+z)

where γ is the relativistic Lorentz factor γ = (1−β^2)^{−1/2} and 1+z is related to the redshift z, given by 1+z = √((1+β)/(1−β)) for motion in the radial direction. As Knuth points out, with this relativistic entropy measure a particle at rest is MaxEnt, since Pr(L) = 1/2 = Pr(R), while a particle moving at the speed of light minimizes entropy, with S = 0, because either Pr(L) = 1 or Pr(R) = 1 (see his short paper). In other words, Helicity disappears at the speed of light. This is consistent with Hestenes’ Zitter model, where the electron is a point charge orbiting a center of mass at the speed of light with radius r = ℏ/mc. With translational velocity it traces out a helix in spacetime, the radius of the helix going to zero as the translational velocity goes to c, per the Pythagorean Theorem: c^2 = (v_t)^2 + (v_r)^2, where v_t is the translational velocity and v_r the rotational. This is a great candidate for explaining Ulf Klein’s result showing that the so-called Bohr Correspondence Principle does not hold in general [12]; it doesn’t hold because ℏ → 0 as v_t → c, a rather elegant explanation. Furthermore, this entropy measure is thoroughly consistent with the QGT measure for, well, the double helix (a quick numeric check of Knuth’s measure follows the next quote). In [5], Parker and Jeynes provide an alternative representation as a function of helix length for their study of DNA

S = √(1+κ^2R^2)πκLk_B

and comment:

However, in the case of a photon its proper length is actually zero relativistically, since it travels at the speed of light: L = 0, therefore, S = 0.
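As flagged above, here is a quick numeric check of Knuth’s helicity entropy. The probability assignment Pr(R) = (1+β)/2, Pr(L) = (1−β)/2 is what makes the Shannon form and the relativistic form agree; that assignment is my reading of Knuth’s construction, and the code itself is just arithmetic:

```python
import numpy as np

def S_shannon(beta):
    """Helicity entropy from right/left probabilities Pr(R), Pr(L)."""
    pR, pL = (1 + beta) / 2, (1 - beta) / 2
    return sum(-p * np.log(p) for p in (pR, pL) if p > 0)

def S_relativistic(beta):
    """Knuth's form: S = log(2*gamma) - beta*log(1+z),
    with 1+z = sqrt((1+beta)/(1-beta))."""
    gamma = 1 / np.sqrt(1 - beta**2)
    one_plus_z = np.sqrt((1 + beta) / (1 - beta))
    return np.log(2 * gamma) - beta * np.log(one_plus_z)

for beta in (0.0, 0.5, 0.9, 0.999):
    print(f"beta = {beta}: Shannon = {S_shannon(beta):.6f}, "
          f"relativistic = {S_relativistic(beta):.6f}")
# beta = 0 gives log(2) ~ 0.693 (MaxEnt: the particle at rest);
# both forms approach 0 as beta -> 1 (Helicity disappears at c).
```

The two forms agree at every β, both give log 2 at rest, and both vanish in the luminal limit, mirroring the photon result just quoted.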

In [3], they discuss their hyperbolic velocity in light of the isomorphism between their entropic formalism and the standard kinematic formalism. They point out that kinematics distinguishes between a group velocity, which is equal to that of the particle and describes the motion of the pilot wave, and a phase velocity, which is superluminal [13][14]. Let v_w be the phase velocity and v_g the group velocity; then

v_w v_g = c^2

which makes sense given that a wave’s frequency and wavelength determine its phase velocity, λν = v_w, which for light is c, i.e., the product of the “space” and “time” characteristics is c. But now, substituting v_w = c^2/v_g,

(1/v_w)(v_w − v_g) = 1 − (v_g)^2/c^2

and we have these relativistic entropies going to zero, becoming minimal, when v_w = c = v_g! This is the key point! It is precisely these superluminal phase waves that motivate Tiller’s deltron moiety. As Tiller points out in [4], these phase waves carry all of the information about these de Broglie particle/pilot-wave constructs and, via the relation between information and entropy made explicit by Parker and Jeynes, there is a thermodynamic free energy exchange going on here in apparent conflict with Einstein’s Special theory. Tiller resolves this with his deltron moiety, and it would seem a natural conjecture that this relativistic geometric entropy is related to Tiller’s coupling field, i.e., that the coupling field is responsible for the Bekenstein Bounds! And it is these phase waves that are the source of the Holographic Principle, as grounded here in Tiller’s MIE.
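To make the bookkeeping above concrete, a minimal check using only the standard de Broglie relation v_w = c^2/v_g (ordinary kinematics, nothing specific to QGT or to Tiller):

```python
c = 2.998e8  # speed of light, m/s

for v_g in (0.1 * c, 0.5 * c, 0.99 * c):
    v_w = c**2 / v_g                # de Broglie phase velocity (superluminal)
    lhs = (v_w - v_g) / v_w         # (1/v_w)(v_w - v_g)
    rhs = 1 - (v_g / c)**2
    print(f"v_g/c = {v_g/c:.2f}: v_w/c = {v_w/c:.2f}, "
          f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")
# The identity holds for every v_g; both sides vanish only when
# v_g = c = v_w (the photon case), the entropy-minimizing limit above.
```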

William Tiller died at age 92 a little over a year ago, and he was considered a heretic by the religious zealots online (see his Wikipedia page), but to get an idea of his actual standing, see his obituary in the Stanford Report [15]:

That work and his subsequent nine years at Westinghouse Research Laboratory earned Tiller a certain academic reputation such that in 1964 when he joined Stanford University’s Department of Materials Science and Engineering he was the first faculty member to be appointed as — rather than promoted to — full professor. In Tiller’s first year on the faculty, his Air Force Office of Scientific Research contract alone was $600,000 per year, the largest in the department by a considerable margin. In today’s dollars, such a contract would exceed $5 million.

In [4], Tiller describes the robust experimental results they were able to obtain in spite of “the unstated assumption that no human qualities of consciousness, intention, emotion, mind or spirit can significantly influence a well-designed target experiment in physical reality”:

The key experimental step needed to confirm the operational nature of psychoenergetic science was to unequivocally prove, via a series of human intention experiments, that in today’s world the unstated assumption of orthodox science is quite wrong!

Our various steps in the four groundbreaking experiments were (a) to carefully design four different intention experiments:

  • To increase the acid/alkaline balance (pH) of a specific type of water by ΔpH = +1.0 pH units (a factor of 10 decrease in hydrogen ion, H^+, content of the water) with no chemical additions to the system,
  • to decrease the pH of this same type of water by ΔpH = −1.0 pH units with no chemical additions to the system,
  • to increase the in-vitro thermodynamic activity of a specific liver enzyme, alkaline phosphatase (ALP), by a significant amount (~30% for example) by simply exposing a vial of this ALP to a highly “intention-conditioned” experimental space for about 30 minutes and
  • to increase the in-vivo ratio of ATP/ADP in the cells of fruit fly larvae by a significant amount as a result of lifetime exposure of the larvae to a highly “intention-conditioned” experimental space.
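For concreteness on the pH target in the first two items: pH is the negative base-10 logarithm of the hydrogen-ion concentration, so a ΔpH = +1.0 shift is exactly a tenfold decrease in H^+. A two-line check:

```python
def hplus(pH):
    """Hydrogen-ion concentration (mol/L) from pH = -log10[H+]."""
    return 10 ** (-pH)

# A +1.0 pH shift (e.g., pH 6.0 -> 7.0) is a tenfold decrease in H+:
print(hplus(6.0) / hplus(7.0))  # 10.0
```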

All of the above experiments were successful, and the pH experiment, due to its simplicity, was peer-replicated several times, in both the United States and Europe. Of course, you will not find any of his papers on the arXiv or in the mainstream journals; that would require an adaptive and highly functioning scientific establishment, which doesn’t exist.

The pH experiment was actually peer-replicated in 8 different labs in the United States, in a lab in Britain, and in a lab in Italy. The macroscopic information entanglement was realized most spectacularly with the British and Italian labs. These results were reported in the Journal of Alternative and Complementary Medicine [16], Tiller having been rejected by the mainstream physics journals, and the paper is behind a paywall. It’s actually a fascinating story.

Tiller and a few other researchers were in the process of replicating the pH experiment in a number of labs in the United States, this research being funded by a wealthy industrialist from Minnesota, Buck Charlson, and by the Samueli Institute. They had experiments set up in Arizona, Kansas, Missouri, and Maryland and were fully engaged when Tiller was contacted by a group of young engineering professionals from Britain. They were interested in replicating Tiller’s work, so Tiller told them what to purchase for the pH experiment and instructed them on how, exactly, to set everything up. Once they got everything up and running, Tiller was going to send them an Intention Imprinted Electrical Device (IIED) consistent with the pH experiment. In the interim, Tiller was contacted by a second group of young engineering professionals from Italy, and the same situation developed with them. Before Tiller could send either of these groups an IIED, they both started seeing robust results! This represented what Tiller calls macroscopic information entanglement, across rough terrestrial terrain and the Atlantic Ocean, over distances of 5,000 to 6,000 miles. Rather remarkable!

Tiller explains this with a biconformal reference frame, where the two conjugate spaces are our ordinary distance/time space, which he calls D-space, and a reciprocal frequency space, neither distance nor time dependent, which he calls R-space, as described briefly above. The R-space contains the holographic wave aspects, including the superluminal phase waves, and these spaces are coupled by a coupling field (gauge field) having the peculiar property of being able to interact with both subluminal and superluminal entities. He calls this coupling field the deltron moiety, and human consciousness appears capable of impacting the density of this moiety, hence the coupling strength. So, these spaces are actually coupled, mathematically, by a deltron-modulated Fourier transform according to what Tiller calls the Inversion Mirror Principle. The waves in R-space are actually magneto-electric (moving magnetic charges inducing electric fields) information waves.

It has long been known in Applied Kinesiology that if a DC magnet is placed near an acupuncture/meridian point on the human body it impacts the strength of the muscle group associated with that acupuncture/meridian point depending on which pole is facing said point. Tiller applied this same test to the pH experiment.

Using the experimental apparatus illustrated in Figure 4.2, the ceramic magnet can be placed at the bottom of the pH-measurement vessel to have either its North-pole pointing upwards or its South-pole pointing upwards without destroying the symmetry of the magnetic field in the water relative to the vessel’s vertical axis. When such an experiment is conducted in a typical unconditioned laboratory, one observes two things, (1) there is no detectable difference between the North-pole pointing upwards and the South-pole pointing upwards case and (2) there is no detectable pH change occurring in the water for either case. On the other hand, when one makes identical measurements in an IIED-conditioned laboratory, the results can be remarkably different. In this case, one usually finds that the change in pH, pH (South-pole up) minus pH (North-pole up), is not equal to zero.

The South-pole up raises the pH significantly while the North-pole up lowers it slightly. They also detected anomalous spatial and TEMPORAL entanglement between oscillations in air temperature, water temperature, water pH, and water electrical conductivity.

Oscillations, in the frequency range of ~ one one-hundredth of a Hertz to one one-thousandth of a Hertz, are commonly observed (but not constantly) for the measurements of air temperature, T_A, water temperature, T_W, water pH, and water electrical conductivity. This is an extremely low frequency range. Furthermore, they are global in the partially conditioned room rather than just local, as one would find for our normal, electric atom/molecule level of physical reality.

He applied Fourier analysis to this anomalous data, and the harmonics all nested with one another over a considerable distance; in later experiments they nested temporally as well. He concludes:

This is extremely anomalous and it tells us that something is coherently pumping this entire laboratory space and all the measurement instruments therein!

Clearly, or so it would seem, this “something” is also capable of coherently pumping laboratories tenuously related across spatial distances of 6,000 miles. With regard to the magnetic charges, see Spacetime algebra as a powerful tool for electromagnetism [17].
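Tiller’s books don’t publish analysis code, so the following is only an illustrative sketch of the kind of test being described: take two slowly varying channels (say, water pH and air temperature, sampled every ten seconds for hours) and compare their Fourier spectra in the stated 0.001–0.01 Hz band for coincident peaks. The synthetic data and every name in the sketch are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 0.1                             # sample rate, Hz (one sample per 10 s)
t = np.arange(0, 8 * 3600, 1 / fs)   # eight hours of data

# Synthetic stand-ins for two slowly oscillating channels sharing a
# 0.002 Hz fundamental (and, for one channel, its first harmonic):
f0 = 0.002
pH = (np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
      + 0.3 * rng.standard_normal(t.size))
temp = 0.8 * np.sin(2 * np.pi * f0 * t + 0.4) + 0.3 * rng.standard_normal(t.size)

def band_peaks(x, fs, lo=0.001, hi=0.01, n_peaks=2):
    """Return the strongest Fourier frequencies of x in [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    idx = np.argsort(power[band])[-n_peaks:]
    return np.sort(freqs[band][idx])

print(band_peaks(pH, fs))    # ~[0.002, 0.004] Hz
print(band_peaks(temp, fs))  # ~[0.002, ...] Hz -> shared fundamental
```

Nested harmonics across channels would show up as such coincident peaks; whether Tiller’s actual data behave this way is, of course, exactly the contested claim.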

Parker and Jeynes, in their discussion of phase and group waves in [3], show that their formalism requires a hyperbolic velocity isomorphic to the kinematic phase velocity, i.e., the superluminal velocity. One cannot help but wonder whether they are actually, in some sense, modeling the magnetoelectric information waves of Tiller’s PsychoEnergetics. And this is precisely what makes PsychoEnergetics work: the Bekenstein Bounds. Human consciousness, in the form of intent, is capable of modulating the density of the deltron moiety, strengthening the coupling of these two conjugate spaces, leading to a violation of the Bekenstein Bound and hence to a significant information gradient. This gradient enters the Gibbs free energy equation under the entropy term; hence, it can be used for work. If we followed Tiller’s duplex-space program, augmented with Hestenes’ Maxwell–Dirac theory and its consequences, then we should eventually be able to construct a general theory of spacetime PsychoEnergetics with one free parameter, the alpha variable. We should be able to do this because, according to Tiller and many, many wisdom holders, we created this thing:

This simulator model is primarily an energy/consciousness model. My working hypothesis is that we are primarily elements of spirit, indestructible and eternal and “multiplexed” in the divine. As such, we have a mechanism of perception which is a 10-dimensional mind domain. In turn, this mind mechanism creates a vehicle for our experience — our cosmos, our local universe, our solar system, our planet, our physical bodies, etc. This is all a simulator for our experience which we view from the spirit level of self which is outside the simulator. Thus, we are spirits having a physical experience.

So, the Universe is projected holomorphically by mind, but mind is projected holomorphically by spirit. Tiller has a rather elaborate theory based on fractally organized nodal networks which act as transponders/transducers for consciousness/energy conversion.

Hopefully that clears the smoke a bit . . .

  1. Lasers and Holography: An Introduction to Coherent Optics (pdf);
  2. The Poetics of Physics;
  3. Entropic Uncertainty Principle, Partition Function, and Holographic Principle Derived from Liouville’s Theorem;
  4. PsychoEnergetic Science: A Second Copernican-scale Revolution;
  5. Maximum Entropy (Most Likely) Double Helical and Double Logarithmic Spiral Trajectories in Spacetime;
  6. A Dynamic Model of Information and Entropy;
  7. Information transfer and Landauer’s principle;
  8. Halo Properties in Helium Nuclei from the Perspective of Geometrical Thermodynamics;
  9. Information-Based Physics: An Observer-Centric Foundation;
  10. The Problem of Motion: The Statistical Mechanics of Zitterbewegung;
  11. Understanding the Electron;
  12. What is the limit ℏ → 0 of quantum theory?;
  13. Why Has Orthodox Physics Neglected the Superluminal Velocities of de Broglie Pilot Wave Components?;
  14. Feynman Lecture 48;
  15. William Tiller, materials engineer, expert in materials solidification, has died;
  16. Toward General Experimentation and Discovery in Conditioned Laboratory Spaces: Part IV. Macroscopic Information Entanglement Between Sites ∼6000 Miles Apart;
  17. Spacetime algebra as a powerful tool for electromagnetism.
