QUANTUM THEORY

Challenging the Blackbody Radiation Narrative

On the nature of the physics behind Planck’s Radiation Law

Kieran D. Kelly
25 min read · Dec 14, 2022


Blackbody Radiation curves for various temperatures. (Credit: Darth Kule/Wikimedia Commons)

Why do we believe that, at the fundamental level of reality, energy must be a “quantized” quantity?

In December 1900, the German physicist Max Planck successfully derived the mathematics of Blackbody Radiation. But, in doing so, he employed a contextual narrative (explaining the supposed physics behind his mathematics) and this contextual narrative ultimately came to have a very profound effect on how we think about all of physics at the subatomic level. To this day, Planck’s narrative has never been challenged — because it seems so “obviously” correct.

On this day 122 years ago, Max Planck presented his “thermodynamic” derivation of his now famous “Blackbody Radiation Law”. In January 1901, he followed up this presentation by publishing a paper on the same subject. In both his December 1900 presentation and his January 1901 paper, Planck provided the same contextual narrative for the physics behind his mathematical derivation. But this contextual narrative was based on an implicit assumption — an assumption buried so deep in the core of physics that almost no one seems to recognize it as being merely an assumption.

In his 1901 paper, Planck explains that thermal radiation (from a hot body) must be in thermal equilibrium with the temperature of the hot body, and from there he goes on to state that “the normal energy distribution is the one in which all different frequencies of radiation have the same average energy over time”.

Planck’s tacit assumption is that: frequencies of radiation can be treated as if they are PRE-EXISTING THINGS — like natural quantities that permanently exist within the electromagnetic field, simply waiting to be excited.

This is a huge assumption to make about quantities that are normally considered to be properties of things, rather than things in themselves. And it is clearly an assumption that Planck does not even recognize as being an assumption (he probably viewed it simply as fact). But it is thanks to this tacit assumption that Planck is in a position to argue that his derivation can be “reduced to determining the entropy of a SINGLE-FREQUENCY resonator [i.e. single-frequency oscillator] as a function of its vibrational energy”.

So essentially Planck’s tacit assumption imposes a narrative restriction on his mathematical derivation — the restriction that he is dealing with a group of resonators/oscillators with a SINGLE frequency — which means that the frequency component of his derivation MUST be a FIXED quantity.

And so when Planck’s mathematics results in his energy-element (ε) being proportional to frequency (f), it is not surprising that Planck comes to the conclusion that the energy-element (ε=hf) must also be a fixed quantity. And this fixed quantity of energy is clearly NOT a “differential of energy”, because (h) is clearly NOT a “differential of ACTION”.

And so all of that means that this non-differential quantity of energy (ε=hf) appears to be LIMITING how small the smallest increment of amplitude can be.

And from this apparent restriction comes the idea that the amplitude of any vibration can only rise and fall in discrete jumps. And it is THIS apparent “jumpiness” in amplitude that makes Planck’s work so difficult for us to understand.

BUT what if we were to throw out the implicit assumption about pre-existing frequencies?

Well then we invalidate Planck’s “single-frequency” narrative!

AND so what if we then throw away Planck’s “single-frequency” narrative?

Well then we would be left with the bare mathematics of his model — a simple model of a system of 10 independent (unrestricted) oscillators sharing 100 units of energy. And this simplification of the narrative surrounding Planck’s model paves the way for a radical review of what is actually being “modelled” in the derivation of the Planck Radiation Law!

HOT THINGS GLOW

We all know that if we heat a thing hot enough, it will begin to glow. This glow is referred to in physics as thermal radiation (or “blackbody radiation”).

All forms of radiation fall under the domain of “electromagnetism”, a wide branch of physics mathematically summarized in the mid-1800s by James Clerk Maxwell’s “Equations of Electrodynamics”.

But in the latter part of the 19th Century physicists began to realize that there was a problem explaining the properties of thermal radiation with their existing understanding of Maxwell’s classical electrodynamics.

It was well known from experiments that the radiation emitted from hot bodies was distributed across a wide spectrum of wavelengths/frequencies; and from a theoretical point of view it seemed fairly obvious why this should be so.

The theory was that energy is always equally shared out among all available degrees of freedom, and in this case the degrees of freedom were considered to be the wavelengths/frequencies. But this classical theory had a serious problem trying to explain why the graphs of these spectral distributions of radiation have the SHAPE that they have.

Experimental observations showed that, regardless of temperature, the shape of a spectral distribution always had a peak intensity, before a significant drop-off in the intensity of shorter-wavelengths/higher-frequencies. But the question was

Why the Peak?

Why the Drop-off?
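
To see why this shape was such a problem for classical theory, it helps to write down what the equal-sharing argument actually predicts. In modern notation (not the notation of the original papers), equipartition over the wave modes of a cavity leads to the Rayleigh–Jeans form, which simply grows without limit as the frequency increases; it contains no peak and no drop-off at all:

```latex
% Classical equipartition prediction (Rayleigh-Jeans form), modern notation:
u_{\mathrm{RJ}}(f,T) \;=\; \frac{8\pi f^{2}}{c^{3}}\,kT
\qquad \text{(grows as } f^{2}\text{; never peaks, never drops off)}
```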

FROZEN-OUT

Today, the prevailing theory of thermal/blackbody radiation is that short-wavelengths/high-frequencies of radiation are frozen-out of existence — “INHIBITED” by Energy “QUANTIZATION”!

This understanding is the foundation stone of Quantum Theory. It is the basis on which everything else has been built.

In this paper we will argue that this foundational concept is in fact flawed; and consequently everything that has been built on this flawed idea is similarly flawed by association.

Moreover, we will argue that because of this foundational mis-step, the underlying truth about the quantum nature of reality has remained hidden for the past 122 years…

BRIEF HISTORY OF BLACKBODY RADIATION

For millennia, mankind had known that very hot things “Glow”. Moreover, we knew that as the temperature is raised, the color of the glow changes.

By the end of the 17th Century (thanks largely to Newton’s prism experiments) it had become clear that ordinary white light actually contains the full rainbow of colors. Consequently, the color of the thermal glow from heated objects came to be understood as being the result of the prominence of one color of light over all the others.

Toward the middle of the 19th Century, physicists began to understand light as being waves of electromagnetic radiation. And it seemed obvious that there must be some form of a “relationship” between the temperature of hot bodies and the distribution of the wavelengths of light!

But WHAT was that relationship?

There is quite a long history associated with trying to identify this relationship, but the shortened version is something like this.

From Kirchhoff to Planck

In 1859, the German physicist Gustav Kirchhoff noted that the distribution of wavelengths of radiation (given off by a hot body) is “independent” of the material being heated. Kirchhoff recognized the “universality” of this type of radiation, but was unable to determine the underlying mathematics driving this physics.

For many years there was very little real progress made on this problem. But then, in the closing decade of the 19th Century, the electric light industry exploded into life; and suddenly it became financially very valuable to understand at what temperature a heated bulb filament would generate the maximum amount of “visible light” radiation.

In 1896, the German physicist Wilhelm Wien used data-fitting techniques to come up with an empirical formula that seemed to match the experimental results.

Notwithstanding this progress, no theoretical explanation was yet forthcoming: no theoretical treatment could be made to agree with the experimental data.

Now, the general consensus had long been that the blackbody problem was an electromagnetic wave theory problem; but then in 1899, Max Planck put forward a completely different theoretical approach when he appeared to re-derive Wien’s empirical formula using the concept of “entropy” from the second law of “Thermodynamics”.

However, by September 1900, Wien’s formula was shown to be slightly incorrect.

In October 1900, Planck tweaked Wien’s formula and, in doing so, got mathematical results that PRECISELY matched the experimental data.
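
In modern notation (Planck’s October 1900 note used different symbols), the “tweak” amounts to replacing Wien’s falling exponential with an “exponential minus one” in the denominator, where a and b are constants fitted to the data:

```latex
u_{\mathrm{Wien}}(f,T) \;=\; a\,f^{3}\,e^{-bf/T}
\qquad\longrightarrow\qquad
u_{\mathrm{Planck}}(f,T) \;=\; \frac{a\,f^{3}}{e^{\,bf/T}-1}
```

At high frequencies the two forms agree (which is why Wien’s formula had worked so well there); at low frequencies the “minus one” changes the behavior completely, and it is this modified form that matched the new experimental data.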

In November 1900, in an attempt to explain his “lucky guess” and thereby gain a greater understanding of the underlying physics, Planck revisited his thermodynamic approach to the derivation of the mathematics. However, all his efforts using traditional approaches failed to yield his new radiation law (which he knew to be mathematically correct).

In “desperation” Planck turned to something he had always been very averse to: the statistical approach to the Second Law of Thermodynamics (pioneered a number of years earlier by yet another German physicist, Ludwig Boltzmann). And it worked.

The methodology that Planck used seemed to indicate that amplitudes of thermal radiation come in increments of (hf). Or in other words, “radiated” energy seems to be “QUANTIZED”…

But WHY do we say “seemed” to indicate, and “seems” to be quantized?

Well because: although the methodology Planck used did indeed get the correct mathematical result, we would argue that the NARRATIVE that accompanied this methodology has been responsible for setting Quantum Theory off on the wrong theoretical path (which has ultimately led to 122 years of quantum confusion).

QUESTIONING PLANCK’S NARRATIVE

What Planck did was to use the statistical approach to the Second Law of Thermodynamics to derive the radiation law.

However his approach was based on an unspoken assumption that: in the absence of all constraints, energy should theoretically be shared equally among all possible frequencies.

Consequently, the methodology Planck describes in his paper actually implies the implementation of a “double distribution”.

Planck says he wants to determine the equilibrium entropy of a “monochromatic” oscillator (i.e. the entropy of an oscillator that oscillates at a “single frequency”) within the walls of the “cavity oven” (i.e. within the walls of an experimental blackbody apparatus).

[NOTE: What he actually says is that: “the law of energy distribution in the normal spectrum [i.e. the blackbody spectrum] is completely determined when one succeeds in calculating the entropy of an irradiated monochromatic [i.e. single frequency] vibrating resonator [i.e. oscillator] as a function of its vibrational energy”.]

To “simulate” this state of “thermal entropy” Planck says he will distribute monochromatic thermal energy (i.e. thermally-induced vibrational energy of a given frequency) among a number of oscillators (of the given monochromatic frequency).

It is this focus on “a monochromatic frequency” that implies the application of a double-layered distribution — a distribution and a sub-distribution.

The unspoken primary distribution is to distribute the energy of the entire system among ALL possible frequencies. The narrated secondary distribution is to distribute all the energy AT a given frequency among (N) oscillators OF the given frequency. It is this secondary distribution that Planck carries out in what appears to be a “discrete quantized fashion”.

Planck’s Calculations

In part one of his calculations, Planck describes a distribution of a number (P) of discrete units of energy (ε) at a given frequency among (N) oscillators of the given frequency. Planck then goes through the mathematics of probability to come up with an expression for the “entropy” of a single oscillator.
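
As a numerical illustration of this combinatorial step (a sketch in modern notation, using the 10-oscillator / 100-unit example mentioned earlier in this article rather than Planck’s own numbers), the count of distinguishable distributions and the corresponding statistical entropy can be computed directly:

```python
from math import comb, log

# Number of ways to distribute P indistinguishable energy elements
# among N oscillators: W = (N + P - 1)! / (P! * (N - 1)!)
def multiplicity(N: int, P: int) -> int:
    return comb(N + P - 1, P)

k = 1.380649e-23        # Boltzmann's constant in J/K (modern value)

N, P = 10, 100          # the 10-oscillator / 100-unit example
W = multiplicity(N, P)
S_total = k * log(W)    # S = k ln W, the statistical entropy Planck adopted

print(f"W = {W}")                                      # ~4.26e12 distributions
print(f"entropy per oscillator = {S_total / N:.3e} J/K")
```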

In part two of his calculations, Planck works through Wien’s Displacement Law to determine an alternative expression for the entropy; and ultimately concludes from this analysis that the entropy of a resonator is solely a function of “vibration-energy over frequency”. He then applies this “radiation” entropy to his previously derived “probabilistic” form of entropy and finds that the energy-element (ε) must be “proportional” to the frequency (f). Planck sets the constant of proportionality to be (h) and states that the energy-element (ε) is equal to (hf).

Quantum unit of energy is proportional to a FIXED frequency

Questioning the Initial Assumption

Now, thanks to how Planck structured his single-frequency narrative, this “discrete unit of energy” must mean that each one of his 10 oscillators is restricted to “quantized amplitudes” — in other words: they can only have amplitudes that come in units of (hf). This counter-intuitive description of how changes in amplitudes of energy occur at the most fundamental level is usually described as “The Quantization of Energy”.

But this “conclusion” about the quantization of amplitude is a direct result of the implementation of an implicit double-distribution of energy, and its resulting single-frequency narrative — which requires that a quantity of MONOCHROMATIC thermal energy is shared out among a fixed number of MONOCHROMATIC resonators/oscillators.

But what if this double-distribution and resultant single-frequency narrative is wrong?

CHALLENGING THE NARRATIVE

It is our contention that the concept of “Discontinuous Energy” in the subatomic realm is not the result of Planck’s mathematical derivation, but the result of his accompanying narrative.

In part one of his 1901 paper, Planck sets out to distribute (P) units of energy (of a given frequency) among 10 monochromatic oscillators (of the given frequency). But implicit in this narrative is that this distribution is a secondary distribution to the primary distribution that initially shares the TOTAL energy of the system among ALL possible frequencies. Thus Planck’s paper has a “double distribution” implicitly built into his narrative.

This implicit double-distribution creates an artificial distinction between: the intensity OF a given frequency (nᵢ = P), and the intensity of an individual oscillator AT this given frequency (nⱼ). This artificial-distinction makes it appear as if the intensity of energy of an individual oscillator (nⱼεᵢ) is an “amplitude” of oscillation; and the number of units-of-energy (nⱼ) are “incremental units” of that amplitude.

Intensity of Energy-level-i

Next, in part two of his calculations, Planck finds that his unit of energy (ε) MUST BE proportional to the frequency (f). But, thanks to his accompanying single-frequency narrative, (hf) is considered to be an energy-increment of an energy-amplitude; and so energy in the subatomic realm appears to have some restrictions on how small a minimum quantity of energy (of a given frequency) can be. In other words, thanks to Planck’s accompanying narrative, energy appears to be “Quantized” into “minimum increments of change”. And it was THIS narrative-induced outcome that gave birth to the idea that energy appears to be “Discontinuous” in the subatomic world.

The premise of this paper is that this well-established belief is wrong. And, by challenging Planck’s narrative (not his mathematics) we are, in effect, challenging the concept of “discontinuous energy”.

Our argument is that the concept of discontinuous energy was a direct result of Planck’s single-frequency narrative; and although this narrative was flawed, his mathematics works nonetheless — because his underlying methodology works equally well whether one is dealing with a single-frequency system of oscillators, or a system of UNRESTRICTED oscillators.

Moreover, we would also argue, that although Planck’s narrative also speaks to the oscillators being MATERIAL oscillators (within the walls of the cavity), this aspect of his narrative also has no effect on the mathematics of the derivation, and so the mathematics would work equally well if we were to perceive the oscillators to be tiny SPATIAL oscillators (which occupy the spatial interior of the cavity).

And with this idea in mind, let us now engage in a little thought experiment…

THE SHRINKING ROOM

Imagine a bunch of us are in a perfect cubic room. The length and breadth of the room are exactly the same as the height of the room. Imagine also that the room is at room temperature (say twenty degrees centigrade).

Now, without thinking about it we would all automatically assume that the room is at thermodynamic equilibrium (meaning that all parts of the room have the same temperature).

Now imagine we convert our room into a small house of 8 rooms, 4 on the ground floor, and 4 more upstairs. Each room in the house is exactly the same size — exactly ⅛ the size of the house. Imagine also that we have shrunk ourselves down to ⅛ our original size; and that we all congregate in one of the rooms on the ground floor.

Now we could ask “what’s the temperature in this smaller room?” and on measurement we would find that it is the same as before — 20 degrees centigrade. Now let’s repeat the procedure, and again shrink both the room and ourselves, such that we are now standing in one of the rooms in a house of 64 rooms. Again we ask what is the temperature of this room? And again we find that the temperature is 20 degrees centigrade.

If we were to keep repeating this process over and over again, we would eventually find ourselves in a very very small room; and at this scale it is by no means a certainty that the temperature will be 20 degrees. In fact the smaller we go the less likely it will be so; and so when we move from room to room, we will probably find that each room has a slightly different temperature. Shrunk down to an infinitesimal scale what we are likely to find is a wide spectrum of temperatures across a house of almost infinitely many rooms, but the AVERAGE temperature would STILL be 20 degrees centigrade.

What we have described here is very similar to how temperature is described by the Kinetic Theory of Gases (which suggests that what we feel as “room temperature” is really just the microscopic motion of gas particles).

Thermal Energy in a Vacuum

Now, for a moment, let’s scale back up to our original size and to the original room; and imagine this time that this original room is a vacuum. Let’s also imagine that the walls, floor, and ceiling, of the room, are all heated such that all six surfaces are acting like thermal radiators. In this imaginary vacuum we would expect, given sufficient time, to find that the “thermal radiation” in the room is at thermal “equilibrium” with the surrounding surfaces of the room.

Now, imagine we all have oxygen tanks, and we go through exactly the same procedure as before. As we scale back down, the thermal radiation in the room remains pretty constant until we eventually get down to the infinitesimal level. At this level, we find that our infinitesimal room can register a RANGE of different levels of thermal radiation; and this range manifests itself by different rooms in the house having different energy-levels.

Now clearly no room in the house can have more energy than the total energy in the house as a whole. Moreover it should also be clear that it is highly unlikely that any one room will have energy far in excess of the average. But the question is: “How unlikely?”

Gathering the Data

Well, to figure out the answer to that question, what we first need to do is gather statistical data that records how often we come across rooms with a given amount of thermal radiation.

In doing so what we are likely to find is that there are a very high number of rooms with very low energy-levels, and a very low number of rooms with very high energy-levels. And when we graph this data, we will end up with a distribution that looks something like this…
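
The following toy simulation (the numbers of rooms and energy units are arbitrary illustrative choices, not values from the article) produces exactly this kind of record, by drawing one configuration at random from all the possible ways of sharing the units among the rooms:

```python
import random
from collections import Counter

random.seed(0)
M = 10_000      # number of infinitesimal "rooms" (arbitrary)
Q = 30_000      # total energy units shared among them (arbitrary)

# Draw one configuration uniformly from all ways of splitting Q units among
# M rooms ("stars and bars"): choose the M-1 divider positions at random.
dividers = sorted(random.sample(range(Q + M - 1), M - 1))
positions = [-1] + dividers + [Q + M - 1]
energy = [positions[i + 1] - positions[i] - 1 for i in range(M)]

counts = Counter(energy)
for level in sorted(counts)[:8]:
    print(f"rooms with {level} units of energy: {counts[level]}")
# The counts fall off roughly geometrically: very many low-energy rooms,
# very few high-energy rooms.
```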

Now, previously we talked about how, in the Kinetic Theory of Gases, we associate “room temperature” with the kinetic motion of gas particles. But WHAT can we associate “thermal radiation” with (if the room is a vacuum)?

Well the only thing we can say with certainty is that each room is like a tiny volume. So maybe these volumes are oscillating in some form or other; and the different types of radiation are simply a measure of how much a volume is oscillating. But that raises the question:

How can a tiny volume of space possibly oscillate?…

A New Concept of Space

Well there is really only one thing we know for sure about the concept of Space, and that is that it has three dimensions. There IS however a second thing that is a genuine physical possibility, and that is that Three-Dimensional Space could be susceptible to deformation…

Now, an infinitesimal volume of space could theoretically come in all shapes and sizes. But, given that there would be a truly enormous number of these volumes, it is reasonable to assume that the average volume could be treated as if it were a “perfect three-dimensional cube” (of an average infinitesimal size).

So let’s assume that an undisturbed unit of space is a perfect cube, and a disturbed unit is one that has been “twisted” away from its perfect “equilibrium” shape.

Now we know, from classical physics, that any displacement from equilibrium can act as the starting point for an oscillation; and this starting point displacement is what we refer to as being the “amplitude” of the oscillation.

So MAYBE infinitesimal units of space can oscillate about their equilibrium shape; and the energy (εᵢ) in that oscillation is determined by the initial displacement from spatial equilibrium (i.e. the energy in any spatial oscillation is determined by the amplitude of the oscillation).

A Distribution of Amplitudes

Okay. So now let’s get back to our house of infinitesimally small rooms.

With our new concept of space in mind, we are now in a position to measure the amount of thermal radiation (occurring across the totality of all of these rooms) simply by measuring the amplitude of each “twisting” oscillation (i.e. how much deformation from perfect 3D cube-ness occurs on each cycle of twisting oscillation).

Once again, we will use our measurement data to build and graph a statistical distribution of the number of rooms as a function of radiation amplitude. And when we examine this graph, we should find that there are many many low amplitudes, and very very few high amplitudes.

The Intensity of Radiation

In the distribution described above, it is clear that the number of “spatial oscillators” decreases exponentially with amplitude.

But when we multiply the number of spatial oscillators (nᵢ), at a given radiation-amplitude (εᵢ), by the same given amplitude (εᵢ), what we will find is that there is a “peak” at some point in this distribution (which we will know from repeated experiments, is directly related to the temperature of the walls of the house as a whole).

So clearly the graphing of intensity as a function of amplitude produces a PEAK and a DROP-OFF; and as such this resultant graph is very similar to the graph of intensity of blackbody radiation as a function of frequency.

Blackbody Radiation Intensity as a function of Frequency
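
Continuing the toy numbers from the sketch above (the value of kT and the amplitude grid are arbitrary illustrative choices), multiplying an exponentially falling count of oscillators by the amplitude itself reproduces this peak-and-drop-off shape:

```python
import math

kT = 5.0                                     # average energy, arbitrary units
amplitudes = [0.5 * i for i in range(1, 61)]
counts = [math.exp(-a / kT) for a in amplitudes]           # oscillators per amplitude
intensity = [n * a for n, a in zip(counts, amplitudes)]    # number x amplitude

peak_index = max(range(len(intensity)), key=intensity.__getitem__)
print(f"intensity peaks at amplitude = {amplitudes[peak_index]}")   # near kT = 5.0
# The counts alone only fall; counts x amplitude first rises, peaks, then drops off.
```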

Thus the results of our thought experiment would seem to suggest that the macroscopic intensity of light “of ANY given frequency” could actually be the product of the number of microscopic quantum spatial volumes (oscillating with a given amplitude), times the magnitude of the amplitude itself.

And THAT could ONLY be the case if the amplitude/magnitude of each spatial oscillation actually equates to the “frequency” of that particular oscillation.

In other words, it would mean that

For spatial oscillations: The amplitude IS the frequency.

CHANGING THE NARRATIVE

Now, clearly this thought experiment presents a strong case for arguing that electromagnetic waves are in fact made up of elementary spatial oscillations. And that implies that, for electromagnetic waves the amplitude and frequency are in fact one and the same thing.

In considering the validity of Planck’s double-distribution single-frequency narrative, we suggest that our shrinking-room thought experiment points the way to a more credible assumption-free alternative.

In contrast to Planck’s own narrative, we believe that his mathematical derivation is actually describing a simple distribution of the TOTAL amount of radiation energy among a FIXED number of unrestricted oscillators; and it is from this single distribution that a wide (but finite) range of amplitudes/frequencies naturally EMERGE.

In other words, in our “revised narrative” there is no concept of “individual oscillators RESTRICTED to a single frequency”. And this results in a model of the subatomic world where there are no different amplitudes of oscillation for a given frequency, because all oscillators at a given frequency have the same amplitude!

Moreover, in our revised narrative the quantum of energy (ε) can represent “a change in energy” (ε = ΔE). And thanks to the fact that our spatial oscillators are not “restricted” oscillators, we can confidently say that, for ANY spatial oscillator, a change in energy can theoretically be made infinitesimally small by simply allowing the change in amplitude/frequency (Δf) to become an infinitesimally small quantity.

Thus as (Δf) tends towards zero, (ε) effectively becomes “the differential of energy” — which represents a smooth and CONTINUOUS change of energy.

The “differential” of energy is proportional to the “differential” of amplitude/frequency
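
In symbols, the relationship captioned above is simply the statement that (in the notation of this revised narrative):

```latex
\varepsilon \;=\; h\,\Delta f
\qquad\text{and so, as } \Delta f \to 0, \qquad
dE \;=\; h\,df
```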

CONTINUOUS ENERGY

The concept that “the amplitude and the frequency are one and the same thing” marks a fundamental break with the norm for the rest of classical physics (where amplitude and frequency are normally independent of each other). But it is a break that does NOT require there to be some mysterious quantization of energy amplitudes.

Thus in our revised narrative we find that

Although energy of a given frequency/amplitude is quantized, energy itself is NOT!

This is akin to saying that although money in $5 bills is quantized by the number of $5 bills; money itself is not quantized.

Thus, in contrast to Planck’s narrative, our narrative supports “Continuous Energy” in the subatomic world; because in our narrative a continuous range of energy amplitudes represents a continuous range of frequencies. And so, in this revised model of thermal radiation (that results from our thought experiment on the nature and behaviour of quantum space)

Energy can have ANY amplitude “in a continuous range”; but different amplitudes of spatial oscillation will correspond to different colors of light.

Moreover, it is this color/frequency/amplitude that determines the intensity of each “individual-volume” of light. And that means that the intensity or “brightness” of a single photon of light changes with color (i.e. a single blue photon is “brighter” than a single red photon); and so the overall brightness of light of a single color depends on BOTH “the number of photons” and “the brightness of the individual photons”.

Intensity of Energy-level-i

And, as history records, it was basically this property of the “brightness of individual photons of light” that Einstein used, in 1905, to explain the interaction between light and matter that is known as “The Photoelectric Effect”.
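
For reference, the standard form of Einstein’s photoelectric relation, in which the energy (hf) carried by a single photon sets the maximum kinetic energy of an ejected electron (with φ being the “work function” of the metal):

```latex
K_{\max} \;=\; hf \;-\; \varphi
```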

CONCLUSIONS

It might seem hard to believe, but all the weirdness and problems associated with Quantum Mechanics and Quantum Field Theory could stem from one flawed narrative — a narrative that was the direct result of something that seemed so “obviously” correct.

On first impression it seems obvious that the thermal energy of blackbody radiation should be shared out among wavelengths (because that is clearly what we observe). But through Planck’s 1901 paper, this readily accepted understanding ultimately led to the nonsensical concept of “discontinuous energy” in the subatomic realm.

The prevailing theory of blackbody radiation is that radiated thermal energy should “theoretically” be distributed among all possible frequencies of radiation; but in reality, this hypothesis leads to real-world physics that makes no sense at all (i.e. it leads to the so-called “Ultraviolet Catastrophe”).

Fortunately, it would seem, physics has another narrative that comes to the rescue. It turns out (according to this universally-accepted explanation) that energy distribution among higher frequencies is INHIBITED by a “Minimum Energy Requirement” that is demanded in the subatomic realm.

This so-called “explanation” is clearly a result of Planck’s “double distribution single-frequency narrative” where amplitudes of energy (of a given frequency) are thought to be quantized by increments of energy (proportional to said frequency), and as a result amplitudes must have a minimum increment.

We believe that this explanatory narrative is wrong.

Moreover, not only is this explanation wrong, but we believe that Planck’s narrative has in fact been ground-zero for a whole multitude of such flawed explanations — all done in a desperate effort to explain the “unintuitive” behavior of the subatomic world.

We believe that Planck’s blackbody narrative was flawed, and consequently any understanding built on this narrative is also flawed as a direct result.

In this paper we have presented an alternative narrative, to accompany Planck’s mathematics, based on the idea that, unlike mechanical waves (where amplitude and frequency are independent of each other), for electromagnetic waves the amplitude and the frequency are actually one and the same thing. From this alternative perspective, we make the following observations.

1. The True Reason for the Dropoff

Planck’s narrative shares the thermal energy among a FINITE number of RESTRICTED material oscillators, and restrictive quantum amplitudes appear to be the result.

In contrast, our narrative shares the radiated energy among a FINITE number of UNRESTRICTED spatial oscillators, and unrestricted amplitudes/frequencies emerge as a result.

The theory we have put forward here is that

For thermal radiation, the spectral distribution of frequencies is actually an EMERGENT PROCESS. Energy is distributed among a finite number of “spatial elements”, which GENERATES a spectral distribution of amplitudes; and these “generated amplitudes” ARE the observable frequencies.

Thus, in this model, an “infinite” number of higher energy-levels do not need to be “inhibited”, or “frozen out of existence”, because this infinite number simply does NOT exist in the first place.

In this model the mathematics of Planck’s methodology does NOT change in any way — BUT notably: the meaning does!

2. The True Source of the Quantum

We have argued in this paper that radiation amplitudes (of a given frequency) are not quantized; because for radiation: amplitudes and frequencies are actually one and the same thing.

Radiation energy, we argue, comes in a CONTINUOUS range of amplitudes — and this continuous range of amplitudes is what we observe as the continuous spectrum (of frequencies and wavelengths) of electromagnetic radiation.

Having said all that, there is however a lot of solid evidence that energy quantization IS INDEED a real phenomenon in the subatomic world. An obvious example is the photoelectric effect where the energy of incoming radiation is clearly quantized into individual photons of size (hf). And so, this must mean that our revised model of blackbody radiation has brought forth a meaningful question about the true source of energy quantization.

As things stand today, it is pretty much universally believed that energy quantization occurs because although the “quantum of action” (h) is a very small quantity, it is NOT a “differential of action” (i.e. it is not a zero-like quantity). And this leads everyone to believe that the non-zero value of (h) must be acting to LIMIT the minimum increment (ε=hf) of an energy oscillation; thus causing energy (of any given frequency) both to have a “minimum amplitude” and to be “discontinuous” at the subatomic level (i.e. the amplitude of energy cannot rise and fall in a smooth continuous fashion).

But we believe that all of this is simply wrong. Specifically, we believe that this “minimum amplitude” idea is seriously flawed.

We contend that, for radiation, the frequency actually IS the amplitude, and consequently any given frequency of radiation has a SET amplitude; and so (h) has NO inhibiting effect on the continuous nature of energy — it is merely acting as a constant of proportionality in the relationship between energy and frequency.

So if energy is not quantized by the non-zero value of (h), then

What exactly is causing energy to be quantizable?

And the answer to that question, we suggest, is that

Energy is quantizable into units of (hf) because (f) represents both the “Amplitude” and the “Duration” of a SINGLE cycle.

In other words, although “energy” itself is not fundamentally quantized, “energy of any given frequency” actually IS — because a periodic oscillation, with a given amplitude, is a single discrete thing.
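
Using only the standard relation between frequency and period (f = 1/T, where T here denotes the duration of one cycle, not temperature), this claim can be written as:

```latex
E \;=\; hf \;=\; \frac{h}{T_{\mathrm{cycle}}}
\qquad\Longrightarrow\qquad
E \cdot T_{\mathrm{cycle}} \;=\; h \quad\text{(for one complete cycle)}
```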

[Note: Although off-topic, it is worth noting here that: this “Duality of Amplitude and Duration” is really all that is required to understand the “Relativity of TIME”. We have previously addressed this topic in some detail in NeoClassical Relativity; but we will be coming back to it again, in a future paper, when we come to address the so-called “Arrow of Time”…]

So in essence: we are arguing that energy quantization is not the result of the non-zero value of (h) but the result of the UNIQUE-SIZE of any given frequency (f); and (h) simply acts as a constant of proportionality.

Thus, we believe that Planck’s discovery of the fundamental relationship between frequency and energy actually reveals that

The true source of energy quantization is the unique size of each unique CYCLE of SPATIAL Oscillation.

Now, clearly this statement is in direct conflict with the prevailing view that energy quantization results from the non-zero value of (h). But if we are right in asserting that this prevailing view is actually wrong, then that must raise the question:

What exactly IS the significance of (h)?

3. The True Message of Planck’s Work

It is well known that, in his probabilistic treatment of thermal radiation, Planck uncovered two “new” constants of nature, (h) and (k), which he duly recognized as being “Universal”; and in the final part of his 1901 paper, he showed how to go about calculating their numerical values.

Today, we speak of both (h) and (k) as being “constants of proportionality” relating frequency and temperature respectively to energy. In addition, it has long been known that (k) acts as a “scaling” factor between macroscopic temperature and the underlying dynamics of material particles. But the fact that Planck’s Constant (h) ALSO acts as a scaling factor has clearly been overlooked.

When we compare the two universal scaling constants, we see

  • (k) scales Macroscopic Temperature (T) down to the AVERAGE energy of Micro Linear Motion (i.e. E = kT)
  • (h) scales Macroscopic Radiation (f) down to THE energy of a Micro Spatial Oscillation (i.e. E = hf)

It is the combination of these two scaling factors that allows Planck’s Radiation Law to capture the essence of Blackbody Radiation. In essence:

The different signature curves of blackbody radiation are the physical manifestation of the interplay of two mathematical scaling factors — the manifestation of the ratio of (h/k).

This manifestation of radiation as a result of the interplay of the two scaling factors is, we believe, the TRUE takeaway message of Planck’s work.
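
As a concrete illustration of this interplay (standard results, stated in modern notation): the two constants enter the radiation law only through the combination hf/kT, and the peak of each signature curve sits at a frequency fixed by the ratio of the two constants:

```latex
u(f,T) \;\propto\; \frac{f^{3}}{e^{\,hf/kT}-1},
\qquad
f_{\mathrm{peak}} \;\approx\; 2.82\,\frac{k}{h}\,T
```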

Moreover, it implies that what is truly significant about the discovery of the universal constant (h) is that

Planck’s Constant (h) reveals the Fundamental “Scale” of the Quantum Nature of Reality.

In the paper The frequency IS the amplitude, we used this scaling message of (h) to put forward a conjecture, upon which we built a revised model of blackbody radiation.

The Conjecture is

  • Space is Quantized; and the spatial units can oscillate out of, and back into, 3-dimensional equilibrium.

And the main features of the Revised Model are

  • Thermal Energy is distributed among a FINITE number of Spatial Oscillators.
  • For any given temperature, a signature spectrum of spatial “amplitudes” is an EMERGENT feature of this spatial distribution of energy.
  • For spatial oscillations, the amplitude and the frequency are one and the same thing.

The idea that “the amplitude and frequency are the same thing” is clearly a break with the norm for classical physics (where amplitude and frequency are always independent of each other). But this departure from the norm does NOT require there to be some mysterious quantization of energy into discrete “incremental amplitudes” of energy. In fact, we would argue that the ready acceptance of the prevailing commentary, that minimum increments of amplitude “freeze-out” high frequencies of radiation, has merely served to keep hidden “The Quantum Geometry of Space” for a further 122 years.

REFERENCE

Max Planck, 1901, On the Law of Distribution of Energy in the Normal Spectrum. Annalen der Physik, vol. 4, p. 553

© Kieran D. Kelly

This is Post #4 in the series on NeoClassical Quantum Theory
