
Saturday, February 24, 2018

An Amplitude Squared Equals a Probability--a Mathematical Proof

Start with a wave:

Add more waves. The waves drift in and out of phase with one another (constructive and destructive interference):

The phase difference ranges from zero degrees to 180 degrees. On average, any pair of waves is 90 degrees out of phase, giving us a sine wave and a cosine wave:

Each has an amplitude (Ao). We can add the waves together, making a complex number (equation 1). Since our intention is to isolate and square the amplitude, we need a complex conjugate (equation 2). Using Euler's identity we write the following equations:
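As a rough sketch of what equations 1 and 2 presumably look like (assuming the cosine is taken as the real part and the sine as the imaginary part; the opposite convention works just as well), Euler's identity gives:

\psi = A_0 \cos\theta + i\,A_0 \sin\theta = A_0 e^{i\theta}

\psi^* = A_0 \cos\theta - i\,A_0 \sin\theta = A_0 e^{-i\theta}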

When we square the left sides and square the right sides of equations 1 and 2, here's what we get:

It's entirely possible we don't get a probability. Instead, we could get a value greater than one. Plus we have distance units--so we need to normalize the amplitude:
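A minimal sketch of these two steps, under the assumptions above: however the squaring is organized, the product of the wave and its conjugate isolates the amplitude squared, and dividing A_0 by some normalization length N (an assumed constant chosen so the result is dimensionless and no larger than one) gives the normalized amplitude A':

\psi\,\psi^* = A_0 e^{i\theta}\,A_0 e^{-i\theta} = A_0^2

A' = \frac{A_0}{N}, \qquad 0 \le A'^2 \le 1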

Now, let's express our oscillating wave(s) in terms of Hooke's law and derive an energy (E):
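The Hooke's-law step presumably runs along these standard lines: for an oscillator with spring constant k, the restoring force is F = -kx, and the total energy of an oscillation with amplitude A_0 is the work done in stretching out to maximum displacement:

E = \int_0^{A_0} kx\,dx = \tfrac{1}{2} k A_0^2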

Below we make a couple of substitutions to derive equation 10:

At equation 11 we define the probability of E; it can range from zero to one:

Multiply both sides of equation 10 by the probability P(E), then add a pinch of algebra to get equation 16:

Equation 16 shows the probability is equal to the square of the reduced normalized amplitude. Of course we are not limited to just energy eigenvalues. We can show that any type of eigenvalue has a probability equal to the square of a normalized amplitude:

In fact, if we express energy E in terms of momentum (p), mass (m), position (x), time (t) and wave number (k), we discover that the momentum, mass, position, time and wave number each have the same probability as the energy: the square of the amplitude A'. This is due to the energy being dependent on a specific value of each of these other eigenvalues. Each term below contains one eigenvalue and constants (which don't change). So the energy eigenvalue correlates with each of these other eigenvalues, and, likewise, the probabilities correlate.
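For illustration, here are a couple of familiar relations of that kind, each linking the energy eigenvalue to one other eigenvalue plus constants (these are standard textbook formulas, not necessarily the exact terms used above):

E = \frac{p^2}{2m} = \frac{\hbar^2 k^2}{2m}, \qquad p = \hbar k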

In conclusion, it is safe to say a wave amplitude squared does not guarantee a probability; however, it looks as though the square of a normalized amplitude does.

Monday, February 19, 2018

Why the Ground State Ain't Zero

If you are familiar with Max Planck's work or Albert Einstein's photo-electric effect, then you know energy comes in discrete packets called quanta. These discrete energy levels are represented by nice, neat integers, where n = 1, 2, 3 ... But then there is this equation:
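This is presumably the energy spectrum of the quantum harmonic oscillator, which can be written as:

E_n = n h f + \tfrac{1}{2} h f = \left(n + \tfrac{1}{2}\right) h f, \qquad n = 0, 1, 2, \dots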

The above equation's second term is the ground state--but why aren't the frequency (f) and Planck's constant (h) multiplied by a nice, neat integer? Why 1/2? That's odd. And why ain't the ground state zero? Surely if you remove everything there should be nothing, nada, zip! Definitely not 1/2.

To understand what's going on, let's try a thought experiment. Imagine you want to push a couch a distance of x. You put your hands on the couch and apply zero force or energy. You gradually increase the energy applied until you reach a critical value where the couch begins to move. Let's label that critical value 1E.

Now suppose you want to push two couches along distance x. You gradually increase the applied energy to, say, 1.7E:

Unfortunately, 1.7E is not enough energy to do the work, so you gradually increase the applied energy to 2E:

Assuming the couches are identical, the critical values of energy needed to push one or two couches are 1E and 2E. The critical coefficients are integers. If we want to move n couches we need nE. Below is a diagram of our thought experiment:

Note that the energy you applied is continuous, but the critical values (in red) are discrete. Also note the lowest energy is zero, so again, where does the (1/2)fh come from? Consider a single die. It has six discrete states: 1, 2, 3, 4, 5, 6. If we add up these states, we get 21:

This time, instead of a continuous line, the following die diagram has discrete steps:

Now, let's imagine a die with an infinite number of sides (or states) ranging from zero to six. Its states are no longer discrete, but continuous. However, like the couch experiment, there are critical values marked in red (see diagram below).

Once again note the ground state is zero, but also note when we add up all those energies (see equation 3) we don't get 21 like before. We get 18! The following diagram illustrates the difference between the six-sided die and the infinite-sided die. The six-sided die clearly has more area under the curve. That extra area is indicated in gray:
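A plausible reading of equation 3: for the infinite-sided die, "adding up all those energies" amounts to taking the area under the straight line running from 0 to 6,

\int_0^6 x \, dx = \frac{6^2}{2} = 18

compared with the six-sided die's 1 + 2 + 3 + 4 + 5 + 6 = 21.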

So how do we fix this discrepancy? Let's include a y-intercept as they say in calculus parlance:

At equation 5 we use an intercept of 1/2. Doing so gives us a sum of 21--equal to the six-sided die! The diagram below illustrates this point. Note the gray areas cancelling each other:
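A sketch of the arithmetic equation 5 appears to express: shifting the line up by the intercept of 1/2 adds back exactly the missing area,

\int_0^6 \left(x + \tfrac{1}{2}\right) dx = 18 + 3 = 21

which matches the six-sided die's total.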

Also note the ground state is not zero--it's 1/2.

Thursday, February 8, 2018

How to Solve Complex Equations When Perturbation Theory Fails

Perturbation theory is useful to any mathematician or physicist who needs to solve a seemingly unsolvable equation such as the one below:

Equation 1 may be hard to solve, but its structure looks simple. Perturbation theory works fine in its case. Later, we will try to solve a much more complex equation, where perturbation theory ceases to be practical, and we will use an alternative method. But for now, let's explore the benefits and limitations of perturbation theory. To solve equation 1 we first reformat it as follows:

Then we expand x and x^2:

Next, we make substitutions:

Let's assume the following terms equal zero:

Using a little algebra and more substitutions we solve for x0, x1, and x2. When we add these x's up we get the value of the original x:
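Equation 1 isn't reproduced here, so as a stand-in, here is the same recipe applied to a standard textbook example, x^2 + \epsilon x - 1 = 0, with the assumed expansion carried to second order:

x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots

Substituting, collecting powers of \epsilon, and setting each coefficient to zero:

\epsilon^0: \; x_0^2 - 1 = 0 \;\Rightarrow\; x_0 = 1

\epsilon^1: \; 2x_0 x_1 + x_0 = 0 \;\Rightarrow\; x_1 = -\tfrac{1}{2}

\epsilon^2: \; 2x_0 x_2 + x_1^2 + x_1 = 0 \;\Rightarrow\; x_2 = \tfrac{1}{8}

so x \approx 1 - \tfrac{\epsilon}{2} + \tfrac{\epsilon^2}{8}, which agrees with the exact positive root expanded for small \epsilon.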

Not bad! But check out this monster:

OK, let's set some values for y and z and solve for x. We derive equation 19:

Should we solve it with perturbation theory? Well, let's see. We first have to expand x^5:

I don't know about you, dear reader, but I don't even want to think about expanding x^5! In our previous example we only had to expand x^2 to the second power of epsilon. Can you imagine the can of worms we would open if we expanded x^5 to two or more powers of epsilon? There's got to be an easier way ... and there is.

To solve equation 19 we use a systematic guess-and-check method. The idea is to choose a value for x that makes the left side close to zero, if not exactly zero. We first test -1, 0, and 1, and find that 1 brings the result closest to zero. Next we try 10. If 10 were better, we would try 100 and then higher powers of 10, for as long as each one brought us closer to zero. After all, the value of x could be any real number from negative infinity to positive infinity.

Now, after checking 10, we see it's no good. So we now know x is between 1 and 10. We try 2. Whoops, 1 was better. So we try 1.1 and see an improvement. Then we try 1.2--even better. We increment by tenths to 1.3. At 1.4, we overshoot. The answer is therefore around 1.3, so we try 1.29 and 1.31; i.e., we increment in hundredths. Our final answer is 1.29. If we want more accuracy, we increment in thousandths, and ever smaller powers of 10. Below is a table of our results:

The chief advantage of this method is that it is straightforward to write a computer program that yields a quick answer.
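Here is a minimal sketch of such a program in Python. Equation 19 isn't reproduced above, so the function f below is a hypothetical stand-in quintic; swap in the actual left-hand side of equation 19. The sketch assumes the root lies within the initial scan range and that |f| keeps shrinking as we close in on it, as in the walkthrough above.

def f(x):
    # Hypothetical stand-in for equation 19 -- replace with its actual left-hand side.
    return x**5 + x - 3

def guess_and_check(f, start=0.0, step=1.0, rounds=6):
    # Scan a window around the current best guess, keep whichever x makes
    # |f(x)| smallest, then shrink the increment by a factor of ten
    # (ones, tenths, hundredths, ...), as described in the post.
    best = start
    for _ in range(rounds):
        candidates = [best + i * step for i in range(-10, 11)]
        best = min(candidates, key=lambda x: abs(f(x)))
        step /= 10.0
    return best

root = guess_and_check(f)
print(root, f(root))   # prints an approximate root near 1.133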

Update: Click here for a proof of perturbation theory

Monday, February 5, 2018

Resolving the Cosmological Constant Problem

Imagine you're a quantum physicist and you just got through calculating the energy density of the vacuum. To your horror you find that your calculation is around a hundred orders of magnitude greater than vacuum energy density measurements! This, in a nutshell, is the cosmological constant problem. According to your numbers, the cosmological constant should be huge. According to WMAP measurements, the cosmological constant is too small to tweet about. In this post we shed some light on this problem. First, here are the variables we will be using:

Below we derive the cosmological constant value based on the WMAP vacuum energy density measurement of approximately 10^-10 Joules per cubic meter (see equations 3 and 4):
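One standard relation (possibly what equations 3 and 4 implement) converts a vacuum energy density into a cosmological constant:

\Lambda = \frac{8\pi G}{c^4}\,\rho_{vac}

which, for a vacuum energy density on the order of 10^-10 joules per cubic meter, gives a value of Lambda in the range of roughly 10^-53 to 10^-52 per square meter.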

The next order of business is to consider the limitations caused by the Compton wavelength formula and the Heisenberg uncertainty principle--so we derive equation 8:

There are two ways we can interpret equation 8:

1. If delta-x grows smaller, the energy (delta-E) grows bigger. This makes perfect sense when you consider that smaller spaces contain shorter wavelengths, and shorter wavelengths correspond to higher energies.

2. If delta-x grows smaller, delta-E becomes more uncertain.

In any case, we use equation 8 as our means of making sense of the cosmological constant problem.
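For reference, one plausible route to an equation of this form: start with the position-momentum uncertainty relation and use the photon-like relation E = pc, in the spirit of the Compton-wavelength step above:

\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad \Delta E = c\,\Delta p \;\Rightarrow\; \Delta E\,\Delta x \ge \frac{\hbar c}{2}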

Equation 9 below is equation 8 modified. On the left side we have the vacuum energy measurement (E) and one meter (x). Of course the product of these values is far greater than Planck's constant, so the right side has the scale factor n:

OK, here's what we need to do: we need to reduce the vacuum's volume while maintaining a constant energy density. This can be done, but only up to a point. The question becomes, up to what point? Or, if you prefer, down to what point? How small can we make the energy and space before things get weird? Here's the answer:

Equation 10 above is the formula for finding the cutoff, or, the Heisenberg uncertainty limit. The product of the energy and the distance can't go below (h-bar)c/2 without violating the uncertainty principle.

At 11 and 12 below, we plug-n-chug:
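A sketch of the arithmetic that reproduces the numbers quoted below, assuming the energy E fills a cube of side delta-x at the WMAP density rho of roughly 10^-10 joules per cubic meter:

E = \rho\,(\Delta x)^3, \qquad E\,\Delta x = \frac{\hbar c}{2} \;\Rightarrow\; (\Delta x)^4 = \frac{\hbar c}{2\rho} \approx 1.6\times 10^{-16}\ \text{m}^4

\Delta x \approx 10^{-4}\ \text{m}, \qquad V \approx 10^{-12}\ \text{m}^3, \qquad E \approx 10^{-22}\ \text{J}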

At the cutoff, the distance is around 10^-4 meters. The volume is 10^-12 cubic meters. The energy is approximately 10^-22 Joules. If the space is reduced further, the energy value will explode! With an upper limit of infinity! Obviously, if the energy skyrockets, or becomes more uncertain, the energy density won't be the same--and any calculation done at that level will yield ridiculous results.

The diagram below shows vacuum energy density matches observations at or above the cutoff. Any meaningful measurement is taken above that point. But if you prefer infinities and uncertainties, feel free to go smaller than the Heisenberg limit.

Thus, the cosmological constant problem appears to be a natural consequence of doing calculations and measurements below the cutoff limit.