
Monday, February 19, 2018

Why the Ground State Ain't Zero

If you are familiar with Max Planck's work or Albert Einstein's photo-electric effect, then you know energy comes in discrete packets called quanta. These discrete energy levels are represented by nice, neat integers, where n = 1, 2, 3 ... But then there is this equation:
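
For reference, the formula in question is presumably the quantized-oscillator energy:

$$E_n = nhf + \tfrac{1}{2}hf, \qquad n = 0, 1, 2, \ldots$$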

The above equation's second term is the ground state--but why aren't the frequency (f) and Planck's constant (h) multiplied by a nice, neat integer? Why 1/2? That's odd. And why ain't the ground state zero? Surely if you remove everything there should be nothing, nada, zip! Definitely not 1/2.

To understand what's going on, let's try a thought experiment. Imagine you want to push a couch a distance of x. You put your hands on the couch, applying no energy at first, then gradually increase the applied energy until you reach a critical value where the couch begins to move. Let's label that critical value 1E.

Now suppose you want to push two couches along distance x. You gradually increase the applied energy to, say, 1.7E:

Unfortunately, 1.7E is not enough energy to do the work, so gradually increase the applied energy to 2E:

Assuming the couches are identical, the critical values of energy needed to push one or two couches are 1E and 2E. The critical coefficients are integers. If we want to move n couches we need nE. Below is a diagram of our thought experiment:

Note that the energy you applied is continuous, but the critical values (in red) are discrete. Also note the lowest energy is zero, so again, where does the (1/2)fh come from? Consider a single die. It has six discrete states: 1, 2, 3, 4, 5, 6. If we add up these states, we get 21:

This time, instead of a continuous line, the following die diagram has discrete steps:

Now, let's imagine a die with an infinite number of sides (or states) ranging from zero to six. Its states are no longer discrete, but continuous. However, like the couch experiment, there are critical values marked in red (see diagram below).

Once again note the ground state is zero, but also note when we add up all those energies (see equation 3) we don't get 21 like before. We get 18! The following diagram illustrates the difference between the six-sided die and the infinite-sided die. The six-sided die clearly has more area under the curve. That extra area is indicated in gray:
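
Spelled out, the two totals being compared are:

$$1 + 2 + 3 + 4 + 5 + 6 = 21, \qquad \int_0^6 x\,dx = \frac{6^2}{2} = 18$$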

So how do we fix this discrepancy? Let's include a y-intercept, to borrow a term from analytic geometry:

At equation 5 we use an intercept of 1/2. Doing so gives us a sum of 21--equal to the six-sided die! The diagram below illustrates this point. Note the gray areas cancelling each other:
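
Worked out explicitly:

$$\int_0^6 \left(x + \tfrac{1}{2}\right)dx = 18 + 3 = 21$$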

Also note the ground state is not zero--it's 1/2.

Thursday, February 8, 2018

How to Solve Complex Equations When Perturbation Theory Fails

Perturbation theory is useful to any mathematician or physicist who needs to solve a seemingly unsolvable equation such as the one below:

Equation 1 may be hard to solve, but its structure looks simple, and perturbation theory works fine on it. Later, we will try to solve a much more complex equation, where perturbation theory ceases to be practical, and we will use an alternative method. But for now, let's explore the benefits and limitations of perturbation theory. To solve equation 1 we first reformat it as follows:

Then we expand x and x^2:

Next, we make substitutions:

Let's assume the following terms equal zero:

Using a little algebra and more substitutions we solve for x0, x1, and x2. When we add these x's up we get the value of the original x:
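
As a minimal sketch of this bookkeeping (using a stand-in equation, x^2 + epsilon*x - 1 = 0, rather than equation 1 itself), the expansion to second order in epsilon goes like this:

$$
\begin{aligned}
x &= x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots \\
\epsilon^0 &: \; x_0^2 - 1 = 0 \;\Rightarrow\; x_0 = 1 \\
\epsilon^1 &: \; 2x_0 x_1 + x_0 = 0 \;\Rightarrow\; x_1 = -\tfrac{1}{2} \\
\epsilon^2 &: \; 2x_0 x_2 + x_1^2 + x_1 = 0 \;\Rightarrow\; x_2 = \tfrac{1}{8} \\
x &\approx 1 - \tfrac{\epsilon}{2} + \tfrac{\epsilon^2}{8}
\end{aligned}
$$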

Not bad! But check out this monster:

OK, let's set some values for y and z and solve for x. We derive equation 19:

Should we solve it with perturbation theory? Well, let's see. We first have to expand x^5:

I don't know about you, dear reader, but I don't even want to think about expanding x^5! In our previous example we only had to expand x^2 to the second power of epsilon. Can you imagine the can of worms we will open if we expand x^5 to two or more powers of epsilon? There's got to be an easier way ... and there is.

To solve equation 19 we use a systematic guess-and-check method. The idea is to choose a value for x that gives an answer as close to zero as possible, if not exactly zero. We first test -1, 0, and 1, and see that 1 brings us closest to zero. We then try 10. If 10 is better, we try 100, and ever higher powers of 10, for as long as we keep getting closer to zero. After all, the value of x could be any real number from negative infinity to positive infinity.

Now, after checking 10, we see it's no good, so we know x is between 1 and 10. We try 2. Whoops, 1 was better. So we try 1.1 and see an improvement. Then we try 1.2--even better. We increment by tenths to 1.3. At 1.4, we overshoot. The answer is therefore around 1.3, so we try 1.29 and 1.31, i.e., we increment in hundredths. Our final answer is 1.29. If we want more accuracy, we increment in thousandths, and ever smaller powers of 10. Below is a table of our results:

The chief advantage of this method is that it is easy to write a computer program that yields a quick answer.
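
Here is a minimal Python sketch of that search; f(x) below is just a stand-in example (x^5 + x - 5 = 0), so swap in the left-hand side of equation 19 to use it for real.

```python
# A minimal sketch of the guess-and-check search described above.
# f(x) is a hypothetical stand-in, since equation 19 is not restated here.
def f(x):
    return x**5 + x - 5

def guess_and_check(f, lo=-10.0, hi=10.0, digits=6):
    """Coarse scan of the interval, then refine the best guess one decimal place at a time."""
    step = (hi - lo) / 100
    best = min((lo + i * step for i in range(101)), key=lambda x: abs(f(x)))
    for _ in range(digits):
        step /= 10
        best = min((best + k * step for k in range(-10, 11)), key=lambda x: abs(f(x)))
    return best

root = guess_and_check(f)
print(root, f(root))   # approximate root, residual close to zero
```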

Update: Just for fun and to satisfy curiosity, here is a proof of the perturbation expansion:

We can use the foregoing information to prove the Taylor expansion of the natural log:

Click here for a wonderfully straightforward and concise primer on perturbation theory.

Monday, February 5, 2018

Resolving the Cosmological Constant Problem

Imagine you're a quantum physicist and you just got through calculating the energy density of the vacuum. To your horror you find that your calculation is around a hundred orders of magnitude greater than vacuum energy density measurements! This, in a nutshell, is the cosmological constant problem. According to your numbers, the cosmological constant should be huge. According to WMAP measurements, the cosmological constant is too small to tweet about. In this post we shed some light on this problem. First, here are the variables we will be using:

Below we derive the cosmological constant value based on the WMAP vacuum energy density measurement of approximately 10^-10 Joules per cubic meter (see equations 3 and 4):
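
As a rough cross-check, assuming the standard relation between the cosmological constant and the vacuum energy density, lambda = 8*pi*G*rho/c^4:

```python
# Rough cross-check, assuming the standard relation lambda = 8*pi*G*rho/c^4.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
rho = 1e-10       # WMAP-era vacuum energy density quoted above, J/m^3

lam = 8 * math.pi * G * rho / c**4
print(f"cosmological constant ~ {lam:.1e} per square meter")   # roughly 2e-53 m^-2
```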

The next order of business is to consider the limitations caused by the Compton wavelength formula and the Heisenberg uncertainty principle--so we derive equation 8:

There are two ways we can interpret equation 8: 1. If delta-x grows smaller, energy (delta-E) grows bigger. This makes perfect sense when you consider that smaller spaces contain shorter wavelengths, and shorter wavelengths correspond to higher energies. 2. If delta-x grows smaller, delta-E becomes more uncertain. In any case, we use equation 8 as our means to make sense of the cosmological constant problem.
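
A relation of this kind follows from combining the position-momentum uncertainty principle with E = pc for a light-like quantum (this is one plausible reading of equation 8):

$$\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad \Delta E = c\,\Delta p \quad\Rightarrow\quad \Delta E\,\Delta x \ge \frac{\hbar c}{2}$$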

Equation 9 below is equation 8 modified. On the left side we have the vacuum energy measurement (E) and one meter (x). Of course the product of these values is far greater than Planck's constant, so the right side has the scale factor n:

OK, here's what we need to do: we need to reduce the vacuum's volume while maintaining a constant energy density. This can be done, but only up to a point. The question becomes, up to what point? Or, if you prefer, down to what point? How small can we make the energy and space before things get weird? Here's the answer:

Equation 10 above is the formula for finding the cutoff, or, the Heisenberg uncertainty limit. The product of the energy and the distance can't go below (h-bar)c/2 without violating the uncertainty principle.

At 11 and 12 below, we plug-n-chug:

At the cutoff, the distance is around 10^-4 meters, the volume is 10^-12 cubic meters, and the energy is approximately 10^-22 Joules. If the space is reduced further, the energy value explodes, with no upper limit! Obviously, if the energy skyrockets, or becomes more uncertain, the energy density won't be the same--and any calculation done at that level will yield ridiculous results.
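
Here is a quick numerical check of those cutoff figures, assuming the bound takes the form delta-E times delta-x >= (h-bar)c/2 with the energy density held fixed at the measured value:

```python
# Sanity check of the cutoff numbers (assumes the bound dE*dx >= hbar*c/2).
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
rho = 1e-10        # vacuum energy density, J/m^3

dx = (hbar * c / (2 * rho)) ** 0.25   # solve rho * dx^4 = hbar*c/2 for dx
volume = dx**3
energy = rho * volume
print(f"dx ~ {dx:.1e} m, volume ~ {volume:.1e} m^3, energy ~ {energy:.1e} J")
# prints roughly 1e-4 m, 1e-12 m^3, 1e-22 J -- matching the figures above
```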

The diagram below shows that the vacuum energy density matches observations at or above the cutoff. Any meaningful measurement is taken above that point. But if you prefer infinities and uncertainties, feel free to go smaller than the Heisenberg limit.

Thus, the cosmological constant problem appears to be a natural consequence of doing calculations and measurements below the cutoff limit.

Tuesday, January 30, 2018

An Incredibly Simple Formula For Removing Unwanted Infinities

Reality says, "It's finite," but your math says, "It's infinite." Why? And is there a simple way to solve the problem? Today we present an incredibly simple formula for removing unwanted infinities. First, let's define the variables we will use:

Now that we've got that out of the way, let's examine why infinities occur. One way to get infinity is to add up an infinite number of numbers, as the following integral-summation shows:

You may have noticed something missing in equation 1 above. The thing that's missing makes the infinity finite:

The missing ingredient is none other than dx -- a really tiny number. When the infinity is multiplied by dx it magically becomes a finite number.
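
A tiny numerical illustration: the bare heights add up to something that grows without bound, but weighting each one by dx pins the total to a finite value.

```python
# The bare sum of the heights diverges as N grows; the dx-weighted sum converges.
N = 1_000_000
dx = 1.0 / N
heights = [i * dx for i in range(N)]        # f(x) = x sampled on [0, 1)

unweighted = sum(heights)                   # ~ N/2: blows up as N -> infinity
weighted = sum(h * dx for h in heights)     # ~ 0.5: the Riemann sum of the integral of x dx from 0 to 1
print(unweighted, weighted)
```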

So one way to rid ourselves of an unwanted infinity is to multiply it by dx or something equivalent. Equation 5 below is a general formula that always provides a finite value by canceling infinity should one arise.

The first factor is equivalent to dx. The second factor could be equivalent to an infinite sum. Let's apply equation 5 to a classical example. Suppose we have a lattice with N cells. Each lattice cell contains some fraction (epsilon) of the total energy (E):

We could just add up all the parts to get the total energy (E):

We could calculate the average energy per cell (bar-epsilon*E) and multiply that value by the number of cells (N) to get the total energy:

But let's see what happens when we apply our new formula:

At equations 13 and 14, the dx equivalent is simply 1, which is often true in a classical case. It's a bit redundant, but it yields the right answer, as do the more familiar methods above. In this example, it's like counting bricks to get the total. Very simple but effective. It's effective because the average energy per cell is simply the total energy divided by N--the number of cells. Thus, the bar-epsilon in our formula is the reciprocal of N--and the two variables cancel, leaving the total energy E. But look what happens when bar-epsilon doesn't equal the reciprocal of N:

In the above example we have infinite or infinitely uncertain energy in each cell, but the bar-epsilons and N's cancel, so the uncertainty and/or infinity is nullified and we end up with the finite energy E.
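
To keep the bookkeeping concrete, here is a small numerical sketch of the classical version of this cancellation, where each cell holds the fraction 1/N of the total energy E (only the cancellation is illustrated; equation 5 itself is not restated here):

```python
# Numerical sketch of the classical lattice bookkeeping: when the average
# fraction per cell is 1/N, the N and epsilon-bar factors cancel, leaving E.
E = 100.0                        # total energy, arbitrary units
N = 16                           # number of lattice cells
fractions = [1.0 / N] * N        # each cell holds an equal share of E

eps_bar = sum(fractions) / N                         # average fraction = 1/N
total_by_counting = sum(f * E for f in fractions)    # add up all the parts
total_by_average = N * eps_bar * E                   # average per cell times N
print(total_by_counting, total_by_average)           # both equal E = 100.0
```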

The following example shows how infinity can rear its ugly head in a seemingly classical lattice.

Here's a data table and chart for different values of n:

Notice how the density increases as n decreases. As n increases the density levels off. The same is true for a 3D lattice:

When n is increased to infinity, the density is virtually constant and behaves like bricks. Increasing and decreasing the number of bricks does not change their mass density.

When n is reduced to zero, the density blows up to infinity! No more brick-density behavior:

We can translate the lattice equation into something resembling perturbation theory:

At equation 29 above we take epsilon to infinity (which is equivalent to taking n to zero) and the answer is 1 + infinity. We can use a famous ad hoc method to rid ourselves of the infinity: simply subtract it or ignore it. By pretending it isn't there, we get a legitimate finite energy value. On the other hand, it would make more mathematical sense to reduce epsilon to zero:

Let's apply our new formula to this problem. We begin with the original lattice equation and reduce n to zero to get the infinite result. At equation 33 we multiply the infinite density by the lattice volume to get the total energy--which is also infinite.

At 33b we restate the formula for convenience. At 34 we plug in the values we get from equation 33. At 35 we see the total energy is no longer infinite, but the correct finite value.

Now, let's consider an even more classical setup. Here's the lattice diagram:

Surely the density should always behave like bricks, since we have one dot per cell. Sixteen dots in sixteen cells give the same density as one dot in one cell:

But even bricks have a limit. If you divide a brick up into smaller and smaller pieces, eventually you end up with something that no longer resembles a brick. We can simulate this by reducing n to 1, and k to 0 (see 39 and 40). Once again we get the dreaded infinity--but we can fix that using the new formula (see 41 through 44).

How about a real-world example? At 45 below we set the total energy of two electrons equal to their Coulomb energy at a fixed radius (ro). You will note the total energy (2Ee) is quantized at a fixed integer value (no).

If the radius were to shrink to zero, one would think the energy would explode to infinity. To solve this problem we do some algebra to derive equation 52 below:

At 51 and 52 we see the radius cancelling itself. The total energy of the electrons is determined by the quantum number n. Apparently it is possible to have anywhere from zero to infinite Coulomb energy (or force) without impacting the overall energy. The crude diagrams below shed some light on this:

The diagrams show two Coulomb energies, E1 and E2, with radii of r1 and r2, respectively. There is a unit area A. Over a large surface area there are more units of A, and vice versa. There are two circles, each representing a different surface area. Using this information we derive equation 58:

Equation 58 tells us that a large Coulomb energy spread over a small surface area gives the same total as a small Coulomb energy spread over a large surface area: the total energy is the Coulomb energy per unit area A times the surface area. The total energy is, of course, the total energy of the two electrons--and that energy is finite and conserved! This result is consistent with our formula:

If we plug in the following values and do the math, we get the finite energy of the two electrons:

Thus, we now have an incredibly simple and redundant formula that shows the connection between the parts and the whole at the classical and quantum levels.

Update: Below is a proof of the formula: