
Saturday, September 29, 2018

Why a Discrete Minimum Distance Fails

In the previous post entitled "Why Strings Don't Exist", we showed it is possible to have a length shorter than the famed Planck length. The question becomes, what is the minimum discrete distance, assuming there is such a thing? Normally, we use infinitesimal points to build geometrical objects like lines, planes, rectangles, etc. What happens if we use a line with the smallest magnitude possible that is greater than zero? Before we delve into these questions, let's define the variables:

OK, let's assume the fundamental building block is a line somewhere between zero and the Planck length. We'll call it d:

Let's try building a square with d:

So far, so good. All the distances appear to be no less than d. But what is the distance along the diagonal (or hypotenuse)?

The diagonal distance is about 1.41d (d times the square root of 2). The additional distance beyond d is marked in red. Notice the diagonal is not an integer multiple of d. To make this distance, we need a distance d plus a distance less than d. There can be no distance less than d, so we can't draw the above square. Here's an idea: draw a rectangle with 3X4 d-units. Thanks to Pythagoras, the diagonal will be 5 d-units:
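The 3-4-5 check, written out as a quick Pythagorean calculation:

\[ \sqrt{(3d)^2 + (4d)^2} = \sqrt{9d^2 + 16d^2} = \sqrt{25d^2} = 5d \]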

Because all distances must be integer multiples of d, our geometry excludes any square, rectangle, or triangle whose sides or diagonals are not integer multiples of d. But at least we found one rectangle that works--or maybe not:

If we draw lines from each d to every other d, we once again have distances that are not integer multiples of d. In the example above, we have a distance (c) of roughly 3.6d. To have that distance, we need 3d plus another 0.6d. In our geometry, there's no such thing as 0.6d. Thus we can't draw the 3X4 rectangle. In fact, every rectangle and triangle we draw will have some distance that includes a fraction of d.
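To see how pervasive the problem is, here is a minimal Python sketch (my own illustrative check, not part of the original post) that measures every pair of corner points in a 3-by-4 grid of d-units and flags separations that are not whole multiples of d:

import itertools
import math

# Work in units of d.
D = 1.0

# Corner points of a 3-by-4 rectangle built from d-length segments
# (a 3x4 rectangle has a 4x5 grid of corners).
points = [(x, y) for x in range(4) for y in range(5)]

non_integer = []
for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
    dist = math.hypot(x2 - x1, y2 - y1) * D
    # Flag any separation that is not a whole-number multiple of d.
    if abs(dist - round(dist)) > 1e-9:
        non_integer.append(((x1, y1), (x2, y2), dist))

print(len(non_integer), "point pairs are NOT integer multiples of d")
print("example:", non_integer[0])  # e.g. (0,0)-(1,1): sqrt(2)*d ~ 1.414*d

Any flagged pair is a distance that would require a fraction of d, which the d-geometry forbids.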

Perhaps we'll have better luck with circles? Check this out:

If we take distance d and shape it into a circle's circumference (C), the diameter (D) will be less than d! D = d/pi (where C = d). If we try to draw any circle with d-units, we hit a brick wall. You see, the circumference is pi * D. If D is an integer multiple of d, say n * d, then the circumference pi * n * d can't be an integer multiple of d, because pi is an irrational number.
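In symbols, the two observations about the circle are:

\[ C = \pi D \;\Rightarrow\; D = \frac{C}{\pi} = \frac{d}{\pi} \approx 0.318\,d < d \]
\[ D = n\,d \;\Rightarrow\; C = \pi n\,d, \text{ which is never an integer multiple of } d \text{ since } \pi \text{ is irrational} \]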

OK, so perfect circles are out. How about imperfect circles? Perhaps we can replace pi with something more rational. Even if we do, circles have the same problem we encountered earlier:

There's always a line between two d's that is not an integer multiple of d. This is also true of any shape imaginable:

So far, things look pretty hopeless for our geometry based on d. But unfortunately, there's more pain. Let's go back to the beginning and reexamine d:

Distance d is a line along the x-axis. But what is it along the y- and z-axes, i.e., what is its cross section?

Line d has a cross section of zero magnitude! That zero magnitude is just a single point in space with zero distance! Zero distance is not allowed, so a distance-d line is not allowed. Perhaps we can convert the line into a cylinder, so the cross-section will have a d-magnitude. Let's look at our new cross section:

Oops! The cross section is shaped like a circle. We can easily draw a red line from one d to the other that isn't an integer multiple of d. Thus it is now abundantly clear that a geometry based on distance d (instead of a point) is a dismal failure.

Thursday, September 27, 2018

Why Strings Don't Exist

Let's compare the point particle and the string. A point particle's dimensions all have a zero limit. By contrast, a string has a zero limit for its cross-section dimensions and a Planck-length limit on just one space dimension. So one could ask, if the Planck length is the shortest length, then how is it that a string has shorter dimensions along its cross section? Does space follow different rules along, say, the x-axis and y-axis than it does along the z-axis? It doesn't seem likely that it would--and we will mathematically prove it, i.e., we will disprove the string. First, let's define the variables we need:

Below we define the Lorentz factor at equation 1; the de Broglie wavelength at equation 2; the Planck length at equation 3; and the Planck mass at equation 4. At equation 5 we express the Planck length in terms of the Planck mass and the de Broglie relation. The Planck mass is multiplied by c to give the maximum momentum theoretically possible: the Planck momentum.
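For reference, the standard textbook forms of these quantities are given below; I'm assuming equations 1 through 5 follow the same pattern:

\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \lambda = \frac{h}{p} \]
\[ \ell_P = \sqrt{\frac{\hbar G}{c^3}}, \qquad m_P = \sqrt{\frac{\hbar c}{G}} \]
\[ p_P = m_P c, \qquad \ell_P = \frac{\hbar}{m_P c} = \frac{\hbar}{p_P} \]

The last relation is the reduced de Broglie (Compton) wavelength evaluated at the Planck momentum, which reproduces the Planck length.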

If the momentum were larger than the Planck momentum, the Planck length would not be the shortest length possible. If equation 5 is true, then equation 10 below must also be true--but is it really?

If we check how much energy is involved in bringing, say, an electron up to the Planck momentum, we discover the amount falls far short of infinity. In fact, the energy is equivalent to only around 300 lbs. of TNT. If we could somehow squeeze more energy into the particle, surely we could increase the momentum. The result would be equation 15, where the wavelength is less than the Planck length.

Unfortunately the particle's momentum is not only restricted by the light-speed barrier, but also by the Planck temperature, which we will discuss later. Right now, let's see if the Planck units are valid. To build the Planck units, the following assumptions are made (see equations 16 & 17):

But then there's the Heisenberg uncertainty principle. At equation 19 we see that h-bar (the reduced Planck constant) isn't really the smallest value possible: h-bar/2 is smaller. If we use this smaller value, we can derive new and smaller units (see 23 and 24):

Let's plug these new units into the uncertainty relation (see 25). Now, if we assume the Planck mass is valid, then the shortest length is half the Planck length (see 26). OK, we broke the Planck length barrier. Can we do better? Yes! According to special relativity, a particle approaching light speed has an upper mass limit of infinity (see 27). If we could somehow harness the energy of the whole universe, then surely the minimum possible length would be far shorter than half the Planck length (see 28 to 30).
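A sketch of the step being described, assuming the standard uncertainty relation and a momentum spread capped at the Planck momentum:

\[ \Delta x\,\Delta p \ge \frac{\hbar}{2} \;\Rightarrow\; \Delta x \ge \frac{\hbar}{2\,\Delta p} \]
\[ \Delta p = m_P c \;\Rightarrow\; \Delta x \ge \frac{\hbar}{2\,m_P c} = \frac{\ell_P}{2} \]

And if relativity lets \( \Delta p = \gamma m_P c \) grow without bound, \( \Delta x \) can shrink well below \( \ell_P/2 \).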

OK, now is a good time to address the Planck temperature. The Planck temperature is allegedly the hottest temperature that can be achieved. If such is the case, then our particle maxes out at 300 pounds of TNT worth of energy and corresponding momentum, i.e., the Planck momentum.

The Planck temperature is defined at 31 below. Notice how equation 31 does not take relativity into account. Let's factor in relativity (see 32). At equation 33 we end up with a temperature that has an upper limit of infinity.
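The standard definition, and the relativistic modification being described, look roughly like this (a sketch, not necessarily the post's exact equations 31 to 33):

\[ T_P = \frac{m_P c^2}{k_B} = \sqrt{\frac{\hbar c^5}{G k_B^2}} \]
\[ T = \frac{\gamma\,m_P c^2}{k_B} \;\to\; \infty \quad \text{as } v \to c \]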

Some physicists have postulated that the Planck mass represents the mass of the smallest black hole possible. If such a black hole is at rest, we can certainly use equation 31 to find the Planck temperature. But what if the black hole is not at rest? Then its energy (E) would be equation 35 below. Once again we end up with a temperature greater than the Planck temperature (see 36,37). Even if we limited the energy to kinetic energy only, it seems highly probable such energy would exceed the Planck energy (Planck mass X c^2). Thus it seems probable the highest temperature would exceed the Planck temperature (see 38 & 39).

OK, we've laid the groundwork needed to disprove the strings of string theories. We begin our disproof with Gay-Lussac's Law. Using the Planck length and Boltzmann's constant and doing a bit of algebra, we derive a temperature equation at 44:

Both a string and a point particle have zero volume, since one is one-dimensional and the other zero-dimensional. Zero volume yields infinite temperature (see 45 to 47). If we assume the Planck temperature is the highest possible temperature, then one-dimensional strings (and point particles) don't exist. If the string (or point particle) is at rest, there's infinite uncertainty regarding the temperature, and thus possible temperatures that exceed the Planck temperature (see 48, 49).
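One plausible reading of the zero-volume step, assuming Gay-Lussac's proportionality between pressure and temperature together with a pressure that scales as energy per unit volume (this is my reconstruction, not necessarily equation 44 itself):

\[ P \propto T, \qquad P \sim \frac{E}{V} \;\Rightarrow\; T \propto \frac{E}{V} \;\to\; \infty \quad \text{as } V \to 0 \]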

Now let's attempt to save string theory. If we assume there is a minimum distance greater than zero, then that minimum distance should exist in all directions. Given that assumption, is it possible to build a string? For illustrative purposes, let's assume the Planck length is the shortest distance possible. Here's what we get:

At 50 above we have the temperature equation. At 51 we shrink the dimensions down to the Planck length. We end up with a small volume of matter (circled in red) rather than a string. We use this object to derive equations 52 to 54. Of course, given what we have covered in this blog post, the Planck length could be cut at least in half. However short the actual shortest distance, it is abundantly clear it doesn't make a one-dimensional string.

Tuesday, May 29, 2018

Re-normalizing Feynman Diagram Amplitudes in a Non-arbitrary Way

Quantum electrodynamics (QED) is perhaps the most precise and successful theory in all of physics. There is, as I've mentioned in previous posts, a peculiar characteristic within the theory's math: infinities keep cropping up. In this post we deal with the infinities that appear in the math when calculating Feynman-diagram amplitudes.

If you read the previous post, you recall Paul Dirac having a problem with re-normalization. He said, "I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way."

Let's see if we can re-normalize Feynman-diagram amplitudes in a non-arbitrary way. First, we define the variables:

Next, let's do a typical textbook calculation and reveal how the infinity arises. Below is the Feynman diagram we will be working with. A and A' are particle and anti-particle, respectively:

The diagram progresses from bottom to top. There are two vertices. The particle (A) and anti-particle (A'), with momenta p1 and p2, meet at the first vertex and annihilate each other. A boson (B) is released. It has an internal momentum q. At the top vertex it creates a new particle (A) and anti-particle (A') with momenta of p3 and p4.

To find the amplitude M, we need a dimensionless coupling constant (-ig) for each vertex. This coupling constant contains the fine structure constant (see equation 1). There are two vertices, so we square the coupling constant (see equation 2):
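In a common convention (assumed here; the post's equations 1 and 2 may be written differently), the coupling squared is proportional to the fine structure constant, and the two vertex factors multiply to give:

\[ g^2 \propto \alpha, \qquad (-ig)(-ig) = -g^2 \]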

To conserve momentum we use Dirac delta functions (see 3 and 4). Momenta p1 and p2 are external momenta heading in, and q is the internal momentum heading out (see 3). At 4, q is the incoming momentum and p3, p4 are the outgoing momenta.
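In standard Feynman-rule notation, assuming the usual normalization, the two vertex delta functions read:

\[ (2\pi)^4\,\delta^4(p_1 + p_2 - q), \qquad (2\pi)^4\,\delta^4(q - p_3 - p_4) \]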

For boson B's internal line we need a propagator, a factor that represents the transfer of momentum from one vertex to the other:
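For a boson of mass m_B, the textbook propagator for the internal line is usually written as follows (the sign and unit conventions here are assumptions on my part):

\[ \frac{i}{q^2 - m_B^2 c^2} \]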

We integrate over q using the following normalized measure:
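The usual normalized measure for an internal line is:

\[ \frac{d^4 q}{(2\pi)^4} \]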

We put all the pieces together to get equation 7. We begin solving the integral at equation 8:
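Assembled in the standard textbook way (a sketch under the conventions assumed above, not necessarily the post's exact equation 7):

\[ -i\mathcal{M}\,(2\pi)^4\delta^4(p_1+p_2-p_3-p_4) = \int (-ig)^2\,\frac{i}{q^2 - m_B^2 c^2}\,(2\pi)^4\delta^4(p_1+p_2-q)\,(2\pi)^4\delta^4(q-p_3-p_4)\,\frac{d^4 q}{(2\pi)^4} \]

Collapsing the q integral against the first delta function leaves one overall momentum-conserving delta function times the familiar amplitude \( \mathcal{M} = g^2 / [(p_1+p_2)^2 - m_B^2 c^2] \).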

We can solve the integral more easily if we set q equal to p3 + p4. Using some algebraic manipulation, we arrive at equation 11:

Note that at equation 11 we have a red portion and a blue portion. To get the solution at equation 12, we simply throw away the blue portion! We can just imagine Dirac rolling over in his grave. Further, equation 12 is supposed to be the probability of the event illustrated in the Feynman diagram. But probabilities are dimensionless numbers. This probability has dimensions of 1/momentum squared!

Experiments may show that equation 12 is correct within a tiny margin of error, but can the math that leads to it be more sloppy and arbitrary? Sure it can. But let's try to make it less sloppy and arbitrary. We can start by changing the normalized measure:

Next, we can recognize that momentum is conserved, so the Dirac delta functions will equal 1:

As a result, a lot of the stuff we arbitrarily threw away is now properly cancelled. We end up with equation 19:

If we evaluate the integral, we get an infinity (see 20). The good news is we can convert that infinity to the expression at 21. If we introduce a gamma probability amplitude factor, the infinity becomes a finite number at 21b.

We make a substitution at equation 22:

Throwing away the blue section at equation 22 makes logical sense if we treat that section as all the probable outcomes that could have happened but didn't happen when the observation was made. The observer saw the expression outlined in red--the eigenvalue. That eigenvalue is paired with what is supposed to be its probability amplitude. Notice that if we multiply this amplitude by the gammas in the summation, we get the probability amplitudes for all the eigenvalues that add up to infinity. As a result, the right side of equation 22 is no longer infinite. If we take the sum of squared probability amplitudes multiplied by their respective eigenvalues, we get the expectation value.

The expectation value is not what we want, however. We want the actual observed value outlined in red, so we ignore "what could have happened but wasn't observed" outlined in blue. This approach is logical instead of arbitrary.

Now, let's see what we can do to fix the dimension problem. At 23 we pull out a momentum unit and set it to one. This leads us to a new solution at 24:

At 24 we end up with an eigenvalue multiplied by a probability amplitude--and the dimensions come out right. The eigenvalue fits nicely into Einstein's energy equation:
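For reference, Einstein's energy-momentum relation:

\[ E^2 = (pc)^2 + (mc^2)^2 \]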

So we have a solution for four-dimensional spacetime. For three-dimensional space, we get equation 27:

At 27, the eigenvalue is just q, the internal momentum of the Feynman diagram. The probability of q is the same as that of the Feynman-diagram event. We obtain the probability by squaring the phi amplitude:

In conclusion, if you encounter an infinity in QED math, it is OK to discard it. It's not really arbitrary to do so, because you are only interested in what you observed. You are not interested in an infinite number of probable events you didn't observe in your experiment.

Saturday, May 26, 2018

Finding the Flaw that Necessitates Renormalization

Here's what Paul Dirac had to say about renormalization:

"Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!"

So let's see if we can find the flaw that causes infinity to appear in equations and necessitates the ad hoc method of neglecting it in an arbitrary way. First, let's define the variables:

Consider the integral below. It adds up the Coulomb potential energy between two particles. The result is infinity.
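A plausible reconstruction of the integral (assumed here, since only the divergent result is described): the Coulomb potential energy between charges q_1 and q_2, summed over every separation r from zero to infinity:

\[ \int_0^{\infty} \frac{k\,q_1 q_2}{r}\,dr = \infty \]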

If the location of each particle is uncertain and/or there is a superposition of states, we might assume that at each location there is some energy, and, if we add up each of those energies from zero to infinite r (the distance between the particles), we end up with infinite energy!

Let's assume, arguendo, there is infinite energy. We could get that result if we take the average energy and multiply it by infinity:

Of course, when we measure or observe the two particles, we find the energy is not infinite. So why did the math give us infinity? Well, notice there were no probabilities involved when we solved the integral.

Suppose we assume that, since there is an infinite number of states the particles could be in (the distance r between them can be anywhere from zero to infinity), there must be an infinite number of probabilities. Those probabilities must also add up to one. The average probability is therefore 1/infinity:

If we multiply the average energy by infinity, we get infinity, but if we multiply that by 1/infinity, we get the average energy or expectation value:

This is the same result we would get if we summed each probability and each energy eigenvalue:
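In symbols, this is just the usual expectation value:

\[ \langle E \rangle = \sum_i P_i\,E_i, \qquad \sum_i P_i = 1 \]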

When we observe and measure the energy, we get the different eigenvalues. The average energy we will observe is the expectation value. So, it makes perfect sense to multiply the absurd infinity by the average probability. After all, we want our math to agree with nature.

Now, let's consider an example from QED (quantum electrodynamics). We want to calculate the total vacuum energy or ground-state energy. One typical way of doing this is to integrate over k-space. We begin with equation 8 below and work our way to equations 14 and 15 (note: several constants, including Planck's constant, are set to one):
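The standard calculation being referenced sums the zero-point energy of every field mode; a sketch, with constants set to one, and assuming equations 8 to 15 follow the usual pattern:

\[ H = \sum_k \omega_k\left(a_k^\dagger a_k + \tfrac{1}{2}\right), \qquad E_0 = \langle 0|H|0\rangle = \sum_k \frac{\omega_k}{2} \;\to\; \infty \]

Integrating over k-space simply replaces the sum over modes with an integral over all wave vectors, which diverges.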

At 14 we see the ground-state energy is infinite. Ridiculous! At 15 we renormalize by subtracting the infinity from the total energy (H). This is exactly the kind of thing Dirac complained of, so let's take what we've learned above and apply it to this situation. We know we can get infinity by multiplying the average observed ground-state energy by infinity:

Even though we are dealing with a field instead of individual particles, let's quantize the field by imagining it is made up of individual particles--each with its own energy state and finite eigenvalue, and, more importantly, each finite energy has a probability associated with it. Also, the totality of these particles, at any point in time, has an overall state with a probability associated with it. We can imagine an infinite number of possible particle states and overall states with finite energies adding up to infinity, so there must be an infinite number of probabilities that add up to one. The average probability is, once again, 1/infinity:

We get the average ground-state energy if we multiply the infinity by the average probability:

Note that equations 20 and 22 are in agreement. So is the solution not infinity, but the expectation value or average ground-state energy? Not quite. The solution is definitely not infinity, but we are not interested in knowing the average energy. We want to know the total energy, say, in a given volume V.

So the next step is to divide the average energy by a unit volume (Vu):

Now we have an energy density. According to WMAP, the vacuum energy density is approximately what we have at equation 24. At equation 25 we multiply the density by the volume we are interested in to get the total "finite" ground-state energy.

Equation 26 shows the energy above the ground state is no longer the total Hamiltonian (H) minus infinity, but the total energy minus a finite vacuum energy.