
Showing posts with label renormalization.

Thursday, March 31, 2022

Curing Divergences without Supersymmetry and Renormalization

Abstract:

Supersymmetry or ad hoc methods such as renormalization are often used to tame infinities that result from divergent functions in quantum physics. However, SUSY particles have yet to be discovered and may be too massive to fulfill their purpose, and renormalization seems to lack mathematical rigor. Here we offer an alternative method that employs the least-action and Heisenberg uncertainty principles.

Imagine a Lagrangian with divergent terms. One strategy is to renormalize it. Simply discard the divergent terms, especially if they are infinite. However, Paul Dirac had this to say about such methods: "I must say that I am very dissatisfied with the situation because this so-called 'good theory' does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!"

Another strategy is to add superpartners that each have the same mass as their respective standard-model counterparts but make an opposite contribution to the Lagrangian. As a result, the divergence vanishes. There is, however, a problem: the symmetry of supersymmetry is broken--the superpartners are believed to be more massive than their standard-model partners. This deflates the balloon of vanishing divergences. To make matters worse, there is a complete lack of empirical evidence for these superpartners.

If renormalization seems like bad math and SUSY particles are nowhere to be found, what other options are there? How about the least-action and Heisenberg uncertainty principles? Let's first examine the least-action principle:

A particle typically takes the shortest path possible between two points. For that to happen, delta-s, at equation 1, cannot be a large, divergent quantity. It should be zero units of action, i.e., time multiplied by energy. However, the following is true:

Line 3 shows that time multiplied by energy is greater than or equal to h-bar. To get delta-s to equal zero requires steps 4 through 6:
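The gist of the missing steps, reconstructed from the surrounding text rather than transcribed from the original equations (delta-s is the borrowed action, delta-E the borrowed energy, and delta-t its lifespan), is roughly:

```latex
\Delta t \,\Delta E \;\ge\; \hbar
\qquad\Longrightarrow\qquad
\Delta t \;\approx\; \frac{\hbar}{\Delta E}
```

so a fluctuation that saturates the bound carries an action of order hbar, i.e. \(\Delta s = \Delta E\,\Delta t \approx \hbar\), which is effectively zero on any classical scale.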

At 7 we set up another substitution. The final equations are 8 and 9 below:

Equations 8 and 9 show why there's a least action principle and why energy is generally conserved. Suppose we have a conserved energy L. The divergent energy, delta-E, can be interpreted as energy borrowed from the vacuum. Because it's borrowed, it must vanish within time delta-t. The larger this energy, the shorter its lifespan. As a result, the energy L that you start with is the energy you end up with. It is conserved. Also, the action is the least action.

At equation 10 we have a Lagrangian where there is no borrowed energy. Because no energy is borrowed, time delta-t is infinite. In other words, this scenario can last indefinitely and create the impression that energy is always conserved.

At equation 11 we have the opposite extreme: a Lagrangian that diverges to infinity. The good news is delta-t is zero, which shows that infinite borrowed energy does not exist. We can also infer that large borrowed energies exist for too short a time to be meaningfully observed and measured, so the energy we do observe and measure is small by comparison. Thus, renormalization works despite its ad hoc nature because nature wipes out divergences by means of the uncertainty principle and least action. The only time it is appropriate to keep the divergent terms is when divergent energy is added to the system rather than borrowed from nothing.
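The inverse relationship between borrowed energy and lifespan can be sketched numerically. Treating the uncertainty bound as an equality is an assumption of this sketch, not a derivation:

```python
import numpy as np

# Reduced Planck constant, in joule-seconds.
HBAR = 1.054571817e-34

def borrowed_lifespan(delta_E):
    """Lifespan of a vacuum fluctuation that saturates the
    uncertainty bound: delta_t = hbar / delta_E. Treating the
    bound as an equality is an assumption of this sketch."""
    return HBAR / delta_E

# Borrowed energies spanning many orders of magnitude (joules).
energies = np.logspace(-20, 20, 9)
lifespans = borrowed_lifespan(energies)

# Larger borrowed energy means a shorter lifespan, matching the
# reading of equations 10 and 11: zero borrowed energy can persist
# indefinitely, while infinite borrowed energy lasts no time at all.
assert np.all(np.diff(lifespans) < 0)
```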

Now, let's suppose L is a Lagrangian for vacuum energy (see equation 12). A Higgs boson (m-sub-H) pops into existence and has a lifespan of t-sub-H. A too-large Higgs mass would have a lifespan too short to provide a meaningful opportunity to observe it, so the mass we are most likely to observe is a smaller mass.

More examples: Equation 13 below takes into account multiple particles. Equation 14 takes into account a Lagrangian or function with multiple terms and parameters.

Since delta-s must be zero to minimize the action, then delta-s along D dimensions must also be zero. Further, both delta-s and s have units of momentum multiplied by position. If we integrate over position and/or momentum space, the following must be true:

The uncertainty of knowing a particle's position is cancelled by knowing its momentum and vice versa. As a result, the particle's action is minimized along with its position path and momentum.

In conclusion, divergences are tamed if the least-action and uncertainty principles are applied. SUSY particles are not needed and ad hoc methods such as renormalization can be set aside.


Thursday, December 5, 2019

Introducing Stochastic Trigonometry for Quantum Physics and Statistical Mechanics

In the field of quantum physics, each eigenvalue has an eigenvector, and when the eigenvector is normalized and squared, we get the probability for the eigenvalue. The normalized eigenvector is sometimes referred to as the probability amplitude.

When all the probability amplitudes are squared and added, the total should be 1. We can represent this with the Pythagorean theorem and the right triangle below:
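The two-amplitude case can be checked numerically; here a = cos(theta) and b = sin(theta), as in the right-triangle diagram:

```python
import numpy as np

# Two probability amplitudes parameterized by an angle theta, as in
# the right-triangle diagram: a = cos(theta), b = sin(theta).
theta = np.linspace(0.0, 2.0 * np.pi, 361)
a = np.cos(theta)
b = np.sin(theta)

# Squared amplitudes are probabilities; by the Pythagorean theorem
# they total 1 for every angle.
total = a ** 2 + b ** 2
assert np.allclose(total, 1.0)
```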

The above diagram consists of two probability amplitudes: 'a' and 'b.' One is a wave function cos(theta) and the other is a wave function sin(theta).

Now, what if there are more than two eigenvalues/eigenvectors? The diagram below shows that a and b can be broken up into smaller, more numerous probability amplitudes. As before, when they are all squared and summed, they give a total of 1.

It is possible to break up 'a' and 'b' into as many pieces as we like. Below we focus on amplitude 'a':

We can imagine breaking up amplitude 'a' into as many as an infinite number of sub-amplitudes. This can be done in both Euclidean and curved space. Equation 10 below shows how amplitude 'a' and its sub-amplitudes are invariant within flat or curved space.

With a little algebra, we can derive equation 14:

Equation 14 shows amplitude 'a' consists of an infinite number of eigenvalues (eta), each with its own probability (P(eta)). Without the probabilities, the etas would add up to infinity, and that would necessitate some sort of re-normalization technique. If we assume, however, that all quantum numbers have a probability, we will not get infinity; rather, we get the expectation value, i.e., the value actually observed.

What kind of probability values yield a finite result when eta increases linearly to infinity? Probability values that decrease exponentially. Below we derive such a probability function by using the natural-log function and converting eta to 'n':
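The underlying claim, that exponential decay tames linear growth, can be checked numerically; the decay rate below is an arbitrary choice for illustration:

```python
import numpy as np

n = np.arange(1, 2001, dtype=float)

# Exponentially decaying probabilities; the decay rate of 1/2 is an
# arbitrary choice for illustration.
r = np.exp(-0.5)
weighted = n * r ** n

# Unweighted, the linearly growing etas sum without bound; weighted,
# the series converges to the closed form: sum n*r^n = r / (1 - r)^2.
total = weighted.sum()
assert np.isclose(total, r / (1.0 - r) ** 2)
```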

At 16.2 we have a probability function that reduces the probability exponentially. It gives us a number between 0 and 1, but we can derive a better function with the same range and then make a substitution. The end result is equation 16.9:

Equation 16.9 claims that if n = Q, the probability of Q (P(Q)) equals the definite integral of the probability function over a range from Q-1 to Q. We can further justify this claim with the diagram below, which shows the relation between discrete values (in red) and continuous values (blue line).

Note how the area under the blue line, say, from Q-1 to Q is the same as the area of the red squares from Q-1 to Q. Equation 17 models the fact that the area under the blue line is the same as the area of the red squares over the entire range.

Now, to get a finite expectation value (amplitude 'a') we could combine equations 16.9 and 17, but the math would be more complicated than need be. To simplify the math we will encounter later, let's first stretch the above diagram vertically:

Next, we draw a yellow line from zero to N+1. This new line is going to make our lives easier and has the same area beneath it as the red line. Wouldn't it be nice if we could nix the red and substitute the yellow? Sure! But first we have to rotate the diagram:

Ah ... now we're in business! Below is the adjusted diagram and equation 18 with a new slope of N/(N+1):
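The yellow-line trick behind equation 18 can be checked directly. This sketch assumes the red staircase is N unit-width columns of heights 1 through N, which is a reading of the diagrams rather than a transcription:

```python
# Area of the red staircase: N unit-width columns of heights 1..N.
def staircase_area(N):
    return sum(range(1, N + 1))

# Area under the yellow line y = (N / (N + 1)) * x from x = 0 to
# x = N + 1: a triangle with base N + 1 and height N.
def triangle_area(N):
    return 0.5 * (N + 1) * N

# The two areas agree exactly for every N, which is why a line of
# slope N / (N + 1) can replace the staircase without changing the area.
for N in (1, 5, 100, 10_000):
    assert staircase_area(N) == triangle_area(N)
```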

The integral has a new range of zero to N+1, so we give the probability integral the same range:

Let's combine equations 18 and 19 to get 20:

If the limit of N is infinity, equation 20 will always give us the finite probability amplitude 'a.' No re-normalization required.

Using the diagram below and equations 21 and 22, we can derive a formula that finds probability densities:

What we've covered so far allows us to find probabilities for integer values. This works fine if the value is, for example, the number of vertices in a Feynman diagram. However, energy can have values of n + 0.5. Below is the math for that circumstance:

Notice that if we divide both sides of equation 26 by Q + 0.5 and use summation signs, we arrive at equation 23, the formula for finding probability densities.

Now that we have the math the way we want it, let's put it to a test. Let's say we want to add up an infinite number of quantum numbers to get a finite value. Let's assume that the principle of least action applies: the most probable value will be the least action (e.g. least energy, least time, least distance, least resources required, etc.). The least probable value will be the action or event that requires the most resources, time, energy, etc. So we expect the probability to drop exponentially as the value of 'n' increases linearly--this will ensure a finite result.

Let's also assume that experiments confirm that probabilities change according to equation 27:

OK, now we only have to do some complicated math to find the expectation value 'a,' right? Wrong! At 28 and 29 below we convert the right side of 27 to a natural exponent function. If we look at equation 20, it becomes obvious that we can solve this problem by mere inspection. Looking at the exponent, everything to the right of -n is 1/a. Thus equation 31 is our final result.
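Equation 31's reading, that the expectation value comes out to 'a' itself, matches the continuous exponential density. Here is a numerical check under that assumption; the value of a is chosen arbitrarily:

```python
import numpy as np

# Arbitrary positive parameter; equation 31 reads the expectation
# value of the exponentially decaying distribution as 'a' itself.
a = 2.7

# Continuous exponential probability density p(x) = (1/a) exp(-x/a),
# an assumed continuum version of the probability function in the text.
x = np.linspace(0.0, 60.0 * a, 1_000_000)
dx = x[1] - x[0]
p = np.exp(-x / a) / a

# The density integrates to 1, and its expectation value comes out to a.
norm = p.sum() * dx
mean = (x * p).sum() * dx
assert abs(norm - 1.0) < 1e-3
assert abs(mean - a) < 1e-2
```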

Here's another test: What is the probability that a particle will travel a distance 'Q' along a pathway 'omega'? Equation 34 below can answer that. At 32 we assume that each pathway has the same probability if the distance traveled is constant, since the action is the same along each pathway (except for the direction, angles, curves, twists, turns, etc.).

Equation 35 gives us a definite answer if we want to know the probability density of a range of distances and pathways the particle can travel:

As you can see, stochastic trigonometry simplifies mathematics that can turn into a complicated, ugly, and infinite mess. It can also improve statistical mechanics' coarse-graining technique:

Why use squares when you can use triangles?

Update: The following math shows that both a convergent series and a divergent series can yield a finite number 'q.' First, we start with a divergent series and make it convergent by using the probability function we derived above.

Next, we take a divergent series and assume the coefficients (the c's) don't add up to 1. Each could be any finite size; they could be a random series. The strategy is to factor out 'c' from the coefficients and use one of Ramanujan's techniques:
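The Ramanujan technique alluded to is presumably Ramanujan summation, which assigns a finite regularized value to certain divergent series; the canonical instance is

```latex
1 + 2 + 3 + 4 + \cdots \;=\; -\tfrac{1}{12} \quad (\mathfrak{R})
```

where the \((\mathfrak{R})\) marks a Ramanujan-summed value rather than an ordinary limit. Factoring a common c out of arbitrary finite coefficients and applying such a regularized value to the remaining series is one way to read the update's claim.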

Another update: The following math generalizes the idea that a finite value can result from any arbitrary convergent or divergent series:

Tuesday, May 29, 2018

Re-normalizing Feynman Diagram Amplitudes in a Non-arbitrary Way

Quantum electrodynamics (QED) is perhaps the most precise and successful theory in all of physics. There is, as I've mentioned in previous posts, a peculiar characteristic within the theory's math: infinities keep cropping up. In this post we deal with the infinities that appear in the math when calculating Feynman-diagram amplitudes.

If you read the previous post, you recall Paul Dirac having a problem with re-normalization. He said, "I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way."

Let's see if we can re-normalize Feynman-diagram amplitudes in a non-arbitrary way. First, we define the variables:

Next, let's do a typical textbook calculation and reveal how the infinity arises. Below is the Feynman diagram we will be working with. A and A' are particle and anti-particle, respectively:

The diagram progresses from bottom to top. There are two vertices. The particle (A) and anti-particle (A'), with momenta p1 and p2, meet at the first vertex and annihilate each other. A boson (B) is released. It has an internal momentum q. At the top vertex it creates a new particle (A) and anti-particle (A') with momenta of p3 and p4.

To find the amplitude M, we need a dimensionless coupling constant (-ig) for each vertex. This coupling constant contains the fine structure constant (see equation 1). There are two vertices, so we square the coupling constant (see equation 2):
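A common textbook convention relates the coupling to the fine structure constant via g squared = 4 pi alpha; the post's exact equation 1 may use a different normalization, so treat the numbers below as an illustration:

```python
import math

# CODATA value of the fine structure constant (approximate).
alpha = 7.2973525693e-3

# A common textbook convention: g**2 = 4 * pi * alpha. The post's
# equation 1 may normalize differently.
g = math.sqrt(4.0 * math.pi * alpha)  # roughly 0.303

# Each vertex contributes a factor of -i*g, so with two vertices the
# amplitude carries (-i*g)**2, whose magnitude is g**2.
two_vertex_magnitude = g ** 2
assert math.isclose(two_vertex_magnitude, 4.0 * math.pi * alpha)
```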

To conserve momentum we use the Dirac delta function (see 3 and 4). Momenta p1 and p2 are external momenta heading in, and q is the internal momentum heading out (see 3). At 4, q is incoming momentum and p3, p4 are outgoing momenta.

For boson B's internal line we need a propagator, a factor that represents the transfer of momentum from one particle to another:
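For a scalar boson B of mass m-sub-B carrying internal four-momentum q, the standard propagator takes the form below; this is the usual textbook convention, which the post's equation presumably follows:

```latex
\frac{i}{q^{2} - m_{B}^{2} + i\epsilon}
\qquad\text{(natural units, } \hbar = c = 1\text{)}
```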

We integrate over q using the following normalized measure:

We put all the pieces together to get equation 7. We begin solving the integral at equation 8:

We can solve the integral more easily if we use conservation of momentum to set q equal to p3 + p4. Using some algebraic manipulation, we arrive at equation 11:

Note that at equation 11 we have a red portion and a blue portion. To get the solution at equation 12, we simply throw away the blue portion! We can just imagine Dirac rolling over in his grave. Further, equation 12 is supposed to be the probability of the event illustrated in the Feynman diagram. But probabilities are dimensionless numbers. This probability has dimensions of 1/momentum squared!

Experiments may show that equation 12 is correct within a tiny margin of error, but can the math that leads to it be more sloppy and arbitrary? Sure it can. But let's try to make it less sloppy and arbitrary. We can start by changing the normalized measure:

Next, we can recognize that momentum is conserved, so the Dirac delta functions will equal 1:

As a result, a lot of the stuff we arbitrarily threw away is now properly cancelled. We end up with equation 19:

If we evaluate the integral, we get an infinity (see 20). The good news is we can convert that infinity to the expression at 21. If we introduce a gamma probability amplitude factor, the infinity becomes a finite number at 21b.

We make a substitution at equation 22:

If we throw away the blue section at equation 22, it makes logical sense when you treat that section as all the probable outcomes that could have happened but didn't happen when the observation was made. The observer saw the expression outlined in red--the eigenvalue. That eigenvalue is paired with what is supposed to be its probability amplitude. Notice if we multiply this amplitude by the gammas in the summation, we get the probability amplitudes for all the eigenvalues that add up to infinity. As a result, the right side of equation 22 is no longer infinite. If we take the sum of squared probability amplitudes multiplied by their respective eigenvalues we get the expectation value.

The expectation value is not what we want, however. We want the actual observed value outlined in red, so we ignore "what could have happened but wasn't observed" outlined in blue. This approach is logical instead of arbitrary.

Now, let's see what we can do to fix the dimension problem. At 23 we pull out a momentum unit and set it to one. This leads us to a new solution at 24:

At 24 we end up with an eigenvalue multiplied by a probability amplitude--and the dimensions come out right. The eigenvalue fits nicely into Einstein's energy equation:

So we have a solution for four-dimensional spacetime. For three-dimensional space, we get equation 27:

At 27, the eigenvalue is just q, the internal momentum of the Feynman diagram. The probability of q is the same as the Feynman-diagram event. We obtain the probability by squaring the phi amplitude:

In conclusion, if you encounter an infinity in QED math, it is OK to discard it. It's not really arbitrary to do so, because you are only interested in what you observed. You are not interested in an infinite number of probable events you didn't observe in your experiment.