
Saturday, October 14, 2017

Proving the Schwarz Inequality and Heisenberg's Uncertainty Principle

In this post we once again derive the Heisenberg uncertainty principle, but this time we make use of the Schwarz inequality and the position-momentum commutator. We begin our proof by defining the variables:

Below we have the Schwarz inequality:

Is it true? Let's prove it. At lines 2 and 3 we map inner products with multiple (n) dimensions to simpler 2D Pythagorean expressions. At 4 through 7 we define the bras and kets and their inner products in terms of a, b, c, d.

Next, we take the products of the inner products and derive 11 below:

At 12 and 13 we convert the variables ad and bc to x and x+h. You may recognize h from calculus texts. In this case, it is just any arbitrary number. At 14 and 15 we do a little algebra to get 16:

At 16 it is obvious that h^2 is greater than or equal to zero. Thus, the Schwarz inequality is true.
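Since the numbered equations are not reproduced here, a minimal sketch of the 2D argument, assuming real components a, b, c, d:

```latex
% Schwarz inequality for 2D vectors (a,b) and (c,d):
% (a^2 + b^2)(c^2 + d^2) \ge (ac + bd)^2
% Expand the difference of the two sides:
(a^2 + b^2)(c^2 + d^2) - (ac + bd)^2 = (ad - bc)^2
% Substitute ad = x and bc = x + h:
(ad - bc)^2 = \big(x - (x + h)\big)^2 = h^2 \ge 0
```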

Before we put it to work, we need to define energy (E) and time (t). E is the lowest possible energy (ground-state) and t is the reciprocal of frequency (f). We sandwich these between the normalized bras and kets at 20. That brings us to the energy-time uncertainty at 21.
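One plausible reconstruction of that step (an assumption, since equations 17 through 21 are not shown), taking the ground-state energy to be E = hf/2 and t = 1/f:

```latex
E\,t = \frac{hf}{2}\cdot\frac{1}{f} = \frac{h}{2} \ge \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta E\,\Delta t \ge \frac{\hbar}{2}
```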

At 22 and 23 we do a quick-and-dirty derivation of the momentum-position uncertainty:
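One plausible reconstruction of that quick-and-dirty step: substitute E = pv and t = x/v into the energy-time relation, so that

```latex
E\,t = (p\,v)\left(\frac{x}{v}\right) = p\,x
\quad\Longrightarrow\quad
\Delta p\,\Delta x \ge \frac{\hbar}{2}
```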

We can also find the momentum-position uncertainty by making use of its commutator and a wave function (psi). At 24 we define the commutator; at 25 we define momentum (p). After making a substitution for p at 26, we do some more algebra until we get the desired outcome at 30.
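The computation behind those steps can be sketched as follows, using the standard operators (the original equations 24 through 30 are not reproduced):

```latex
\hat{p} = -i\hbar\frac{\partial}{\partial x},\qquad
[\hat{x},\hat{p}]\psi = \hat{x}\hat{p}\psi - \hat{p}\hat{x}\psi
% Substitute for \hat{p} and apply the product rule:
= -i\hbar\,x\,\frac{\partial\psi}{\partial x}
  + i\hbar\,\frac{\partial}{\partial x}(x\psi)
= -i\hbar\,x\,\psi' + i\hbar\,(\psi + x\,\psi')
= i\hbar\,\psi
```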

Equation 30 is looking good, but there is a slight problem: it's an equation! We want an inequality, so we make use of the Schwarz inequality one more time:
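The standard way to turn the commutator into an inequality is the Schwarz-based Robertson bound (a sketch, not necessarily the author's exact steps):

```latex
\Delta x\,\Delta p \;\ge\; \frac{1}{2}\left|\langle[\hat{x},\hat{p}]\rangle\right|
= \frac{1}{2}\,|i\hbar| = \frac{\hbar}{2}
```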

Ah, that's better.

Sunday, October 8, 2017

Where Are the Extra Dimensions Hiding?

Imagine a line x along a plane. How many such lines can the plane hold? An infinite number:

Imagine a plane within a cube. How many such planes can the cube hold? An infinite number:

Now this is harder: imagine a cube within a 4D space. How many cubes can the 4D space hold? An infinite number:
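The counting can be expressed as a limit: a 4D region of extent w along the fourth axis holds one 3D cube per slice of thickness Δw, so (a sketch of the referenced equation)

```latex
N = \lim_{\Delta w \to 0} \frac{w}{\Delta w} = \infty
```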

Thus there are an infinite number of cubes or 3D spaces inside 4D. According to the equation above, this is true no matter how small wxyz is. We can infer that any space that is 4D or higher can accommodate an infinite amount of 3D space. In the video above, Brian Greene talks about a little ant crawling around in a tiny, curled-up higher dimension. Here is an illustration:

According to Greene, the ant is able to enter and exit the higher dimension with ease. However, according to the math above, the ant would have to travel an infinite distance. It would never make the trip within its puny lifetime. Below, the red arrows represent the ant's journey. The vertical line represents 3D and the circle represents a higher curled-up dimension. Note the lines within the circle. They represent 3D spaces--an infinite number of them.

Now where there's infinite 3D space, there is bound to be infinite ground-state energy and mass:
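A sketch of the claim: if each of the N 3D slices carries a ground-state energy E_0 > 0, then

```latex
E_{\text{total}} = \sum_{n=1}^{N} E_0 = N E_0 \to \infty,
\qquad
m_{\text{total}} = \frac{E_{\text{total}}}{c^2} \to \infty
\quad (N \to \infty)
```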

So within a higher dimension (no matter how small) there should be infinite energy and mass! If these higher dimensions exist, every volume of stuff we measure should have infinite energy and mass! But it doesn't. That strongly suggests that higher dimensions are imaginary. There is also the Heisenberg uncertainty principle, which tells us that something very small with infinite energy should exist for zero time:
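On the usual borrowed-energy reading of the energy-time relation, the lifetime scales as (a sketch):

```latex
\Delta t \sim \frac{\hbar}{2\,\Delta E} \to 0
\quad\text{as}\quad \Delta E \to \infty
```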

Then again, what about the many, many dimensions of Hilbert space? Where would quantum physics be without those extra dimensions? We should take a close look at Hilbert space. Let's start by examining a familiar 3D space; then we will analyze a 6D object I recently discovered.

Consider vector A below. It is composed of unit vectors i, j, and k:

Let's take the dot product of A with itself.
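Writing A = a_i i + a_j j + a_k k, the computation can be sketched as:

```latex
\mathbf{A}\cdot\mathbf{A}
= \sum_{m,n} a_m a_n\,(\mathbf{e}_m\cdot\mathbf{e}_n)
= \sum_{m,n} a_m a_n\,\delta_{mn}
= a_i^2 + a_j^2 + a_k^2
% Cross terms vanish: \mathbf{e}_m \cdot \mathbf{e}_n = \cos 90^\circ = 0 for m \ne n
```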

Taking the dot product of A with itself leads to the identity matrix or Kronecker delta. Because the space dimensions are orthogonal (at 90 degrees to each other), the products of the cross terms equal zero. "Orthogonal" is a good thing--it suggests the 3D space is legit. If the dimensions are orthogonal, they should also be linearly independent. Let's check. First we define the eigenvectors:

Next, we multiply each eigenvector by a coefficient (ci,cj,ck), then add them to get a column vector with all zeros.
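Assuming the eigenvectors are the standard basis vectors, the check looks like:

```latex
c_i\begin{pmatrix}1\\0\\0\end{pmatrix}
+ c_j\begin{pmatrix}0\\1\\0\end{pmatrix}
+ c_k\begin{pmatrix}0\\0\\1\end{pmatrix}
= \begin{pmatrix}c_i\\c_j\\c_k\end{pmatrix}
= \begin{pmatrix}0\\0\\0\end{pmatrix}
\;\Longrightarrow\; c_i = c_j = c_k = 0
```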

It is clear that all the coefficients equal zero. This spells linear independence. Now, below is the math for the 6D object I mentioned earlier. As you can see, its dimensions are also orthogonal and linearly independent.

This 6D object does not have infinite mass or energy or infinite 3D space within. The 6D object is none other than a single die:

Notice that the unit vectors i and j are parallel; i.e., their product is ij cos(0), which does not equal zero, so they are not orthogonal. When we test for linear independence, equation 19 reveals that ci does not necessarily equal zero, nor does cj. So the die's spatial dimensions are not 6D, but 3D.

So what are the die's six dimensions, and why are they orthogonal and linearly independent? We can think of equation 20 below as the wave function of the die. At equation 21 we take the dot or inner product of the wave function with itself. Notice at 22 that the cross-term products within the matrix are all zeros. How is that possible? (See equations 23 to 25.)

Equation 25 reveals the 6D has nothing to do with space or its angles. It has to do with probabilities! The cross-term products are zero because there is a zero probability the die will yield any numbers that aren't 1, 2, 3, 4, 5, 6. Equations 26 and 27 provide an example of a cross-term product and its probability:
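One way to write such a cross-term (a reconstruction, since equations 26 and 27 are not shown): the joint outcome "the die shows both m and n" is impossible for m ≠ n, so

```latex
\langle m|n\rangle = P(\text{die shows both } m \text{ and } n) = 0
\quad (m \ne n),
\qquad
\langle n|n\rangle = 1
\;\Longrightarrow\;
\langle m|n\rangle = \delta_{mn}
```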

Thus the die's dimensions are orthogonal because its outcomes are mutually exclusive--not because of right angles. However, we can say that p (the variable subsuming the probabilities) plays the same role for the die that cosine-phi plays for Cartesian unit vectors--at least for the cross-term products, since those are all zero.

This equivalence might lead one to believe there are six spatial dimensions, but what we really have are six mutually exclusive states within Hilbert space. We can determine the expectation value of these states as follows:
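As a quick numerical sketch (my own example, not the author's equations), the expectation value of a fair die:

```python
# Expectation value <X> = sum over faces n of n * P(n),
# with P(n) = 1/6 for each face of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6  # each face equally likely

expectation = sum(n * p for n, p in zip(outcomes, probabilities))
print(expectation)  # 21/6 = 3.5
```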

As you can see, having extra dimensions, parameters or states can be useful--and they can reside within 3D space.

Wednesday, October 4, 2017

Proving the Dirac Delta Function, Etc.

In this post we prove that the Dirac delta function integrates to one and that it has the sampling property. Here are the variables we need:

So we ask, is equation 1 below true?

To prove equation 1, we first define the Dirac delta function:
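A common rectangle-style definition consistent with the diagrams that follow (an assumption, since equation 2 is not reproduced):

```latex
\delta(x) = \lim_{\varepsilon \to 0^{+}} \delta_{\varepsilon}(x),
\qquad
\delta_{\varepsilon}(x) =
\begin{cases}
\dfrac{1}{2\varepsilon}, & |x| \le \varepsilon,\\[6pt]
0, & |x| > \varepsilon.
\end{cases}
```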

And it helps if we draw a diagram:

In the diagram above, we have a line segment along the x axis ranging from minus infinity to infinity. If we assume an infinite number of points along any epsilon-length distance, we can make a substitution and create the diagram below:

At line 3 we create a new integral that is equivalent to the one we started with. From there to line 5 we show that the integral does indeed equal 1.
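Using a rectangle of height 1/(2ε) on [−ε, ε] as a stand-in for the delta (a sketch under that assumption):

```latex
\int_{-\infty}^{\infty}\delta(x)\,dx
= \lim_{\varepsilon\to 0^{+}}\int_{-\varepsilon}^{\varepsilon}\frac{1}{2\varepsilon}\,dx
= \lim_{\varepsilon\to 0^{+}}\frac{2\varepsilon}{2\varepsilon}
= 1
```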

Now let's prove the sampling property. Suppose a particle is located at position 'a' instead of zero. Is the value of the integral f(a)?

We assume the following are true:

We make some changes to equation 6 to get 7:

We draw a new diagram to account for the fact that x equals 'a' instead of zero:

We can now derive equation 8. From there we derive the calculus difference quotient or derivative formula at line 14.

Line 15 above confirms the Dirac delta function's sampling property.
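As a numerical sanity check of the sampling property (my own sketch, approximating the delta by a narrow rectangle of width 2·eps centered at a):

```python
# Check that  integral of f(x) * delta(x - a) dx  is approximately f(a)
# when delta is approximated by a rectangle of height 1/(2*eps)
# on the interval [a - eps, a + eps].

def f(x):
    return x**2 + 1.0  # arbitrary smooth test function

a = 2.0      # sampling point
eps = 1e-4   # rectangle half-width
n = 1000     # midpoint-rule steps across the rectangle

dx = 2 * eps / n
total = sum(f(a - eps + (i + 0.5) * dx) * (1 / (2 * eps)) * dx
            for i in range(n))

print(total)  # approximately f(a) = 5.0
```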