
Sunday, November 12, 2017

A Simple Schrodinger-Equation Solution for the Periodic-Table Elements

Suppose you want to solve Schrodinger's equation for the hydrogen atom. It's the simplest atom in the universe: only one proton and one electron, so finding the solution should be as easy as blowing up a hydrogen balloon, right? Here's an operation Schrodinger's equation requires you to perform:
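(The operation, shown as an image in the original, is presumably the Laplacian in spherical coordinates:)

$$\nabla^{2}\psi=\frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}\psi}{\partial\phi^{2}}$$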

Three partial derivatives of the function psi with respect to the spherical coordinates. That doesn't sound too bad, but take a look at the function psi:
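The function is an image in the original, but the general hydrogen wave function has the form below (in a common convention, with N a normalization constant, L an associated Laguerre polynomial, and Y a spherical harmonic):

$$\psi_{n\ell m}(r,\theta,\phi)=N_{n\ell}\,e^{-r/na_0}\left(\frac{2r}{na_0}\right)^{\ell}L_{n-\ell-1}^{2\ell+1}\!\left(\frac{2r}{na_0}\right)Y_{\ell}^{m}(\theta,\phi)$$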

Obviously it's going to take you more than a minute to find a solution. Looking at the function above, it is no surprise that many QM books never take you past the hydrogen atom. Finding the ionization energies of all the atoms in the periodic table seems like a daunting task, one requiring a supercomputer. But there is a quick-and-dirty way to pull it off with no computer help. It requires, however, that we simplify the wave function.

The first step is to focus on what we are after: the energy. How can we pin it down? Consider equation 1 below:

Yes, equation 1 (the Heisenberg uncertainty principle) tells us that if we want to be certain about the energy, we need to be uncertain about the time. Often the basis chosen for the wave function involves space coordinates, but the basis we will choose is time, or rather space-time.
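Equation 1 is an image in the original; the energy-time uncertainty relation it refers to is:

$$\Delta E\,\Delta t\;\ge\;\frac{\hbar}{2}$$

Letting the time uncertainty run to infinity lets the energy uncertainty shrink to zero.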

Equations 2 through 4 show how a single dimension (ct, or x4) subsumes all three space dimensions and time. This will allow us to take a single second derivative of the function instead of performing the more complicated Laplacian operation.
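Equations 2 through 4 are images in the original; presumably they amount to something like the substitution below, which trades the time derivative for an x4 derivative (a sketch, not the post's exact equations):

$$x_4=ct,\qquad \frac{\partial}{\partial t}=c\,\frac{\partial}{\partial x_4}$$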

At 5 below we express the Schrodinger equation using the space-time variable (x4). Using a pinch of algebra, we derive equation 8--a Lagrangian form of Schrodinger's equation.
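Equations 5 through 8 are not reproduced here. For reference, the left-hand side below is the textbook time-dependent Schrodinger equation, and the right-hand side is one plausible reading of the x4 version at 5:

$$i\hbar\,\frac{\partial\psi}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\psi+V\psi \quad\longrightarrow\quad i\hbar c\,\frac{d\psi}{dx_4}=-\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx_4^{2}}+V\psi$$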

Finding a normalized wave function is now easy. We focus on the time aspect of space-time and choose boundaries of zero to infinite time. (Space-time, in this instance, is just time multiplied by c to get the units right.) Zero to infinity is the most uncertain time we can have, so we can be certain about the energy that we are trying to calculate.
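The wave function itself is an image. As an illustration of how painless the normalization becomes on the zero-to-infinity interval, a decaying exponential in x4 with wave number k (an assumed form on my part) normalizes in one line:

$$\psi(x_4)=\sqrt{2k}\,e^{-k x_4},\qquad \int_0^{\infty}|\psi|^{2}\,dx_4=2k\int_0^{\infty}e^{-2k x_4}\,dx_4=1$$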

At 15 we choose a value for the wave number (k). At 16 we solve the kinetic-energy term and convert the units to electron volts.
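The value chosen for k at 15 is not reproduced here. As a units sanity check, the snippet below evaluates the kinetic-energy term in electron volts for an illustrative k = 1/a0 (the inverse Bohr radius, my assumption, not necessarily the post's choice):

```python
import scipy.constants as sc

# Illustrative wave number: the inverse Bohr radius (an assumption;
# the post's actual k at equation 15 is not reproduced).
k = 1.0 / sc.physical_constants["Bohr radius"][0]  # 1/m

# Kinetic-energy term hbar^2 k^2 / (2 m_e), converted from joules to eV.
KE_eV = (sc.hbar * k) ** 2 / (2 * sc.m_e) / sc.e
print(f"hbar^2 k^2 / 2m = {KE_eV:.3f} eV")  # ~13.606 eV
```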

At 17 through 21 we solve the potential-energy term (V):
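Equations 17 through 21 are images. For reference, the standard Coulomb term they are presumably manipulating (for a single electron and nuclear charge Z), together with the virial-theorem relation that fixes its ground-state expectation value:

$$V(r)=-\frac{Ze^{2}}{4\pi\epsilon_0 r},\qquad \langle V\rangle=-2\,\langle K\rangle\ \ \text{(virial theorem, Coulomb potential)}$$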

We put all the terms together and do the math. At the other end the solution comes out (see equation 22):

Finding the ground-state ionization energy for hydrogen is now just a plug-n-chug operation:
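Equation 22 is an image, so as a stand-in, here is the textbook ground-state formula the plug-and-chug should reproduce for hydrogen (not necessarily the post's own expression):

```python
import scipy.constants as sc

# Textbook hydrogen ground-state energy: E = m_e e^4 / (8 eps0^2 h^2).
# This is the value the post's equation 22 should reproduce for Z = 1.
E_joules = sc.m_e * sc.e**4 / (8 * sc.epsilon_0**2 * sc.h**2)
print(f"Hydrogen ionization energy: {E_joules / sc.e:.3f} eV")  # ~13.606 eV
```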

With our new Schrodinger solution, we can easily find the ionization energy for more complicated atoms such as carbon:

The main trick is finding the values for constants A, B, and C. Here they are for atoms with up to 12 electrons:

OK, so how do we find A, B, and C? Let's take a real-world example: finding the values of A, B, and C for atoms with at least 13 electrons, the smallest of which is aluminum (Al). We look at some data (click here to see the data) and choose a sample size of five elements (Z >= 13). We then make a pretty little table:

Notice that as the table progresses downward, it amounts to taking a second derivative of the energies with respect to Z. In the last row the values are all pretty similar, around 3.5. Let's figure out the average:

Let's take the integral of the average with respect to Z, and then find B:

Notice we didn't use the integer Z values to find B; we used 13.5, 14.5, etc. That's because we used the numbers in the second-to-last table row. If you follow the broken lines leading to the top of the table, they fall halfway between the Z values. We are dealing with energy differences, not the energies themselves, and each energy difference corresponds to a halfway point.

Now that we know B, let's do another integral to find A:

Let's not forget C. This time we are dealing with the energies and not energy differences, so we plug in the integer Z values:

And here's our final answer:

We should check this against the database (click here for data) for every 13th electron listed. We may have to tweak A, B, and C a little to get a near-perfect fit.
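Since the tables and the integration steps above are images, here is a sketch of the whole procedure in NumPy. The ionization energies are approximate NIST values for the 13-electron sequence (Al I through Cl V), and reading A, B, and C as the coefficients of a quadratic in Z is my interpretation, not necessarily the post's exact definitions:

```python
import numpy as np

# 13-electron isoelectronic sequence: ionization energy (eV) of the
# 13th electron for Z = 13..17 (Al I, Si II, P III, S IV, Cl V).
Z = np.array([13, 14, 15, 16, 17], dtype=float)
E = np.array([5.99, 16.35, 30.20, 47.22, 67.80])

d1 = np.diff(E)   # first differences, which live at Z = 13.5, 14.5, ...
d2 = np.diff(d1)  # second differences, roughly constant
avg = d2.mean()   # the "around 3.5" average in the post

# "Integrate" once: dE/dZ ~ avg*Z + B, fitted at the half-integer Z points.
Z_half = (Z[:-1] + Z[1:]) / 2
B = (d1 - avg * Z_half).mean()

# "Integrate" again: E(Z) ~ (avg/2)*Z**2 + B*Z + C, with C fitted at integer Z.
A = avg / 2
C = (E - (A * Z**2 + B * Z)).mean()

print(f"A = {A:.3f}, B = {B:.3f}, C = {C:.3f}")
print("model:", A * Z**2 + B * Z + C)  # compare against the data column E
# For the near-perfect fit the post mentions, a least-squares tweak works:
# A, B, C = np.polyfit(Z, E, 2)
```

The second differences land near 3.4, in line with the "around 3.5" average noted above.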

Saturday, October 14, 2017

Proving the Schwarz Inequality and Heisenberg's Uncertainty Principle

In this post we once again derive the Heisenberg uncertainty principle, but this time we make use of the Schwarz inequality and the position-momentum commutator. We begin our proof by defining the variables:

Below we have the Schwarz inequality:
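(The inequality, shown as an image in the original, reads in bra-ket form:)

$$\big|\langle u\,|\,v\rangle\big|^{2}\;\le\;\langle u\,|\,u\rangle\,\langle v\,|\,v\rangle$$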

Is it true? Let's prove it. At 2 and 3 we map inner products in multiple (n) dimensions to simpler 2D Pythagorean expressions. At 4 through 7 we define the bras and kets and their inner products in terms of a, b, c, d.

Next, we take the products of the inner products and derive 11 below:

At 12 and 13 we convert the variables ad and bc to x and x+h. You may recognize h from calculus texts; in this case it is just an arbitrary number. At 14 and 15 we do a little algebra to get 16:

At 16 it is obvious the absolute value of h^2 is greater than or equal to zero. Thus, the Schwarz inequality is true.
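Since equations 2 through 16 are images, here is the 2D argument compressed to one line, with ⟨u|u⟩ = a² + b², ⟨v|v⟩ = c² + d², and ⟨u|v⟩ = ac + bd (my reconstruction of the definitions at 4 through 7):

$$(a^{2}+b^{2})(c^{2}+d^{2})-(ac+bd)^{2}=(ad-bc)^{2}=h^{2}\;\ge\;0,$$

where h is the gap between ad and bc introduced at 12 and 13.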

Before we put it to work, we need to define energy (E) and time (t). E is the lowest possible (ground-state) energy, and t is the reciprocal of frequency (f). We sandwich these between the normalized bras and kets at 20. That brings us to the energy-time uncertainty at 21.
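The equations at 20 and 21 are images in the original; one plausible reading, using the Planck relation E = hf together with the definitions above:

$$E\,t=(hf)\cdot\frac{1}{f}=h\;\ge\;\frac{\hbar}{2}$$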

At 22 and 23 we do a quick-and-dirty derivation of the momentum-position uncertainty:
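Equations 22 and 23 are images; a guess at the quick-and-dirty route is the photon-like substitution E = pc, x = ct, applied to the energy-time relation:

$$\Delta E\,\Delta t=(c\,\Delta p)\left(\frac{\Delta x}{c}\right)=\Delta p\,\Delta x\;\ge\;\frac{\hbar}{2}$$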

We can also find the momentum-position uncertainty by making use of its commutator and a wave function (psi). At 24 we define the commutator; at 25 we define momentum (p). After making a substitution for p at 26, we do some more algebra until we get the desired outcome at 30.
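Equations 24 through 30 are images, but the commutator computation is standard. Acting on a test function psi with p = -iħ d/dx:

$$[x,p]\,\psi = -i\hbar\,x\,\frac{d\psi}{dx} + i\hbar\,\frac{d}{dx}(x\psi) = i\hbar\,\psi \quad\Longrightarrow\quad [x,p]=i\hbar$$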

Equation 30 is looking good, but there is a slight problem: it's an equation! We want an inequality, so we make use of the Schwarz inequality one more time:
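In textbook form, the Schwarz-based step is the Robertson bound (a sketch; the post's images may phrase it differently):

$$\Delta x\,\Delta p \;\ge\; \tfrac{1}{2}\,\big|\langle[x,p]\rangle\big| = \frac{\hbar}{2}$$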

Ah, that's better.

Sunday, October 8, 2017

Where Are the Extra Dimensions Hiding?

Imagine a line x along a plane. How many such lines can the plane hold? An infinite number:

Imagine a plane within a cube. How many such planes can the cube hold? An infinite number:

Now this is harder: imagine a cube within a 4D space. How many cubes can the 4D space hold? An infinite number:
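The original presents this as an equation image; the counting idea in symbols (a sketch) is that a 4D box of side w stacks one 3D cube for every real value of the fourth coordinate:

$$[0,w]^{4}=\bigcup_{s\,\in\,[0,w]}[0,w]^{3}\times\{s\},$$

an uncountable infinity of 3D slices for any w > 0.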

Thus there is an infinite number of cubes, or 3D spaces, inside 4D. According to the equation above, this is true no matter how small wxyz is. We can infer that any space of four or more dimensions can accommodate an infinite amount of 3D space. The above video shows Brian Greene talking about a little ant crawling around in a tiny, curled-up higher dimension. Here is an illustration:

According to Greene, the ant is able to enter and exit the higher dimension with ease. However, according to the math above, the ant would have to travel an infinite distance. It would never make the trip within its puny lifetime. Below, the red arrows represent the ant's journey. The vertical line represents 3D and the circle represents a higher curled-up dimension. Note the lines within the circle. They represent 3D spaces--an infinite number of them.

Now where there's infinite 3D space, there is bound to be infinite ground-state energy and mass:

So within a higher dimension, no matter how small, there should be infinite energy and mass! If these higher dimensions exist, every volume of stuff we measure should have infinite energy and mass. But they don't. That strongly suggests higher dimensions are imaginary. There is also the Heisenberg uncertainty principle, which tells us that something very small with infinite energy should exist for a zero amount of time:

Then again, what about the many, many dimensions of Hilbert space? Where would quantum physics be without those extra dimensions? We should take a close look at Hilbert space. Let's start by examining a familiar 3D space; then we will analyze a 6D object I recently discovered.

Consider vector A below. It is composed of unit vectors i, j, and k:

Let's take the dot product of A with itself.
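(The worked product is an image in the original; in symbols:)

$$\mathbf{A}=a_i\,\mathbf{i}+a_j\,\mathbf{j}+a_k\,\mathbf{k},\qquad \mathbf{A}\cdot\mathbf{A}=a_i^{2}+a_j^{2}+a_k^{2},$$

where cross terms such as $a_i a_j\,(\mathbf{i}\cdot\mathbf{j})$ vanish because $\mathbf{e}_m\cdot\mathbf{e}_n=\delta_{mn}$.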

Taking the dot product of A with itself leads to the identity matrix, or Kronecker delta. Because the space dimensions are orthogonal (at 90 degrees to each other), the cross-term products equal zero. "Orthogonal" is a good thing--it suggests the 3D space is legit. If the dimensions are orthogonal, they should also be linearly independent. Let's check this. First we define the eigenvectors:

Next, we multiply each eigenvector by a coefficient (ci,cj,ck), then add them to get a column vector with all zeros.
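(The column-vector equation, shown as an image in the original, is:)

$$c_i\,\mathbf{i}+c_j\,\mathbf{j}+c_k\,\mathbf{k}=\begin{pmatrix}c_i\\ c_j\\ c_k\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\end{pmatrix}\;\Longrightarrow\;c_i=c_j=c_k=0$$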

It is clear that all the coefficients equal zero. This spells linear independence. Now, below is the math for the 6D object I mentioned earlier. As you can see, its dimensions are also orthogonal and linearly independent.
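The 6D math itself is an image; the sketch below encodes the six faces as the standard basis vectors of R^6 (my encoding, not necessarily the post's) and checks both properties numerically:

```python
import numpy as np

# Model each die face as a basis vector of R^6 (an assumed encoding).
faces = np.eye(6)  # row n is the state "die shows n+1"

# Orthogonality: every cross term <face_m | face_n>, m != n, is zero.
gram = faces @ faces.T
print(np.array_equal(gram, np.eye(6)))  # True: the Kronecker delta

# Linear independence: rank 6 means only zero coefficients
# combine the six states into the zero vector.
print(np.linalg.matrix_rank(faces))     # 6
```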

This 6D object does not have infinite mass or energy or infinite 3D space within. The 6D object is none other than a single die:

Notice that the unit vectors i and j are parallel lines; their dot product is |i||j|cos(0) = 1, not zero, so they are not orthogonal. When we test for linear independence, equation 19 reveals that ci does not necessarily equal zero, nor does cj. So the die's spatial dimensions are not 6D but 3D.

So what are the die's six dimensions and why are they orthogonal and linearly independent? We can think of equation 20 below as the wave function of the die. At equation 21 we take the dot or inner product of the wave function. Notice at 22 the cross-term products are all zeros within the matrix. How is that possible? (See equations 23 to 25.)

Equation 25 reveals the 6D has nothing to do with space or its angles. It has to do with probabilities! The cross-term products are zero because there is a zero probability the die will yield any numbers that aren't 1, 2, 3, 4, 5, 6. Equations 26 and 27 provide an example of a cross-term product and its probability:
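Equations 20 through 27 are images; one way to read the key step, writing |n⟩ for the state "the die shows n":

$$\langle m\,|\,n\rangle = P(\text{one roll shows both } m \text{ and } n)=0\qquad (m\neq n)$$

One roll cannot show a 1 and a 2 at once, so every cross term vanishes.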

Thus the die's dimensions are orthogonal because they are each statistically independent--not because of right angles. However, we can say that p (the variable subsuming probabilities) is equivalent to cosine-phi for Cartesian unit vectors--at least for the cross-term products, since they are all zero.

This equivalence might lead one to believe there are six spatial dimensions, but what we really have are six statistically independent states within Hilbert space. We can determine the expectation value of these states as follows:
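(The computation is an image; for a fair die it is:)

$$\langle n\rangle=\sum_{n=1}^{6} n\cdot\frac{1}{6}=\frac{21}{6}=3.5$$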

As you can see, having extra dimensions, parameters or states can be useful--and they can reside within 3D space.