
Saturday, March 17, 2018

Taming Infinities--Introducing n-space

Each line has an infinite number of points. We tame this infinity by creating an arbitrary finite unit. For example, take the set of real numbers. Between 0 and 1 there are an infinite number of numbers:

Normally, we count using integers: 1, 2, 3, etc. But we don't have to do it that way. We could count like this: 1-infinity, 2-infinity, 3-infinity--all the way up to infinity-infinity. If 3 is greater than 2, then 3-infinity is greater than 2-infinity. So what we have are different magnitudes of infinity that make up our finite numbers. With this in mind, it seems reasonable to assume we could add up an infinite set of numbers and get a finite number. For example, we could take the entire line of positive real numbers ...

...and shape it into a circle:

Now infinity maps to zero and 2pi radians, i.e., finite numbers.
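The claim that infinitely many numbers can add up to something finite has a textbook analogue: a convergent series. A minimal Python sketch (the choice of series is mine, not the post's):

```python
# A convergent geometric series: infinitely many positive terms, finite sum.
# Partial sums of 1/2^k approach 2, illustrating the claim that an infinite
# set of numbers can add up to a finite value.

def geometric_partial_sum(terms: int) -> float:
    """Sum 1/2^k for k = 0 .. terms-1."""
    return sum(1 / 2**k for k in range(terms))

print(geometric_partial_sum(50))  # very close to 2
```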

Let's imagine we are extremely naive: we don't know the first integer greater than zero. So we decide to add up all the real numbers from zero to the next integer point. That gives us an infinity:

The vertical lines represent the infinite quantity of real numbers between zero and the question mark. They make a nice 2D drawing of a triangle. The average real number is at the halfway point. If we take this number (.5) and multiply it by 2, we get the right answer: not infinity, but 1. This is the basic logic behind n-space. We take an infinite number of points in a space of any number of dimensions and map them to a 2D space. The average value (expectation value) becomes our vertical axis. We multiply this value by the horizontal axis to get the total area, which is a finite value.
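The halfway-point claim is easy to check numerically. A small sketch, assuming uniformly distributed samples stand in for the real numbers between 0 and 1:

```python
import random

random.seed(0)

# Sample many "real numbers" uniformly between 0 and 1. Their average sits
# at the halfway point, 0.5, as the triangle picture suggests.
samples = [random.random() for _ in range(100_000)]
average = sum(samples) / len(samples)

print(round(average, 2))      # approximately 0.5
print(round(average * 2, 1))  # the post's arithmetic: 0.5 * 2 = 1
```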

The above diagram shows how each point in the original lattice space is mapped to n-space. Each point in the original space becomes a vertical line in n-space. So an infinite number of points, lines, planes or cubes (lattice cells) become an infinite number of vertical lines. The average vertical line (bar-np) is multiplied by the horizontal line (nx) to get the area--which is the correct finite answer.

Why is the n-space area the correct answer--and not infinity? Consider the following diagram:

Max Planck found that if he added up a set of finite discrete energies, he got the correct finite value. The above diagram shows we can also add up an infinite set of continuous energies and get the same finite value! Whether the energies are discrete or continuous, the area under the curve is the same. Thus, finding the area under the n-space curve is a way to find the correct answer. (Take note that, throughout this post, we take the energy term normally reserved for a single particle and use it to represent any energy. Sometimes the frequency and Planck's constant are set to one.)
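The area-equivalence claim can be illustrated with a staircase of discrete values converging to the area under a continuous curve. The curve E(x) = x^2 below is an arbitrary stand-in, not the post's energy function:

```python
# Discrete vs continuous: a staircase of discrete levels approaches the
# area under the continuous curve as the level spacing shrinks.
# For E(x) = x^2 on [0, 1], the exact area is 1/3.

def staircase_area(levels: int) -> float:
    dx = 1 / levels
    # midpoint-height rectangles of width dx
    return sum(((i + 0.5) * dx) ** 2 * dx for i in range(levels))

for levels in [4, 16, 256]:
    print(levels, staircase_area(levels))  # approaches 1/3
```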

The following relationships show us how to get the values in n-space we need to calculate the correct, finite answer:

Now, we want n-space to help us solve infinity problems in the quantum as well as the classical realm. This is why n-space was derived from Heisenberg's uncertainty principle. Here are the variables involved:

Here is the derivation:

Equation 12 shows the n-space area is always greater than or equal to 1/2--or the ground-state:

According to equation 13, the total energy in a system, like Planck's constant, has two components, dimensions, or factors (nx, np). The horizontal dimension (nx) is derived from position, and the vertical dimension is derived from momentum. The total energy is equal to or greater than the ground-state energy. Using equation 12 we can derive equation 14:

At 14, k is a constant, so if equation 14 represents the total energy in the system, that energy is conserved. It does not matter how big or small the average energy is at any given point in the original space or lattice. Nor does it matter if there are an infinite number of such points. That energy or quantum number (np) is offset by quantum number (nx), giving the conserved quantity.
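A numerical sketch of the offsetting claim. The scalings np ~ 1/s and nx ~ s are illustrative assumptions chosen to make the mechanism visible, not values derived from equation 14:

```python
# Sketch of the conservation claim: as the momentum factor np grows, the
# position factor nx shrinks so that the product (times the constant k)
# stays fixed. The 1/s and s scalings are illustrative assumptions.

k = 0.5  # ground-state constant from equation 12

for scale in [1.0, 0.1, 0.001, 1e-6]:
    np_factor = 1.0 / scale  # blows up as scale -> 0
    nx_factor = scale        # shrinks to zero
    total = k * np_factor * nx_factor
    print(scale, total)      # total stays at 0.5
```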

Now that we've laid the groundwork for n-space, let's attempt to solve a classic problem: calculating the total energy in a sphere, where each point in that sphere has a given amount of energy, momentum, and/or mass. And, of course, there are an infinite number of points in the sphere.

Immediately we run into a problem: if we know the exact energy at each point, we know the exact momentum if we divide the energy by c (light speed), and, it is obvious we know the exact position as well--a clear violation of the uncertainty principle. If we zero in on a point in space, according to Heisenberg, we should be totally uncertain about the energy and momentum. According to the de Broglie wavelength formula, if we reduce a wavelength to zero, i.e., a single point, we should have infinite energy! And, we can only know that if we have no clue where that point is located!
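A quick check of the blow-up, using E = hc/lambda (the photon form of the Planck/de Broglie relation):

```python
# E = h*c / wavelength: as the wavelength shrinks toward zero, the energy
# grows without bound.

H = 6.626e-34  # Planck's constant (J*s)
C = 2.998e8    # speed of light (m/s)

def photon_energy(wavelength_m: float) -> float:
    return H * C / wavelength_m

for lam in [1e-6, 1e-9, 1e-12, 1e-15]:
    print(lam, photon_energy(lam))  # energy grows as wavelength shrinks
```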

Below is the relevant math:

Realistically, each point of energy is not a point at all, but a wavelength with a one-dimensional magnitude. If the average wavelength is greater than zero, then we have a finite energy at each wavelength.

We can think of each wavelength as a line. Assuming we know the energies and momentums, we don't know the positions, but we can make this fact unimportant if we calculate the average energy/momentum. Then we know that at any randomly chosen position the average energy is always the same. We can then map each energy/momentum to n-space.

Using equations 21 and 22 we find the average vertical factor (np).

We use the following equations to find the horizontal factor (nx):

At equation 23 we see a problem. To find nx we must first know nt--the total that we are trying to calculate! So we move on to equation 24. We know the total volume but we don't know this thing called the unit volume. We get a unit volume by taking the volume of another system, where we know all the variable values, and multiplying that volume by a factor of np/nt (see equation 25). Once we have our unit volume, we can plug that into equation 24 to get the nx value for the problem at hand.

We do the final steps below:

The strategy we used works as long as the following is true:

Suppose we have a scenario where we have a volume of energy, say, a star. The energy is conserved as follows:

The star collapses into a black hole. All wavelengths allegedly shrink to a zero limit. That forces the average momentum factor np to blow up to infinity:

At equation 31, the star's radius also shrinks to a zero limit. We should end up with a singularity that has a position unknown to us, assuming we know the total energy, mass, and momentum. We can imagine the singularity being anywhere within the Schwarzschild radius. Nevertheless, we can crudely map the star to n-space as follows:

As explained earlier, the positions of the wavelengths and the position of the singularity become irrelevant when we determine the average np for each wavelength. Now, let's assume we don't know the star's total energy. We want to find it, so we need to find the value of nx. The star volume is its radius cubed times 4/3 pi. The Schwarzschild radius cubed times 4/3 pi shall serve as the unit volume. We divide the volume by the unit volume--the 4/3 pi's cancel:
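The volume bookkeeping can be sketched in a few lines; the radii below are made-up illustrative numbers, not the post's:

```python
# Equation-24-style bookkeeping for the star: the unit volume is the
# Schwarzschild sphere, so nx is just the cube of the radius ratio.
# The (4/3)*pi factors cancel in the ratio of the two sphere volumes.

def nx_factor(star_radius: float, schwarzschild_radius: float) -> float:
    return (star_radius / schwarzschild_radius) ** 3

# Illustrative numbers: a sun-like radius vs a ~3 km Schwarzschild radius.
print(nx_factor(7e8, 3e3))
```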

When we do the math we see that the star's total energy is finite and is equal to the black hole's (assuming energy is constant and none was transferred).

So in the case of the black hole, np had an infinite limit, but nx had a zero limit--so the total energy ended up being finite and conserved.

Monday, March 12, 2018

Why Entropy Happens

In the beginning, there was order, a very hot singularity, but as time progressed the universe expanded and cooled--and became more disorderly. Scientists predict a "big freeze." It's all due to entropy. As you read this blog, entropy continues. Why? That's what we will explore below. First, let's define the variables we will use:

We begin with the partition function, which has the Boltzmann factor, an exponent with a thermodynamic beta power over the base e:

If we want to determine the probabilities of the energies in a system, we make sure the probabilities add up to 1, so we normalize the partition function by dividing it by itself (Z):
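A minimal sketch of the normalization, with illustrative energies and beta:

```python
import math

# Boltzmann probabilities from a partition function: p_i = exp(-beta*E_i)/Z.
# The energies and beta below are illustrative values, not the post's.

beta = 1.0
energies = [0.0, 1.0, 2.0, 3.0]

weights = [math.exp(-beta * e) for e in energies]
Z = sum(weights)               # the partition function
probs = [w / Z for w in weights]

print(probs)
print(sum(probs))  # normalizes to 1
```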

However, if we want to model the universe's evolution, we need to make a slight change to the partition function. Instead of using the thermodynamic beta, we use its reciprocal. We also change the i index to a time (t) index:

Also, we want the universe's total energy to be conserved. We know dark energy is increasing and radiation energy is decreasing, so we put together an energy-conservation equation:

At equation 5, notice how an increase in the universe's volume (V) reduces the radiation energy (Er) but increases the dark energy (pV). Multiply the two energies, add a little dark and baryonic matter, take the square root, and we get a constant energy (E).

We define temperature as follows:

As volume (V) increases, the universe's temperature decreases. Equation 7 below gives us the probability of the temperature at a given time t:

A high temperature has a low probability. A low temperature has a high probability. So there is a high probability the universe's temperature will continue to decrease, and a low probability the temperature will increase. Thus, an expanding universe has a higher probability.
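A small sketch of this ordering, setting the constant in the exponent to one as the post does elsewhere:

```python
import math

# With the post's reciprocal-beta variant, the weight goes like exp(-T)
# (constant set to one), so cooler states carry more probability mass.

temps = [0.5, 1.0, 2.0, 4.0]
weights = [math.exp(-t) for t in temps]
Z = sum(weights)
probs = [w / Z for w in weights]

for t, p in zip(temps, probs):
    print(t, round(p, 3))  # probability falls as temperature rises
```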

Now, let's take a look at entropy. We define it as follows:

We see that entropy increases as temperature decreases--so it has the same probability as temperature:

So why does entropy happen? Greater entropy has a higher probability than lower entropy. We can also say that reverse entropy is possible but less probable. A good example is the one Tyson discussed in the above video. There are pockets of order caused by star energy, so life is possible.

Saturday, March 10, 2018

Finding the Probability Distribution of Dark Energy, Dark Matter and the Rest

Suppose we want to find the expectation value (average value) of the universe's energy. We want to include dark energy, dark matter's energy and baryonic matter's energy. One approach is to make a lattice and use the partition function:

Equation 2 above will give us the expectation value. All we have to do is sum up an infinite number of lattice cells! Then divide by Z which also requires an infinite sum! Not a bad strategy if our goal is to crash the supercomputer. There's a better way, but first we have to change things up a bit. Let's assume, pursuant to the partition function, that high energies per temperature have a lower probability than low energies per temperature. Let's also express all energy, whether it's potential, kinetic, dark, or whatever, in a form that would make Max Planck proud. Here are the details:

At lines 3 and 4 above, we convert the dimensionless exponent power to the quantum number n. At 5 we create a unit frequency by taking all the universe's energy and dividing it by the number of particles in the universe, Planck's constant, and the average n (to be determined below). Equation 6 shall represent the value of an arbitrary particle in the universe. For ease of computation, we set Planck's constant and the unit frequency to one (see 7 and 8).

Now, rather than add up an infinite number of discrete energies, there is an advantage to treating those energies as if they are continuous--we could use an integral and get a finite value! At equation 9 below we add up the probabilities using an integral. We get "1" as we should. At 10 we equate the integral with the partition summation.
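The integral in equation 9 can be checked numerically. A midpoint Riemann sum over a large but finite range, standing in for the improper integral of e^(-n) from 0 to infinity:

```python
import math

# Check that the continuous probabilities integrate to 1:
# integral_0^inf e^(-n) dn = 1, approximated with a fine midpoint sum
# over a large but finite upper limit.

def integral_exp_neg(upper: float, steps: int) -> float:
    dn = upper / steps
    return sum(math.exp(-(i + 0.5) * dn) * dn for i in range(steps))

print(integral_exp_neg(50.0, 200_000))  # approximately 1
```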

It's also convenient that we can convert discrete energies into continuous energies and vice versa. Below we compare a sum of discrete energies with an integral of continuous energies. The area beneath the curve is the same for both.

But just to be sure we are on the right track, let's prove the validity of the above diagram. Equations 11 through 16 give us the proof:

Now if the next equation is true, we are in business. We want to multiply each n by its probability and add it all up to get the expectation value.

The next diagram compares the expectation value of the integral versus the summation. The difference between the two (the error margin) is marked in red.

Note that each rectangle represents a small range of n (0-1, 0-2, etc.). Also note as the sample size gets larger, the error margin becomes less significant. It appears the discrete and the continuous merge as the n-range increases. Of course we want to be sure this is really the case, so let's prove it.

Basically, we have two similar functions: one is summed and the other has an integral. Let's label the first function g(n) and the other f'(n). From calculus we know we can merge the discrete with the continuous by taking the limit to zero. And, equations 18 through 25 show that any difference between the two functions becomes insignificant if we take the range (N) to infinity.
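The shrinking-difference claim can be illustrated with the simplest possible pair: a unit-step sum versus the matching integral of f(n) = n (my choice of function, for transparency of the arithmetic):

```python
# Compare a unit-step sum with the matching integral for f(n) = n over
# [0, N]. The relative gap is exactly 1/N, so it vanishes as the range
# grows -- the discrete and the continuous merge.

def relative_gap(N: int) -> float:
    discrete = sum(range(N + 1))  # 0 + 1 + ... + N = N(N+1)/2
    continuous = N**2 / 2         # integral of n dn from 0 to N
    return (discrete - continuous) / continuous

for N in [10, 100, 1000, 10000]:
    print(N, relative_gap(N))  # gap shrinks toward zero
```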

As long as we have an infinite n-range we can use the integral to find the expectation value of all the energy in the universe, from n = 0 to n = infinity! Here we go:

The expectation value is just the frequency unit times Planck's constant--or simply "1." This isn't surprising when you consider the vast majority of the universe is empty space, ground-state energy, zero-point energy, or dark energy. Here is the breakdown of the universe's energies:
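A numerical check that the expectation value comes out to 1, i.e. the integral of n e^(-n) from 0 to infinity equals 1:

```python
import math

# Expectation value of n under p(n) = e^(-n), approximated with a fine
# midpoint sum over a large but finite range.

def mean_n(upper: float, steps: int) -> float:
    dn = upper / steps
    return sum((i + 0.5) * dn * math.exp(-(i + 0.5) * dn) * dn
               for i in range(steps))

print(mean_n(60.0, 200_000))  # approximately 1
```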

We now have the information and tools we need to find the probability distribution of these energies and their expectation values. Dark energy makes up 70%, so we infer the probability of its existence to be .70. Below we calculate what value of N gives a .70 probability.
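Since the integral of e^(-n) from 0 to N is 1 - e^(-N), the calculation reduces to one logarithm. A sketch:

```python
import math

# Solve integral_0^N e^(-n) dn = 0.70 for N. The integral is 1 - e^(-N),
# so N = -ln(0.30).

N_dark_energy = -math.log(0.30)
print(round(N_dark_energy, 2))  # ~1.2, matching the post's 1.21
```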

The next most common energy is that of dark matter. It makes up 25%, so we infer its probability to be .25. If dark energy's n-range is from 0 to 1.21, then let's assume dark matter's begins at 1.21 and ends at N. What is N for dark matter? Let's calculate!
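Same closed form, shifted range. A sketch:

```python
import math

# Solve integral_{1.21}^{N} e^(-n) dn = 0.25 for N. Since e^(-1.21) is
# about 0.30, we need e^(-N) = 0.05, i.e. N = -ln(0.05).

N_dark_matter = -math.log(0.05)
print(round(N_dark_matter, 2))  # approximately 3
```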

N is 3 for dark matter. So the n-range for dark energy is from 0 to 1.21, and the n-range for dark matter is from 1.21 to 3. We can now infer that the remaining energy has an n-range from 3 to infinity! Don't panic. There is a way to pin down the average for the remaining energy. First, let's look at what we have so far:

The above diagram indicates a slight margin of error in red. This is expected, since the n-ranges for dark energy and dark matter are small. To compensate, the n-range for dark energy is not 0 to 1, but 0 to 1.21. The increase equalizes the areas under the curves for the discrete and continuous functions. Let's now find the expectation value for dark energy:

At 39 and 40 above we normalize the probability integral. At 42 we see that dark energy's expectation value is approximately .5, which is the ground-state value in Planck's equation (see equation 6). How lovely! This confirms that 70% of the energy is zero-point, the vacuum, dark energy.
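A sketch of that computation, using the closed forms of the two integrals (the endpoint -ln(0.30) ~ 1.2 is the exact solution of the 70% condition; the post rounds it to 1.21):

```python
import math

# Normalized expectation value of n on [0, N] with weight e^(-n):
#   <n> = (1 - (1 + N) e^(-N)) / (1 - e^(-N))
# using the closed forms of the two integrals.

def expectation(N: float) -> float:
    return (1 - (1 + N) * math.exp(-N)) / (1 - math.exp(-N))

N_de = -math.log(0.30)  # dark energy's n-range endpoint, ~1.2
print(round(expectation(N_de), 2))  # ~0.48, close to the ground-state 0.5
```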

Now let's do dark matter:

Dark matter's average energy is also low, though higher than dark energy's, so it only has a .25 probability and makes up 25% of the energy.

OK, we are left with finding the expectation value for baryonic matter's energy. As previously mentioned, the n-range is from 3 to infinity. Obviously we don't have infinite time on our hands to add up an infinite number of n's--so we are going to borrow an idea from probability theory: all probabilities must add up to 1, and the average energy in the universe is also 1:

We create equation 48 and solve for baryonic matter's expectation value. It turns out to be a mere 3.75--a finite number.
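A sketch of that bookkeeping. The dark-energy value 0.5 is the post's rounded figure, and the range endpoints 1.21 and 3 come from the earlier calculations, so the result lands near (not exactly on) 3.75:

```python
import math

# Enforce 0.70*<n>_de + 0.25*<n>_dm + 0.05*<n>_b = 1 (the overall average
# energy), then solve for the baryonic expectation value <n>_b.

def weighted_mean(a: float, b: float) -> float:
    """<n> on [a, b] with weight e^(-n), via closed-form integrals."""
    num = (1 + a) * math.exp(-a) - (1 + b) * math.exp(-b)
    den = math.exp(-a) - math.exp(-b)
    return num / den

n_de = 0.5                       # dark energy (the post's rounded value)
n_dm = weighted_mean(1.21, 3.0)  # dark matter, ~1.85

n_b = (1 - 0.70 * n_de - 0.25 * n_dm) / 0.05
print(round(n_b, 2))  # close to the post's 3.75
```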

With the above findings we can put together the following expectation-value equations:

Equations 50 to 52 show we can add up infinite ranges of n and get finite expectation values for dark energy, dark matter and baryonic matter. Further, the average of these expectation values is 1, or one unit of the unit frequency times Planck's constant. Plus, the expectation values reveal why there is so much dark energy, not so much dark matter, and even far less baryonic matter: a low average energy appears to be more probable than a high average energy.

Below we derive some equations that describe individual particles of dark energy, dark matter and baryonic matter.

Update: At equation 26 there is a boxed term that could be interpreted as zero times infinity, which is undefined, so how do we know it is zero or converges to zero? Here's the proof:
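If the boxed term is the boundary term n e^(-n) evaluated at infinity (the natural candidate when integrating n e^(-n) by parts), a numerical look shows the exponential decay overwhelming the linear growth:

```python
import math

# The "zero times infinity" form N * e^(-N) as N -> infinity: the
# exponential wins, and the product collapses to zero.

for N in [1, 10, 100, 1000]:
    print(N, N * math.exp(-N))  # shrinks rapidly toward zero
```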