

In this paper, I suggest a possible explanation for the accelerating expansion of the universe. This model does not require any dark energy or quintessence. Rather, the idea is to suggest a different view on the origin of general relativity. Since it is very difficult to say something in general, I will mainly restrict myself to the case of very low curvature. The question about the underlying reasons for the acceleration is also closely related to the question whether the universe is finite or infinite. It is part of the purpose of this paper to argue that a phase of accelerating expansion may very well be compatible with the idea of a closed universe.

The question whether the universe has a finite or infinite extension in space and time is a long open problem. Already when the 16th century philosopher Giordano Bruno claimed the universe to be infinite, the question was highly controversial. However, it is only after the birth of general relativity that it has become a scientific one instead of just a philosophical one.

It can very well be questioned whether it will ever be possible to actually prove that our universe is closed or open. What we can do is to compare different models for cosmology and try to see which one of them is in best agreement with observations, and at the same time apply Ockham’s razor to avoid building theories on more uncertain assumptions than necessary.

An old (and perhaps somewhat naive) way of approaching this question has been to consider the choice between closed and open models to be essentially the question whether there is enough matter in the universe to stop the expansion or not (see [

However, it has turned out to be difficult to explain the origin and properties of this dark energy or quintessence. The most reliable information that we have about it comes from supernova redshifts (see [

In this paper, I will take the point of view that a phase of accelerating expansion can be a very natural phenomenon in a closed universe, and that no dark energy or quintessence is necessary to explain it. The method of approach will be to try to analyze the origin of general relativity in the limit of very small curvature. In addition, I will assume that the total four-volume of the universe, from the Big Bang to the Big Crunch, is fixed. This idea is very natural if we give up the view that space-time is an empty arena for particles, but rather consider it to be an active part of the world which is also built out of some (finite number of) constituents. On the other hand, it should at the same time be noted that this kind of condition does not make sense if we want to explain the acceleration in open universes. Thus finiteness is really an essential condition.

Before we go further, let me state explicitly that for all phenomena on a normal astronomical scale (perhaps also excluding situations where the curvature is extremely high), I still consider usual general relativity to be more or less a perfect theory: every theory which claims to be realistic must in the end reduce to it on such a scale, and I will in fact also use ordinary general relativity to motivate some of the results in this paper. Nevertheless, in cosmology there have for a long time been indications that Einstein’s original equations may not be the last word.

The mathematical methods of this paper are rather unsophisticated. A deeper understanding can presumably be reached with methods from, in particular, optimization and control theory. But this is so far a project for the future. In this paper, I have restricted myself to just trying to show what kind of phenomena can occur.

Most of the following computations have been carried out using Mathematica. The author is indebted to the book by Parker & Christensen (see [

Many physicists believe that the final answer to the question about dark energy should come from microphysics, and this may very well be so. There is however presently no agreement about what such an answer would look like. Therefore, it may be that general relativity still has something to contribute.

One of the historically most influential models for cosmology is the closed Friedmann model (see [

Since we will mainly be dealing with closed models, the function

The field equations of general relativity are given by

where

where

It is easy to see that the positive solution to this equation, starting from the value 0 at the Big Bang, will grow up to its maximal value
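For concreteness, the closed dust Friedmann model has the well-known parametric (cycloid) solution a(η) = (a_max/2)(1 − cos η), t(η) = (a_max/2)(η − sin η), in units where the Friedmann equation takes the form ȧ^{2} = a_max/a − 1. The following sketch (with a purely illustrative value of a_max) checks this numerically:

```python
import numpy as np

a_max = 2.0                                    # illustrative maximal radius (arbitrary units)
eta = np.linspace(0.1, 2 * np.pi - 0.1, 400)   # conformal-time parameter

a = 0.5 * a_max * (1.0 - np.cos(eta))          # scale factor a(eta)
t = 0.5 * a_max * (eta - np.sin(eta))          # cosmic time t(eta)

# adot = (da/deta) / (dt/deta), and dt/deta = a:
adot = (0.5 * a_max * np.sin(eta)) / a

# Friedmann equation for closed dust: adot^2 = a_max/a - 1
residual = adot**2 - (a_max / a - 1.0)
print(np.max(np.abs(residual)))   # analytically zero
```

Near η = 0 and η = 2π (the Big Bang and the Big Crunch) the expressions become singular, which is why the endpoints are excluded from the grid; the scale factor grows from 0 to its maximal value a_max at η = π and then recollapses.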

The closed Friedmann model is nowadays neither the best nor the most popular model for cosmology. Nevertheless, it may serve as a natural comparison with the model in this paper.

Simplifying somewhat, one can say that Einstein’s original way of motivating the field equations of general relativity was that, assuming the equivalence of all possible frames of reference and adding a few very natural assumptions, these are the only possible ones. The only one of these additional assumptions which did not appear to be absolutely indispensable was the idea that space-time in the absence of mass-energy should be flat. Abandoning this assumption opened the door to a non-zero cosmological constant. Ever since Einstein reluctantly admitted this constant into his equations (see [

Is this bad news for general relativity? The answer to this question of course depends on our perspective: if we cannot find a good motivation for the cosmological constant or other modifications, then some of the natural simplicity of the theory is actually lost. On the other hand, searching for such a motivation could give us clues to an even more natural theory.

In a universe with accelerating expansion there may also be other problems with the field equations. One of the things Einstein was not prepared to give up was the conservation law for mass-energy. This is also built into the field equations in the disguise of the Bianchi identities. But in our universe, potential energy seems to be rapidly growing (because galaxies are drifting apart). At the same time photons are losing energy (because of the redshift). It is true that the conservation law which is implied by the Bianchi identities is not entirely equivalent to what we usually mean by energy conservation (see [

A traditional way of deriving the field equations is by means of the Hilbert-Palatini variational principle, which is the standard formulation of the Principle of Least Action in general relativity:
δ ∫ R √(−g) d^{4}x = 0.   (5)

The Principle of Least Action in turn has a long history and is usually considered to be due to Maupertuis [

although historically he may not have been the first one to formulate it. In Maupertuis’ interpretation, this principle expresses the ultimate economy of nature, in the sense that nature always develops in the way which demands the least effort. However, already Euler observed that the developments which occur do not always minimize the action globally, and nowadays the Principle of Least Action is generally only considered to be a technical way of deriving the field equations, without any philosophical implications whatsoever.

So are there any reasonable alternatives to (5)? Already Eddington [

In fact, since that time very many other Lagrangians have been suggested, and many of them have been shown to lead to pathologies. This includes many attempts with Lagrangians containing terms which are quadratic in the curvature tensor ([

On the other hand, there are no particularly strong arguments to support (5) either, except for simplicity. And perhaps also a very widespread belief in physics that there should always be an appropriate Lagrangian. It is my personal belief that although the Lagrangian approach has certainly been very successful in physics, there is no guarantee that there really is a natural and simple choice for a Lagrangian which will solve all problems for us. But at the same time there could be many choices which are close enough to generate approximate solutions which may not be easy to discard.

In this situation, it could be a better strategy to restrict the search to a situation where it may be possible to start from fundamental principles and then try to find something that could replace (5). In the next section, I will consequently start from the case of very low curvature and try to see what can be done in this case.

Let us for a moment go back to Maupertuis’ original point of view and try to see what an idea of ultimate economy could lead to.

From Einstein’s original starting point, non-zero Ricci curvature indicates the presence of mass-energy, and is in fact equivalent to it. If we admit a term corresponding to a non-zero cosmological constant into the field equations, then at first sight this is no longer true, since even empty space-time will have non-zero Ricci curvature. However, this is not the only possible conclusion. In fact, an alternative view-point is to consider the non-zero Ricci curvature of space-time as a kind of energy of space-time itself. A naive starting-point could be to say that bending or distorting space-time may require energy.

For the moment, let me just note that a lot of effort has been spent on trying to compute the energy content of the vacuum, but so far different methods and theoretical assumptions tend to give very different and mainly unrealistic results. In the approach of this paper however, the total energy itself is not important. Instead, we will focus on the change of the energy content when space-time is curved. This may not be new either, but the point here is to try to draw conclusions from general geometric considerations which will in a sense be independent of our particular assumptions on the microscopic level.

Returning to Einstein’s original simple idea, we will assume that the natural state of space-time is zero curvature, and that distorting it demands a certain amount of energy. So how can we find an expression for this energy?

First of all, for the vacuum energy to be invariant, it is very natural to suppose it to be expressible in terms of the curvature tensor. Secondly, if we only consider very small curvature (so that terms of higher order than two can be neglected) and in addition suppose that the energy should be positive, then the most natural choice is (possibly a constant times) the square of the scalar curvature R:
Φ = ∫ R^{2} √(−g) d^{4}x.   (6)

It can be observed that the arguments in favor of (6) are more or less the same as those used for motivating the Lagrangian R in the ordinary Hilbert-Palatini variational principle above (see for instance [

This is related to the idea of multiple histories or the multiverse. Although the idea of a multiverse may still be a controversial one, the idea that all possible different histories must be taken into account in order to describe time-development is firmly established, and this is in fact all that matters in the following. As it is, we do not yet quite know how to fit general relativity into this framework. Nevertheless, if we only consider macroscopic phenomena, then it can be argued that quantum probabilities can essentially be treated as a classical ensemble in the sense of statistical mechanics (see [

Let us consider a bounded region U in space-time to be split up into a disjoint union of small elementary cells

each with roughly the same diameter and volume. To each cell we attribute a certain statistical weight which we suppose to depend only on the total scalar curvature in

for some constant

On a scale where we do not observe the individual fluctuations of the metric, only its macroscopic properties, it follows that the total (un-normalized) weight for a certain geometry with metric g in U can be written as

for a fixed presumably very large constant

Obviously, this probability is maximized exactly when the integral
∫_{U} R^{2} √(−g) d^{4}x   (10)

is minimized. Furthermore, if we sum over all possible geometries, we obtain what in statistical mechanics is usually referred to as the state sum

where g is the metric which actually minimizes (10). Minus the logarithm of the state sum in statistical mechanics is the (Helmholtz’) free energy. Thus, we arrive at the conclusion that minimizing the integral in (6) above is in a certain sense analogous to the general principle in statistical mechanics of minimizing the free energy.
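As a toy illustration of this analogy (no more than that), consider a finite family of configurations with hypothetical values Φ_i of the curvature integral and weights e^{−cΦ_i}. For a large constant c, minus the logarithm of the state sum is dominated by the minimizing configuration, up to a density-of-states correction:

```python
import math

c = 50.0                                # hypothetical large constant
phi = [1.0, 1.2, 1.2, 1.7, 2.3]         # toy values of the curvature integral

Z = sum(math.exp(-c * p) for p in phi)  # state sum over configurations
F = -math.log(Z)                        # minus log of the state sum ("free energy")

# For large c, F is close to c * min(phi); the small difference comes
# from the remaining terms of the state sum (the density of states).
print(F, c * min(phi))
```

Increasing c makes the difference between F and c·min(Φ_i) shrink, which is the sense in which the minimizing term dominates for a very large constant.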

The above argument can certainly not be considered as a derivation of (6), and in fact no such derivation seems to be possible in principle as long as we have not found a way to unite general relativity with quantum mechanics. But on the other hand, (6) is not just a guess either; it is in fact a natural assumption based on a very general principle.

It is also very important to observe that the result is not exact, and that a more detailed computation will give corrections and complementary information. In fact, it is well-known from statistical mechanics that not only the value of the maximizing term in the state sum is important, but also the density of states near it. It can be argued that in the case of very small curvature in this paper, this contribution is small compared to the integral in (10) above. This is also part of the motivation for Principle 1 below. Nevertheless, the density of states is important for another reason, namely for the derivation of the field equations in vacuum.

In fact, minimizing (10) clearly gives solutions with

In general, the size of the density-of-states term is determined by how sharply peaked the maximum is: the more sharply peaked, the smaller the contribution. If we now consider a metric g which maximizes the probability (minimizes (10)), then the question is therefore how rapidly R^{2} will grow when we vary the metric by

Thus, the metrics which satisfy the field equations are exactly the ones for which the lowest order terms in δR in all the directions

Thus, in a sense we have come back to Eddington’s argument: it is comparatively easy to construct variational principles which generate the field equations in vacuum. But if we now leave the case of zero or very small scalar curvature, the problem becomes much more difficult: in this case, fluctuations at different points can in general not be treated as independent, and the computation of the state sum will be extremely complicated. In fact, if we will ever be able to compute the general state sum for non-zero scalar curvature in a fairly correct way, it may very well turn out to be something much more complicated than any Lagrangian for general relativity which has ever been suggested so far.

Having said this, I will now spend the rest of this paper trying to understand the case of very small curvature. In this case it is still reasonable to neglect everything but the leading term (10), which brings us to

Principle 1. The metrics which are realized minimize the curvature functional Φ in (6) with respect to all local volume-preserving variations of the metric.

Remark 1. From Einstein’s original point of view, the condition of a constant volume may not have been a very natural one. But since that time, the idea that space-time is actually built out of something has grown much stronger. A lot could be said about this assumption, but I will not enter this discussion here. Let me just note that if we suppose the total volume to be finite, then it is very natural to suppose that the number of constituents is fixed.

As explained above, it is only reasonable to hope for meaningful results in the limit of very small curvature and large length scale. In fact, I will only use Principle 1 on scales which are comparable to the size of the universe itself.

This principle is also quite natural if we think of curvature as representing some kind of energy: the principle of minimizing curvature then becomes analogous to the principle of classical statics which states that the configuration which minimizes the (potential) energy is the one which is stable and hence the one which will occur in nature.

It is an interesting question whether or not the word local above can also be replaced by global, i.e. if the metric which is realized is the one which minimizes the global curvature of the universe as a whole (including the Big Bang and the Big Crunch). From my point of view the answer may very well be yes, but any such answer that we may come up with must in some way deal with the case of high curvature near the Big Bang and the Big Crunch, something which the simple-minded methods of this paper do not allow us to do.

Let us start by noting that in the case of an empty massless space-time, where the global curvature of space-time itself is the only thing which can contribute to Φ in (6), a completely spherical universe, i.e. where

is the natural minimizing solution to (6). In fact, in this case a direct computation shows that the obviously positive curvature in space is completely compensated by the behavior in the time-direction so that

for all
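The compensation can be checked symbolically. For a closed Robertson-Walker metric ds^{2} = −dt^{2} + a(t)^{2} dΩ_{3}^{2}, the scalar curvature is the standard expression R = 6(ä/a + ȧ^{2}/a^{2} + 1/a^{2}), and the spherical ansatz a(t) = √(T_{0}^{2} − t^{2}) makes it vanish identically. A sketch using sympy (the ansatz is the one discussed above; the curvature formula is the textbook one):

```python
import sympy as sp

t, T0 = sp.symbols('t T0', positive=True)
a = sp.sqrt(T0**2 - t**2)   # spherical ansatz for the scale factor

# Scalar curvature of the closed Robertson-Walker metric
# ds^2 = -dt^2 + a(t)^2 dOmega_3^2 (k = +1):
R = 6 * (a.diff(t, 2) / a + a.diff(t)**2 / a**2 + 1 / a**2)

print(sp.simplify(R))   # 0
```

The positive spatial term 1/a^{2} is cancelled exactly by the time-derivative terms, so R = 0 for all t between the Big Bang at −T_{0} and the Big Crunch at T_{0}.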

However, as has already been pointed out in Section 3, potential energy in the classical sense has no natural place in general relativity. In particular, it is a three-dimensional concept. If we for the moment consider kinetic energy to be negligible, then it may make some sense to integrate the potential energy over time to get something comparable to the global curvature, but it is not clear that this will have any intrinsic meaning in general relativity, so this idea can at best be preliminary. It is in fact one of the ideas of this paper to turn this into something which makes invariant sense, see further Section 7 below.

Meanwhile, let us consider a simple model for a closed universe where matter is uniformly distributed at all times, essentially like a homogeneous ideal gas. As the universe expands, the distance between two arbitrarily chosen particles on average increases proportionally to the scale factor

According to the previous argument, we conclude that the first order contribution to the functional Φ during the time interval

for some constant b. (The number

Remark 2. It is a very characteristic property of the classical potential that it is non-local: the potential energy of a particle is something which makes sense only in relation to the surroundings, or in fact to the rest of the universe. Nevertheless, for the discussion to follow in Section 7, it is useful to note that the potential energy term in (15) can formally be re-interpreted as the result of a density. In fact, if we consider each volume element dV to contribute with an amount

then the total contribution from the whole universe between

as in (15). Let me say again that there is no natural way to understand the meaning of this density in terms of the classical potential. The real interpretation which I want to suggest will appear later on.

There are also other reasons why this argument is unsatisfactory. The assumption that matter can be treated as a homogeneous gas is not a very realistic description of our universe, since the original gas undergoes an irreversible transformation into stars and galaxies. In fact, the assumption may make some kind of sense in the early history of the universe and it may also make sense during a later phase where already formed galaxies are considered as particles. But the b in (16) above will be different in these two cases. See also the discussion in Section 9.

We now turn to the study of the consequences of the minimizing principle with the gravitational energy expressed as in (15).

Let us consider as in Section 2 a compact, isotropic universe Ω, and let us try to minimize Φ under the condition that the volume is kept fixed. Since we cannot really use this model for drawing any conclusions near the endpoints −T_{0} and T_{0} (corresponding to the Big Bang and the Big Crunch), it is natural to restrict to a sub-interval [−T_{1}, T_{1}] with T_{1} < T_{0}. In principle we can choose T_{1} freely, but we should think of the interval

Under these conditions, the problem becomes to minimize

where

where

A traditional method of attack is to look at the Euler-Lagrange equation for the functional

It should be noted that a solution to this equation is in general not the same as a global minimum of Φ, even if condition (19) is satisfied, since there could also be other stationary solutions. As it turns out, there are strong indications that the solution in this case is unique, which would then imply that finding the global minimum is in this case equivalent to solving the Euler-Lagrange equation. This is simply because the solutions to the equations in this paper tend to be uniquely determined (at least in the time-symmetric, homogeneous and isotropic case). But a rigorous treatment of this question requires a much heavier mathematical machinery than I can go into here.

We start by computing the scalar curvature for the metric in (1). A not too long computation gives:

From this we obtain

Recall that for the problem of finding the minimum of a functional of the form

where

Using Mathematica we can now compute the Euler-Lagrange equation associated with the functional in (20), and obtain (after multiplying with
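The same kind of computation can be reproduced with open tools. As a hedged sketch (the full functional in (20) also involves the matter term with the constant b and the Lagrange multiplier λ, which are not reproduced here), take only the vacuum integrand F = R^{2} a^{3}, with R = 6(ä/a + ȧ^{2}/a^{2} + 1/a^{2}) for the closed Robertson-Walker metric. Since a(t) = √(T_{0}^{2} − t^{2}) gives R ≡ 0, it must solve the resulting fourth-order Euler-Lagrange equation:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, T0 = sp.symbols('t T0', positive=True)
a = sp.Function('a')

# Closed Robertson-Walker scalar curvature and the vacuum integrand
# R^2 * a^3 (a^3 being the spatial volume factor).
R = 6 * (a(t).diff(t, 2) / a(t) + a(t).diff(t)**2 / a(t)**2 + 1 / a(t)**2)
F = R**2 * a(t)**3

# Euler-Lagrange equation (fourth order, since F depends on a''):
eq = euler_equations(F, [a(t)], [t])[0]

# The spherical solution has R identically zero, so it should make the
# Euler-Lagrange expression vanish.
asol = sp.sqrt(T0**2 - t**2)
residual = (eq.lhs - eq.rhs).subs(a(t), asol).doit()
print(sp.simplify(residual))   # 0
```

This only checks the vacuum part; adding the matter and constraint terms shifts the stationary solutions away from R ≡ 0, which is the regime the rest of this section is concerned with.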

What does a reasonable solution to (25) look like? The equation is a non-trivial one, and it is also not at all clear how to determine the constants

For a closed universe with the simple kind of behavior of matter considered here, it is natural to suppose the solution to be time-symmetric i.e.

Presently, I have no good method for determining a reasonable value of

Under this condition, it turns out numerically that the general behavior of the solution is rather insensitive to the exact value of this parameter. In the following plots I have chosen to work with the value

On the other hand, the relationship between b and λ turns out to be rather sensitive. I will come back to this in Section 8.

Is there a way of reformulating the approach of the previous section which is invariant and independent of the concept of potential energy?

The solution which I suggest is to change the perspective: instead of considering the curvature of space-time as a kind of mass-energy, perhaps we should consider the curvature itself to be the fundamental concept. Thus, what we should do is to try to compute the total amount of curvature (as measured by (6)), caused by the presence of mass in the universe. Then, we should add this to the large scale curvature of the universe itself and minimize the result. In contrast to the potential energy, this total curvature should be a perfectly well-defined and invariant concept.

Of course, if this idea is to make sense, it must be shown that it somehow corresponds to the classical idea of potential energy. To this end, let us consider a very simplified situation.

The metric around a particle or a gravitating body is generally considered to be given by the Schwarzschild metric:

There is no reason to doubt that this metric gives an excellent description of the geometry away from the particle. But what happens close to or inside it? This we know very little about. In fact, more or less the only thing we know is that it follows from the field equations that the Ricci curvature must be non-zero there, since there is no global non-singular solution to the field equations

If we next turn to the case of two particles, what can be said about the curvature then? If the particles are so far away from each other that their interaction can be neglected, then clearly their contributions to Φ will just be the sum of the contributions from the particles separately. This would just add a constant to Φ which would not affect the Euler-Lagrange equation in any way. But if they are still far from each other but close enough to interact, then it is reasonable to treat the total metric as the metric of the vacuum plus two small perturbations caused by the two particles. In particular, as is well-known from general relativity, the presence of one particle with mass

and vice versa. Since the contribution of each particle is proportional to the length of its world-line, it follows that the total (negative) contribution to Φ, caused by the time-contraction, is of the form

Thus, the conclusion is that the contribution to Φ is of the same form as the classical potential energy which was used in Section 6.
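The size of this time-contraction can be illustrated with familiar numbers. In the weak-field limit the Schwarzschild factor √(1 − 2Gm/(rc^{2})) ≈ 1 − Gm/(rc^{2}), so the fractional slowing of clocks reproduces the dimensionless Newtonian potential Gm/(rc^{2}) to first order. A sketch using the Sun's mass at the Earth's orbital radius (standard constants; not a computation from the paper itself):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
m_sun = 1.989e30    # solar mass, kg
r = 1.496e11        # Earth's orbital radius, m

phi = G * m_sun / (r * c**2)        # dimensionless Newtonian potential
exact = math.sqrt(1.0 - 2.0 * phi)  # Schwarzschild time-dilation factor

print(phi)           # ~ 1e-8: fractional slowing of clocks at Earth's orbit
print(1.0 - exact)   # agrees with phi to first order
```

Multiplying this fractional slowing by the rest energy of a test mass and by the length of its world-line gives a contribution of the same form as the classical potential energy, which is the content of the argument above.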

It is important to keep in mind that this argument is based on the idea of a homogeneous gas where the particles interact weakly, and which is so dilute that the total effect can be considered as the sum of all pairwise interactions. If the density is higher, the behavior will no longer be additive. Hence, in a universe populated by stars and even super-massive black holes, and in particular near the Big Bang or the Big Crunch, the contribution to Φ could be quite different from what the potential energy approach would give. Nevertheless, for the problem of the accelerating expansion, these considerations may be of less importance. In particular, the plots in

Returning to Remark 2, we now also see that with the interpretation in this section, the density

Remark 3. It goes without saying that the interpretation of potential energy as curvature in this section is so far not a complete answer. In a sense, the situation is very much like in the early days of general relativity: on an abstract level the idea is very simple. But as soon as we want to make computations, the difficulties are enormous. On the other hand, making computations with classical potential energy is comparatively easy. Here the problem is instead that on an abstract level we do not really know what we are doing.

My personal opinion is that to compare the predictions that come out of this paper with reality, it is probably quite sufficient to use computations with classical potential energy, just as is done with many other problems in cosmology.

To reach clarity about the connection between this potential energy and the curvature interpretation in this section, the best way is probably computer simulations. In fact, to simulate the metric of a dilute gas and compute the corresponding Φ functional is a difficult but probably not impossible problem for a moderate number of particles.

The values of the parameters used to produce the figures of this paper, in particular b, λ and

I have presently no effective way of determining appropriate values from comparison with the real world. In this section, I will therefore briefly discuss the question whether the chosen values produce typical or atypical results.

As for the choice of

On the other hand, the relationship between b and λ appears to be rather sensitive: small changes in one of the parameters may result in a solution which looks quite different and in particular may not exhibit the convex behavior associated with accelerating expansion.

Is this an indication that something is wrong or at least that an accelerating expansion is a rare exception in the kind of models investigated in this paper?

Although I cannot really give a complete answer at the present stage of investigation, there is another explanation which I find more plausible. To a large extent, the problem is that we are solving the Euler-Lagrange equation in the wrong way. The best way would clearly be to start at the Big Bang and solve forwards in time. This however we cannot do, because we have no good way of stating appropriate initial conditions: such initial conditions must by necessity involve more complicated physics than just the metric.

Consider the three graphs in

In the graph in the middle we do have a phase of accelerating expansion, but in the rightmost graph, for a somewhat smaller value of λ, there is again no accelerating expansion. Is this behavior physical? As long as we do not have a reasonable theory for what happens at the Big Bang, it is difficult to say. As it seems, the smaller λ gets, the more accentuated the singularity when

A closed universe where the radius behaves like in

should only be considered as an example of possible solutions.

A natural question is whether it is possible to test the model in this paper against observational facts. The answer is yes in principle, but there are problems.

The first problem is that the parameter b is probably, as remarked at the end of Section 5, varying significantly during the part of the history of the universe which is accessible to measurements. Even if we roughly identify b as some kind of measure of the potential energy, adequate estimates of this quantity still seem to be lacking. This means that the graphs of this paper tend not to be reliable during the early history of the universe.

The second problem is that even if we could estimate the qualitative behavior of b in time, there would still be the problem of comparing the size of it to the global curvature. This can also be expressed by saying that given b, the constant in front of the integral in (6) does not necessarily have to be one. To resolve this is something which inevitably involves the process during which elementary particles generate their gravitational fields, something which we are presently very far from understanding. In theory, it could of course work the other way around too. If we develop this theory and solve the other problems, then it could be that it would eventually give some constraints on the metric close to individual particles and hence contribute to our understanding of micro-physics. But this is so far just a speculation.

The third problem is that we, except for b, also have the unknown parameters λ and

Also, it should be said that the approach in this paper generates many other questions to which I presently have no answer. One example of such a question is the interpretation of the Lagrange multiplier λ. In usual cosmology of closed universes, the Lagrange multiplier is typically identified with the cosmological constant. In the context of this paper however, it appears that the cosmological constant will actually be varying with time. So what should we think of the Lagrange multiplier in this situation?

Summing up, it can be said that the minimizing principle in Section 4 gives a possible explanation for the accelerating expansion of the universe, and one which does not need any unknown components like dark energy or quintessence. What it demands is instead a reinterpretation of the physics underlying general relativity in terms of very general statistical principles. Still, a lot of work remains in order to understand the theory, and a lot of work also seems to remain before we can really compare its predictions with reality.

Martin Tamm (2015) Accelerating Expansion in a Closed Universe. Journal of Modern Physics, 6, 239-251. doi: 10.4236/jmp.2015.63029