Journal of Modern Physics
Vol.09 No.12(2018), Article ID:87652,24 pages
10.4236/jmp.2018.912132

Gilbert’s Postulate and Some Problematic Physical Theories of the Twentieth Century

Boris V. Vasiliev

Dubna, Russia

Copyright © 2018 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: August 30, 2018; Accepted: September 27, 2018; Published: September 30, 2018

ABSTRACT

More than 400 years ago, William Gilbert formulated a postulate that can be considered the main principle of the natural sciences. According to this postulate, the only criterion for the correctness of a theory is its confirmation by measurement data. In our time, all theories are confirmed by at least some experimental data. But sometimes a theory cannot explain the parameters that can be considered the main ones for the objects under study. Usually such "inexplicable" objects and dependencies are called empirical, and it is assumed that they do not require a theoretical explanation at all. In most cases, this indicates the fallacy of the theory used. So nowadays Gilbert's postulate needs to be reformulated: a correct theory should describe ALL basic properties of the objects of research. A number of theories developed in the twentieth century do not satisfy this formulation. In almost all cases, the reason for this is a misinterpretation of the nature of the objects of study. In particular, in order to satisfy Gilbert's refined postulate, it turns out to be necessary to revise the theoretical descriptions of: 1) the nature of superfluidity and superconductivity; 2) the nature of neutrinos; 3) the nature of the neutron; 4) the nature of nuclear forces; 5) the model of quarks with fractional charge; 6) the internal structure of stars; 7) the nature of the Earth's magnetic field; 8) the mechanism of the thermomagnetic effect in metals.

Keywords:

Superfluidity, Superconductivity, Quarks, Neutron, Nuclear Force, Neutrino, Star Physics, Terrestrial Magnetism

1. The Main Postulate of Natural Sciences

The twentieth century is in the past.

It’s time to critically rethink a number of theories created by physicists during this time.

The need for such a rethinking arises from the fact that theoretical physicists of the last century often considered it most fascinating and important to build theoretical models for phenomena and objects for which experimental data had not yet been collected.

Creating such theories requires, in addition to knowledge, fantasy, intuition and imagination. Therefore the validity of such models, even if they are commonly accepted, may be questionable.

In the field of elementary particles, theoreticians, to replace the missing experimental data, often used symmetry considerations to systematize particles: for example, the tables of particles based on Gell-Mann's quarks, or the Weinberg-Salam standard model of elementary particles. These symmetrized tables look really nice, but the weakness of this approach is that the dropping out of even one basic particle, such as the neutron (see below), violates the very principle of such systematization.

The reason that forces us to reconsider a number of other theories of the twentieth century (for example, the physics of stars) is connected with the progress of measurement techniques and the acquisition of new experimental data. Sometimes new measurement data do not fit into old theories. Apologists of these outdated theories often struggle hard for their survival. Several such theories still dominate their fields and require partial or complete revision in our time.

In general, the past twentieth century brought remarkable scientific discoveries in the field of physics.

In the early twentieth century, nuclear physics was born and then rapidly developed. It was probably the century's greatest discovery: it radically changed the whole material and moral image of world civilization.

At about the same time, superconductivity was discovered, and a little later, superfluidity. These super-phenomena promise mankind a giant leap in technology and economy.

At the beginning of the twentieth century, radio was born, which gradually led to television, and then radio engineering spawned computers. Their importance is difficult to overestimate.

Quantum science arose, which led to the appearance of quantum devices, among which lasers shine.

One could make a long list of the branches of physical knowledge that the twentieth century gave us.

However, not all theoretical explanations for these discoveries seem perfectly correct.

William Gilbert (1544-1603) developed the criterion of correctness of a theory more than 400 years ago. He formulated a postulate that can be considered the main principle of the natural sciences [1] :

All theoretical constructions that claim to be scientific must be verified and confirmed experimentally.

Before Gilbert, false ideas did not fear experimental verification. At that time the world of thought was considered incomparably more subtle than the ordinary and gross material world. A precise coincidence of a philosophical theory with direct experience almost degraded its dignity in the eyes of the initiated. The discrepancy between a pre-Gilbert theory and observations did not bother anyone. Statements were made that are absolutely fantastic from our point of view. Thus, W. Gilbert writes that he experimentally refuted the popular belief that the force of a magnet can be increased by rubbing it with garlic.

However, the formulation of this postulate proposed by Gilbert seems somewhat simplified nowadays. It is applicable to relatively simple theoretical models.

Nowadays, it seems impossible to find researchers who would disagree with Gilbert in principle. Indeed, all well-developed theories of the twentieth century are consistent with some measurement data. But these theories may contradict other data, to the existence of which they simply pay no attention.

Therefore, as applied to the complex theoretical constructions that make up the essence of twentieth-century physics, Gilbert's postulate needs to be clarified:

A physical theory which claims to be an adequate description of the object of research has to explain ALL the experimental data obtained in studying it.

Without such clarification, a paradoxical situation arises in theoretical physics: there are theories of various phenomena that describe some of their properties but cannot explain their main features.

2. Some Theories Created in the Twentieth Century, Which Cannot Explain the Main Features of the Studied Phenomena

Here is a short list of such theories:

2.1. Superfluidity

Superfluidity was discovered in the late 1930s (see Figure 1). The main features of the phenomenon of superfluidity in liquid helium became clear soon after its discovery [2] .

Now there is a generally accepted well-developed theory that describes many features characteristic of the superfluid state. For example, it explains the temperature dependence of the concentration of the superfluid component, the temperature dependence of its heat capacity, the existence of different types of “sound” and calculates their velocities, as well as other features of the superfluid state.

However, the existing theory does not explain the main thing: why the transition of helium-4 to the superfluid state occurs at the temperature

$T_\lambda \approx 2.1768\ \text{K},$ (2.1)

and why the density of superfluid helium is $\gamma_4 \approx 0.145\ \text{g/cm}^3$.

It will be shown below that these values are well described by formulas containing only world constants

Figure 1. P.L. Kapitsa (right) and L.D. Landau. P. Kapitsa discovered the phenomenon of superfluidity, and L. Landau gave the first theoretical explanation for this phenomenon.

$T_\lambda = \dfrac{M_4 c^2 \alpha^6}{3k} \approx 2.1778\ \text{K},$ (2.2)

(where $M_4$ is the mass of the He-4 atom and $\alpha = \dfrac{e^2}{\hbar c} \approx \dfrac{1}{137}$ is the fine structure constant), and

$\gamma_4 = \dfrac{\alpha^2}{a_B^3}\sqrt{\dfrac{M_4^3}{2 m_e}} \approx 0.1443\ \text{g/cm}^3,$ (2.3)

(where $a_B = \dfrac{\hbar^2}{m_e e^2}$ is the Bohr radius and $m_e$ is the electron mass).

Such formulas are absolutely not typical for the description of a condensed state of a system of many particles. They point to the fundamental nature of this phenomenon. This is not understandable from the point of view of the currently accepted theory, which should therefore be revised.
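For the reader who wants to verify these numbers, here is a minimal numeric check of Equations (2.2) and (2.3) as reconstructed above; the constants are standard CGS handbook values, not taken from the paper itself.

```python
# Numerical check of Equations (2.2) and (2.3) in CGS units.
hbar = 1.0546e-27     # erg*s
c    = 2.9979e10      # cm/s
k    = 1.3807e-16     # erg/K, Boltzmann constant
m_e  = 9.1094e-28     # g, electron mass
e    = 4.8032e-10     # esu, elementary charge
M4   = 6.6465e-24     # g, mass of the He-4 atom

alpha = e**2 / (hbar * c)         # fine structure constant, ~1/137
a_B   = hbar**2 / (m_e * e**2)    # Bohr radius, ~0.53e-8 cm

T_lambda = M4 * c**2 * alpha**6 / (3 * k)                 # Equation (2.2)
gamma_4  = alpha**2 / a_B**3 * (M4**3 / (2 * m_e))**0.5   # Equation (2.3)

print(T_lambda)   # ~2.178 K, to be compared with the measured 2.1768 K
print(gamma_4)    # ~0.144 g/cm^3, compared with the measured 0.145 g/cm^3
```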

2.2. Superconductivity

Superconductivity was discovered in the early 20th century (see Figure 2), but for a long time it was thought to be the most enigmatic phenomenon in condensed substances.

Its theory appeared a few decades later.

Now the generally accepted theory of superconductivity successfully explains, for example, the temperature dependences of the heat capacity and of the energy gap in superconductors. But it cannot calculate the main property of superconductors: the critical temperature of the transition into the superconducting state. Therefore, this theory should be replaced by one that is able to explain all the main properties of specific superconductors.

2.3. Neutrino

The existence of the neutrino was predicted by W. Pauli in the early 30s of the last century (see Figure 3).

Figure 2. Heike Kamerlingh Onnes discovered the phenomenon of superconductivity in 1911.

Figure 3. Nobel laureate Wolfgang Pauli predicted the existence of the neutrino for theoretical reasons in 1930.

The effect of reactor neutrinos on matter was detected about two decades later (see Figure 4).

In neutrino physics, the triad of e-neutrino, μ-neutrino and τ-neutrino and the details of their mutual transformations are considered. But the main property of the neutrino, its unusually high penetrating power, remains unexplained. This unusual property distinguishes the neutrino from all other particles.

In addition, a special fundamental weak interaction of nature is introduced to explain neutrino-related reactions. The necessity of this introduction is justified by the special properties of the mysterious neutrino.

2.4. The Quark Theory

The quark model introduces new subparticles of which all other elementary particles must consist. Of particular importance here is the explanation of the mechanism of the transformation of the neutron into the proton. An attractive invention is the scheme proposed by Gell-Mann (see Figures 5-7), in which this transformation is carried out by replacing just one quark with a fractional charge by another. However, no particle with a fractional charge has ever been experimentally discovered. This required admitting the existence of a specific confinement of quarks.

Figure 4. Frederick Reines and Clyde Cowan, who discovered the neutrino, in the control center of their measurement equipment (1953).

Figure 5. Murray Gell-Mann.

Figure 6. The main achievement of the quark theory―the structure of neutron and proton―recorded by Gell-Mann.

Figure 7. The quark structure of nucleons according to Gell-Mann.

This directly contradicts Gilbert's postulate: the quark model assumes that quarks with a fractional charge are real particles while being completely undetectable.

It is also important that the quark model does not make it possible to calculate the basic parameters of the neutron by comparing them with the properties of the proton.

2.5. Nature of Nuclear Forces

The problem of nuclear forces, tied to the quark model, required for its explanation the introduction of a new type of fundamental interaction (the strong interaction) and a new type of non-observable particles (gluons). It is assumed that they must bind the nucleons in nuclei. This approach makes it possible to obtain a fully developed picture of nuclear forces, but it does not allow the main problem to be solved: to calculate the binding energy of nuclei.

2.6. Astrophysics

Astrophysics in its modern state was formed by the middle of the twentieth century and is a completely unique branch of physics, because it does not rely on measurement data and ignores Gilbert’s postulate.

However, the technological progress of astronomical measurements has by now revealed about a dozen interdependencies of the main parameters of stars. These are the radius-temperature-mass-luminosity dependencies of close binary stars, the magnetic fields of stars, etc.

Naturally, it turned out that the existing theory of stars, built without reliance on any measurement data, cannot explain these dependencies and should be revised.

2.7. The Magnetic Field of Earth

Attempts to explain the mechanism of the Earth’s magnetic field have been undertaken for several centuries. Apparently, the first model of the Earth’s magnetic field was created by W. Gilbert more than 400 years ago [1] .

Einstein included this problem as one of the three main tasks of the science of his time.

Currently a hydrodynamical model is accepted. Despite some difficulties, its parameters can be chosen so that the magnitude of the magnetic field near the poles of the Earth will be approximately equal to 1 Oe, which is consistent with the measurements.

However, in the second half of the XX century space flights began and the technique of astronomical measurements was developed further. As a result, the magnetic fields of most objects of the Solar system and of a number of stars, including pulsars, were measured. It turned out that the gyromagnetic ratios of all these space objects are approximately equal to the ratio of the world constants $\sqrt{G}/c$ (Figure 14).

Because of this, the problem of terrestrial magnetism has become a special case of a problem common to all celestial bodies. This required rejecting the hydrodynamical model and creating a new general theory of the magnetism of cosmic bodies.

3. What Theories Should Be Like in Order to Explain All the Main Features of the Phenomena under Study

3.1. Superfluidity as a Consequence of the Ordering of Zero-Point Oscillations of Helium Atoms

3.1.1. Superfluidity as a Quantum Effect

L. D. Landau saw in superfluidity a quantum effect in a macroscopic manifestation. That created the basis for understanding the characteristic features of this phenomenon and further progress in its study [3] .

The modern theory of superfluidity explains the general characteristics of this phenomenon: the energy spectrum of excitations, thermodynamics of superfluid helium, its heat capacity, etc.

3.1.2. λ-Transition

However, the energetically favorable transition of helium to the superfluid state should occur due to the appearance of some additional forces of attraction in the ensemble of its atoms, lowering the energy of the ensemble.

Therefore, the most important task of the theory is to explain the mechanism of attraction that causes the transition to the superfluid state and the reason that this transition in helium-4 occurs at a temperature of about 2K.

According to Gilbert’s refined principle, the theory should provide a quantitative explanation of all the characteristic parameters that are observed in this phenomenon.

Therefore, the refined theory of superfluidity should first explain the physics of the λ-transition and why this temperature is almost exactly half the boiling point of helium:

$\dfrac{T_{\text{boiling}}}{T_\lambda} = \dfrac{4.215\ \text{K}}{2.177\ \text{K}} \approx 1.93.$ (3.1)

3.1.3. London’s Dispersion Forces

The feature of helium-4 is that its atom has neither a total charge nor dipole moments.

Nevertheless, a certain electromagnetic mechanism should be responsible for phase transformations in its condensed state. This is evidenced by the scale of energy change in this transition, which corresponds to other electromagnetic transitions in condensed matter.

In the 1930s F. London showed [4] (see Figure 8) that between He-4 atoms in the ground state there is an interaction of the van der Waals type, having a quantum nature.

At very low temperatures, all movements in liquid helium freeze. Only quantum zero-point oscillations remain. F. London considered these oscillations

Figure 8. Fritz London (1900-1954).

as three-dimensional vibrating dipoles connected with each other by electromagnetic interaction. He called this interrelation of atoms in the ground state the dispersion interaction.

3.1.4. The Interaction of Zero-Point Oscillations of Helium Atoms

F. London showed that the electromagnetic interaction of zero-point oscillations of helium atoms leads to their attraction. Since there is no repulsion between the particles of a boson gas, the occurrence of attraction should lead to liquefaction of the boson gas. However, F. London did not pay attention to the fact that there are two modes of vibration of the shells of symmetric atoms: the vibrations of neighboring atoms can be longitudinal or transverse with respect to the line connecting these atoms. The interaction energy in these two modes turns out to be different [5] . The ordering of the longitudinal oscillations leads to the liquefaction of helium. The ordering of the transverse oscillations occurs at half that temperature. It is remarkable that this temperature is described by a formula consisting of world constants only (Equation (2.2)). Below this temperature, the system of zero-point oscillations of the atoms is completely ordered, i.e. the atoms form a single quantum ensemble of the superfluid state.

The results of experimental measurements confirm the correctness of this theoretical evaluation with high accuracy (Equation (2.1)).

The consideration of superfluidity as the ordering of zero-point oscillations allows one to calculate all the basic parameters of this phenomenon (see Table 1 [5] ).

3.2. Superconductivity as a Result of the Ordering of Zero-Point Oscillations of Electron Gas

The main difficulty of the modern theory of superconductivity (BCS) is that it cannot explain why this phenomenon occurs in different metals at different temperatures.

Superconductivity can be considered as superfluidity of electron gas. These phenomena are similar. Considering the superconductivity as a result of ordering of zero-point oscillations in electron gas, it is possible to show that the

Table 1. Comparison of the calculated values of liquid helium-4 [5] with the measurement data ( [8] [9] ).

ordering temperature of these oscillations is determined by the Fermi temperature of the metal [5]

$T_c \approx 4\pi\alpha^3 T_F,$ (3.2)

where α is the fine structure constant.

This is consistent with the measurement data (Figure 9, [5] ).
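To see what Equation (3.2) implies numerically, the sketch below evaluates the coefficient $4\pi\alpha^3$ and applies it to a few illustrative Fermi temperatures; these round $T_F$ values are assumptions for illustration only, not data from [5].

```python
import math

# Coefficient of Equation (3.2): T_c ~ 4*pi*alpha^3 * T_F
alpha = 1 / 137.036
coeff = 4 * math.pi * alpha**3
print(coeff)                      # ~4.9e-6

# Illustrative (assumed) Fermi temperatures of the superconducting carriers:
for T_F in (1e5, 5e5, 1e6):
    print(f"T_F = {T_F:.0e} K  ->  T_c ~ {coeff * T_F:.2f} K")
# Fermi temperatures of order 1e5-1e6 K map onto critical temperatures
# of order 0.5-5 K, i.e. the scale of ordinary superconductors.
```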

As for the external magnetic field of the critical value, which destroys the coherence of zero-point oscillations of electronic pairs, the theoretical evaluation of this field is also in good agreement with the measurement data [5] .

The consideration of superconductivity as a consequence of the ordering of the zero-point oscillations of the electron gas makes it possible to explain all the main properties of individual superconductors.

3.3. Neutrino Is Magnetic Excitation of Aether

3.3.1. Magnetic Dipole Radiation in Maxwell’s Theory

It is usually accepted to consider the neutrino as a specific particle moving at the speed of light and having neither charge nor mass (the latter with some reservations). This suggests that there is much in common between neutrinos and photons, although their penetrating abilities in matter differ by many orders of magnitude. This fact forces us to consider the problem of electromagnetic wave radiation in more detail.

Let the problem, for simplicity, be formulated in such a way that there are no electric charges, electric dipoles or quadrupoles, and electromagnetic radiation in the aether can arise only due to a time-varying magnetic moment $\mathbf{m}(t)$.

The changing moment $\mathbf{m}(t)$ will create in space, at a distance $R$ from it, an electromagnetic disturbance described by the vector potential [6]

Figure 9. The comparison of the calculated values of critical temperatures of superconductors (calculated according to Equation (3.2)) with measurement data [5] . Circles relate to type-I superconductors, squares show type-II superconductors. On the abscissa, the measured values of critical temperatures are plotted, on ordinate, the calculated estimations are plotted.

$\mathbf{A}(R,t) = \dfrac{\dot{\mathbf{m}}(t^{*}) \times \mathbf{n}}{cR},$ (3.3)

where $t^{*}$ is the retarded time and $\mathbf{n}$ is the unit vector in the direction of observation.

By definition, in the absence of free charges (i.e. at φ = 0 ) in this electromagnetic disturbance, the electric field strength will have the value [7] :

$\mathbf{E}(R,t) = -\dfrac{1}{c}\dfrac{d\mathbf{A}(R,t)}{dt} = -\dfrac{1}{c^{2}R}\left[\ddot{\mathbf{m}}(t^{*})\times\mathbf{n}\right],$ (3.4)

and intensity of magnetic field [7] :

$\mathbf{H}(R,t) = \operatorname{rot}\mathbf{A}(R,t) = \dfrac{1}{c^{2}R}\left[\mathbf{n}\times\left[\ddot{\mathbf{m}}(t^{*})\times\mathbf{n}\right]\right] + \dfrac{1}{cR^{2}}\left[\mathbf{n}\times\left[\dot{\mathbf{m}}(t^{*})\times\mathbf{n}\right]\right].$ (3.5)

Thus, the amplitude of oscillations of the electric field generated by changes in the magnetic moment depends only on the second time derivative of the function describing its changes. At the same time, the first time derivative additionally contributes to the amplitude of the magnetic field oscillations.

In this case, two options are possible, since two types of magnetic emitters are possible.

3.3.2. Photons

This option is studied in all courses of electrodynamics. It is realized in the case when the magnetic dipole performs a motion described by a differentiable function of time, i.e. a function possessing at least its first two time derivatives. A typical example of such a motion is the harmonic oscillation of the dipole, $\mathbf{m}(t) = \mathbf{m}\sin\omega t$, in which both $\mathbf{E}$ and $\mathbf{H}$ exist, since $\dot{\mathbf{m}}(t)\neq 0$ and $\ddot{\mathbf{m}}(t)\neq 0$.

The same solution applies to problems where the oscillations of the magnetic moment are described by more complex formulas, provided the spectrum of these oscillations can be decomposed into harmonic components.

For harmonic oscillations at a considerable distance from the oscillating dipole, the second term in formula (3.5), which depends on $\dot{\mathbf{m}}$, is smaller than the first one by a factor of order λ/R (here λ is the wavelength of the generated wave).

Therefore, the $\dot{\mathbf{m}}$ term can be neglected.

The result is that in this case the fields $\mathbf{E}$ and $\mathbf{H}$ are equal in magnitude and merely turned relative to each other by 90 degrees.
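To make this far-zone estimate explicit (a standard order-of-magnitude argument, not a quotation from the paper): with $\mathbf{m}(t) = \mathbf{m}\sin\omega t$ one has $|\dot{\mathbf{m}}| = \omega m$ and $|\ddot{\mathbf{m}}| = \omega^{2} m$, so the ratio of the second term of Equation (3.5) to the first is

$\dfrac{\omega m/(cR^{2})}{\omega^{2} m/(c^{2}R)} = \dfrac{c}{\omega R} \sim \dfrac{\lambda}{R},$

which is small in the wave zone $R \gg \lambda$. There the surviving fields of Equations (3.4) and (3.5) have equal magnitudes, $|\mathbf{E}| = |\mathbf{H}| = |\ddot{\mathbf{m}}\times\mathbf{n}|/(c^{2}R)$.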

3.3.3. Magnetic Excitation of Aether

Another solution of Equations (3.4) and (3.5) is obtained if $\mathbf{m}(t)$ is a discontinuous (step-like) function. For such a function $\dot{\mathbf{m}}\neq 0$ but $\ddot{\mathbf{m}} = 0$, and therefore the magnetic moment forms a purely magnetic wave, in which $\mathbf{H}\neq 0$ but $\mathbf{E} = 0$.

More precisely, this excitation of vacuum should be classified as a kind of particle, because it is characterized by a very short time interval.

An example of the radiation of such a particle is β-decay, in which a free electron carrying a large magnetic moment arises relativistically quickly.

Another example is the transformation of π-meson into μ-meson. π-meson has no magnetic moment, but μ-meson does.

The uncertainty relation makes it possible to estimate the transformation time of π-meson to μ-meson:

$\tau_{\pi\to\mu} \approx \dfrac{\hbar}{(M_{\pi} - M_{\mu})c^{2}} \approx 10^{-23}\ \text{s}$ (3.6)
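A quick numeric rendering of this estimate, with standard pion and muon rest energies inserted for illustration:

```python
# Estimate of Equation (3.6): tau ~ hbar / ((M_pi - M_mu) c^2)
hbar_MeV_s = 6.582e-22     # hbar in MeV*s
M_pi_c2 = 139.57           # MeV, charged pion rest energy
M_mu_c2 = 105.66           # MeV, muon rest energy

tau = hbar_MeV_s / (M_pi_c2 - M_mu_c2)
print(tau)                 # ~1.9e-23 s, i.e. of order 10^-23 s
```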

Thus the time dependence of the magnetic moment in this reaction has the form of a very sharp Heaviside step, which is equal to zero for negative arguments and to one for positive ones. (At zero, this function requires an additional definition; it is usually convenient to set its value at zero equal to 1/2):

$\mathrm{He}(t) = \begin{cases} 0, & t < 0 \\ 1/2, & t = 0 \\ 1, & t > 0 \end{cases}$ (3.7)
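For concreteness, a minimal numerical rendering of this step and of its time-reversed counterpart used below for K-capture (Figure 10); numpy's built-in heaviside is used, with the value at zero set to 1/2 as in Equation (3.7).

```python
import numpy as np

t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
He_rise = np.heaviside(t, 0.5)   # 0 for t<0, 1/2 at t=0, 1 for t>0 (beta-decay)
He_fall = 1.0 - He_rise          # the reversed step ascribed below to K-capture

print(He_rise)   # [0.  0.  0.5 1.  1. ]
print(He_fall)   # [1.  1.  0.5 0.  0. ]
```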

The unusual property that a pure magnetic photon must possess arises due to the absence of magnetic monopoles in nature. Normal photons, which have an electric component, are scattered and absorbed in matter by electrons. In the absence of magnetic monopoles, a magnetic photon of small energy must interact extremely weakly with matter, and its free path in a medium must be about two dozen orders of magnitude greater than that of a normal photon [7] .

Thus, Maxwell's equations say that the radiation of a free electron at β-decay should generate in vacuum a pure magnetic excitation, similar to a photon but weakly interacting with matter.

3.3.4. Neutrino and Antineutrino

According to the electromagnetic model of the neutron, the generalized angular momentum of a relativistic electron, which forms a neutron together with a proton, is zero [10] . Therefore, the self magnetic moment of the electron is not observed.

With neutron β-decay, the electron acquires freedom, and with it spin and magnetic moment. Given that the emitted electron has a speed close to the speed of light, this process should occur abruptly.

Thereby a δ-shaped burst of the magnetic field is generated, which is commonly called the antineutrino.

Since in the initial bound state (as part of the neutron) the generalized angular momentum of the electron was equal to zero [10] , while in the final free state its spin is $\hbar/2$, the law of conservation of angular momentum requires that the magnetic γ-quantum carry away an angular momentum equal to $\hbar/2$.

Another implementation of the magnetic γ-quantum must occur in the reverse process, K-capture. In this process, the electron, which originally formed the shell of the atom and had its own magnetic moment and spin, at some point is captured by a proton of the nucleus and forms a neutron with it. This process can be described by the reversed Heaviside function. In this process, a magnetic γ-quantum with the opposite direction of the field with respect to the vector of its propagation R should arise (Figure 10):

$p^{+} + e^{-} \rightarrow n + \nu.$ (3.8)

3.3.5. Results in Brief

The concept of neutrinos as magnetic excitations of the aether [11] explains all of their basic properties:

Figure 10. Two functions of Heaviside responsible for the birth of neutrinos and antineutrinos.

- the extremely weak interaction with matter is the result of the absence of magnetic monopoles in nature,

- the neutrino spin is equal to $\hbar/2$ due to the fact that neutrinos have only the magnetic component,

- the birth of neutrinos in beta-decay is due to the abrupt appearance of the magnetic moments of particles,

- the existence of the neutrino and the antineutrino is explained by the presence of two types of Heaviside steps.

In addition, this concept opens a new page in the study of mesons, quantitatively predicting their masses.

What the tau-neutrino has to do with this concept remains unclear.

Since neutrino radiation is a purely electromagnetic process, there is no need to introduce a fundamental weak (or electroweak) interaction of Nature, which should be attributed to the category of speculation.

3.4. The Quark Model, Neutron Properties and Nature of Nuclear Forces

3.4.1. Proton and Neutron

In the second half of the twentieth century, theorists of elementary particles began to develop the quark model. The place of this theory in the chain of sciences about the structure of matter seems quite natural: all substances consist of molecules and atoms; the central elements of atoms are nuclei; the nuclei consist of protons and neutrons, which in turn consist of quarks.

The cornerstone of this model was the assumption that both the proton and the neutron are elementary particles composed of different sets of quarks (Figure 7).

This assumption allowed Gell-Mann to explain quite simply the transformation of the neutron into the proton: it is only necessary to replace one quark with a fractional charge by another.

This seems acceptable if the neutron is indeed an elementary particle.

However, Gilbert's postulate speaks in favor of a different design of the neutron. The Gell-Mann model provides an explanation for the transformation of the neutron into the proton, but it cannot explain other properties of the neutron. On the contrary, the model in which the neutron is a kind of hydrogen atom with a relativistic electron makes it possible to calculate all basic properties of the neutron: its magnetic moment, mass and decay energy. Its transformation into a proton is considered as a process of simple ionization.

3.4.2. Neutron Properties

It is commonly thought that the Bohr atom is the only possible construction that can be built from a proton and an electron. This is true if the electron is non-relativistic. In this case, the equilibrium state between proton and electron is established by the mutual attraction of their charges, and the distance between them is equal to the Bohr radius $a_B = \dfrac{\hbar^2}{m_e e^2} \approx 10^{-8}\ \text{cm}$. The magnetic moment of the proton is of the order of the nuclear magneton, $\mu_p \approx \dfrac{e\hbar}{2M_p c} \approx 10^{-23}\ \text{Gs}\cdot\text{cm}^3$. Its influence on the electron is very small and can be neglected.

However, the situation changes radically at distances of the order of $10^{-13}$ cm. If the electron orbit has a radius of this order, a magnetic field of order

$\dfrac{\mu_p}{R^3} \approx \dfrac{10^{-23}}{(10^{-13})^3} \approx 10^{16}\ \text{Gs}$

will act on the electron. With a suitable orientation, such a huge field can keep the electron in orbit even if it rotates at a speed close to the speed of light and the electron mass is hundreds of times greater than its rest mass.
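A rough numeric restatement of this field estimate (taking $R \approx 10^{-13}$ cm and the measured proton moment of about $1.4\times 10^{-23}$ Gs·cm³; these inputs are standard values, not results of the model):

```python
mu_p = 1.41e-23   # Gs*cm^3, proton magnetic moment (about 2.79 nuclear magnetons)
R    = 1e-13      # cm, assumed orbit radius

B = mu_p / R**3
print(B)          # ~1.4e16 Gs, i.e. of order 10^16 Gs as quoted in the text
```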

Detailed calculations [10] show that the equilibrium radius of such an orbit is approximately equal to $10^{-13}$ cm, and the mass of the electron, taking into account the relativistic effect, is approximately equal to 370 $m_e$.

However, almost all of this increase in the electron mass is compensated by the mass defect arising from the binding energy of the electron to the proton, so that the total mass of the neutron only slightly exceeds the mass of the proton. The result is a correct prediction of the neutron decay energy.

It is remarkable that the neutron magnetic moment can be calculated in this way, and the calculated value coincides with the measured one to within $10^{-4}$.

Thus, all the measured properties of neutron (except its lifetime) in this theory find a quantitative explanation. The calculation of the neutron lifetime should be carried out taking into account additional factors.

The most important consequence of this consideration is the fact that the neutron is not an elementary particle but a kind of composite structure, like a hydrogen atom, only with a relativistic electron. This discredits the Gell-Mann quark model completely.

3.5. Quantum-Mechanical Nature of Nuclear Forces

The rapid development of nuclear technology in the twentieth century made understanding the nature of nuclear forces one of the most important tasks of theoretical physics.

By the 1930s, experimenters had found that nuclei consist of protons and neutrons, and that neutrons decay with the emission of electrons. Apparently, I.E. Tamm was the first to draw attention to the possibility of explaining nuclear forces on the basis of the electron exchange effect [12] . However, later the predominant model in nuclear physics became the exchange of π-mesons, and then the exchange of gluons.

The reason for this is clear: to explain the magnitude and range of nuclear forces, one needs a particle with a small characteristic wavelength. A non-relativistic electron is not suitable for this.

Because of this, the assumption of the existence of a special strong interaction, a fundamental interaction of Nature carried by quarks and gluons, came into use.

However, on the other hand, models of π-meson or gluon exchange were not productive either. These models could not give a sufficiently accurate quantitative explanation of the binding energy of even light nuclei.

It turns out that this explanation can be obtained by solving the corresponding quantum mechanical problem. At the same time, to explain the nature of nuclear forces, the hypothesis of the existence of a strong interaction can be abandoned.

In 1927, a quantum mechanical description of the simplest molecule, the molecular ion of hydrogen, was published. The authors of this article, W. Heitler and F. London [13] , calculated the attraction that occurs between two protons due to electron exchange. This exchange is a quantum mechanical effect and does not exist in classical physics. (Some details of this calculation are given in [10] [14] .)

The main conclusion of this calculation is that the binding energy between two protons, which arises due to the electron exchange, is close in order of magnitude to the binding energy of a proton and an electron (the electron energy in the first Bohr orbit). This conclusion agrees satisfactorily with the measured data, which differ from the estimate by less than a factor of two.

The calculation method developed by Heitler and London can be applied to the calculation of the binding energy of two protons that exchange the relativistic electron which is part of the neutron. The energy obtained as a result of this calculation is in quite satisfactory agreement with the experimentally measured value of the deuteron binding energy [10] .

The extension of the results of this calculation to the light nuclei allows us to obtain the values of their binding energy consistent with the measurement data.

Thus, the results of this calculation show that in order to explain the nature of nuclear forces there is no need to invent some fundamental strong interaction of Nature. At least in the case of light nuclei, nuclear forces are explained in a purely quantum-mechanical way.

3.6. Astrophysics

Star physics stands apart from the other physical sciences. Until the last decades of the twentieth century, almost nothing was known with certainty about the internal structure of stars. However, in those last decades astronomers measured a number of dependencies of stellar parameters. To date there are already about a dozen such dependencies: the interdependencies of temperature, radius, luminosity and mass of close binary stars, the spectra of seismic oscillations of the Sun, the distribution of stars by mass, the magnetic fields of stars, etc. All these dependencies are determined by phenomena occurring inside stars. Therefore, the construction of a theory of the internal structure of stars should be based on these quantitative data as boundary conditions.

However, modern astrophysics prefers a more speculative approach: qualitative theories of stars are developed in detail, but they are not brought to quantitative estimates that could be compared with astronomical data.

Of course, the existence of the dependencies of stellar parameters measured by astronomers is known to the astrophysical community. However, in modern astrophysics it is accepted, in the absence of an explanation, to refer them to the category of empirical dependencies and to believe that they do not need an explanation at all.

To reach agreement between the theory and the available data of astronomical measurements, it is necessary to abandon some astrophysical constructions which are generally accepted today. First of all, we need to change the approach to describing the equilibrium of matter inside stars. It should be noted that the interior of stars is plasma, an electrically polarizable medium. Therefore, the equilibrium equation of the intrastellar substance should take into account the role of gravitationally induced electric polarization (GIEP). Taking into account the GIEP of intrastellar plasma allows us to construct a model of a star in which all the main parameters (the mass of a star, its temperature, radius and luminosity) are expressed by certain combinations of world constants, and the individuality of a star is determined by only two parameters: the mass and charge numbers of the atomic nuclei from which the plasma of this star is composed. Thus it is possible to explain quantitatively and with satisfactory accuracy all the dependences measured by astronomers (Figure 11, Figure 12) [15] .

Taking into account the gravitationally-induced polarization of the Sun’s core, it is possible to calculate the spectrum of its seismic oscillations [15] . This spectrum is in good agreement with the measurement data obtained in recent decades (Figure 13).

Taking into account the gravitationally-induced polarization, it is possible to construct the theory of magnetic fields of stars, consistent with the observational data (Figure 14).

In general, taking into account the GIEP effect makes it possible to obtain an explanation of all the data of astronomical measurements.

An important characteristic feature of the model of a star built with allowance for the GIEP is the absence of collapse at the final stage of stellar evolution, and hence the absence in nature of "black holes" that would result from such a collapse.

3.7. Thermomagnetic Effect in Metals

The theoretical explanation of the thermomagnetic effect (TME) in metals stands out among the theories discussed above, since in the twentieth century there was no such theory at all. Previously, the opinion was that this effect simply does not exist.

By the middle of the XX century a number of thermomagnetic effects in semiconductors had been discovered, studied and theoretically explained.

Figure 11. Comparison with measurements of the theoretical dependence of the surface temperature on the mass of a star. The theory takes into account the presence of electric polarization induced by gravity in the stellar plasma. Temperatures are normalized to the surface temperature of the Sun (5875 K), and masses to the mass of the Sun.

Figure 12. Comparison with measurements of the theoretical dependence of the radius of a star on its mass. The theoretical dependence is obtained taking into account the existence of electric polarization induced by gravity in the dense plasma of a star. The radius is expressed in units of the solar radius, mass―in units of the mass of the Sun.

It was believed that thermomagnetic effects do not occur in metals because of their high electrical conductivity. It was assumed that there is only a thermoelectric effect, in which conduction electrons from the hot region of a metal sample transfer energy (heat) to the cold region, heating it, while electrons from the cold region are forced into the hot part, reducing its temperature.

Figure 13. (a) Spectrum of solar oscillations. The data obtained in the framework of the program “SOHO/GOLF”. (b) Theoretical spectrum calculated taking into account the existence of electric polarization induced by gravity in the solar plasma [15] .

At the same time, the researchers who studied this phenomenon overlooked the fact that counter-flowing electric currents must, owing to their magnetic interaction, repel each other and flow along different trajectories in a metallic sample.

As a result of this separation of currents, a significant magnetic field arises near the metallic sample, quite easily measurable with sensitive modern magnetometers, which depends on a number of parameters, such as the conductivity of the metal, its average temperature, the configuration of the temperature gradient in the sample, etc.

The theoretical description of this effect makes it possible to explain all its characteristic features [17] .

3.8. Nature of Magnetic Field of Earth

In the twentieth century, as before, it was believed that the most important experimental fact that a model of the Earth's magnetic field must satisfy is

Figure 14. The measured values of the magnetic moments of celestial bodies versus their angular momenta [16] . The ordinate is the logarithm of the magnetic moment (in Gs·cm³); the abscissa is the logarithm of the angular momentum (in erg·s). The line illustrates Blackett's dependence.

the dipole character of the main field, with a field intensity near the poles approximately equal to 1 Oe.

The first such model was proposed by W. Gilbert more than 400 years ago.

The Gilbert model and others

W. Gilbert developed the first model of terrestrial magnetism [1] (see Figure 15). He assumed that inside the Earth there is a region filled with a magnetized ferromagnet (to use the modern term). More recent studies have shown that the temperature in the central region of the Earth is high, above the Curie temperature of ferromagnets. Therefore, the Earth's core cannot be magnetized.

Later, many different models of the Earth's magnetic field were proposed, in particular several models based on the thermoelectric effect. In the 1940s, the hydrodynamic model was developed [18] , which won the recognition of experts.

It should be noted that the operation of such a mechanism requires the presence of a certain initial field, which the mechanism can then amplify. If only the cosmic field (about $10^{-7}$ Oe) is present, the workability of this model is highly questionable.

In the following decades, doubts about the workability of the hydrodynamic model arose among many scientists, and for this reason new models of this phenomenon continue to appear up to the present time.

Figure 15. Sir William Gilbert (1544-1603)―English physicist, who proposed the first model of terrestrial magnetism, introduced the concepts of electric and magnetic fields.

Blackett's hypothesis

Baron P. M. S. Blackett, Nobel laureate and President of the Royal Society of London, approached the problem of the magnetic fields of cosmic bodies in another way [19] (see Figure 16).

He suggested that a magnetic field is generated not only by a moving electric charge, but also by any moving neutral mass. Later it was assumed that this might be a consequence of the electric charges of the electron and the proton not being exactly equal to each other. It was estimated that their difference could be very small, only about $10^{-18}e$. However, such a negligible difference would be enough for all cosmic bodies, owing to their rotation around their own axes, to possess a magnetic field of about the magnitude obtained from measurements.

Naturally, in this approach there must be a connection between the magnetic moment of a cosmic body μ and its angular momentum L. Blackett showed that the ratio of these quantities (the gyromagnetic ratio) depends only on world constants:

$\vartheta = \dfrac{\mu}{L} = \dfrac{\sqrt{G}}{c},$ (3.9)

where $G$ is the gravitational constant and $c$ is the velocity of light.
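For orientation, the sketch below evaluates $\sqrt{G}/c$ in CGS units and compares it with a rough estimate of $\mu/L$ for the Earth; the Earth's dipole moment and angular momentum are ordinary handbook values inserted here for illustration, not data from the paper.

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2, gravitational constant
c = 2.9979e10       # cm/s, speed of light
print(math.sqrt(G) / c)      # ~8.6e-15 (CGS units), Blackett's ratio

mu_earth = 8.0e25   # Gs*cm^3, Earth's magnetic dipole moment
L_earth  = 5.9e40   # g*cm^2/s, Earth's rotational angular momentum
print(mu_earth / L_earth)    # ~1.4e-15: the same order of magnitude as sqrt(G)/c,
                             # the level of agreement visible in Figure 14.
```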

However, Blackett's hypothesis was rejected despite its beauty and attractiveness; Blackett himself abandoned it. High-precision experiments conducted by Blackett, as well as by other experimenters, showed that electrically neutral massive bodies do not create magnetic fields of the required intensity under laboratory conditions.

The Measurement Data of Magnetic Fields of Cosmic Bodies

In the first half of the twentieth century, many geophysicists (see Figure 17) were involved in the problem of terrestrial magnetism. They saw as their first-priority task

Figure 16. Nobel laureate baron Patrick Maynard Stuart Blackett (1897-1974).

Figure 17. My father, Basil I. Volkov, scientist-geophysicist. He was searching for a solution to the problem of terrestrial magnetism; he was killed by Stalin's torturers at the age of 39.

the construction of a theory that would explain why the main magnetic field of the Earth near its poles is approximately equal to 1 Oe.

In the second half of the twentieth century, this formulation of the problem became unacceptable, because by then this geophysical problem had developed into a special case of the more general problem of the magnetism of cosmic bodies.

Spacecraft flights in the second half of the twentieth century and the overall progress of astronomical technology revealed a remarkable, previously unknown fact: the magnetic moments of all cosmic bodies of the Solar system, as well as of a number of stars and pulsars, are proportional to the angular momenta of these cosmic bodies (Figure 14), as they should be according to Blackett's conjecture.

It is remarkable that this dependence remains linear over a range of about 20 orders of magnitude!

This phenomenon is quite simple to explain, given the GIEP effect in the plasma of all large cosmic bodies [20] .

However, there is a peculiarity in the formation of terrestrial magnetism. The pressure and temperature inside the Earth are not as high as in stars. While stars consist of electron-nuclear plasma, in the central region of the Earth only electron-ion plasma can exist. This requires careful consideration for a successful theoretical description of the Earth's magnetism [20] .

4. Conclusions

The development of physics in the twentieth century led to the appearance of many new branches. At first, many of these discoveries gave the impression of a certain mystery. Thus, for several decades after its discovery many scientists called superconductivity the most mysterious phenomenon in the physics of condensed matter. The penetrating power of neutrinos is still often called mysterious. To explain mysterious phenomena, new concepts were often introduced in the twentieth century: thus, for example, the strong and weak fundamental interactions, gluons, quarks with a fractional charge, etc. appeared. This method of constructing theories is valid only under one condition: the construction of the theory must be carried out in full accordance with Gilbert's postulate.

It is obvious that without full confirmation by measurement data, the theories constructed in this way turn out to be speculations.

In some cases, when a theory of this type was presented with the help of a complex mathematical apparatus, it seemed that the conclusions following from it had found their mathematical confirmation, and that this was enough to recognize its correctness.

However, such mathematical confirmation, as well as confirmation of a theory by means of systematization and the construction of tables, must not replace experimental verification. At the same time, due to the great intricacy of some theories, it becomes important how successfully these theories explain ALL the properties of the object under study, or at least ALL its MAIN properties.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Vasiliev, B.V. (2018) Gilbert’s Postulate and Some Problematic Physical Theories of the Twentieth Century. Journal of Modern Physics, 9, 2101-2124. https://doi.org/10.4236/jmp.2018.912132

References

1. Gilbert, W. (1600) De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure. Peter Short, London.

2. Landau, L.D. (1941) JETP, 11, 592.

3. Khalatnikov, I.M. (1965) Introduction into the Theory of Superfluidity. Nauka, Moscow.

4. London, F. (1937) Transactions of the Faraday Society, 33, 8. https://doi.org/10.1039/tf937330008b

5. Vasiliev, B.V. (2015) Superconductivity and Superfluidity. Science PG, New York. http://www.sciencepublishinggroup.com/book/B-978-1-940366-36-4.aspx

6. Landau, L.D. and Lifshitz, E.M. (1971) The Classical Theory of Fields (Volume 2 of A Course of Theoretical Physics). Pergamon Press, New York.

7. Vasiliev, B.V. (2015) International Journal of Modern Physics and Application, 3, 25-38. http://www.aascit.org/journal/archive2?journalId=909&paperId=3935

8. Kikoin, I.K. (1978) Physical Tables. Atomizdat, Moscow. (In Russian)

9. Donnelly, R.J. and Barenghi, C.F. (1977) Journal of Physical and Chemical Reference Data, 6, 51-104. https://doi.org/10.1063/1.555549

10. Vasiliev, B.V. (2015) Journal of Modern Physics, 6, 648-659. http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=55921 https://doi.org/10.4236/jmp.2015.65071

11. Vasiliev, B.V. (2017) Journal of Modern Physics, 8, 338-348. http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=74443 https://doi.org/10.4236/jmp.2017.83023

12. Tamm, I.E. (1934) Nature, 134, 1011.

13. Heitler, W. and London, F. (1927) Zeitschrift für Physik, 44, 455-472. https://doi.org/10.1007/BF01397394

14. Vasiliev, B.V. (2015) International Journal of Modern Physics and Application, 3, 25-38. http://www.aascit.org/journal/archive2?journalId=909&paperId=3935

15. Vasiliev, B.V. (2018) Journal of Modern Physics, 9, 257-262. http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=87076

16. Sirag, S.-P. (1979) Nature, 278, 535. https://doi.org/10.1038/278535a0

17. Vasiliev, B.V. (2014) Journal of Physics and Application, 8, 221-225.

18. Campbell, W.H. (2001) Earth Magnetism. Academic Press, New York.

19. Blackett, P.M.S. (1947) Nature, 159, 658. https://doi.org/10.1038/159658a0

20. Vasiliev, B.V. (2015) International Journal of Geosciences, 6, 1233-1247. http://www.scirp.org/journal/PaperInformation.aspx?PaperID=61451