Journal of Modern Physics
Vol. 10, No. 6 (2019), Article ID: 92298, 16 pages
DOI: 10.4236/jmp.2019.106041
Interval Based Analysis of Bell’s Theorem
F. P. Eblen1, A. F. Barghouty2
1Advanced Communications and Navigation Technology Division, Space Communications and Navigation (SCaN) Program, Human Exploration and Operations Mission Directorate, NASA Headquarters, Washington DC, USA
2Astrophysics Division, Science Mission Directorate, NASA Headquarters, Washington DC, USA
Copyright © 2019 by author(s) and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Received: March 28, 2019; Accepted: May 5, 2019; Published: May 8, 2019
ABSTRACT
This paper introduces the concept and motivates the use of finite-interval based measures for physically realizable and measurable quantities, which we call -measures. We demonstrate the utility and power of -measures by illustrating their use in an interval-based analysis of a prototypical Bell’s inequality in the measurement of the polarization states of an entangled pair of photons. We show that the use of finite intervals in place of real-numbered values in the Bell inequality leads to reduced violations. We demonstrate that, under some conditions, an interval-based but otherwise classically calculated probability measure can be made to arbitrarily closely approximate its quantal counterpart. More generally, we claim by heuristic arguments and by formal analogy with finite-state machines that -measures can provide a more accurate model of both classical and quantal physical property values than point-like, real numbers, in the spirit of the interval concept originally proposed by Teruo Sunaga in 1958.
Keywords:
Measurement Theory, Bell’s Theorem, Bell’s Inequality, Interval-Based Analysis, Interval-Based Physical Measures
1. Introduction
We present first two heuristic arguments, followed by theoretical and numerical demonstrations, to motivate a concept that replaces point-like, real-numbered physical property values with intervals of values we call -measures, which may be weighted by some function. The exact nature of the weighting is not crucial, however, to the interval-based representation. In Sect. 1.1, we show how these arguments suggest that the conventionally assumed assignment of real numbers to represent physical property values may not be tenable (see, e.g., [1] ), and that -measures can instead provide more accurate models of manifest physical reality. In Sect. 1.2, we introduce the concept of finite intervals as defined and practiced in computing theory. In Sect. 2, we apply the -measure concept to a new analysis of Bell’s theorem, using well-established interval-analysis theorems to show that violations of the classically derived Bell’s inequality may thereby be reduced, with expectations approaching arbitrarily closely their quantum-prediction counterparts that are consistent with the results of Bell tests. In Sect. 3, we offer concluding, general remarks highlighting how the use of finite intervals to represent physically measurable quantities may have a significant impact on the analysis of physical systems, both classical and quantal, and how, in particular, the new results derived from interval-based analysis may also impact technologies based on them.
1.1. -Measure Description of a Physically Measurable Quantity
Measurement of any physical property value and generic manifestation of any physical property value are equivalent processes at some fundamental level. This equivalence is foundational to environmental decoherence theory, since certain manifestations of value are physically realized1 via “implicit measurement” of objects by the environment in which they exist [2] . This is precisely the essence of the equivalence claim: object properties become physically manifest through unavoidable implicit measurement resulting from any and every interaction. It follows that certain attributes of any process to measure physical values are common to any generic process of manifestation of physical values. For example, all physically realizable measurements are performed using resolution-limited devices and processes, so generic manifestations of physical values are equally resolution-limited. Therefore, no physically realizable measurement and no physically realizable manifestation of any physical property value can be expected to be represented by a single, point-like, real number. Such an assignment would require the realization of infinite (physical) resolution. Infinite resolution is clearly untenable and hence a non-physical abstraction. This suggests that any assignment that relies on a finite resolution can only be manifest as finite intervals of values, i.e., “ -measures”.
From a communication-theoretic point of view, suppose a communication signal could have a producible and detectable parameter represented by a real number. Since real numbers are infinitely precise and can be represented mathematically [1] only by an infinite number of digits, such a signal would contain an infinite amount of information. Conveying this signal from one point to another would constitute an infinite change of entropy, or an exchange of infinite information [3] , in a finite time through a channel of finite spectral width, which is not physically possible even if the channel were noise-free. Therefore, the signal parameter cannot be validly represented by a single real number. A -measured parameter, on the other hand, has finite precision and finite information content, requiring only a finite spectral width and a finite time to convey. Further, the physics principle known as the Bekenstein bound [4] dictates that infinite entropy, or information, cannot exist in a finite region of space with finite energy, which can be interpreted as precluding both the production and the detection of any signal with a real-numbered parameter. It is interesting to note that -measures are “naturally” endowed with an interval entropy and related information content [5] .
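To make this counting argument concrete, the following minimal Python sketch computes the number of bits needed to specify a parameter only to within an interval of a given width inside a unit range; the unit range and the particular widths are illustrative assumptions, not quantities taken from the text.

```python
import math

def interval_information_bits(value_range: float, interval_width: float) -> float:
    """Bits needed to specify which interval of the given width, out of a
    range of possible parameter values, a measured parameter falls in."""
    return math.log2(value_range / interval_width)

# Illustrative values only: a parameter confined to a unit range and resolved
# to intervals of decreasing width.  As the width shrinks toward zero (a
# point-like real number), the required information diverges.
for width in (1e-3, 1e-6, 1e-12):
    print(f"width {width:g}: {interval_information_bits(1.0, width):.1f} bits")
```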
Using these and other similar arguments, we assert that realizable and measurable property values are more accurately modeled by -measures than by point-like real-numbered values of zero measure. We further assert that -measures apply to both classical and quantal physically realizable values. At some level, the interval ( -measure) model is in conflict with the convention of classical physics to assume measurable property values are mathematically represented by real numbers; this conventional representation may be too restrictive.
The conflict is perhaps less pronounced for quantal measurement outcomes due to the intrinsic uncertainties and ambiguities in a quantal description, but there is a critical difference in -measured quantal superposition and conventional quantal superposition: -measured outcome values, even when weighted by some appropriate function, are not envisaged to be associated with a probability metric across eigenvalues. Because -measure intervals apply to each single measurement, or manifestation of value, the eigenvalues within an interval are assumed associated with a non-statistical ontic metric. While the exact definition and meaning of this ontic metric is not yet clear (and the subject of a follow-on paper), the assertion that it is non-statistical means that single measurement outcomes have distributed value, i.e., they are -measures. This interval-based representation suggests that all realizable quantum states that result from measurement are comprised of simultaneously physically existing eigenstates. Every physically realizable quantum state is a superposition of multiple states in every realizable basis, i.e., a basis with physically measurable eigenvalues. A -measured state cannot be represented by a single, real-numbered direction in an abstract space of realizable eigenvalues.
-measured quantum state definitions open the opportunity to form an entropy metric calculated just as Shannon information entropy [3] is calculated from a symbol-alphabet probability density function, i.e., $H = -\sum_x p(x)\log p(x)$, where $p(x)$ is defined as the modulus squared of the state vector as a function of the x eigenvalues. A critical difference in a -measured entropy, however, is that the function value is an ontic, or physical, metric as opposed to an epistemic, or informational/probability, metric. This is because the eigenstates of a -measured state are treated as simultaneously physically existing eigenstates in superposition; yet the entropy of the state can never be zero [4] in any realizable basis, since this would require a single real-numbered eigenvalue, a non-realizable entity in the -measure concept (see, e.g., [6] ).
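As a rough numerical illustration of such an entropy metric, the sketch below discretizes the modulus squared of a state vector on a grid of eigenvalues and evaluates the Shannon-style sum; the Gaussian packet, the grid, and the use of bits as units are illustrative assumptions rather than choices made in the analysis above.

```python
import numpy as np

def interval_state_entropy(psi: np.ndarray, dx: float) -> float:
    """Shannon-style entropy of |psi(x)|^2 sampled on a grid with spacing dx.
    Here p(x) = |psi(x)|^2 is treated as the weight over eigenvalues inside
    the interval, normalized so the discrete weights sum to one."""
    p = np.abs(psi) ** 2 * dx
    p = p / p.sum()                        # normalize the discrete weights
    p = p[p > 0]                           # 0*log(0) -> 0 by convention
    return float(-(p * np.log2(p)).sum())  # entropy in bits (illustrative unit)

# Example: a Gaussian wave packet sampled over a finite eigenvalue interval.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0) ** 0.5           # |psi|^2 is a unit-width Gaussian
print(f"entropy: {interval_state_entropy(psi, dx):.3f} bits")
```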
Application of -measures to physical values is analogous to the application of intervals to the values typical of finite-state machines, which are incapable of specifying or processing real-numbered values. The application of interval analysis herein to all physically realizable property values is likewise suggested for fundamentally similar reasons. Physical objects, systems of objects, and processes, such as classical and quantum measurement, are limited in their ability to manifest real-numbered property values by parameters such as spectral limits, process and time limits, and various other constraints. Both classical and quantal physically realizable objects and systems thus can be viewed in some sense as finite-state machines.
1.2. -Measures Represented by Finite Intervals
The mathematical formalism of interval analysis was developed, and has seen its primary application, in computing theory for numerical analysis and mathematical modeling. It is a relatively recent cross-disciplinary field pioneered by M. Warmus [7] , T. Sunaga [8] , R. Moore [9] , and U. Kulisch [10] (for these and other early contributions, see [11] ). According to [11] , it was Sunaga [8] who first foresaw the fundamental connection between the mathematical concept of an interval and its applications to real systems and applied analysis. Applications to the physical sciences, however, have thus far been limited to studies of formal systems through the “intervalization” of their representative differential or algebraic equations [12] [13] .
The concept of an interval was spawned by the need, in the above numerical applications, to enclose a real number that can be specified only with limited accuracy, i.e., one that cannot be exactly represented on any finite-precision machine. In physical systems, inaccuracy in measurement, coupled with known or unknown uncertainty and variability in physical parameters, initial and boundary conditions, etc., formally inhibits the manifestation of measurable quantities as real numbers to be treated via the machinery of real-number arithmetic and algebra. Special axioms and a special interval arithmetic and algebra were clearly needed to endow the new field with rigorous mathematical foundations.
In numerical analysis, finite intervals of one or more dimensions are seen as extensions of real (or complex) numbers. As mathematical objects, intervals in themselves do not form proper vector spaces [14] [15] . Interval arithmetic and interval algebra have nonetheless been developed by abstracting their real-numbered counterparts, based primarily on set theory and algebraic geometry [16] . However, compared to real-number objects, intervals have “extended” properties. As we demonstrate below, these properties provide for a powerful analytical tool in the description and/or analysis of real physical systems when property values are represented by -measures.
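The following minimal Python sketch illustrates the basic arithmetic of such intervals (endpoint-wise addition and subtraction, and multiplication via endpoint products); the Interval class and the sample values are illustrative assumptions, not tied to any particular interval-arithmetic library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Sum of intervals: add corresponding endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other: "Interval") -> "Interval":
        # Difference: subtract the opposite endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other: "Interval") -> "Interval":
        # Product: the extreme values among all endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    @property
    def diameter(self) -> float:
        return self.hi - self.lo

# Intervals extend real numbers: a degenerate interval [a, a] behaves like a.
X = Interval(1.9, 2.1)          # a "2" known only to within +/- 0.1
Y = Interval(2.9, 3.1)
print(X + Y, X * Y)             # roughly [4.8, 5.2] and [5.51, 6.51]
```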
2. Application to Bell’s Theorem
In Sect. 2.1, we demonstrate how, under some conditions, an intervalized but otherwise classically calculated correlation function can be made to come arbitrarily close to its quantal counterpart. The demonstration is essentially a re-casting, using intervals and interval analysis, of a limiting case derived by Bell [17] . Using proxies for the interval-valued correlation functions, together with two basic theorems of interval analysis, we suggest that, under some conditions, the two can be made to come arbitrarily close to each other. In Sect. 2.2, we test this assertion by applying it to a prototypical measurement of the polarization states of an entangled pair of photons.
2.1. Theoretical Illustration
We demonstrate in this section the validity of the following assertion: when expressed as interval-valued functionals, as opposed to real-number-valued functions, the distance between a classically calculated correlation function of two measured interval quantities and its quantal counterpart can be shown (under some conditions) to approach zero arbitrarily closely.2
Let the two measured interval-variables be X and Y, where we assume that both are one-dimensional intervals (generalization to higher dimensions may not be trivial, see, e.g., [18] ). We denote their real-numbered values, i.e., their degenerate values, as x and y, and we denote the unit vectors along their directions accordingly. By definition, a classically calculated correlation function of X and Y will always involve a weighted sum, over some parameter, of their inner product. For the sake of this demonstration we do not distinguish between a Riemann and a Lebesgue integration; we only assume the existence of an integrable real-valued function or functional. Its quantal counterpart, assuming an entangled singlet state, is a dot product (in the same metric space) of the corresponding unit vectors. For real-valued inner and dot products, it has been shown ( [17] , Equation (18)) that
(1)
where the small number appearing in Equation (1) cannot be made arbitrarily small, i.e., it will always be bounded from below due to the finite precision of any physical measurement. Our demonstration of the assertion made above is essentially a recasting of Equation (1) in its interval analog for the intervals X and Y, but one in which the analog of this bound can be made arbitrarily small. The conditions pertain to our assumed low dimensionality of the intervals and of the unit vectors, in addition to the assumed forms of the inner and dot products, our proxy correlation functions.
In lieu of the inner product, we will have an interval-valued integral function, or a functional, and in lieu of the dot product for unit vectors, an assumed interval-valued functional related to the range of the first. The interval analog of the inner product can be written as
(2)
where lower and upper refer to the lower and upper Darboux integrals [13] . It is important to note that
(3)
over the same interval “[Z]”, which follows from our assumed interval extension. Although generalization to an extended parameter is straightforward and could present interesting cases for further analysis, for purposes of this demonstration we take the parameter to be the same in both the real-valued and interval-valued cases. Since the interval has a finite range, the interval analog can be written as:
(4)
where the expression is a functional of Z. Note that the above form is not unique; uniqueness, however, is not required for this demonstration. Nor is the exact form of the interval-valued function F required, only that it be analytic and convergent over an interval that includes Z, and that over this interval the derivative of F exists and does not contain zero. These general properties [13] follow since F is assumed to be an “extension” of the real-valued function f, i.e., the integral function introduced above; since,
(5)
we have for F,
(6)
where denotes the diameter (or width) of its interval argument.
A fundamental property of any extended function, F, of an interval is its “enclosure” property, i.e.,
(7)
where the first quantity is the range of the function F over the interval, now a “functional” of the interval, and the relation symbol denotes “a subset of”. Almost all derived properties of intervals, including their mapping, differentiation, integration, and differential (or integral) equation-based applications, are based on the enclosure property [12] [13] . We will use this property below.
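A small sketch of the enclosure property, reusing the Interval class from the earlier sketch and assuming the simple function f(x) = x(1 − x) purely for illustration: the exact range of f over an interval is always contained in the natural interval extension F evaluated on that interval.

```python
import numpy as np

def f(x):
    # Real-valued test function, chosen only for illustration.
    return x * (1.0 - x)

def F(X: Interval) -> Interval:
    """Natural interval extension of f: replace x by the interval X and
    evaluate with interval arithmetic (Interval class defined earlier)."""
    one = Interval(1.0, 1.0)
    return X * (one - X)

X = Interval(0.0, 1.0)
xs = np.linspace(X.lo, X.hi, 10001)
true_range = (f(xs).min(), f(xs).max())   # exact range of f over [X]: ~[0, 0.25]
enclosure = F(X)                          # the natural extension gives [0, 1]
print(true_range, enclosure)              # the exact range is enclosed by F([X])
```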
The extended functional is further assumed divisible into smaller subintervals, where it can be regarded as the union of these subintervals,
(8)
and where “smaller” refers to the diameter of each subinterval being reduced by the factor k.
For the interval-valued dot product of the unit vectors, being a projection of one unit vector onto the other, and since its range is likewise an interval, we make an ansatz that connects it with the functional F via its range,
(9)
or with any linear function of that range, where the range of F is taken over the interval Z. Again, this relation is not unique.
Next, we take advantage of two basic theorems of interval analysis [9] [12] . The first concerns the distance between two intervals, also referred to as the Hausdorff distance [12] , . For our demonstration, the distance between and is given by:
(10)
where , and the constants . denotes the maximum norm. Applied to the subdivided , the Hausdorff distance becomes
(11)
What the above theorem suggests is that , our proxy for the intervalized dot product, can be arbitrarily close to , our proxy for the intervalized inner product, if the subdivision of is made sufficiently fine. Clearly, this is only true under the conditions (i.e., low dimensionality of the intervals and the unit vectors) and assumptions made (i.e., the assumed specific forms of the inner and dot products). Applications to different forms and/or any generalization are clearly beyond the scope of the assertion.
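The two theorems can be illustrated numerically with the same toy function used above. The sketch below, reusing the Interval class and the extension F from the earlier sketches, computes the Hausdorff distance between the exact range and the union of extensions taken over k equal subintervals, and shows the distance shrinking as the subdivision is refined, consistent with the d(X)/k-type bound quoted in the text; the function and the values of k are illustrative assumptions.

```python
def hausdorff(A: Interval, B: Interval) -> float:
    """Hausdorff distance between two intervals: the larger of the two
    endpoint separations."""
    return max(abs(A.lo - B.lo), abs(A.hi - B.hi))

def refined_extension(X: Interval, k: int) -> Interval:
    """Union of the natural extension F over k equal subintervals of X."""
    edges = [X.lo + i * X.diameter / k for i in range(k + 1)]
    pieces = [F(Interval(edges[i], edges[i + 1])) for i in range(k)]
    return Interval(min(p.lo for p in pieces), max(p.hi for p in pieces))

true = Interval(0.0, 0.25)                # exact range of f(x) = x(1-x) on [0, 1]
for k in (1, 2, 4, 8, 16, 32):
    d = hausdorff(refined_extension(Interval(0.0, 1.0), k), true)
    print(f"k = {k:2d}: Hausdorff distance to the exact range = {d:.4f}")
# The distance decreases as the subdivision is made finer, illustrating how a
# refined interval evaluation can approach the enclosed range arbitrarily closely.
```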
2.2. Numerical Illustration
Our first application of the -measure concept is to an interval-based analysis of a prototypical [19] [20] form of Bell’s inequality [17] [21] . The quantum-mechanical probability for a measurement of the polarization states of an entangled pair of photons can be shown to be proportional to the cosine (or sine) squared of the measured polarization angles [21] . (See Appendix for an illustrated structure of a prototypical Bell test using the entangled spin states case, which is, in essence, the same as the polarization states but easier to illustrate.) If is the measured polarization angle detected by detector 1 of photon 1, and similarly for , the probability of detecting a photon along the 2-dimensional axes, , of each detector is
(12)
for each of the four possible combinations that add up to unity. When a third detector is introduced, a Bell’s inequality can constrain the degree of polarization correlation among the angular separations in such a way that
(13)
To intervalize Equation (13), we re-express the measured angles as angle intervals whose widths are set by the total uncertainty in measuring each angle, i.e., including all systematic and random errors in the set-up and the measuring devices. Note that in the limit of vanishing uncertainty an angle interval reduces to a degenerate interval, i.e., a point value. Being a statement about probability measures and their correlations, the form of Equation (13) is retained when expressed as
(14)
Intervalized, Equation (14) suggests that the interval will always include the interval . Note that the sine of an interval is also an interval since the sine function will map every point in the interval argument to a point in the interval image of the function.
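As a numerical illustration of intervalizing such an inequality, the sketch below assumes, for concreteness, a Wigner-type three-angle form with probabilities proportional to the sine squared of the angle differences; this specific form and the chosen angles are illustrative assumptions and not necessarily the exact forms of Equations (13)-(14). The interval image of the left-hand side minus the right-hand side is computed by brute force over the intervalized angles.

```python
import numpy as np

def expr(t1, t2, t3):
    """Assumed prototypical three-angle form (illustrative only): LHS - RHS of
    a Wigner-type inequality with probabilities ~ sin^2 of angle differences.
    A negative value corresponds to a violation."""
    return np.sin(t1 - t3) ** 2 + np.sin(t3 - t2) ** 2 - np.sin(t1 - t2) ** 2

def intervalized_expr(theta, dtheta, n=61):
    """Interval image of expr when each angle theta_i is replaced by the
    interval [theta_i - dtheta, theta_i + dtheta], computed by evaluating
    expr on a grid over the three angle intervals."""
    grids = [np.linspace(t - dtheta, t + dtheta, n) for t in theta]
    T1, T2, T3 = np.meshgrid(*grids, indexing="ij")
    vals = expr(T1, T2, T3)
    return float(vals.min()), float(vals.max())

theta = np.radians([0.0, 50.0, 20.0])              # illustrative angle choice
print("degenerate value:", float(expr(*theta)))    # negative -> violation
for ddeg in (0.1, 1.0, 5.0):
    lo, hi = intervalized_expr(theta, np.radians(ddeg))
    # The point-valued violation becomes an interval that widens with dtheta.
    print(f"dtheta = {ddeg:3.1f} deg: interval [{lo:+.4f}, {hi:+.4f}]")
```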
The enclosure property, Equation (7), can be used, as an example, to show that
(15)
Another intriguing property of interval functionals is their dependence on the algebraic form or structure of the enclosing function f, or its extended pair F. This dependence stems from the set-theoretic attributes of intervals. For example, another form of Equation (14) that is equivalent for degenerate intervals, i.e., real numbers, but is not for finite intervals is
(16)
“Inequality violation” of Equation (14) occurs when the left-hand side of the equation minus the right-hand side becomes negative. This is indeed seen for the quantum-mechanically calculated probabilities at various angles and over extended domains of non-zero measure (see Figure 1). For our demonstration, however, all we need is to carefully choose a small set of angles (or even a single set of angles) at which Equation (13) is violated by an amount much larger than a typical experimental error in the angular measurement, typically ~0.1 deg. For clarity of illustration we choose the first angle to be identically zero and the other two angles such that the surface dips appreciably, ~0.1, below the zero plane. The expected standard deviation in Equation (13), given non-zero angular uncertainties, can easily be calculated to be only ~10^−4.
For each value of the angular uncertainty, we calculate the probability (at the chosen angles) of no violation. This is when the difference between the two sides of Equation (14) crosses the zero plane. We assume that both the intervalized difference and the difference calculated using error propagation are centered Gaussians. To arrive at the probability of no violation, we simply integrate from the center of the interval to the zero point, after normalizing to unity and subtracting 1/2. Since, for purposes of this demonstration, we do not
Figure 1. Density plot of Equation (13) evaluated at for . The surface is seen to dip below zero for some angles.
ascribe any weighting function to the intervalized angles, the calculated probabilities are more representative of upper limits than of most likely values.
Figure 2 shows the calculated probability of no violation as a function of the size of the error in the angle measurement. From Figure 2 we see that intervalizing the measured polarization angles, i.e., using their -measures, and using interval arithmetic to calculate probabilities and correlations can lead to re-interpreting violations as non-violations, with the probabilities as estimated above. The calculated probabilities of no violation themselves show a strong, nonlinear, geometric relation to the assumed uncertainty in the measured angles. This is due both to the structure of Equation (14) and to the size of the error in the measured angles, i.e., not just to the presence of the error itself. For this particular photon polarization-angle example, Figure 2 suggests that uncertainties in the measured angles need to be less than 0.05 deg to differentiate a clear violation from no violation.
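One plausible reading of the no-violation estimate described above is the upper tail of a centered Gaussian beyond the zero point. The sketch below implements that reading; the dip depth of 0.1 and the trial standard deviations are illustrative values rather than the specific numbers behind Figure 2.

```python
import math

def no_violation_probability(center: float, sigma: float) -> float:
    """Probability that a Gaussian centered at a (negative) violation value
    nevertheless lands at or above zero: the upper tail beyond the zero point."""
    z = abs(center) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Illustrative numbers only: a dip of ~0.1 below zero, with sigma standing in
# for the spread induced by the angular measurement uncertainty.
dip = -0.1
for sigma in (0.01, 0.05, 0.1, 0.2):
    print(f"sigma = {sigma:4.2f}: P(no violation) ~ "
          f"{no_violation_probability(dip, sigma):.3f}")
```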
As mentioned above, the calculated no-violation probabilities using interval-based quantities appear to depend on the algebraic structure of the inequality itself. A critical parameter in the interval estimation for the probabilities is the Hausdorff distance, Equation (11). In Figure 3, we show the dependence of this distance on the number of subdivisions needed for the proxy classical correlation interval to enclose the proxy quantal correlation interval. The distance is
Figure 2. Probability of no violation of Bell’s inequality versus the total uncertainty in the measured polarization angles. The top set of points is from Equation (14), the intervalized inequality, while the bottom set is from the non-intervalized inequality, Equation (13).
Figure 3. The Hausdorff distance, Equation (11), normalized to the interval diameter as a function of k, i.e., the number of enclosing intervals.
normalized to the diameter of the interval at each k, such that a distance of unity is the smallest possible distance. Here, the subdivision appears to give a rapid (but not excessively rapid) convergence, almost exponential rather than geometric in character. This feature may be important in designing Bell tests optimized for error constraints and for the algebraic form or structure of the inequality. Rapid convergence (the “right” form of the inequality) can compensate for the size of the measurement error. In this particular illustration, however, given the exponential convergence, the form of the inequality, Equation (14), appears to matter less for the calculated no-violation probabilities than the size of the measurement error.
3. Discussion
We have introduced and motivated the use of finite intervals, which we call -measures, to represent physically measurable quantities in place of the real-numbered representation, which we consider untenable. We demonstrate the utility of -measures using theoretical and numerical illustrations. Our theoretical demonstration, an interval-based recasting of Bell’s inequality using proxy correlation functionals, shows that, under some conditions, a classically calculated correlation function of two measured interval quantities and its quantal counterpart can come arbitrarily close to each other. This is in stark contrast to the claim of Bell’s theorem, which assumes that classical property values are real-numbered, that no hidden-variable theory [21] [22] [23] [24] can produce this arbitrary closeness.
In our numerical demonstration, we apply interval analysis to a measurement of the polarization states of an entangled pair of photons. We calculate the probabilities of no violation and demonstrate that quantal violation of the Bell inequality is likely less severe under the assumption of -measured values. This means that Bell tests should be considered less compelling as proof of quantum correlations and non-locality [25] [26] [27] [28] .
These demonstrations, along with our heuristic arguments, motivate the need for and the use of -measures to model physical property values more accurately than the traditionally assumed real-numbered representation. We assert that the interval-based, -measured representation applies to both classical and quantal physical values, and that its broader application to physical theories in general appears both desirable and needed. The development of interval analysis for computing theory, and its application to finite-state computing machines, was predicated on the need to represent numerical values and quantities that are, by necessity, only approximate in a real-world computing machine. It may at first seem counterintuitive that, for example, any microscopic or macroscopic object can have two or more simultaneous values for any one of its physical properties. Upon analysis, however, it becomes evident that distributed values as provided by -measures are more tenable than real-numbered values, just as in finite-state machines. We therefore suggest that the application of -measures and interval analysis should see rapid and pervasive growth in applications to many physical and other theories.
More than 60 years ago, the mathematician Teruo Sunaga, working in the field of communication theory at the University of Tokyo, wrote [8] : “The interval concept is on the borderline linking pure mathematics with reality and pure analysis with applied analysis.” Since that time, however, the application of interval analysis has been almost entirely restricted to the theory of computing machines. It is past time that Sunaga’s vision and seminal contributions regarding interval analysis be realized in broader applications, as they may have dramatic and far-reaching impacts.
More specifically to NASA, the need for advancements in communication and computing theories and related technologies makes broader applications of -measures to physical systems even more compelling. Our own future work on this effort will include more rigorous interval-based mathematical modeling of Bell-like tests, re-formulation of some well-known models of physical systems using interval-based analysis, and a better appreciation of the benefits and limitations of the new analysis when applied to physical theory, with the goal of supporting the advancement of quantum-based analysis, modeling, and technologies.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
Cite this paper
Eblen, F.P. and Barghouty, A.F. (2019) Interval Based Analysis of Bell’s Theorem. Journal of Modern Physics, 10, 585-600. https://doi.org/10.4236/jmp.2019.106041
References
- 1. Gisin, N. (2018) Why Bohmian Mechanics? One- and Two-Time Position Measurements, Bell Inequalities, Philosophy, and Physics. Entropy, 20, 105. https://doi.org/10.3390/e20020105
- 2. Schlosshauer, M. (2014) The Quantum-to-Classical Transition and Decoherence. https://arxiv.org/abs/1404.2635v1
- 3. Shannon, C.E. (1948) A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379-423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
- 4. Bekenstein, J.D. (1981) A Universal Upper Bound on the Entropy to Energy Ratio for Bounded Systems. Physical Review D, 23, 287. https://doi.org/10.1103/PhysRevD.23.287
- 5. Sunoj, S.M., Sankaran, P.G. and Maya, S.S. (2009) Characterization of Life Distributions Using Conditional Expectations of Doubly (Interval) Truncated Random Variables. Communications in Statistics—Theory and Methods, 38, 1441-1452. https://doi.org/10.1080/03610920802455001
- 6. Misagh, F. and Yari, G. (2012) Interval Entropy and Informative Distance. Entropy, 14, 480-490. https://doi.org/10.3390/e14030480
- 7. Warmus, M. (1956) Calculus of Approximations. Bulletin de l’Académie Polonaise des Sciences, 4, 253-257.
- 8. Sunaga, T. (2009, Original Publication 1958) Theory of an Interval Algebra and Its Application to Numerical Analysis. Japan Journal of Industrial and Applied Mathematics, 26, 125-143.
- 9. Moore, R.E. (1966) Interval Analysis. Prentice-Hall, Englewood-Cliffs.
- 10. Kulisch, U. (1969) Grundzüge der Intervallrechnung. In: Jahrbuch Überblicke Mathematik 2, Bibliographisches Institut, Mannheim.
- 11. Markov, S. and Okumura, K. (1999) The Contribution of T. Sunaga to Interval Analysis and Reliable Computing. In: Csendes, T., Ed., Developments in Reliable Computing, Springer Science & Business Media, Berlin, 163. https://doi.org/10.1007/978-94-017-1247-7_14
- 12. Alefeld, G. and Mayer, G. (2000) Interval Analysis: Theory and Applications. Journal of Computational and Applied Mathematics, 121, 421-464. https://doi.org/10.1016/S0377-0427(00)00342-3
- 13. Moore, R.E., Kearfott, R.B. and Cloud, M.J. (2009) Introduction to Interval Analysis. Society for Industrial and Applied Mathematics, Philadelphia. https://doi.org/10.1137/1.9780898717716
- 14. Markov, S. (2016) On the Algebra of Intervals. Reliable Computing, 21, 80-108.
- 15. Kosheleva, O. and Kreinovich, V. (2016) Towards an Algebraic Description of Set Arithmetic. Technical Report UTEP-CS-16-90, University of Texas, El Paso.
- 16. Kaucher, E. (1980) Interval Analysis in the Extended Interval Space IR. In: Alefeld, G. and Grigorieff, R.D., Eds., Fundamentals of Numerical Computation (Computer-Oriented Numerical Analysis), Computing Supplementum Vol. 2, Springer, Berlin, 33. https://doi.org/10.1007/978-3-7091-8577-3_3
- 17. Bell, J.S. (1964) On the Einstein Podolsky Rosen Paradox. Physics, 1, 195-200. https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195
- 18. Piegat, A. and Landowski, M. (2012) Is the Conventional Interval Arithmetic Correct? Journal of Theoretical and Applied Computer Science, 6, 27-44.
- 19. Clauser, J., Horne, M., Shimony, A. and Holt, R. (1969) Proposed Experiment to Test Local Hidden-Variable Theories. Physical Review Letters, 23, 880-884. https://doi.org/10.1103/PhysRevLett.23.880
- 20. Aspect, A., Grangier, P. and Roger, G. (1982) Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell’s Inequalities. Physical Review Letters, 49, 91-94. https://doi.org/10.1103/PhysRevLett.49.91
- 21. Bell, J.S. (1966) On the Problem of Hidden Variables in Quantum Mechanics. Reviews of Modern Physics, 38, 447-452. https://doi.org/10.1103/RevModPhys.38.447
- 22. Bohm, D. (1952) A Suggested Interpretation of the Quantum Theory in Terms of “Hidden Variables” Parts I and II. Physical Review, 85, 166-193. https://doi.org/10.1103/PhysRev.85.180
- 23. Mermin, N.D. (1993) Hidden Variables and the Two Theorems of John Bell. Reviews of Modern Physics, 65, 803-815. https://doi.org/10.1103/RevModPhys.65.803
- 24. Einstein, A., Podolsky, B. and Rosen, N. (1935) Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47, 777-780. https://doi.org/10.1103/PhysRev.47.777
- 25. Mattuck, R.D. (1981) Non-Locality in Bohm-Bub’s Hidden Variable Theory of the Einstein-Podolsky-Rosen Paradox. Physics Letters A, 81, 331-332. https://doi.org/10.1016/0375-9601(81)90081-5
- 26. de Muynck, W.M. (1986) The Bell Inequalities and Their Irrelevance to the Problem of Locality in Quantum Mechanics. Physics Letters A, 114, 65-67. https://doi.org/10.1016/0375-9601(86)90480-9
- 27. Zukowski, M. and Brukner, C. (2014) Quantum Non-Locality. Journal of Physics A, 47, Article ID: 424009. https://doi.org/10.1088/1751-8113/47/42/424009
- 28. Aharonov, Y., Botero, A. and Scully, M. (2001) Locality or Non-Locality in Quantum Mechanics: Hidden Variables without “Spooky Action-at-a-Distance”. Zeitschrift für Naturforschung A, 56, 5-15. https://doi.org/10.1515/zna-2001-0103
- 29. Dehlinger, D. and Mitchell, M.W. (2002) Entangled Photons, Nonlocality, and Bell Inequalities in the Undergraduate Laboratory. American Journal of Physics, 70, 903-910. https://doi.org/10.1119/1.1498860
- 30. Maccone, L. (2013) A Simple Proof of Bell’s Inequality. American Journal of Physics, 81, 854-859. https://doi.org/10.1119/1.4823600
Appendix: The Structure of a Bell Test
In 1964, the Northern Irish physicist John S. Bell proposed a theorem that made it possible to test for the existence of quantal correlations between entangled objects. His theorem showed that violations of a classically derived probability inequality can be tested experimentally, establishing that classical correlations of detected particles cannot be made arbitrarily close to quantal correlations (see, e.g., [29] [30] , and references therein). We illustrate the Bell theorem and Bell tests with an example test structure having these key elements (see Figure 4): 1) a source of twin photons, P1 and P2, entangled with the same quantum spin state; 2) a set of two detectors, D1 and D2, one for each of the entangled pair; and 3) an adjustable relative angle between the two detectors, along with relationships of the classical and the quantal correlation functions to the relative detector angle (Figure 5).
If quantal correlations are as predicted by the theory, Bell test data show a cosine-squared relationship of correlation with respect to the relative detector angle (the red curve in Figure 5). If classical correlations are correct, on the other hand, the relationship will be linear (the blue curve in Figure 5). Figure 6 and Figure 7 illustrate the justifications for the linear and the cosine-squared relationships, respectively.
The classical and quantum correlations are most easily illustrated using photons with the same spin, though the case of twin polarization photons is essentially the same. The angle of Detector 1, designated D1, is used as a reference angle of 0 deg, and the angle of D2 is measured relative to D1. For the classical case, the green arc in Figure 6 shows where these detectors will agree, i.e., be correlated, while the red arc shows where they will disagree. Clearly, as the relative angle increases, the green arc diminishes linearly and the red arc increases linearly. This shows the relationship of correlation to relative detector angle to be linear for the classical case. Linearity can also be appreciated to stem from the assumed uniform distribution of the random spin direction.
The quantal case is very different, as illustrated by Figure 7. Quantum theory dictates that when D1 detects the P1 spin, for example, spin up, the quantum spin state of P2 must assume the same spin angle, i.e., spin up. So P2 must strike D2 with the D1-detected angle of P1. But since D2 is at a relative angle with respect to D1, the P2 quantum spin state must be projected onto D2, i.e., multiplied by the cosine of the relative detector angle.
Figure 4. A notional Bell test setup. Key elements are 1) a source of twin photons, P1 and P2, entangled with the same quantum spin state, 2) a set of two detectors, D1 and D2, one for each of the entangled pair, and 3) an adjustable relative angle, , between the two detectors.
Figure 5. Prototypical classical correlation (in blue) and quantum correlation (in red) as functions of the relative detector angle .
Figure 6. Classical correlation as a linear function of ; linear increases in cause correspondingly linear changes in the green and red arc lengths.
Since quantum probability is the square of the state amplitude, the multiplier becomes the cosine squared of that angle. This means that the probability of a D1 detection being the same as a D2 detection, i.e., the probability of agreement, or correlation, is a function of the cosine squared of the relative detector angle.
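A toy Monte Carlo sketch of the two predictions is given below, under the simplifying assumption of a hidden polarization direction shared by both photons and analyzers that pass a photon when its polarization lies within 45 degrees of their axes; this is only one simple way to realize the linear classical curve of Figure 6, while the quantum column simply evaluates the cosine-squared law described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def classical_agreement(phi: float, n: int = 200_000) -> float:
    """Toy hidden-polarization model (illustrative assumption): both photons
    carry the same random polarization direction lam; an analyzer at angle a
    gives +1 when lam lies within 45 degrees of its axis (mod 180 degrees).
    Returns the fraction of trials on which D1 (at 0) and D2 (at phi) agree."""
    lam = rng.uniform(0.0, np.pi, n)
    d1 = np.cos(2.0 * lam) >= 0.0
    d2 = np.cos(2.0 * (lam - phi)) >= 0.0
    return float(np.mean(d1 == d2))

# Classical agreement falls off linearly with phi (the shrinking green arc of
# Figure 6), while the quantum prediction follows cos^2(phi) (Figure 7).
for deg in (0, 15, 30, 45, 60, 75, 90):
    phi = np.radians(deg)
    print(f"{deg:2d} deg: classical ~ {classical_agreement(phi):.3f}   "
          f"quantum cos^2 = {np.cos(phi)**2:.3f}")
```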
So, if quantum predictions are correct, Bell test data will reproduce the red curve in Figure 5 for many measurements of random spins and random detector angles. If classical predictions are correct, the blue curve will be reproduced. Many actual Bell tests have consistently reproduced the quantum prediction. However, there is a critical built-in assumption for the classical case and the Bell inequality: that property values, such as spin or polarization, are real-numbered values.
But, as we have argued in this paper, if one replaces real-numbered values with “quasi-classical” interval values, or -measures, the differences between the two results may not be as pronounced or as clearly differentiated, at least under some conditions (see Figure 8). One obvious consequence of this finding is that the case for using conventional Bell tests to demonstrate quantum correlations may be less compelling under a -measure representation than under a real-number representation.
Figure 7. Quantum correlation as a function of . P2 assumes the direction of the P1 state detected by D1, e.g., spin up. This state is then projected onto the D2 up direction. The projected amplitude is squared so as to get the absolute probability.
Figure 8. For -measured (intervalized) spin angles, correlated and uncorrelated regions can overlap.
NOTES
1We use the term “physically realized”, while admittedly not rigidly definable, because it offers a working definition of the notion of a physically realized entity as one that can exist in and have influence on physical reality, while having physical properties that are, in principle, measurable by a physical device. It is to be contrasted with an abstracted physical property, which may be formally useful but may not be measurable by a physical device.
2By “demonstration” we mean here that what follows is neither a rigorous nor a general validation of the above assertion.