We devise an approach to Bayesian statistics and its application to the analysis of the Monty Hall problem. We combine knowledge gained through applications of the Maximum Entropy Principle and Nash equilibrium strategies to provide results concerning the use of Bayesian approaches unique to the Monty Hall problem. We use a model to describe Monty’s decision process and clarify that Bayesian inference results in an “irrelevant, therefore invariant” hypothesis. We discuss the advantages of Bayesian inference over frequentist inference in tackling the uneven prior probability Monty Hall variant. We demonstrate that the use of Bayesian statistics conforms to the Maximum Entropy Principle of information theory and that the Bayesian approach successfully resolves dilemmas in the uneven probability Monty Hall variant. Our findings have applications in decision making, information theory, bioinformatics, quantum game theory, and beyond.
The famous Monty Hall problem arises from a popular television game show Let’s Make a Deal [
As a mathematical problem, it is important to clarify rules of the game that do not necessarily mirror the realistic game show situation. For the Monty Hall problem, Monty is required, under all circumstances, to open a goat-yielding door not chosen by the contestant [
Once the contestant has chosen a door, there is a 1/3 prior probability that it yields the prize. In general, Bayesian inference indicates that the constraint mentioned above divides the probability space into two sets. One set contains the door initially chosen by the contestant, with prior probability 1/3; the other set contains the two remaining doors, with combined prior probability 2/3.
The controversy focuses on the correct way of updating information between the Bayesian and frequentist approaches to statistics [
In this paper, we discuss and analyze a few variations of the Monty Hall problem to clarify the difference between Bayesian and frequentist inferences. We model the problem as an incomplete information game in which Monty and the contestant have opposing interests [
Bayes theorem serves as an approach to statistical inference by means of conditional probabilities. Bayes theorem states that for two events C and O, the probability of C given O is P(C|O) = P(O|C) P(C) / P(O).
We define C_A, C_B, and C_C as the events that the car is behind door A, B, or C, respectively, and O_B and O_C as the events that Monty opens door B or door C.
If the contestant’s initial choice is wrong (i.e., the car is not behind door A), Monty’s option to open a door is restricted: if the car is behind B or C, the host must open door C or B, respectively. We have P(O_C | C_B) = P(O_B | C_C) = 1.
Given that the car is equally likely to be behind door A, B, or C, we have P(C_A) = P(C_B) = P(C_C) = 1/3.
When the contestant’s initial choice is correct (the car is behind door A), Monty may open either remaining door; we denote by q the probability that he opens door B, so that P(O_B | C_A) = q and P(O_C | C_A) = 1 − q.
Substituting everything into Bayes theorem, we obtain P(C_A | O_B) = q/(1 + q) and P(C_C | O_B) = 1/(1 + q).
Similarly, for Monty opening door C, we have P(C_A | O_C) = (1 − q)/(2 − q) and P(C_B | O_C) = 1/(2 − q).
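The posteriors above can be checked numerically. The sketch below assumes, as in the derivation, even priors of 1/3 and that the contestant's initial choice is door A; the function name is ours.

```python
from fractions import Fraction

def posteriors_given_open_B(q):
    """Posterior P(car location | Monty opens door B), assuming even priors
    of 1/3 and that the contestant's initial choice is door A."""
    priors = {"A": Fraction(1, 3), "B": Fraction(1, 3), "C": Fraction(1, 3)}
    # Likelihoods P(Monty opens B | car location): q if the car is at A
    # (Monty chooses freely), 0 if at B (he may not reveal the car),
    # 1 if at C (he is forced to open B).
    likelihood = {"A": q, "B": Fraction(0), "C": Fraction(1)}
    joint = {d: priors[d] * likelihood[d] for d in priors}
    total = sum(joint.values())
    return {d: joint[d] / total for d in joint}

post = posteriors_given_open_B(Fraction(1, 2))
print(post["A"], post["C"])  # 1/3 2/3
```

Exact rational arithmetic via `Fraction` makes the agreement with q/(1 + q) and 1/(1 + q) exact rather than approximate.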
The frequentist inference estimates Monty’s decision parameter q by means of long-run success frequencies [
Bayesian inference uses expected values to determine the optimal option: the winning probability for switching is P(O_B) P(C_C | O_B) + P(O_C) P(C_B | O_C) = (1 + q)/3 × 1/(1 + q) + (2 − q)/3 × 1/(2 − q) = 2/3, while sticking wins with expected probability 1/3.
The calculated expected values for sticking and switching are independent of q, thereby supporting the argument that a rational decision in the single case is not tied to degrees of belief about the long-run success frequency [
If
In information theory, Shannon entropy is the average amount of information contained in each event. If X is a discrete random variable taking values x_i with probabilities p(x_i), its entropy is H(X) = −Σ_i p(x_i) log₂ p(x_i), where the logarithm is taken to base 2 so that entropy is measured in bits.
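As a concrete reference point, the entropy definition is a one-liner; the convention that zero-probability terms contribute nothing is standard.

```python
from math import log2

def shannon_entropy(probs):
    """H(X) = -sum_i p_i * log2(p_i), in bits; terms with p_i = 0 contribute 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([1/3, 1/3, 1/3]))  # log2(3), about 1.585 bits
print(shannon_entropy([0.5, 0.5]))       # 1.0 bit
```

The uniform three-door prior carries log₂ 3 ≈ 1.585 bits, the maximum possible for three outcomes.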
In the literature discussing the Monty Hall problem, few authors have brought up the concept of conditional entropy. We utilize the Maximum Entropy Principle [
What is the rational response in the case where the host adopts a strategy that minimizes the winning chance of the contestant [
Preceding investigations [
The “coin-flipping” assumption has been used exclusively in the literature [
to either door. Under the “coin-flipping” assumption, the subsequent probabilities can be extracted from Equation (3) or Equation (5). In either case, sticking wins with probability 1/3 and switching with probability 2/3.
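Taking q/(1 + q) and (1 − q)/(2 − q) as the stick-win posteriors (our reconstruction of Equations (3) and (5)), the coin-flipping value q = 1/2 makes the two cases coincide:

```python
from fractions import Fraction

q = Fraction(1, 2)                     # the "coin-flipping" assumption
stick_if_B_opened = q / (1 + q)        # P(car at A | Monty opens B)
stick_if_C_opened = (1 - q) / (2 - q)  # P(car at A | Monty opens C)
print(stick_if_B_opened, stick_if_C_opened)  # 1/3 1/3 -- switching wins 2/3 either way
```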
As seen in the conditional entropy plot, the entropy endpoints are at q = 0 and q = 1. If Monty announces, before the game starts, which door he prefers to open, we have a strongly biased scenario as a variant of the original Monty Hall problem [
In this biased variant, if Monty opens his non-preferred door, switching wins with certainty, since he would have opened the preferred door had the car permitted it. In the case that Monty opens the preferred door, the contestant has a 50:50 winning chance for either sticking or switching. It is important to note that there is a difference between the case where Monty informs the contestant of his decision process before a door is chosen and the case in which he informs the contestant after. In the latter case, Monty has gained additional information, which is not accounted for by frequentist inference.
The frequentist approach entails applying a fixed model in the inference [
assumption (
problem [
This indicates that the events of
Using the law of total probability, we can calculate the bias parameter q:
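Assuming even priors and the contestant at door A (our setup for concreteness), total probability gives P(open B) = q P(C_A) + P(C_C) = (1 + q)/3, so q can be recovered from the observed frequency of Monty opening door B; a hypothetical sketch:

```python
import random

def estimate_q(true_q, n, seed=0):
    """Estimate Monty's bias from the long-run frequency of door B openings.
    With even priors and the contestant at door A, the law of total
    probability gives P(open B) = q * P(car at A) + P(car at C) = (1 + q)/3."""
    rng = random.Random(seed)
    opens_b = 0
    for _ in range(n):
        car = rng.choice("ABC")
        if car == "A":
            opened = "B" if rng.random() < true_q else "C"
        else:
            opened = "C" if car == "B" else "B"
        opens_b += opened == "B"
    return 3 * (opens_b / n) - 1  # invert P(open B) = (1 + q) / 3

print(estimate_q(0.7, 200_000))  # close to 0.7
```

This illustrates the frequentist route: q is only identifiable from many repeated games, never from a single play.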
There exists a bias parameter q that characterizes the result of Bayesian inference, which facilitates an assessment of epistemic and statistical probabilities for Bayesian and frequentist inferences [
A few remarks are immediately in order. 1) As a conditional probability problem, it is important to remove ambiguities through the rules of the game [
limit the winning chance of the contestant to
chooses the correct door, Monty is then faced with a choice as to which door to open. The frequentist approach uses a “coin-flipping” method [
answer varies. In the frequentist inference, q is assumed to be 1/2, whereas Bayesian inferences lead to different results.
To illustrate the differences, we examine an uneven probability model in which the odds of a car being behind a door are not uniform. A case of an unequal prior model could occur as follows. Assume Monty rolls a standard, 6-sided die. If he rolls a 1, 2, or 3, then the car is placed behind door A. If he rolls a 4 or 5, the car is placed behind door B. If he rolls a 6, the car is placed behind door C. Therefore, the winning probabilities for doors A, B, and C are 1/2, 1/3, and 1/6, respectively, and are independent of the contestant’s initial choice.
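The same Bayes computation carries over to these die-roll priors. The sketch below assumes (our choice, for concreteness) that the contestant initially picks door A and that q again denotes Monty's probability of opening door B when both goat doors are available.

```python
from fractions import Fraction

# Die-roll priors: P(A) = 1/2, P(B) = 1/3, P(C) = 1/6.
PRIORS = {"A": Fraction(1, 2), "B": Fraction(1, 3), "C": Fraction(1, 6)}

def uneven_posteriors(q, opened):
    """P(car location | Monty opens `opened`) in the die-roll model, assuming
    the contestant's initial pick is door A and Monty opens door B with
    probability q whenever both goat doors are available to him."""
    likelihood = {
        "B": {"A": q, "B": Fraction(0), "C": Fraction(1)},
        "C": {"A": 1 - q, "B": Fraction(1), "C": Fraction(0)},
    }[opened]
    joint = {d: PRIORS[d] * likelihood[d] for d in PRIORS}
    total = sum(joint.values())
    return {d: joint[d] / total for d in joint}

post = uneven_posteriors(Fraction(1, 2), "B")
print(post["A"], post["C"])  # 3/5 2/5 -- here sticking beats switching
```

Unlike the classic game, uneven priors can make sticking the better response to a coin-flipping Monty, which is what opens the door to the counter-strategies discussed next.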
Shown in
for
Monty, however, has a counter strategy available. If
the contestant switches to door B and wins. This counter-strategy limits the contestant’s winning chance to
In fact, the case where
As a consequence, if
where
randomly between switching and sticking, i.e.,
It is worth noting that the Nash equilibrium is at (
Hall problem is unjustified. In contrast, using Bayesian inference (cf. the derivations of Equation (11)), we have
Therefore, the rational decision procedure inferred from the Bayesian approach conforms to the Nash equilibrium strategy. Furthermore, we have
To pursue this point further, we calculate the conditional entropy as a function of the bias parameter q for the initial unequal probability model,
where
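The conditional entropy H(car | opened door) can be evaluated numerically over q. The sketch assumes the die-roll priors and, as before, that the contestant's initial pick is door A (our setup); the grid search simply locates the maximum-entropy bias.

```python
from math import log2

def h(ps):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * log2(p) for p in ps if p > 0)

def conditional_entropy(q, priors=(0.5, 1/3, 1/6)):
    """H(car | opened door) versus Monty's bias q in the uneven (die-roll)
    prior model, assuming the contestant initially picks door A."""
    pa, pb, pc = priors
    p_open_b = q * pa + pc        # opened freely (car at A) or forced (car at C)
    p_open_c = (1 - q) * pa + pb  # opened freely (car at A) or forced (car at B)
    post_b = [q * pa / p_open_b, pc / p_open_b]        # car at A, car at C
    post_c = [(1 - q) * pa / p_open_c, pb / p_open_c]  # car at A, car at B
    return p_open_b * h(post_b) + p_open_c * h(post_c)

# Crude grid search for the maximum-entropy bias.
q_star = max((i / 1000 for i in range(1001)), key=conditional_entropy)
print(q_star, round(conditional_entropy(q_star), 3))
```

The endpoint values H(q = 0) ≈ 0.809 and H(q = 1) ≈ 0.541 bits fall out of the same function, so the curve's shape and its interior maximum can be inspected directly.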
As seen in
respectively. It is worth noting that the Bayesian approach yields the maximum-entropy value of q, which corresponds to the rational strategy for Monty. We summarize in
bias parameters. The value of
uneven probability model.
We have demonstrated that Bayesian inference corresponds to the maximum entropy solution for the calculated conditional entropy. As a result, the Bayesian approach can be employed to tackle variants of the Monty Hall problem. In contrast, if the contestant miscalculates rational decisions via the “coin-flipping” model, we have demonstrated that there exists a counter-strategy for Monty to take advantage of the situation.
In conclusion, we have devised a Bayesian inference approach for a systematic exploration of rational decisions for variants of the Monty Hall problem. The method employs the Maximum Entropy Principle [
The frequentist inference determines the winning probability using a single conditional probability. Bayesian inference estimates the winning probability using expected values that are weighted averages of individual conditional probabilities. Bayesian inference considers Monty’s decision process with respect to the change of information at the various stages of the game. We examined a few variants of the Monty Hall problem and showed that the “coin-flipping” assumption was in general not consistent with maximum entropy solutions. Our analysis of the uneven prior probability Monty Hall variant reveals fallacies in the “coin-flipping” assumption, thereby providing convincing evidence that Bayesian inference is appropriate in tackling Monty Hall-like conditional probability problems. We believe that our findings shed light on the application of Bayesian inferences and the Maximum Entropy Principle in quantum Monty Hall problems [

S | | |
---|---|---|---
0.667 | 0.809 | 0.809 | 0.541
0.667 | 0.541 | 0.459 | 0.459
0.918 | 0.978 | 0.874 | 0.645
0.918 | 1.000 | 0.918 | 0.650
We remark, before closing, that the approaches developed in this paper can be applied to a variety of emerging fields, notably Big Data and bioinformatics. Bayesian inference has appealing features, including the capability of describing complex data structures, characterizing uncertainty, and providing comprehensive estimates of parameter values and comparative assessments. Bayesian methodology can be employed as a comprehensible means of integrating all available sources of information and of handling missing data. There is also great benefit in using the Bayesian approach as a mechanism for integrating mathematical models and advanced computational algorithms.
The idea of the present research came up in discussions during a Math Team lecture regarding the Monty Hall problem and the Prisoner’s Dilemma. We are grateful for fruitful discussions with Professors P. Baumann, W. Mao, J. Rosenhouse, W. Seffens, and X. Q. Wang. The work at Clark Atlanta University was supported in part by the National Science Foundation under Grant No. DMR-0934142.
Jennifer L. Wang, Tina Tran and Fisseha Abebe (2016) Maximum Entropy and Bayesian Inference for the Monty Hall Problem. Journal of Applied Mathematics and Physics, 4, 1222-1230. doi: 10.4236/jamp.2016.47127