2012. Vol. 3, No. 6, 489-493
Published Online June 2012 in SciRes.
Copyright © 2012 SciRes.
Pseudodiagnosticity: The Role of the Rarity Factor in the
Perception of the Informativeness of Data
Marco D’Addario*, Laura Macchi
Department of Psychology, University of Milano-Bicocca, Milan, Italy
Received April 10th, 2012; revised May 1st, 2012; accepted June 2nd, 2012
This paper presents the results of a study designed to investigate the pseudodiagnosticity bias as a failure
to identify and select diagnostically relevant information. The reported experiment (N = 240) aims to
deepen understanding of the role played by the rarity of evidential features in a classical pseudodiagnosticity task. Six experimental versions of the task were constructed: they differed in the rarity of the features proposed and in the percentages (high
or low) associated with them. The results show that people’s responses appear to be influenced by the
percentage values associated with explicit information more than by a rarity factor. When an initial piece
of evidence is associated with a low percentage, the percentage of normatively diagnostic answers is
greater than when this percentage is high. Furthermore, rarity is not, in itself, a crucial factor in the oc-
currence of pseudodiagnosticity bias. Rather, the perception of the difference between two evidential fea-
tures in terms of informative value influences people’s responses when orienting a diagnostic evaluation.
When people perceive an initial piece of evidence as having greater informative value than a second piece
of evidence, they tend to (correctly) move their attention from the focal hypothesis to the alternative one.
Keywords: Pseudodiagnosticity; Rarity; Informativeness of Data
The term pseudodiagnosticity, first used by Doherty, Mynatt,
Tweney and Schiavo (1979), refers to the “failure to identify
and select diagnostically relevant information”. In a simple case
where participants are asked to choose between two hypotheses
(e.g., H and not-H), they tend to select and consider the data
that refer to only one hypothesis, without considering the in-
formation on the alternative hypothesis. In addition to pointing
out a sort of incapacity in diagnostic behavior, pseudodiagnos-
ticity is important because of its possible consequences. For
example, a physician who does not adequately analyze the risks
associated with a patient's symptoms may not recognize the
illness from which the patient is suffering, with easily imagin-
able consequences.
The definition of pseudodiagnosticity proposed by Doherty
et al. (1979) can be understood by framing it within the norma-
tive model provided by Bayes’ Theorem. This theorem allows
an evaluation of the probability of an event (e.g., the presence
of an illness) in relation to another event (e.g., the symptoms of
an illness) whose prevalence within a certain population is known. Consider the following equation:

P(H1/Di) = [P(Di/H1) × P(H1)] / [P(Di/H1) × P(H1) + P(Di/H2) × P(H2)]
H and D stand for (respectively) the hypothesis and data,
subscripts 1 and 2 label two mutually exclusive and exhaustive
hypotheses, and the subscript i indexes a set of data. The poste-
rior probability P(H1/Di) can be calculated only if both the
probability of a set of data given the hypothesis being tested,
P(D1/H1), and the probability of the same piece of data given
the alternative hypothesis, P(D1/H2), are known. The informa-
tion is diagnostically relevant only if it permits the completion
of the likelihood ratio, P(D1/H1)/P(D1/H2). The likeli-
hood ratio is independent of the base rate. In some cases, it is
possible that a datum provides strong evidence (high probabi-
lity) in favor of a less probable hypothesis (low base rate).
However, given that the numerator and the denominator of the
likelihood ratio are independent, “observing a datum that is a
necessary concomitant of H1, that is, P(D/H1) = 1, may be un-
informative if it is also a concomitant of H2” (Beyth-Marom &
Fischhoff, 1983).
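This point can be made concrete with a minimal numeric sketch. The function and the probability values below are illustrative assumptions, not figures from the literature; the posterior for two mutually exclusive and exhaustive hypotheses follows directly from Bayes' theorem:

```python
def posterior(p_d_h1, p_d_h2, prior_h1=0.5):
    """Posterior P(H1/D) from Bayes' theorem, for two mutually
    exclusive and exhaustive hypotheses H1 and H2."""
    prior_h2 = 1 - prior_h1
    numerator = p_d_h1 * prior_h1
    return numerator / (numerator + p_d_h2 * prior_h2)

# A datum that is certain under H1, P(D/H1) = 1, is uninformative
# if it is just as certain under H2: the posterior stays at the prior.
print(posterior(1.0, 1.0))            # 0.5 (likelihood ratio = 1)

# The same datum is highly diagnostic when P(D/H2) is low.
print(round(posterior(1.0, 0.1), 3))  # 0.909 (likelihood ratio = 10)
```

Only when both P(D/H1) and P(D/H2) are available can the likelihood ratio, and hence the Bayesian update, be evaluated; this is exactly the information the pseudodiagnostic chooser fails to request.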
A pseudodiagnosticity bias means not selecting the data that
are necessary to complete the likelihood ratio and instead choo-
sing the information that refers to a single hypothesis. In this
sense, there is a strong connection with confirmation bias (Ev-
ans, 1989; Klayman, 1995; Wason, 1960), the tendency to se-
lect data favoring one’s hypothesis even in the presence of in-
formation that does not support it.
Doherty et al. (1979) demonstrated the strong consistency of
the phenomenon of pseudodiagnosticity with an experimental
paradigm that, even with various modifications, was accepted
by almost all researchers engaged in the analysis of pseudodi-
agnosticity. The experimental paradigm for the study of pseu-
dodiagnosticity (pd task) is essentially an information selec-
tion problem, which can be conceptualized as a 2 × 2 table (see
Table 1).
In the tasks used to study pseudodiagnosticity bias, partici-
pants are given information on cell A in such a way as to con-
centrate the participant’s attention on hypothesis H1, also de-
fined as the focal hypothesis. To decide which of the two hy-
potheses is true, participants are asked to select information
Table 1.
Standard Pd task structure.
          H1                  H2
D1    Cell A: P(D1/H1)    Cell B: P(D1/H2)
D2    Cell C: P(D2/H1)    Cell D: P(D2/H2)
Note: Cell A represents the probability of the datum D1, given the focal hy-
pothesis H1; cell B represents the probability of the datum D1, given the
alternative hypothesis H2; cell C represents the probability of the alternative
datum D2, given the focal hypothesis H1; cell D represents the probability of
the alternative datum D2, given the alternative hypothesis H2.
contained in one of the three remaining cells (B, C, or D). Fol-
lowing Bayes’ theorem as a normative model, participants
should choose B to obtain all the data necessary for the comple-
tion of the likelihood ratio. However, participants engaged in
this type of task concentrate on the focal hypothesis and select
cell C.
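The selection logic of the standard task can be sketched as follows. This is a toy illustration, not code from the study; the cell labels follow Table 1, while the function and its name are assumptions made here:

```python
# Toy sketch of the standard pd task (Table 1). Cell A, the focal
# evidence P(D1/H1), is shown to participants at the outset.
CELLS = {
    ("D1", "H1"): "A",  # given: P(D1/H1)
    ("D1", "H2"): "B",  # completes the likelihood ratio P(D1/H1)/P(D1/H2)
    ("D2", "H1"): "C",  # the habitual pseudodiagnostic choice
    ("D2", "H2"): "D",
}

def diagnostic_choice(shown=("D1", "H1")):
    """The normative choice keeps the shown datum and switches hypothesis."""
    datum, hyp = shown
    return CELLS[(datum, "H2" if hyp == "H1" else "H1")]

print(diagnostic_choice())  # B
```

The diagnostic move holds the datum fixed and varies the hypothesis; the pseudodiagnostic move does the opposite, holding the focal hypothesis fixed and requesting a second datum about it.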
A theoretical interpretation of this phenomenon is suggested
by some authors who claim that “the reluctance of participants
to select information about both decision alternatives on infe-
rence problems is due to the limitation on the number of hy-
potheses that can be maintained and operated upon in working
memory” (Doherty & Mynatt, 1987; Mynatt, Doherty, & Dra-
gan, 1993). According to this perspective, participants would
only be able to operate on a single hypothesis at a time because
the pd task requires conscious reflection within working memory,
which can necessitate serial-type processing instead of parallel
processing. The working memory system, because of its limited
capacity, would supply “insufficient attentional resources to
update two hypotheses at once” (Mynatt, Doherty, & Dragan,
1993). The theoretical interpretation by Doherty et al. (1979)
seems consistent with the Dual Process Theory by Evans (2006,
2009; Evans & Over, 1996). According to this theory, the
pseudodiagnosticity bias depends on the involvement of the
implicit system (pragmatic and automatic), which replaces the
action of the explicit system (rational and sequential) because
of the latter's limited capacity. This limitation reflects the limits
of working memory, to which the explicit system is tightly
connected (Evans, Venn, &
Feeney, 2002).
Several studies on pseudodiagnosticity have shown the con-
sistency of this phenomenon (Beyth-Marom & Fischhoff, 1983;
Doherty, Chadwick, Garavan, Barr, & Mynatt, 1996; Doherty,
Schiavo, Tweney, & Mynatt, 1981; Evans et al., 2002; Feeney,
Evans, & Clibbens, 1997, 2000; Feeney, Evans, & Venn, 2000a,
2000b, 2008; Kern & Doherty, 1982; Mynatt, Doherty, & Sul-
livan, 1991; Mynatt, Doherty, & Dragan, 1993).
Some recent studies have analyzed the conditions that favor
or limit the occurrence of this bias. In particular, a series of
studies investigated how rare information can
affect the pseudodiagnosticity task. These studies showed that,
in a classical pd task, when an initial piece of information con-
cerned a rare feature (that provided some supporting evidence
for the focal hypothesis), participants tended to select a further
piece of information on that rare feature that could provide
evidence for the alternative hypothesis, leading to the avoid-
ance of the habitual pseudodiagnosticity bias (Feeney, Evans, &
Clibbens, 1997; Feeney, Evans, & Venn, 2000a, 2008). The re-
sults obtained were replicated in many experimental studies
with different versions of the classical pd task, showing that
“people are significantly more confident in a hypothesis sup-
ported by rare rather than common evidence” (Feeney et al., 2008).
In one experiment (Feeney, Evans, & Venn, 2000a), the au-
thors asked the participants to identify the model of car that was
bought by the participant’s sister (Model X or Model Y),
knowing that she was interested in two features: the presence of
a car radio (very common feature) and a maximum speed
higher than 165 mph (rare feature). Participants were given in-
formation about the percentage of cars X able to reach a speed
higher than 165 mph (80%) and were asked to select a second
piece of information to establish which car was bought by their
sister. The results showed that rarity significantly affects the
pseudodiagnosticity task: 43% of the participants chose to
know Cell B (the percentage of cars Y able to reach a speed
higher than 165 mph, the normatively correct choice), whereas
44% chose Cell C (the percentage of cars X that have a radio,
the pseudodiagnostic choice). Although cell C choices still held
a slim plurality, they were significantly fewer than the pseudo-
diagnostic choices obtained in the classical tasks (with two
common features). The
authors submitted another group of participants to the same test,
providing 10% as the percentage value for the focal hypothesis
(Cell A). The results obtained in this version of the test were
substantially equal to the ones previously described, demon-
strating that the “rarity” factor affects participants’ choices
independently of the percentage (80% or 10%) associated with it.
These results led the authors to conclude that “the effect of
feature rarity is mediated via a hard-wired heuristic rather than
any sophisticated on-line processing of probabilities” (Feeney,
Evans & Venn, 2000a). They claimed that the heuristic princi-
ple is sensitive to the rare object features but insensitive to the
statistical changes in the information. In a further study, the
authors pointed out that, when faced with incomplete informa-
tion, participants use their “past” knowledge to make inferences.
From this perspective, even the perception of rarity seems to
depend on their own knowledge. Every participant is able to
consider whether a feature is common or rare and to use such
an estimate to solve the task (Feeney, Evans, & Venn, 2000b).
This result seems coherent with other studies (McKenzie &
Mikkelsen, 2000; Oaksford & Chater, 1994) that emphasize the
importance of rarity in lay hypothesis testing.
Our study investigates the pseudodiagnosticity bias, espe-
cially analyzing the role of the rarity factor in perceptions of the
informativeness of data. Given that, as suggested by some au-
thors (Feeney, Evans, & Venn, 2008; McKenzie & Mikkelsen,
2000), people seem to be sensitive to rarity in judging whether
the available evidence supports a given hypothesis, our inten-
tion was to analyze how and under what conditions rarity can
change habitual pseudodiagnostic behavior. In fact, the rela-
tionship between the perception of a datum as “common” or
“rare” and the perception of its level of informativeness seems
less predictable. For example, as suggested by Maggi et al.
(1998), in some experimental tasks participants had to evaluate
features that were so common that they could assign them a low
informative value, at the risk of perceiving these features as
completely useless for the task. In this sense, the use of very
common features, such as the presence of a car radio in Feeney,
Evans & Venn (2000a), could be problematic. Only by adopt-
ing scenarios with evidential features that are perceived by
participants as sufficiently informative (both with common
features and with rare ones) is it possible to understand the
relationship between rarity and informativeness.
Our intention was to analyze the case in which the two evi-
dential features chosen for the pd task were both rare. Our hy-
pothesis was that the effect of rarity should be attenuated given
that a rare datum turns out to be more informative if it is com-
pared with a common datum. From this point of view, rarity is
not informative in itself, but it is the comparison between the
rarity of each datum that turns out to be fundamental for the
task solution.
Our study aimed to investigate the rarity effect in the classi-
cal pd task. In particular, we investigated the following:
1) the role of the rarity effect in the perception of the informativeness of data;
2) the generality of this factor, analyzing the role played by the percentages associated with the explicit information (rare or common) shown to participants, given that there is little agreement in the literature about this role. For example, Feeney, Evans and Venn (2000a) found that the perception of the rarity of the data reduced errors with both high and low percentages, whereas Mynatt, Doherty, and Dragan (1993) showed an increase in the number of cell B choices (i.e., a decrease in the pseudodiagnosticity bias) when P(D1/H1) was less than .5.
Our hypothesis was that the rarity effect (i.e., the perception
of the rarity of the evidential features) played a crucial role in
the perception of the informativeness of the data shown and that
this effect was mediated by the rarity of the second evidential
feature and the percentage associated with the evidential feature
shown (i.e., D1). In the first case, we thought that rarity could
reduce “pseudodiagnosticity” only when the evidential feature
was perceived as rare in comparison with D2 (with D1 and D2
both being rare, we hypothesized a minor effect). In the second
case, we thought that when P(D1/H1) was less than .5, partici-
pants could evaluate H1 as less plausible, thus considering the
importance of the information associated with cell B (the cor-
rect choice) or cell D, both referring to H2.
Method

Two hundred forty students at the University of Milano-
Bicocca (who were not experts in statistics) were randomly
selected for inclusion in the study.
The problem used for the experiment was structurally identi-
cal to the problems used by Mynatt et al. (1993; the car prob-
lem) and subsequently used (in revised forms) in the literature.
Six experimental versions of the car problem were used. These
versions differed in the type of features proposed (in terms of
rarity) and in the percentages (high or low) associated with
them (see Table 2).
A pre-test was done to select for our study only features that
were perceived by people as sufficiently informative (see Table
2). In our pre-test, the feature used by Feeney, Evans and Venn (2000a), the presence of a car radio, was perceived as “not informative”.

Example: Version 1 of the Car Problem
Your sister bought a new car. You cannot remember whether
it is a model X or a model Y, but you do remember two things:
the car has a top speed higher than 220 km/h, and it has front
electric window winders.

Table 2.
Experiment structure.

Version  Feature 1                                         Feature 2
1 - 2    Rare: top speed higher than 220 km/h              Common: front electric window winders in car equipment
3 - 4    Rare: top speed higher than 220 km/h              Rare: navigation system in car equipment
5 - 6    Common: manual air conditioning in car equipment  Common: front electric window winders in car equipment
You have already asked the following question, and we have
given you the answer:
What is model X cars’ top speed?
Answer: 80% of model X cars have a top speed higher than
220 km/h.
You have the possibility of knowing only one of the following
pieces of information:
B) the percentage of model Y cars with a top speed higher
than 220 km/h;
C) the percentage of model X cars with front electric window winders;
D) the percentage of model Y cars with front electric window winders.
Which piece of information (B, C or D) would you want to know to
decide which car your sister owns?
Each participant was given only one version of the car prob-
lem. A between-subjects design was used (40 participants for
each version). Given P(D1/H1) (the percentage of model X cars
with a top speed higher than 220 km/h (for Versions 1, 2, 3 and
4) or the percentage of model X cars with manual air condi-
tioning in car equipment (for Versions 5 and 6)), participants
were asked whether they wanted to discover cell B, cell C or
cell D (i.e., P(D1/H2), P(D2/H1) or P(D2/H2)). Participants per-
formed the task on their own. The instructions were given ver-
bally, and there were no time limits.
Results

A chi-square analysis was conducted to determine the influ-
ence of the rarity of the features and of the percentages associ-
ated with them for pseudodiagnosticity bias: normatively cor-
rect choices (B cell choices) were compared with incorrect
choices (C cell and D cell choices were aggregated).
The results (see Table 3 for the overall results) showed a
minor effect of rarity on pseudodiagnosticity that partially
disconfirmed the results obtained by Feeney, Evans and Venn (2000a).
The rarity of the evidential feature was not sufficient to ori-
ent participants’ choices to reduce the pseudodiagnosticity bias.
The percentage of correct choices did not significantly differ in
relation to the presence (30% of cell B choices; data from Ver-
sions 1, 2, 3 and 4 were aggregated) or absence of a rare first
feature (31.3% of cell B choices; data from Versions 5 and 6
were aggregated): χ2(1, N = 240) = .039, p > .05.
In contrast, as we hypothesized, the percentage of correct
choices differed significantly in relation to the difference in ra-
rity between the two features. When there was a difference
(data from Versions 1 and 2 were aggregated), the percentage
of correct choices was greater (40%) than when there was no
difference (25.6%; data from Versions 3, 4, 5 and 6 were ag-
gregated): χ2(1, N = 240) = 5.207, p < .05. Data supporting the
importance of this element were obtained by a chi-square
analysis that showed a marginally significant difference in cor-
rect choices between Version 1 and Version 3 (χ2(1, N = 80) =
3.117, p < .10) and a significant difference between Version 2
and Version 4 (χ2(1, N = 80) = 5.115, p < .05).

Table 3.
Experiment results: percentage of choices.

Version                   Cell B choices (correct)  Cell C choices  Cell D choices
1: Rare (80%) - Common    25%                       62.5%           12.5%
2: Rare (10%) - Common    55%                       42.5%           2.5%
3: Rare (80%) - Rare      10%                       70%             20%
4: Rare (10%) - Rare      30%                       50%             20%
5: Common (80%) - Common  25%                       70%             5%
6: Common (10%) - Common  37.5%                     45%             17.5%
The results also supported our second prediction by revealing
a significant role of the percentages associated with the eviden-
tial feature. A chi-square analysis showed that when the per-
centage associated with the evidential feature was low (data
from Versions 2, 4 and 6 were aggregated), the percentage of
correct choices was greater (40.8%) than when this percentage
was high (20%; data from Versions 1, 3 and 5 were aggregated):
χ2(1, N = 240) = 12.304, p < .001. Notably, the only condition
in which the percentages did not seem to be influential was
when both of the features were common (χ2(1, N = 80) =
1.455, p > .05).
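The reported statistics can be reproduced from the choice counts implied by Table 3. The counts below are back-computed from the published percentages (so they are a reconstruction, not raw data), and the helper function is written here for illustration; it is the standard Pearson chi-square for a 2 × 2 table, without continuity correction:

```python
def pearson_chi2(table):
    """Pearson's chi-square for a 2x2 contingency table
    (no continuity correction)."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Correct (cell B) vs. incorrect choices, low (10%) vs. high (80%)
# versions: 40.8% of 120 -> 49 correct; 20% of 120 -> 24 correct.
print(round(pearson_chi2([[49, 71], [24, 96]]), 3))   # 12.304, as reported

# Rare first feature (Versions 1-4, 30% of 160) vs. no rare first
# feature (Versions 5-6, 31.3% of 80): the null result.
print(round(pearson_chi2([[48, 112], [25, 55]]), 3))  # 0.039, as reported
```

Both values match the chi-square statistics reported above, which suggests the analyses were run without Yates' correction.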
Spontaneous justifications by participants suggested that they
tended to combine the information from the rarity of the evi-
dential feature and from the percentage associated with it.
When the evidential feature was rare and the percentage was
high (in Versions 1 and 3), participants tended to mentally fill
in the empty B cell with a low percentage (e.g., “...given that
this feature is rare, the probability that model Y cars also have
this feature should be low...otherwise the feature would be too
common”), therefore opting for the common pseudodiagnostic
error (cell C choice).
Discussion

The role attributed to the rarity of the evidential feature in the
classical pseudodiagnosticity task should be reduced. First, our
results showed that the rarity factor did not act like a heuristic,
independently of the associated percentage values, as hypothe-
sized by Feeney, Evans and Venn (2000a). Participants’ answers
appeared to be influenced by the percentage values associated
with the explicit information more than by the rarity factor.
When high percentages (80%) were provided, participants
tended to focus on a single (focal) hypothesis, therefore exhibi-
ting the pseudodiagnosticity bias. In contrast, when low per-
centages (10%) were provided, participants seemed to move
their attention to the alternative hypothesis, answering in a di-
agnostically correct way. These results were in line with the
ones reported by Mynatt et al. (1993).
Furthermore, rarity is not, in itself, a crucial factor in the oc-
currence of the pseudodiagnosticity bias; rather, the crucial
factor is people’s perceptions of the difference between the two
features (D1 and D2) in terms of their informative value (see
similar conclusions drawn from the analysis of the Wason
selection task by Oaksford and Chater, 1994, 1997). In this
respect, our results appeared similar to those obtained by
Vallée-Tourangeau and Villejoubert (2010; Villejoubert &
Vallée-Tourangeau, 2012), who underlined the importance of
information relevance in pseudodiagnostic reasoning.
Our study, though confirming the consistency of the pseudo-
diagnosticity bias, helps to highlight some limitations of the
standard pseudodiagnosticity paradigm, first introduced by
Doherty et al. (1979) and adopted successively by many authors,
given that the rigidity and standardized form of the paradigm
could limit its practical applicability. This paradigm (in which
two hypotheses and two pieces of data are shown and then,
after providing information on one hypothesis, participants are
asked to select a further piece of information to identify the
correct hypothesis) has the advantage of being clear and intelli-
gible. However, it appears to be not very flexible and, perhaps,
barely applicable. Beginning from this perspective, some recent
studies have attempted to overcome the limits imposed by this
paradigm. For example, Feeney et al. (1997) proposed a task
that, while maintaining the standard pseudodiagnosticity task
structure, introduces rating scales for the participants’ confi-
dence in the hypotheses. This additional task should permit an
analysis of how the participants change their opinions as a con-
sequence of the obtained information, allowing a more precise
and qualitative investigation of pseudodiagnosticity.
The use of a qualitative methodology is particularly appro-
priate to deepen understanding of the reasons behind the pseu-
dodiagnosticity bias. For example, answers that seem discor-
dant may depend on similar cognitive strategies (and motiva-
tions). The selection of the pseudodiagnostic option may be
guided not only by a confirmation strategy but also by a falsifi-
catory one. Given the dichotomous structure of the scenarios
usually adopted, selecting the pseudodiagnostic option does not
necessarily involve verification of the focal hypothesis because
of the possibility of testing whether the alternative hypothesis is
false. If one chooses the normatively wrong option C and finds
no evidence, one could use this datum—a low percentage—to
support the alternative hypothesis.
Future research on pseudodiagnosticity should attempt to
identify the precise role played by the different factors involved
in a pseudodiagnosticity task, under more realistic experimental
conditions, if possible.
References

Beyth-Marom, R., & Fischhoff, B. (1983). Diagnosticity and pseu-
dodiagnosticity. Journal of Personality and Social Psychology, 45,
1185-1195. doi:10.1037/0022-3514.45.6.1185
Doherty, M. E., & Mynatt, C. R. (1987). The magical number one. In D.
R. Moates, & R. Butrick (Eds.), Proceedings of the Ohio University
Interdisciplinary Inference conference, Athens, 221-230.
Doherty, M. E., Chadwick, R., Garavan, H., Barr, D., & Mynatt, C. R.
(1996). On people’s understanding of the diagnostic implications of
probabilistic data. Memory and Cognition, 24, 644-654.
Doherty, M. E., Mynatt, C. R., Tweney, R. D., & Schiavo, M. D.
(1979). Pseudodiagnosticity. Acta Psychologica, 43, 111-121.
Doherty, M. E., Schiavo, M. B., Tweney, R. D., & Mynatt C. R. (1981).
The influence of feedback and diagnostic data on pseudodiagnostic-
ity. Bulletin of the Psychonomic Society, 18, 191-194.
Evans, J. St. B. T. (1989). Bias in human reasoning: Causes and
consequences. Brighton: Erlbaum.
Evans, J. St. B. T. (2006). The heuristic-analytic theory of reasoning:
Extension and evaluation. Psychonomic Bulletin and Review, 13,
378-395.
Evans, J. St. B. T. (2009). How many dual-processing theories do we
need? One, two, or many? In J. Evans, & K. Frankish (Eds.), In two
minds. Oxford: Oxford University Press.
Evans, J. St. B. T., & Over, D. E. (1996). Reasoning and rationality.
Hove, UK: Erlbaum.
Evans, J. St. B. T., Venn, S., & Feeney, A. (2002). Implicit and explicit
processes in a hypothesis testing task. British Journal of Psychology,
93, 31-46. doi:10.1348/000712602162436
Feeney, A., Evans, J. St. B. T., & Clibbens, J. (1997). Probabilities,
utilities and hypothesis testing. In M. G. Shafto, & P. Langley (Eds.),
Proceedings of the 19th Annual Conference of the Cognitive Science
Society (pp. 217-222). Hillsdale, NJ: Erlbaum.
Feeney, A., Evans, J. St. B. T., & Clibbens, J. (2000). Background
beliefs and evidence interpretation. Thinking and Reasoning, 6, 97-
124. doi:10.1080/135467800402811
Feeney, A., Evans, J. St. B. T., & Venn, S. (2000a). A rarity heuristic
for hypothesis testing. In L. R. Gleitman, & A. K. Joshi (Eds.), Pro-
ceedings of the 22nd Annual Conference of the Cognitive Science
Society (pp. 119-124). Mahwah, NJ: Erlbaum.
Feeney, A., Evans, J. St. B. T., & Venn, S. (2000b). The effects of be-
liefs about the evidence on hypothesis testing. Unpublished manu-
script, Department of Psychology, University of Durham.
Feeney, A., Evans, J. St. B. T., & Venn, S. (2008). Rarity, pseudodiag-
nosticity and Bayesian reasoning. Thinking and Reasoning, 14, 209-
230. doi:10.1080/13546780801934549
Fischhoff, B., & Beyth-Marom, R. (1983). Hypothesis evaluation from
a Bayesian perspective. Psychological Review, 90, 239-260.
Kern, L., & Doherty, M. E. (1982). “Pseudodiagnosticity” in an ideal-
ized medical problem-solving environment. Journal of Medical
Education, 57, 100-104. doi:10.1097/00001888-198202000-00004
Klayman, J. (1995). Varieties of confirmation bias. In J. Busemeyer, R.
Hastie, & D. L. Medin (Eds.), Decision making from a cognitive per-
spective (pp. 365-418). New York: Academic Press.
Klayman, J., & Ha, Y.-W. (1987). Confirmation, disconfirmation, and
information in hypothesis testing. Psychological Review, 94, 211-
228. doi:10.1037/0033-295X.94.2.211
Maggi, J., Butera, F., Legrenzi, P., & Mugny, G. (1998). Relevance of
information and social influence in the pseudodiagnosticity bias.
Swiss Journal of Psychology, 57, 188-199.
Mynatt, C. R., Doherty, M. E., & Dragan, W. (1993). Information rele-
vance, working memory, and the consideration of alternatives. Quar-
terly Journal of Experimental Psychology, 46A, 759-778.
Mynatt, C. R., Doherty, M. E., & Sullivan, J. A. (1991). Data selection
in a minimal hypothesis testing task. Acta Psychologica, 76, 293-
305. doi:10.1016/0001-6918(91)90023-S
Oaksford, M., & Chater, N. (1994). A rational analysis of the selection
task as optimal data selection. Psychological Review, 101, 608-631.
Oaksford, M., Chater, N., Grainger, B., & Larkin, J. (1997). Optimal
data selection in the reduced array selection task (RAST). Journal of
Experimental Psychology: Learning, Memory and Cognition, 23,
441-458. doi:10.1037/0278-7393.23.2.441
Vallée-Tourangeau, F., & Villejoubert, G. (2010). Information rele-
vance in pseudodiagnostic reasoning. In S. Ohlsson, & R. Catram-
bone (Eds.), Proceedings of the 32nd Annual Conference of the Cog-
nitive Science Society (pp. 1172-1177). Austin, TX: Cognitive Sci-
ence Society.
Villejoubert, G., & Vallée-Tourangeau, F. (2012). Relevance-driven
information search in “pseudodiagnostic” reasoning. Quarterly Jour-
nal of Experimental Psychology, 65, 541-552.
Wason, P. C. (1960). On the failure to eliminate hypotheses in a con-
ceptual task. Quarterly Journal of Experimental Psychology, 12,
129-140. doi:10.1080/17470216008416717