S. ROY, P. MAJUMDAR
458
termined from the mock test data, and the estimation of performance has to be made for different values of β. The larger the value of β, the greater the validity and applicability of this approach, in which these parameters can be used to predict one's performance. This model shows that one is likely to perform better in examinations having a larger number of questions. This dependence on N is a mathematical consequence that cannot generally be guessed from common sense. It has also
been assumed that the process of attempting a question, and its result, is independent of attempting any other question. This assumption does not hold for linked comprehension questions, where the process of attempting a question, and its result, depends on attempting the other linked questions. In this regard, a modification of our simple theory using conditional probability [21] is required. Let the events of attempting successive questions in a linked comprehension be A, B, C, etc. Then, according to conditional probability [21], we have
P(B ∩ A) = P(B|A) P(A),  (26)
P(C ∩ B) = P(C|B) P(B),  (27)
and so on.
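The chain rule implied by Equations (26) and (27) can be illustrated with a short numerical sketch. The conditional probabilities below for a three-question linked set are hypothetical values chosen for illustration, not quantities estimated in this paper:

```python
# Chain rule for a linked-comprehension set: the probability of answering
# every question in the set correctly is the product
#   P(A ∩ B ∩ C) = P(A) · P(B|A) · P(C|B ∩ A).

def linked_set_probability(p_first, conditionals):
    """P(all correct) for a linked set: p_first is P(A); `conditionals`
    holds P(next correct | all previous correct), applied in order."""
    p = p_first
    for c in conditionals:
        p *= c
    return p

# Hypothetical linked set: P(A) = 0.8, P(B|A) = 0.7, P(C|B ∩ A) = 0.6
p_all = linked_set_probability(0.8, [0.7, 0.6])
print(round(p_all, 3))  # → 0.336
```

Note that the product is smaller than any individual factor, which is why ignoring the linkage (treating the questions as independent with their marginal success rates) would generally misstate the expected score on such sets.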
These ideas can be incorporated for theoretical interest. Calculations based on them, however, are likely to make this model so complicated that it would not be very useful to examinees preparing for competitive examinations. The mathematical simplicity of the present form is important in the sense that one can use this model successfully, with considerable ease, for an estimation of performance, without expending much effort to grasp the underlying concepts. The present analysis reveals important and useful features that one cannot discover by intuition alone. It enables one to make an effective self-assessment, and thereby modify one's plans, while preparing for an important examination.
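The advantage of a larger N noted above can be illustrated with a short Monte Carlo sketch. In line with the independence assumption, each question is treated as an independent Bernoulli trial; the ability p = 0.6 and the 50% cutoff are hypothetical values chosen for illustration, not parameters of the model itself:

```python
import random

def pass_rate(n_questions, p_correct, cutoff_frac, trials=20000, seed=1):
    """Monte Carlo estimate of P(score >= cutoff) when each of the
    n_questions is an independent Bernoulli(p_correct) trial."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(trials):
        score = sum(rng.random() < p_correct for _ in range(n_questions))
        if score >= cutoff_frac * n_questions:
            passes += 1
    return passes / trials

# Hypothetical candidate: ability p = 0.6, cutoff at 50% of the questions.
# The estimated pass probability rises toward 1 as n grows.
for n in (10, 50, 200):
    print(n, pass_rate(n, 0.6, 0.5))
```

The trend follows from the law of large numbers: the relative spread of the score shrinks as N grows, so a candidate whose ability exceeds the cutoff clears it more reliably on a longer paper.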
REFERENCES
[1] D. K. Srinivasa and B. V. Adkoli, “Multiple Choice Questions: How to Construct and How to Evaluate?” Indian Journal of Pediatrics, Vol. 56, No. 1, 1989, pp. 69-74.
[2] M. Tarrant and J. Ware, “Impact of Item-Writing Flaws in
Multiple-Choice Questions on Student Achievement in
High-Stakes Nursing Assessments,” Medical Education,
Vol. 42, No. 2, 2008, pp. 198-206.
doi:10.1111/j.1365-2923.2007.02957.x
[3] P. Costa, P. Oliveira and M. E. Ferrão, “Equalização de Escalas com o Modelo de Resposta ao Item de Dois Parâmetros,” In: M. Hill, et al., Eds., Estatística: da Teoria à Prática, Actas do XV Congresso Anual da Sociedade Portuguesa de Estatística, Edições SPE, 2008, pp. 155-166.
[4] P. Steif and J. Dantzler, “A Statics Concept Inventory: Development and Psychometric Analysis,” Journal of Engineering Education, Vol. 33, 2005, pp. 363-371.
[5] P. Steif and M. A. Hansen, “Comparisons between Performances in a Statics Concept Inventory and Course Examinations,” International Journal of Engineering Education, Vol. 22, No. 3, 2006, pp. 1070-1076.
[6] E. Ventouras, D. Triantis, P. Tsiakas and C. Stergiopoulos, “Comparison of Examination Methods Based on Multiple Choice Questions,” Computers & Education, Vol. 54, No. 2, 2010, pp. 455-461. doi:10.1016/j.compedu.2009.08.028
[7] D. Nicol, “E-Assessment by Design: Using Multiple-
Choice Tests to Good Effect,” Journal of Further &
Higher Education, Vol. 31, No. 1, 2007, pp. 53-64.
doi:10.1080/03098770601167922
[8] L. Thompson, “The Uses and Abuses of Multiple Choice
Testing in a University Setting,” Annotated Bibliography
Prepared for the University Centre for Teaching and
Learning, University of Canterbury, Canterbury, 2005.
[9] P. Nightingale, et al., “Assessing Learning in Universi-
ties,” Professional Development Centre, University of New
South Wales, 1996, pp. 151-157.
[10] J. Heywood, “Assessment in Higher Education: Student
Learning, Teaching Programmes and Institutions,” Jessica
Kingsley Publishers, London, 2000.
[11] N. Falchikov, “Improving Assessment through Student
Involvement: Practical Solutions for Aiding Learning in
Higher and Further Education,” Routledge Falmer, Lon-
don, 2005.
[12] D. Krathwohl, “A Revision of Bloom’s Taxonomy: An
Overview,” Theory into Practice, Vol. 41, No. 4, 2002,
pp. 212-218. doi:10.1207/s15430421tip4104_2
[13] S. Brown, “Institutional Strategies for Assessment,” In: S.
Brown and A. Glasner, Eds., Assessment Matters in Higher
Education, SRHE and Open University Press, Bucking-
ham, 1999, pp. 3-13.
[14] M. Culwick, “Designing and Managing MCQs,” University of Leicester, The Castle Toolkit, 2002.
[15] S. Kvale, “Contradictions of Assessment for Learning in
Institutions of Higher Education,” In: D. Boud and N.
Falchikov, Eds., Rethinking Assessment in Higher Educa-
tion: Learning for the Longer Term, Routledge, London,
2007, pp. 57-71.
[16] M. Paxton, “A Linguistic Perspective on Multiple Choice
Questioning,” Assessment and Evaluation in Higher Edu-
cation, Vol. 25, No. 2, 2000, pp. 109-119.
doi:10.1080/713611429
[17] G. Gibbs and C. Simpson, “Conditions under Which As-
sessment Supports Students’ Learning,” Learning and
Teaching in Higher Education, Vol. 1, No. 1, 2004, pp. 3-
29.
[18] K. Scouller, “The Influence of Assessment Method on
Students’ Learning Approaches: Multiple-Choice Ques-
tion Examination versus Assignment Essay,” Higher Edu-
cation, Vol. 35, No. 4, 1998, pp. 453-472.
doi:10.1023/A:1003196224280
[19] L. Ding and R. Beichner, “Approaches to Data Analysis
of Multiple Choice Questions,” Physical Review Special
Copyright © 2012 SciRes. OJS