D. HUTCHINSON, J. WELLS
the post-test results. It is clear that the majority of assessment results were higher than the post-test results. This provides further justification that the creation of MCQs resulted in effective learning.
All the analysis presented in this section demonstrated that there were statistically significant differences between the pre-test and post-test results and that having students create MCQs in ITCS as part of their assessment does have a valid learning benefit.
Conclusion
This inquiry set out to implement a new learning strategy to facilitate a more active and inquisitive learning atmosphere than what was currently being experienced. It was anticipated that incorporating an assessment requiring students to construct MCQs could have a positive impact on students’ approach to learning. By quantitatively analyzing the results from the pre- and post-tests and comparing the results of the post-test to the assessment task, the desired outcome of improving student knowledge and understanding in relation to the concepts and practices of computer security was realized. The method proved significant in raising the level of knowledge gained by these students.
Using “Test Monkey” enabled a unique style of feedback, allowing students to review the expert rating for each of their MCQs as marked by the teacher. This process provided each student with a customized view of their results, and importantly not just a mark. It is our belief that this provided an opportunity for self-reflection to enhance the learning experience and motivated students to study and keep up to date by encouraging a more interactive approach to learning.
The results suggested that students may have been involved
in a deeper approach to learning and the development of cogni-
tive skills through a more intimate interaction with the unit
material. However the degree of deeper learning that this in-
quiry achieved is not conclusive at this stage. This is under-
standable as it is “not a realistic expectation for students to
produce MCQs testing higher order cognitive skills at their first
attempt”. Also “exercises of this nature are not likely to be
greeted with much enthusiasm as they involve learning methods
unfamiliar to many students” (Palmer & Devitt, 2006). Further
analysis of the data is required, for instance to determine and
rank the overall quality of the MCQs created.
From a teaching perspective, the experience was certainly valuable on several levels. It provided the opportunity to try something new. It exposed what students did not know and enabled informed adjustments to the teaching to target those areas earlier in the semester. It has provided a resource for future use and, most importantly, feedback about ITCS for reflection and improvement.
The major dilemma experienced was the marking of ap-
proximately 2000 MCQs all at one time. This created a bottle-
neck in the assessment process. To overcome this dilemma, the assessment model should be adjusted so that students create MCQs on a weekly basis; the assessment would thus be spread over the duration of the semester. This would be expected to encourage a more progressive and active approach to learning, allowing students to receive feedback on a regular basis to help them improve their learning strategy. This again would provide direction for teaching, focusing it on those specific areas our students do not know or understand.
In terms of future work, it is planned to perform additional inquiries across multiple disciplines to assess and validate the effectiveness of this assessment approach on a wider scale. It is also intended to introduce another feature into the software to allow a peer review process of the MCQs. In this way students would be able to learn from each other, which is thought to add additional value to the learning experience.
REFERENCES
ACVIM (1997). Question-writing guidelines. American College of Ve-
terinary Internal Medicine.
http://www.acvim.org/uploadedFiles/Candidates/exam/ABIMQuestionWritingGuidelines.pdf
Blackboard (2007). Blackboard homepage. Blackboard Inc.
http://www.blackboard.com/us/index.Bb
Brown, S., Rust, C., & Gibbs, G. (1994). Strategies for diversifying
assessment in higher education. Oxford: Oxford Centre for Staff De-
velopment.
Burton, S. J., Sudweeks, R. R., Merrill, P. F., & Wood, B. (1991). How
to prepare better multiple-choice test items: Guidelines for university
faculty. Brigham Young University Testing Services and the De-
partment of Instructional Science.
http://testing.byu.edu/info/handbooks/betteritems.pdf
Censeo (2007). Guidelines for writing effective tests: A practical “short
course” for test authors. Censeo Corporation.
http://www.censeocorp.com/downloads/whitepapers/guidelines-for-writing-effective-tests.asp
Elton, L., & Johnston, B. (2002). “Setting the scene”, assessment in
universities: A critical review of research. London: Generic Centre
Learning and Teaching Support Network (LTSN).
http://eprints.soton.ac.uk/59244/1/59244.pdf
English, L. D. (1998). Children’s problem posing within formal and
informal contexts. Journal for Research in Mathematics Education, 29,
83-106. doi:10.2307/749719
James, R., McInnes, C., & Devlin, M. (2002). Assessing learning in
Australian universities. Canberra: Australian Universities Teaching
Committee.
King, A. (1995). Inquiring minds really do want to know: Using ques-
tioning to teach critical thinking. Teaching of Psychology, 22, 13-17.
doi:10.1207/s15328023top2201_5
Leung, S. S., & Wu, R. X. (1999). Problem posing with middle grades
mathematics: Two real classroom examples. Mathematics teaching
in the middle school. Reston, VA: National Council of Teachers of
Mathematics.
Lublin, J. (2000). Guidelines for good practice in assessment. University of Western Sydney, Campbelltown: Centre for Enhancement of Learning and Teaching.
Miller, A. H. (1987). Course design for university lecturers. New York: Nichols Publishing Company.
Palmer, E., & Devitt, P. (2006). Constructing multiple choice questions as a method for learning. Singapore: Annals Academy of Medicine.
http://www.annals.edu.sg/PDF/35VolNo9Sep2006/V35N9p604.pdf
Parker, L. (2004). Intersecting sets: Carrick, assessment and evaluation. Key Note Address Presented at the Evaluations and Assessment Conference. Melbourne: RMIT.
Parry, S. (2004). Student assessment. Key Note Address Presented at the Evaluations and Assessment Conference. Melbourne: RMIT.
Phye, G. D. (1997). Handbook of academic learning—Construction of knowledge. California: Academic Press.
Ramsden, P. (1988). Improving learning—New perspectives. London: Kogan Page.
Yu, F., & Liu, Y. (2004a). Perceived potential value of student multiple-choice question-construction in the introductory physics laboratory. Proceedings of International Conference on Engineering Education ICEE-2004, Gainesville.
Yu, F., & Liu, Y. (2004b). Active learning through student generated question in physics experimentation classrooms. Proceedings of International Conference on Engineering Education ICEE-2004, Gainesville.
Copyright © 2013 SciRes.