
measurements, e.g. to determine the relationship between presage, process and product, as in the Biggs 3P model (Biggs, 1989); however, it is much rarer to find this approach used to examine links within a class. The “absolute agreement” of students within a class can be measured, yet Lüdtke et al. (2006) indicate that very little educational research has been conducted on the agreement of student ratings of group-level constructs. A study describing instruments designed for evaluating clinical faculty by learners found that only 9 of 21 relevant studies measured inter-rater reliability (Beckman, Ghosh, Cook, Erwin, & Mandrekar, 2004). Our data demonstrate a clear difference in the degree of inter-rater agreement between the “new” student group and the “continuing” student group. The high level of correlation between different elements of the survey shown by the latter group, both in their comparison of the old and new laboratory formats and in their perception of the new format, may indicate a greater ability to benchmark their answers against prior experience. On the other hand, the lack of correlation between student ratings within the “new” group may indicate their lack of experience and of appropriate benchmarks.
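As a minimal sketch of how such within-class agreement can be quantified, the fragment below computes the intraclass correlation ICC(1) from a one-way ANOVA decomposition, one of the agreement indices discussed in this literature (e.g., Lüdtke et al., 2006). The function name and the sample ratings are hypothetical, for illustration only.

```python
import numpy as np

def icc1(ratings):
    """ICC(1) for a balanced design: rows are groups (classes),
    columns are raters (students). Values near 1 mean raters
    within a class agree relative to the between-class spread."""
    ratings = np.asarray(ratings, dtype=float)
    n_groups, k = ratings.shape
    group_means = ratings.mean(axis=1)
    # One-way ANOVA decomposition of the total variability
    ss_between = k * np.sum((group_means - ratings.mean()) ** 2)
    ss_within = np.sum((ratings - group_means[:, None]) ** 2)
    ms_between = ss_between / (n_groups - 1)
    ms_within = ss_within / (n_groups * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical 1-5 Likert ratings: 3 classes x 4 students per class
print(f"ICC(1) = {icc1([[4, 4, 5, 4], [2, 3, 2, 2], [5, 4, 5, 5]]):.2f}")
```

On these invented data the index is high (≈0.87), indicating that students within each class largely agree; low values would signal the kind of within-group disagreement reported here for the “new” students.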
If one accepts that the low inter-rater correlations amongst the “new” students indicate a lack of relevant prior experience, then their higher ratings of the innovative physiology laboratory might reflect either a high level of motivation, related to their inexperience and their need to do well, or an inflated response produced in the absence of benchmarks. Conversely, the lower positive ratings of the “continuing” group may provide a more realistic measure of the innovation, given the strong relationships between their responses and the more relevant prior experience of these students. Although the correlational technique is not the only method of measuring inter-rater agreement, one is left with the conclusion that, with respect to student perceptual ratings, the highest rating is not necessarily the best indicator of an innovation’s success.
When evaluating learning and teaching innovations, attention should be paid to the diversity of the student cohort and the presence of subgroups, and it may be appropriate to investigate analysis methods beyond simply measuring the level of student ratings. The same conclusion may also apply to higher-level analyses built on course-level feedback.
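To make the correlational comparison concrete, the sketch below correlates responses to a pair of survey elements separately within each subgroup; the subgroup labels and Likert values are invented for illustration.

```python
import numpy as np

# Hypothetical 1-5 Likert responses to two survey items,
# one row per student, for the two subgroups discussed above.
continuing = np.array([[4, 4], [5, 5], [3, 3], [4, 5], [2, 2]])
new_students = np.array([[5, 2], [4, 5], [5, 3], [3, 5], [4, 4]])

for label, group in (("continuing", continuing), ("new", new_students)):
    r = np.corrcoef(group[:, 0], group[:, 1])[0, 1]
    print(f"{label}: Pearson r between items = {r:.2f}")
```

A pattern like the one reported here would show a strong item-to-item correlation for the “continuing” subgroup and a weak or absent one for the “new” subgroup, even where the “new” subgroup’s mean ratings were higher.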
REFERENCES
Ainley, M. (2006). Connecting with learning: Motivation, affect and
cognition in interest processes. Educational Psychology Review, 18,
391-405. doi:10.1007/s10648-006-9033-0
Artino, A. R. (2009). Online learning: Are subjective perceptions of
instructional context related to academic success? The Internet and
Higher Education, 12, 117-125. doi:10.1016/j.iheduc.2009.07.003
Beckman, T. J., Ghosh, A. K., Cook, D. A., Erwin, P. J., & Mandrekar,
J. N. (2004). How reliable are assessments of clinical teaching? A
review of the published instruments. Journal of General Internal
Medicine, 19, 971-977. doi:10.1111/j.1525-1497.2004.40066.x
Biggs, J. (1985). The role of metalearning in study processes. British
Journal of Educational Psychology, 55, 185-212.
doi:10.1111/j.2044-8279.1985.tb02625.x
Biggs, J. B. (1989). Approaches to enhancement of tertiary teaching.
Higher Education Research & Development, 8, 7-25.
doi:10.1080/0729436890080102
Cassidy, S. (2007). Assessing “inexperienced” students’ ability to
self-assess: Exploring links with learning style and academic per-
sonal control. Assessment & Evaluation in Higher Education, 32,
313-330. doi:10.1080/02602930600896704
Deci, E. L., & Ryan, R. M. (2000). The “What” and “Why” of goal
pursuits: Human needs and the self-determination of behavior.
Psychological Inquiry, 11, 227-268.
doi:10.1207/S15327965PLI1104_01
Diseth, Å., Pallesen, S., Brunborg, G. S., & Larsen, S. (2010).
Academic achievement among first semester undergraduate psy-
chology students: The role of course experience, effort, motives and
learning strategies. Higher Education, 59, 335-352.
doi:10.1007/s10734-009-9251-8
Fazey, D. M. A., & Fazey, J. A. (2001). The potential for autonomy in
learning: Perceptions of competence, motivation and locus of control
in first-year undergraduate students. Studies in Higher Education, 26,
345-361. doi:10.1080/03075070120076309
Forrest, K. D., & Miller, R. L. (2003). Not another group project: Why
good teachers care about bad group experiences. Teaching of
Psychology, 30, 244-246.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School
engagement: Potential of the concept, state of the evidence. Review
of Educational Research, 74, 59-109.
doi:10.3102/00346543074001059
Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005).
A measure of college student course engagement. The Journal of
Educational Research, 98, 184-191. doi:10.3200/JOER.98.3.184-192
Harper, S. R., & Quaye, S. J. (2009). Beyond sameness, with engagement and outcomes for all. In S. R. Harper & S. J. Quaye (Eds.), Student engagement in higher education (pp. 1-15). New York and London: Routledge.
Hillyard, C., Gillespie, D., & Littig, P. (2010). University students’
attitudes about learning in small groups after frequent participation.
Active Learning in Higher Education, 11, 9-20.
doi:10.1177/1469787409355867
Hofstein, A., & Lunetta, V. N. (1982). The role of the laboratory in
science teaching. Review of Educational Research, 52, 201-217.
Jonassen, D. (1999). Designing constructivist learning environments. In
C. M. Reigeluth (Ed.), Instructional theories and models (2nd ed., pp.
215-239). Hoboken: Taylor & Francis.
Kearney, M. (2004). Classroom use of multimedia-supported predict-
observe-explain tasks in a social constructivist learning environment.
Research in Science Education, 34, 427-453.
doi:10.1007/s11165-004-8795-y
Kember, D., Ng, S., Tse, H., Wong, E. T. T., & Pomfret, M. (1996). An
examination of the interrelationships between workload, study time,
learning approaches and academic outcomes. Studies in Higher
Education, 21, 347-358. doi:10.1080/03075079612331381261
Kulik, J. A. (2001). Student ratings: Validity, utility, and controversy. New Directions for Institutional Research, 109, 9-25. doi:10.1002/ir.1
Land, S., & Hannafin, M. (2000). Student-centered learning environments. In D. H. Jonassen (Ed.), Theoretical foundations of learning environments. Hoboken: Taylor & Francis.
Langendyk, V. (2006). Not knowing that they do not know: Self-
assessment accuracy of third-year medical students. Medical Educa-
tion, 40, 173-179. doi:10.1111/j.1365-2929.2005.02372.x
Lüdtke, O., Trautwein, U., Kunter, M., & Baumert, J. (2006). Relia-
bility and agreement of student ratings of the classroom environment:
A reanalysis of TIMSS data. Learning Environments Research, 9,
215-230. doi:10.1007/s10984-006-9014-8
Marton, F., & Booth, S. (1997). Learning and awareness. Hoboken, NJ: Taylor & Francis.
Mayer, R. E. (2003). Theories of learning and their applications to
technology. In H. F. O’Neil (Ed.), Technology applications in
education (pp. 127-157). Hoboken: Taylor & Francis.
Michael, J. (2006). Where’s the evidence that active learning works?
Advances in Physiology Education, 30, 159-167.
doi:10.1152/advan.00053.2006
Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93, 223-231.
Ramsden, P. (1991). A performance indicator of teaching quality in
higher education: The course experience questionnaire. Studies in
Higher Education, 16, 129-150.
doi:10.1080/03075079112331382944
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.