Creative Education
2012. Vol.3, Special Issue, 903-907
Published Online October 2012 in SciRes (http://www.SciRP.org/journal/ce) http://dx.doi.org/10.4236/ce.2012.326136
Classroom Assessment Techniques: An Assessment and
Student Evaluation Method
Dawn-Marie Walker
University of Nottingham, Nottingham, UK
Email: dawn-marie.walker@nottingham.ac.uk
Received August 9th, 2012; revised September 10th, 2012; accepted September 24th, 2012
Some of the challenges facing Higher Education are how to ensure that assessment is meaningful and
that feedback is prompt enough to promote learning. Another issue is how to provide lecturers with
feedback on their efficacy in a timely and non-judgmental manner. This paper proposes that Classroom
Assessment Techniques (Angelo & Cross, 1993) may be a good way of addressing both of these issues.
They are quick and easy tasks set within the lecture which test the students’ knowledge, giving the
lecturer an immediate opportunity for further elaboration where needed and therefore providing
immediate feedback to the students. They also allow the lecturer to check that the most salient messages
have been delivered, therefore providing feedback to the lecturer as well.
Keywords: Assessment; Student Evaluation; Feedback; Classroom Assessment Techniques
Introduction
Appropriate assessment in Higher Education (HE) is a topic
which has been debated and researched over the years, as not
only is assessment respected as a necessary method of quantifying
students’ attainment, but it is also required by the clients themselves,
both students and employers. One of the major problems with
assessment is how to make it meaningful, in a manner which
promotes deep learning and develops independent and self-motivated
thinkers, whilst also fulfilling the assessment criteria.
This is often achieved by providing thorough feedback in a
timely manner after the assessment, which in large classes can
be difficult for the lecturer. Another area of much debate in HE
is how to evaluate what is taught. Student evaluation of teach-
ing and modules is prone to criticism; therefore many sugges-
tions of evaluation methods to improve accuracy have been put
forward. The present paper draws on previous theories
about: 1) assessment, i.e. summative or formative; 2) feedback;
and 3) student-evaluated teaching, to propose an assessment
method which also serves as an evaluation method.
Approaches to Learning
The deep approach to learning, which is what HE strives to
achieve, involves the critical analysis of new ideas, with the
student relating their own previous knowledge to the new
knowledge, theoretical ideas, and evidence. This in turn leads to
understanding and long-term retention of concepts so that they
can be used for problem solving in unfamiliar contexts. The
surface approach to learning is the unquestioning acceptance
and memorization of information as isolated and unlinked facts,
which leads to rote learning for examinations, most of which is
promptly forgotten after the exam (Marton & Saljo,
1976; Biggs, 1987; Biggs, 1993; Ramsden, 1992), i.e. a “brain
dump”. Deep learning is driven by challenging, open-ended
problems with lecturers acting as facilitators in an interactive
classroom. An interactive classroom promotes deep approaches
to learning and contributes towards positive student motivation
by allowing students to be in charge of their learning environ-
ment (Markett et al., 2006). A key strength of classroom inter-
action is that it provides scaffolding which allows the student to
develop content into context, therefore developing cognitive
structures (Moore, 1989). Therefore, to promote deep learning,
there has to be dialogue and an interactive classroom, and
great care needs to be taken when choosing assessment techniques
to prevent surface learning (Table 1 compares and contrasts
these two approaches to learning).
Assessment
Assessment can provide a framework for sharing educational
objectives with students and for mapping their progress. For
these reasons there is strong support for assessment to be part
of the learning process (Dochy & McDowell, 1997). In general,
assessment is divided into two concepts: formative and summa-
tive. Formative assessment is intended to assist student learning
via deep learning approaches. Summative assessment, on the
other hand, e.g. assessments involving short questions, multiple
choice or unseen exams, checks the level of learning at the end
of a course/module and often takes the form of an exam or
piece of coursework which is graded. Exams lend themselves
to rote learning, or surface approaches by encouraging students
to concentrate on performance goals (passing the test) rather
than learning goals (understanding the subject) (Dweck, 1999).
This leads some to argue that summative assessment in itself
can control and arbitrarily classify students whilst impairing
the students’ own sense of self, limiting their
educational development (Barnett, 2007). Therefore it is argued
that formative assessment should be an integral part of teaching
and learning in HE and that it should be systematically embed-
ded in curriculum practices (Juwah et al., 2004).
To optimize the learning from the assessment procedure, the
marking criteria for that assessment should be transparent and
explicit, as this will enable students to understand what is
required of them to gain a top mark and to gain feedback, via
reflection on their own work when compared with the criteria,
and so will encourage deep learning (Norton et al., 2001).
Table 1.
Compare and contrast deep learning with surface learning (based on Ramsden, 1992).

Deep learning                                              | Surface learning
Takes a broad view                                         | Takes a narrow view
Looks for meaning                                          | Relies on rote learning
Focuses on the concepts and arguments to solve the problem | Focuses on the formula to solve the problem
Relates new knowledge to previously learnt knowledge       | Focuses on learning unrelated bits of a task
Relates knowledge across modules/courses                   | Information is memorized solely for assessment
Relates theory to practice                                 | Theory is not reflected upon in real life
Evidence and argument between theories is developed        | No cross-referencing between theories
Emphasis is student centered                               | Emphasis is external, i.e. assessment driven
Feedback is an extremely important part of learning and
the assessment process. For any assessment to be useful to the
student in their personal development there needs to be a timely
feedback loop that will encourage the student to learn from the
process, to reflect on their work and to assimilate the knowl-
edge for future practice. When assessment (often formative)
encompasses a feedback loop, it results in positive benefits on
learning and achievement across all content areas, knowledge
and skill types, and levels of education (Black & Wiliam, 1998).
Feedback
Feedback is information about how the student’s present
state (of learning and performance) relates to the desired goals
and standards (Nicol & Macfarlane-Dick, 2006) and systematic
reviews show that effective feedback leads to learning gains
(Black & Wiliam, 1998). Lecturer feedback serves as an au-
thoritative external reference point against which students can
evaluate, and self-correct their progress and their own internal
goals (Juwah et al., 2004). Hence the main aim of feedback is
to develop self-regulated students, which requires them to
internalize personal goals against which they can compare and
assess their own performance (Nicol & Macfarlane-Dick,
2006). However, providing meaningful feedback in a
timely manner can be difficult. Although the number of students
attending HE has expanded, the actual resource
allocated per student in the largest classes may be much less
than ten years ago (Gibbs, 2006). The workload of lecturers is
often calculated by “class contact hours”, a measure which ignores
class size, so assessment loads are sometimes overlooked (Gibbs,
2006). These time constraints, together with the modularization of
degrees, often without any increase in staffing, can increase the
use of summative assessment (Gibbs, 2006) and therefore
lead to a decrease in the timely and relevant feedback which
would have enhanced learning.
It is also important that the feedback is in a loop and is part
of a dialogue which encourages engagement. Dialogue between
the lecturer and student will help develop the student’s under-
standing of expectations and standards, to correct misunder-
standings and to get an immediate response to difficulties
(Freeman & Lewis, 1998). It can also inform lecturers as to
whether they are teaching appropriately and at the
right level, therefore providing an immediate opportunity for
realignment of their teaching. A common method of closing the
loop and providing feedback to the lecturer is “Student Evalu-
ated Teaching (SET)”.
Student Evaluated Teaching
The need for greater accountability and improvement in the
quality of teaching has become a major issue in HE in recent
years (Coaldrake & Stedman, 1998; Ballantyne et al., 2000).
Therefore SET has become an integral part of HE’s approach to
maintaining teaching standards, serving both a summative purpose: to
gain data for administrative purposes, to provide information to
students and to meet government guidelines; and a formative
purpose: giving diagnostic feedback to lecturers about their
teaching effectiveness (Marsh, 1987). SET is often the only
measure of teaching effectiveness (Perry, 1997), so it is of pa-
ramount importance that the students give meaningful input.
The literature, however, suggests that SET is currently not fulfilling
all of its objectives, as there does not appear to be a consensus as to
what “effective teaching” includes (Shevlin et al., 2000). For
example, Lowman and Mathie (1993) identify lecturer effectiveness
as comprising: 1) intellectual excitement; and 2) interpersonal
rapport; whilst Swartz et al. (1990) view it as comprising:
a) clear instructional presentation; and b) good management
of student behavior. However, in reality it is probably
all of these items compounded with others, such as encouraging
students’ sense of self-worth (Covington, 1997). Other problems
with this system relate to the validity of the student
evaluations, as it is human nature to be subjective in voting; for
example, Shevlin et al. (2000) found that student evaluation
frequently measures other factors, with 63% of the variance
of the “lecturer effectiveness” score accounted for by
charisma.
Therefore HE establishments are wrong if they quantify
teaching effectiveness on SET, or see students as customers
and shape their educational provision to meet their wishes or
evaluations, as students’ objectives centre around getting the
highest grades with the least amount of effort or time (Chad-
wick & Ward, 1987). Therefore good lecturers who use techniques
to promote deep approaches to learning, which are, by
their nature, often harder work and more difficult than
surface approaches, may be looked upon less favorably than a
teacher who “spoon feeds” information to the students (Platt,
1993).
The author has some unpublished data from staff and stu-
dents at the University of Nottingham, where she is based, re-
garding the SET procedure. Significantly more students than
staff thought the SET aimed to maintain/improve teaching stan-
dards, and to help initiate dialogue between the staff and stu-
dents. Although this is within the SET remit, the fact that staff are
less likely to agree with these statements means that SET is
not fulfilling its potential. Another telling finding is that
SET procedures do not seem to be followed: students
are significantly less likely to believe that enough time has been
set aside for the task, and the feedback loop is not closed
with dialogue from the lecturer to the students. Due to the lack
of feedback, it appears that the students believe that SET is just
to fulfill government requirements, although they agree significantly
more strongly than the staff that teaching needs to be evaluated.
So it appears that students value this process, but become disillusioned
by the lack of feedback/impact and by not being given enough
time to complete the form thoughtfully.
Classroom Assessment Techniques
The use of Classroom Assessment Techniques (CATs) is one
way of resolving all of these problems. CATs offer an egalitarian
and productive method of student evaluation, give the opportunity
for immediate formative feedback to both students
and staff, and also serve as formative assessment, therefore
promoting deep learning and thus enhancing knowledge
and motivation. CATs were first presented and described in
detail in a book by Angelo and Cross in 1993. CATs are quick
and simple activities which are designed to give both the lecturer
and the students useful, immediate feedback. They also
assess the teaching-learning process rather than other con-
founding issues such as the charisma of the lecturer, or how
easy the course is. They are defined as “small-scale assessments
conducted continually in college classrooms by discipline-
based teachers to determine what students are learning in that
class”, (Cross & Steadman, 1996: p. 8).
CATs are sometimes called test-feedback cycles, and imple-
mentation of them allows teachers and students to share, on a
regular basis, their conceptions about both the goals and proc-
esses of learning (Stefani & Nicol, 1997) thereby opening up
dialogue opportunities. They are usually not graded to enable
the student to interact with the feedback, rather than become
obsessed with the grade. However some authors argue that even
making CATs count for 1% of the final grade will encourage
students to take them seriously (Enerson et al., 2007). CATs
rely on self-assessment, thus promoting the internal resources
necessary for lifelong learning, and autonomy which enhances
the learning process. In an evaluation of CAT use, forty-five
out of forty-six faculty members in a university setting reported that
there were no negative experiences associated with their use of
CATs (Catlin & Kalina, 1993), although there is still some de-
bate regarding their efficacy as Cottell & Harwood (1998)
found no difference in grades, participation, or perceptions of
learning between students who used CATs and those who
didn’t.
There are various types of CATs one can adopt (Table 2) al-
though perhaps the most commonly used one is the “one-min-
ute paper,” where students are asked to write down answers to
questions such as, “What was the most important thing you
learned during this class?” or “What questions do you still have
on this topic?” This type of technique enables the lecturer to
discover how the students are processing and synthesizing the
presented material as well as which points need to be reiterated
or elaborated on before progressing. Therefore this method
assesses student knowledge and also offers the lecturer imme-
diate feedback regarding whether the students have grasped the
most salient pieces of information from the lecture, giving an
opportunity to recap on any misunderstood items. Although,
arguably, some CATs, such as the minute paper, could be regarded
as summative in nature, the immediate feedback and ensuing
dialogue make them formative. CATs differ from tests and other
forms of student assessment in that they provide a timely
opportunity for course improvement, with the goal of understanding
the students’ learning and thereby improving teacher effectiveness.
Table 2.
Examples of CATs (adapted from a table on the National Teaching and Learning Forum, 2008).

Knowledge probe
  Method: At the beginning of class, ask students to answer preset questions (open-ended or multiple choice) to assess students’ existing knowledge.
  Feedback: If multiple choice, use vote pads for immediate discussion. Note any weaknesses in knowledge for elaboration. If open-ended, could also utilize peer assessment.
  Effort: Prep: low. In class: medium. Analysis: low.

Minute paper
  Method: At the end of class, ask students to write “What is the most important point you learned today?” and “What is the least clear to you?”
  Feedback: Collect and review responses. Ensure that they have obtained the correct message. In the next class, comment on the findings. Or, ask for peer review and swap with a partner, then discuss any discrepancies.
  Effort: Prep: low. In class: low if collected, higher if peer assessed. Analysis: low.

One-sentence summary
  Method: Can be used at any time during class to test knowledge about an important topic you expect the students to be able to summarize.
  Feedback: Ensure the students have the message. Can be done with a vote pad.
  Effort: Prep: low. In class: low. Analysis: low.

Directed paraphrasing
  Method: Ask students to write a layman’s summary of any principle taught. This assesses their ability to comprehend and transfer concepts.
  Feedback: Peer or teacher assessed. Ensure the salient points are covered.
  Effort: Prep: low. In class: medium. Analysis: medium.

Application cards
  Method: Ask students to write down one real-world application for a theory, principle or procedure you have just covered.
  Feedback: Collect and pick out a broad range of examples to present to the class. Or have peers assess and discuss.
  Effort: Prep: low. In class: low. Analysis: medium.

Muddiest point
  Method: Ask students to write down the “muddiest point” of the lecture, i.e., the concept they feel they have not understood.
  Feedback: Collect written answers or get students to discuss them with their peers. Or have them vote on predetermined items using hand-held voting systems.
  Effort: Prep: low. In class: medium. Analysis: low.
Another benefit of this system is
that there is very little time investment when compared to more
traditional assessment such as essays or exams, especially when
one bears in mind the time taken to provide feedback.
This method also fosters open dyadic communication and
good rapport. CATs can be used within any size of classroom,
large lectures or small seminars, and can personalize learning
and lend themselves to peer-led teaching/feedback. They have
also been used in e-learning/distance formats (Henderson, 2001)
and so are extremely versatile. They are well suited to the
Interactive Voting Systems that many universities
have adopted. These are systems that can be built into Power-
Point presentations and which use individual voting pads. The
lecturer can then build a CAT into their presentation, ask the
students to vote with their keypads and the system will then
calculate the results immediately, presenting them on the screen
for the lecturer and students to analyze. Mobile phones and
SMS technology have also been shown to work when used in
this manner (Markett et al., 2006). Using media in this way can
enhance the learning experience as it is interactive, therefore
promoting deep learning. For example, Laurillard (1996) claims
that by changing the media used in class, the student activity is
changed and hence the learning situation improves, i.e. “pedagogical
re-engineering” (enhancing learning by changing the
balance or combination of the components used) (Collis, 1996).
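To make the mechanics concrete, the following minimal Python sketch illustrates the tally-and-display step such a voting system performs once students have keyed in their answers to a multiple-choice CAT. It is an illustration only, not the software of any particular commercial product; the function names and the simulated keypad input are invented.

```python
from collections import Counter

# A minimal sketch of the tallying step an interactive voting system
# performs. Commercial systems and their PowerPoint integration are
# proprietary, so this only illustrates the idea; the names and the
# simulated keypad input below are invented.

def tally_votes(votes, options):
    """Count keypad responses and return each option's share of the vote."""
    counts = Counter(votes)
    total = len(votes) or 1  # guard against an empty response set
    return {opt: counts.get(opt, 0) / total for opt in options}

def display_results(shares):
    """Print a simple text bar chart, standing in for the on-screen display."""
    for opt, share in shares.items():
        bar = "#" * round(share * 40)
        print(f"{opt} | {bar} {share:.0%}")

if __name__ == "__main__":
    # Hypothetical responses to a multiple-choice CAT with options A-D.
    submitted = ["A", "C", "A", "B", "A", "D", "A", "C"]
    display_results(tally_votes(submitted, ["A", "B", "C", "D"]))
```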
Case Study
In the author’s own teaching, she has used CATs to great
effect. In a module consisting of approximately 10 lectures, she
built in around 5 CATs. The students were not told in advance
which lectures the CATs would appear in. This ensured that attention
was maintained throughout the module. She used a CAT
when there was an important theory or fact for the students to
understand, as the ensuing lectures and work developed on
from it. As she wanted to make the use of CATs fun, she decided
not to mark them, but rather had students debate the topics
with those sitting next to them. The choice of CAT, and where
in the lecture it was used, was informed by when the author
needed feedback about her teaching and by which important
theory the students needed to grasp. She found that using
voting systems built into the PowerPoint presentation engaged
the students and gave immediate feedback about whether they
were correct or not. She also found that having students discuss
CATs such as the one-sentence summary in small groups
promoted deep learning. The students would then write down the
agreed answer anonymously on a card, which was collected
so that the author would get feedback regarding the effectiveness
of her teaching.
With virtual learning environments (VLEs) becoming more integrated
into HE teaching, the author has also used CATs
within the VLE used at the University of Nottingham. Alongside
putting the PowerPoint slides and associated handouts
online, she has found success with an online survey which replicates
the knowledge probe CAT by asking one or two questions. It
appears that the students value the engagement that using CATs
offers, as the SET scores for her modules are always high and
the pass rates of assessments are also high. The author also values
the timely feedback on her teaching, as she can detect any
problems early, giving her the opportunity to approach the theory
in another manner, encourage peer teaching and learning, and
identify key items of literature for the students.
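As an illustration of how such an online knowledge probe can be structured, the sketch below represents a two-question probe as data and flags the topics where too few students answered correctly. It is a hypothetical sketch only: the question content, the 70% threshold and the function names are invented, and it does not depict the actual survey tool in the University of Nottingham VLE.

```python
# A hypothetical sketch of an online knowledge probe, not the actual
# survey tool in the University of Nottingham VLE. Each question keeps
# its topic so that low-scoring topics can be flagged for elaboration
# in the next lecture.

PROBE = [
    {"topic": "formative assessment",
     "question": "Formative assessment is primarily intended to...",
     "options": ["grade students", "assist learning"],
     "answer": "assist learning"},
    {"topic": "feedback loops",
     "question": "A feedback loop is closed when...",
     "options": ["grades are released", "the lecturer responds to student input"],
     "answer": "the lecturer responds to student input"},
]

def weak_topics(responses, threshold=0.7):
    """Return (topic, correct_rate) pairs falling below the threshold,
    i.e. the points the lecturer should revisit."""
    if not responses:
        return []
    flagged = []
    for i, item in enumerate(PROBE):
        answers = [r[i] for r in responses]
        rate = sum(a == item["answer"] for a in answers) / len(answers)
        if rate < threshold:
            flagged.append((item["topic"], rate))
    return flagged

if __name__ == "__main__":
    # Each inner list is one student's answers, in question order.
    responses = [
        ["assist learning", "grades are released"],
        ["assist learning", "the lecturer responds to student input"],
        ["grade students", "grades are released"],
    ]
    for topic, rate in weak_topics(responses):
        print(f"Revisit '{topic}': only {rate:.0%} answered correctly")
```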
Another area she is currently exploring is working with small
groups of students, so that they can design a suitable CAT for
their target lecture. This involves meeting with the students
shortly after their target lecture to discuss what they believe
were the salient points and how they might assess the students’
grasp of them. Working in their small groups, they then select
the CAT they feel would be most appropriate (or design a
new one), and carry it out at the start of the next lecture. They
then collect the data back from the students and review it in
their groups and report back to the author with any deficits in
the learning identified. If any deficits are observed, the group
is also required to suggest how these could be remedied, for
example resources to which the class could be referred for further
reading, typed-up study notes, etc. These resources
are then given back to the class by the students, and
feedback is obtained, e.g. whether other students knew of any
further useful resources which were omitted. To ensure
commitment, the author has allocated 5% of the grade to this task. This task not
only promotes deep learning (for both “teachers” and “students”),
but also encourages teamwork and hones the students’ teaching
and public-speaking abilities. For the author, it also ensures some
student-designed teaching, from which she can also learn. Although it
seems to have worked well, it does take some organization,
such as getting students to form groups and staggering the groups
throughout the module. However, the feedback from students
has so far been positive.
Conclusion
CATs encourage the view that teaching and learning is a
formative process that evolves over time. By enabling a swift
reaction to student answers, they provide immediate feedback
to the lecturer which can be promptly acted upon, therefore
giving the teacher the chance to close the feedback loop. They
encourage self-assessment by the student
and reflection amongst both lecturers and students. How-
ever, care must be taken in choosing the appropriate CAT and
in allowing enough time in class to ensure that they are
worthwhile. It may also be a good idea to give the CATs a
nominal grade of 5% or 10% to ensure that the students value
them.
Tips for successful use of CATs (Angelo & Cross, 1993):
Don’t ask for feedback on things you can’t or won’t
change;
Don’t collect more feedback than you can analyze and re-
spond to by the next lecture;
Before you use a CAT, ask yourself: How might responses
to this question(s) help me and my students improve? If you
can’t answer that question, don’t do the assessment;
Don’t use too many different CAT techniques in one se-
mester. Student responses are more useful when the stu-
dents are comfortable with a particular technique and un-
derstand it (Martin, 2011).
REFERENCES
Angelo, T. A. (1991). Classroom research: Early lessons from success.
San Francisco, CA: Jossey-Bass.
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques—A
handbook for college-teachers. San Francisco, CA: Jossey-Bass Pub-
lishers.
Ballantyne, R., Borthwick, J., & Packer, J. (2000). Beyond student
evaluation of teaching: Identifying and addressing academic staff
development needs. Assessment and Evaluation in Higher Education,
25, 211-236.
Barnett, R. (2007). Assessment in Higher Education. In D. Boud, & N.
Falchikov (Eds.), Rethinking assessment in higher education: Learn-
ing for the longer term (pp. 29-40). London: Routledge Taylor and
Francis.
Biggs, J. (1987). Student approaches to learning and studying. Haw-
thorn, VIC: Australian Council for Educational Research.
Biggs, J. (1993). What do inventories of students’ learning process
really measure? A theoretical review and clarification. British Jour-
nal of Educational Psychology, 63, 3-19.
doi:10.1111/j.2044-8279.1993.tb01038.x
Black, P., & Wiliam, D. (1998). Assessment and classroom learning.
Assessment in Education, 5, 7-74. doi:10.1080/0969595980050102
Catlin, A., & Kalina, M. (1993). What is the effect of the Cross/Angelo
model of classroom assessment on student outcome? A study of the
classroom assessment project at eight California community col-
leges. Sacramento, CA: California Community College Chancellor’s
Office.
Chadwick, K., & Ward, J. (1987). Determinants of consumer satisfac-
tion with education: Implications for college & university adminis-
trators. College and University, 2, 236-246.
Coaldrake, P., & Stedman, L. (1998). On the brink: Australia’s
universities confronting their future. St. Lucia, QLD: University of
Queensland Press.
Collis, B. (1996). Tele-learning in a Digital World: The future of dis-
tance learning. London: International Thomson Computer Press.
Covington, M. V. (1997). A motivational analysis of academic life in
college. In R. P. Perry, & J. C. Smart (Eds.), Effective teaching in
higher education: Research and practice (pp. 61-100). New York:
Agathon Press.
Cross, P., & Steadman, M. (1996). Classroom Research: Implementing
the scholarship of teaching. San Francisco, CA: Jossey-Bass.
Dochy, F., & McDowell, L. (1997). Assessment as a tool for learning.
Studies in Educational Evaluation, 23, 279-298.
doi:10.1016/S0191-491X(97)86211-6
Dweck, C. (1999). Self-theories: Their role in motivation, personality
and development. Philadelphia, PA: Psychology Press.
Enerson, D. N., Plank, K. M., & Johnson, R. N. (2007). An introduction
to classroom assessment techniques. University Park, PA: Penn State.
Freeman, R., & Lewis, R. (1998). Planning and implementing assess-
ment. London: Kogan Page.
Gibbs, G. (2006). Why assessment is changing. In C. Bryan, & K.
Clegg (Eds.), Innovative assessment in higher education (pp. 11-22).
Abingdon: Routledge.
Henderson, T. (2001). Classroom assessment techniques in asynchro-
nous learning networks. The Technology Source. URL (last checked
12 October 2012).
http://ts.mivu.org/default.asp?show=article&id=1034
Juwah, C., Macfarlane-Dick, D., Matthew, B., Nicol, D., Ross, D., &
Smith, B. (2004). Enhancing student learning through effective for-
mative feedback. The Higher Education Academy. URL (last
checked 12 October 2012).
http://www.heacademy.ac.uk/assets/documents/resources/database/id
353_senlef_guide.pdf
Laurillard, D. (1996). How should UK higher education make best use
of new technology? Glasgow: Association for Learning Technology.
Lowman, J., & Mathie, V. A. (1993). What should graduate teaching
assistants know about teaching? Teaching of Psychology, 20, 84-88.
doi:10.1207/s15328023top2002_4
Markett, C., Sanchez, I. A., Weber, S., & Tangney, B. (2006). Using
short message service to encourage interactivity in the classroom.
Computers & Education, 46, 280-293.
doi:10.1016/j.compedu.2005.11.014
Marsh, H. W. (1987). Students’ evaluation of university teaching: Re-
search findings, methodological issues and directions for future re-
search. International Journal of Educational Research, 11, 253-388.
doi:10.1016/0883-0355(87)90001-2
Martin, M. B. (2011). Classroom assessment techniques designed for
technology. URL (last checked 29 August 2012).
http://online-course-design.pbworks.com/f/Classroom+Assessment+
Techniques+Designed+Technology.pdf
Marton, F., & Saljo, R. (1976). On qualitative differences in learning:
I. Outcome and process. British Journal of Educational Psychology,
46, 4-11. doi:10.1111/j.2044-8279.1976.tb02980.x
Moore, M. G. (1989). Editorial: Three types of interaction. The Ameri-
can Journal of Distance Education, 3, 1-6.
doi:10.1080/08923648909526659
National Teaching and Learning Forum (1998). Classroom assessment
techniques (2nd ed.). New York: Jossey-Bass.
www.ntlf.com/html/lib/bib/assess.htm
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and
self-regulated learning: A model and seven principles of good feed-
back practice. Studies in Higher Education, 31, 199-218.
doi:10.1080/03075070600572090
Norton, L. S., Tilley, A. J., Newstead, S. E., & Franklyn-Stokes, A.
(2001). The pressures of assessment in undergraduate courses and
their effect on student behaviors. Assessment & Evaluation in Higher
Education, 26, 269-284. doi:10.1080/02602930120052422
Perry, R. P. (1997). Teaching effectively: Which students? What
methods? In R. P. Perry, & J. C. Smart (Eds.), Effective teaching in
higher education: Research and practice (pp. 154-170). New York:
Agathon Press.
Platt, M. (1993). What student evaluations teach. Perspectives in Po-
litical Science, 22, 29-40. doi:10.1080/10457097.1993.9944516
Ramsden, P. (1992). Learning to teach in higher education. London:
Routledge. doi:10.4324/9780203413937
Shevlin, M., Banyard, P., Davies, M., & Griffiths, M. (2000). The va-
lidity of student evaluation of teaching in higher education: Love me,
love my lectures? Assessment and Evaluation in Higher Education,
25, 397-405. doi:10.1080/713611436
Stefani, L., & Nicol, D. (1997). From teacher to facilitator of collabora-
tive enquiry. In S. Armstrong, G. Thompson, & S. W. Brown (Eds.),
Facing up to radical changes in universities and colleges (pp. 131-
140). London: Kogan Page.
Swartz, C. W., White, K. P., & Stuck, G. B. (1990). The factorial
structure of the North Carolina teacher performance appraisal in-
strument. Educational and Psychological Measurement, 50, 175-185.
doi:10.1177/0013164490501021