Creative Education
2012. Vol.3, Special Issue, 946-950
Published Online October 2012 in SciRes (http://www.SciRP.org/journal/ce) http://dx.doi.org/10.4236/ce.2012.326144
Investigation of Clinical Medical Teachers’ Opinion about
Validity-Feasibility of Clinical Assessment Tools in Medical
Sciences Universities in Tehran
Jalil Kuhpayeh Zadeh1, Helen Dargahi2, Jila Shajari2, Rafei Ali3, Mahsa Narenjiha2,
Soudabeh Afsharpour2, Darioush Mehdivarzi4
1Medical Education Research Center, Tehran University of Medical Sciences, Tehran, Iran
2Department of Medical Education, Tehran University of Medical Sciences, Tehran, Iran
3Department of Biostatistics, Tehran University of Medical Sciences, Tehran, Iran
4Department of Orthopedic Surgery, Shahed University of Medical Sciences, Tehran, Iran
Email: Helen.dargahi@yahoo.com
Received August 1st, 2012; revised September 3rd, 2012; accepted September 15th, 2012
The purpose of this study was to investigate the validity and feasibility of clinical assessment methods from the point of view of clinical instructors. This descriptive study was conducted in universities in the city of Tehran. The study population consisted of academic clinical experts. The instrument was a two-part questionnaire developed from the questionnaire suggested by the Accreditation Council for Graduate Medical Education (ACGME) and from valid scientific resources. Sampling was purposive. A total of 83 questionnaires were obtained: 39 from Tehran University of Medical Sciences, 24 from Iran University of Medical Sciences, and 20 from Shahid Beheshti University of Medical Sciences. Data analysis was conducted with SPSS 16. The data indicated that the majority of the study population believed that the MCQ (97.6%) is used in the clinical setting, followed by the OSCE (92.8%) and the logbook (86.7%); Multi-Source Feedback (MSF) (8.4%) and the portfolio (6%) are not often used. In contrast, the tools judged most suitable and most feasible for assessing medical students differed considerably across domains, and many methods were suggested for efficient evaluation. The most suitable and the most feasible methods coincided in 60% of cases. Clearly, no single rating is able to provide the whole story about any doctor's ability to practice medicine, as this requires the demonstration of ongoing competence across a number of different general and specific areas.
Keywords: Validity; Feasibility; Clinical Assessment Methods; Medical Sciences University; Tehran;
Clinical Medical Teachers
Introduction
Assessment has a powerful positive steering effect on learn-
ing and the curriculum (Tabish, 2010) and drives both how a
subject is taught and what is taught (Sultana, 2006). Assessment and evaluation are crucial steps in the educational process (Tabish, 2010) that play a major role in medical education, in the lives of medical students, and in society by certifying competent physicians who are able to take care of the public. The very foundation of medical curricula is built around assessment milestones for students, and assessment becomes a motivating force for them to learn (Shumway, 2003). Before choosing an assessment method, some important questions must be asked: What should be assessed? Why assess? And for an assessment instrument one must also ask: Is it valid? Is it reliable? Is it feasible? (Tabish, 2010). Since the 1950s,
there has been rapid and extensive change in the way assess-
ment is conducted in medical education. Several new methods
of assessment have been developed and implemented over this
time and they have focused on clinical skills (taking a history
from a patient and performing a physical examination), com-
munication skills, procedural skills, and professionalism. In
2005, Van der Vleuten and Schuwirth expanded on these fac-
tors for purposes of assessment in medical education and added
educational effect, feasibility, and acceptability to validity and
reliability. Feasibility is the degree to which the assessment
method selected is affordable and efficient for the testing pur-
pose; assessments need to have reasonable costs. Acceptability
is the extent to which stakeholders in the process endorse the
measurement and the associated interpretation of scores (Norcini & McKinley, 2007). Mahara (1998) believes that clinical evaluation processes should focus on reflection, meaning making, and student-teacher partnerships. Evaluation tools are needed that capture the unity and context-dependent nature of clinical practice and support an empowering teacher-student relationship (Bourbonnais, 2008). Unfortunately, in the vast
majority of medical schools, feedback to clinical clerks is neither
direct nor timely. The primary method for evaluating the ward performance of both junior and senior medical students is typically a subjective written evaluation completed by faculty (Colletti, 2000). On the other hand, because of faculty time constraints, reliance on residents as teachers of medical students in the clinical setting has increased considerably (Johnson & Chen, 2006), and in many institutions residents and fellows are expected to take part in the evaluation process. In addition, particularly for junior rotations, many institutions complement the subjective written evaluation of clinical ward
performance with some form of cognitive evaluation (Colletti,
2000). For an assessment method to be acceptable, it needs to be valid, reliable, and practical, and it should have a positive effect on a trainee's learning (Brown & Doshi, 2006). Effective evaluation not only increases students' motivation but also helps instructors determine the strengths and weaknesses of their educational activities so that they can improve their performance (Baral & Paudel, 2007). Poorly selected assessment methods can lead to passive or rote learning (to get through an examination), which is associated with a rapid decay of knowledge and sometimes an inability to apply it in real situations (Brown & Doshi, 2006). Because of the importance of this topic, we decided to assess the validity and feasibility of medical students' assessment tools from the point of view of Iranian academic clinical experts.
Methods & Material
This survey was a descriptive study conducted in universities in the city of Tehran. The study population consisted of academic clinical experts working as faculty members. The study instrument was a two-part questionnaire. The first part covered demographic and institutional data; the second was a thirteen-item table of medical students' competencies in six domains (patient care, medical knowledge, practice-based learning & improvement, interpersonal & communication skills, professionalism, and systems-based practice), together with the clinical assessment tools commonly used for medical students. The questionnaire was developed from the questionnaire suggested by the ACGME and from valid scientific resources. After translation and back-translation, the questionnaire was adapted to the conditions of the universities, and a three-page summary of the clinical assessment tools was enclosed with it. Content validity of the questionnaire was confirmed through an expert survey and a preliminary study. Reliability was assessed by calculating the Cronbach's alpha coefficient for internal consistency. Sampling was purposive: the researcher identified several key informants through the educational development centers (EDCs), and these informants then introduced the remaining participants. After coordinating and making appointments with the clinical professors, the researcher delivered the questionnaire in person with a short explanation of how to complete it and made a second appointment to collect it. A total of 83 questionnaires were collected: 39 from Tehran University of Medical Sciences, 24 from Iran University of Medical Sciences, and 20 from Shahid Beheshti University of Medical Sciences. Analysis of the data was conducted with SPSS version 17.
Clinical Assessment Tools
Multi-Rater (360˚) Evaluation
Multi-rater (360˚) evaluations provide multiple perspectives
on various aspects of the resident’s performance. For residents,
Multi-rater (360˚) assessment might entail evaluation by attendings, other residents, medical students, nurses, ancillary staff,
clerical/administrative support staff, and patients. Self-evalua-
tion is an important part of the Multi-Rater (360˚) assessment
(Joyce, 2006).
Portfolio
A learning portfolio is a collection of materials that repre-
sents a resident’s efforts in multiple areas of the curriculum.
The purpose of a learning portfolio is to improve ability.
Key components of a learning portfolio include:
Self-assessment and goal setting;
Mentored observation and feedback;
Works in progress with formative feedback;
Self reflection on work; and
Final materials documenting achievement (Joyce, 2006).
Chart Stimulated Recall Oral Examination (CSR)
In a chart stimulated recall (CSR) examination patient cases
of the examinee (resident) are assessed in a standardized oral
examination. A trained and experienced physician examiner
questions the examinee about the care provided, probing for the reasons behind the work-up, diagnoses, interpretation of clinical findings, and treatment plans. The examiners rate the examinee using a well-established protocol and an accurate scoring
procedure (Wilkinson & Wade, 2005).
The Mini-Clinical Evaluation Exercise (Mini-CEX)
The mini-clinical evaluation exercise (mCEX) is a method of
clinical skills assessment. Faculty observe and evaluate a resi-
dent during a focused new or follow-up patient encounter. The
resident is evaluated along domains using a scale and then re-
ceives feedback. The mCEX is performed on multiple occa-
sions with different patients and different observers (Kogan et
al., 2003).
Assessment of Procedural Skills: DOPS
Directly observed procedural skills (DOPS) is a method of
assessment designed specifically by the RCP for the assessment
of practical skills. An assessor observes a trainee undertaking a
routine practical procedure and scores specific components of
the procedure at the time of the procedure. Finally, they give
the trainee an overall score on their performance (Wilkinson &
Wade, 2005).
Viva Voce (Oral Examination)
“... assessment in which a student’s response to the assess-
ment task is verbal, in the sense of being expressed or conveyed
by speech instead of writing” (Pearce & Lee, 2007).
Logbook
The logbook is a convenient tool for recording procedural
skills learned during training. The logbook will help trainees
record:
Understanding of the indications, limitations, contraindica-
tions and complications of diagnostic and therapeutic pro-
cedures;
Performance of diagnostic and therapeutic procedures;
Interpretation of diagnostic and therapeutic procedure re-
sults (Wilkinson & Wade, 2005).
Objective Structured Clinical Examination (OSCE)
In an objective structured clinical examination (OSCE) one
or more assessment tools are administered at 12 to 20 separate
standardized patient encounter stations, each station lasting 10 -
15 minutes. Between stations candidates may complete patient
notes or a brief written examination about the previous patient encounter. All candidates move from station to station in sequence on the same schedule. Standardized patients are the primary assessment tool used in OSCEs, but OSCEs have included other assessment tools such as data interpretation exercises using clinical cases, and clinical scenarios with mannequins, to assess technical skills (Jafarzadeh, 2009).
Written Examination (MCQ)
A written or computer-based MCQ examination is composed
of multiple-choice questions (MCQ) selected to sample medical
knowledge and understanding of a defined body of knowledge,
not just factual or easily recalled information. Each question or
test item contains an introductory statement followed by four or
five options in outline format. The examinee selects one of the
options as the presumed correct answer by marking the option
on a coded answer sheet (McCoubrie, 2004).
Results
Of the 102 questionnaires delivered to experts (60% male and 40% female), 83 were completed and returned (response rate, 81.4%). The mean age of the participants was 44 years (SD = 6.06), and their mean length of service as clinical teachers was 13.7 years (SD = 6.56).
Table 1 indicates that the majority of the study population (97.6%) believes that the MCQ is used in the clinical setting. The OSCE (92.8%) and the logbook (86.7%) are the next most frequently used methods, whereas MSF (8.4%) and the portfolio (6%) are not used often.
Table 2 indicates that the tools judged most suitable and most feasible for assessing medical students differ considerably across domains, and many methods were suggested for efficient evaluation. In sixty percent of cases the most suitable and the most feasible methods are the same.
Mini-CEX is the most suitable and the most feasible assessment tool for the competencies “Interviewing” and “Develop & carry out pt. management plan”. Mini-CEX is the most feasible method for evaluating “Patient teaching”, “Interpersonal communication skills” and “Professionalism”, and MSF is the most suitable method for evaluating “Practice-based learning”. MCQ and oral exams are suitable and feasible methods for evaluating the competencies “Medical knowledge” and “System-based practice”; for the latter, MCQ is the most feasible and Portfolio the most suitable method. And finally,
Table 1.
Frequency distribution and percentage of academic clinical experts’ opinion about using the clinical assessment tools in medical sciences universities.

Assessment tool: frequency (percentage)
MCQ: 81 (97.6%)
Viva: 39 (47%)
OSCE: 77 (92.8%)
CSR: 17 (20.5%)
Mini-CEX: 27 (32.5%)
MSF: 7 (8.4%)
Logbook: 72 (86.7%)
DOPS: 53 (63.9%)
Portfolio: 4 (4.8%)
Table 2.
Academic clinical experts’ opinion about medical students’ clinical assessment tools in view of validity and feasibility.

Assessment tool columns (left to right): MCQ, Viva, OSCE, CSR, Mini-CEX, MSF, Logbook, DOPS, Portfolio.

Patient care
Interviewing: S2, F2 | S1, F1
Informed decision-making: F2 | S1, F1 | S2
Develop & carry out pt. management plan: S2, F2 | S1, F1
Patient teaching: F1, S2 | S1 | F2
Medical procedures: S2, F2 | S1, F1
Medical knowledge
Investigatory & analytic thinking: F1 | S1, F2 | S2
Knowledge & application of basic science: S1, F1 | S2, F2
Practice-based learning
Application of research, IT & statistical methods: S2, F2 | S1, F1
Analyze own practice for improvements: S2 | S1, F1 | F2
Facilitate learning of others: S1, F1 | S2, F2
Interpersonal communication skills: F1, S2 | S1 | F2
Professionalism: F1, S2 | S1, F2
System-based practice: F1 | S2 | F2 | S1

Note: S1 = the most suitable; S2 = the next suitable; F1 = the most feasible; F2 = the next feasible.
DOPS is the best method to assess competency “Medical pro-
cedures” and OSCE is the next.
Discussion
The evaluation of clinical competence is a major responsibil-
ity of medical educators (Tabish, 2010). Effective evaluation not only increases students’ motivation but also helps instructors determine the strengths and weaknesses of their educational activities so that they can improve their performance (Jafarzadeh, 2009). In our study the majority of the study population (97.6%) believed that the MCQ is used in the clinical setting. Although MCQs are a valid method of competence testing, they do not guarantee competence, as professional competence integrates knowledge, skills, attitudes and communication skills (McCoubrie, 2004). The OSCE and the logbook were the next most commonly used methods, whereas MSF and the portfolio are not used often.
As we know, a direct relationship between instructional objectives and tests must exist; thus, tests should come directly from the objectives and focus on important and relevant content (Collins, 2006). One of the barriers to using the portfolio and MSF (360˚) is that all raters must be trained in using these tools. Portfolio scoring is difficult, and MSF may require a large number of evaluators to obtain a stable estimate of performance, which can increase cost (Joyce, 2006). Our data indicated that the tools judged most suitable and most feasible for assessing medical students were the same in sixty percent of cases; this is an acceptable result and suggests that the educational environment is appropriate for improving the clinical assessment methods used to evaluate medical students.
In July 2002, the Accreditation Council for Graduate Medi-
cal Education (ACGME) began requiring residency programs
to demonstrate resident competency in six areas: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice (Tabish, 2010), and developed a “Toolbox” to suggest possible techniques for evaluating each competency (Cogbill & O’Sullivan, 2005), although the validity and reliability of the suggested tools have not been demonstrated for most of them, and many tools may have limited feasibility because of time constraints and other reasons (Gigante & Swan, 2010). Previous studies have indicated that measuring both professional (Tabish, 2010) and medical (Epstein, 2007) competence is extremely complex. Assessment techniques have limitations, and therefore multiple strategies are recommended (Tabish, 2010); for this reason, the assessment tools selected should be practical within the residency program, so that they add valuable information about a resident’s performance and assist in making promotion and graduation decisions (Joyce, 2006). For example, a 360-degree evaluation can be used to assess interpersonal and communication skills, professional behaviors, and some aspects of patient care and systems-based practice, whereas the MCQ may not be a suitable method for determining how a resident will perform with a patient (Dannefer et al., 2005), although it can assess taxonomically higher-order cognitive processing if the questions are constructed appropriately. The portfolio is often
used to assess professional development (Michels, 2009). CSR evaluates the trainee’s clinical decision-making, reasoning, and application of medical knowledge with real patients, and DOPS is appropriate for the competencies of patient care, professionalism, interpersonal skills, and communication (Gigante & Swan, 2010), and wherever practical skills are important (Brown & Doshi, 2006). The results of this study showed that Mini-CEX is the most suitable and the most feasible assessment tool for the competencies “Interviewing” and “Develop & carry out pt. management plan”; Mini-CEX is also the most feasible method for “Patient teaching”, “Interpersonal communication skills” and “Professionalism”, and MSF is the most suitable. Although the Mini-CEX has limited generalisability because it is restricted to one patient and one assessor, it gives raters a snapshot view (Brown & Doshi, 2006), and it is feasible to use in inpatient and outpatient medicine clerkships for formative assessment (Kogan et al., 2003). Moreover, the main strength of the Mini-CEX is its ability to provide immediate, task-related feedback from a knowledgeable assessor (Singh & Sharma, 2010).
It can also be seen that the portfolio and the logbook are suitable and feasible methods for evaluating the competency “Practice-based learning”. MCQ and oral exams are suitable and feasible methods for evaluating the competency “Medical knowledge”, while for “System-based practice” MCQ is the most feasible and Portfolio the most suitable method. And finally, DOPS is the best method for assessing the competency “Medical procedures”, and OSCE is the next.
Conclusion
The tools judged most suitable and most feasible for assessing medical students differ considerably across domains, and many methods were suggested for efficient evaluation. All methods of assessment have strengths and intrinsic flaws. The use of multiple observations and several different assessment methods over time can partially compensate for flaws in any one method (Epstein, 2007). A multi-method as-
sessment might include direct observation of the student inter-
acting with several patients at different points during the rota-
tion, a multiple-choice examination with both “key features”
and “script-concordance” items to assess clinical reasoning, an
encounter with a standardized patient followed by an oral ex-
amination to assess clinical skills in a standardized setting,
written essays that would require literature searches and syn-
thesis of the medical literature on the basic science or clinical
aspects of one or more of the diseases the student encountered,
and peer assessments to provide insights into interpersonal
skills and work habits (Epstein, 2007). Clearly, no
single rating is able to provide the whole story about any doc-
tor’s ability to practice medicine, as this requires the demon-
stration of ongoing competence across a number of different
general and specific areas (Brown & Doshi, 2006). Multiple assessment methods and multiple perspectives, however, provide rich data that document a resident’s ability (or inability) to perform as a medical practitioner upon graduation. Finally, assessment results provide feedback to both the resident and the faculty on whether the resident is making the expected progress in achieving the knowledge, skills, and attitudes outlined by the objectives (Joyce, 2006).
Acknowledgements
The authors would like to thank all of the academic clinical experts who participated in this study for their worthy opinions, and also Dr. Soltani Arabshahi, Dr. Mohamad Ali Mohagheghi, Dr. Shahram Yazdani, Dr. Amir Hosein Emami, and Dr. Kurosh Vahidshahi for their helpful guidance.
REFERENCES
Baral, N., & Paudel, B. H. (2007). An evaluation of training of teachers
in medical education in four medical schools of Nepal. Nepal Medi-
cal College Journal, 9, 157-161.
Bourbonnais, F., Langford, S., & Giannantonio, L. (2008). Development of a clinical evaluation tool for baccalaureate nursing students. Nurse Education in Practice, 8, 62-71.
doi:10.1016/j.nepr.2007.06.005
Brown, N., & Doshi, M. (2006). Assessing professional and clinical
competence: The way forward. Advances in Psychiatric Treatment,
12, 81-91.
Cogbill, K. K., O’Sullivan, P. S., & Clardy, J. (2005). Residents’ per-
ception of effectiveness of twelve evaluation methods for measuring
competency. Academic Psychiatry, 29, 76-81.
doi:10.1176/appi.ap.29.1.76
Colletti, L. M. (2000). Difficulty with negative feedback: Face-to-face evaluation of junior medical student clinical performance results in grade inflation. The Journal of Surgical Research, 90, 82-87.
doi:10.1006/jsre.2000.5848
Collins, J. (2006). Writing multiple choice questions for continuing medical education activities and self-assessment modules. Radiographics, 26, 543-551. doi:10.1148/rg.262055145
Dannefer, E., Henson, L., Bierer, S., et al. (2005). Peer assessment of
professional competence. Medical Education, 39, 713-722.
doi:10.1111/j.1365-2929.2005.02193.x
Gigante, J., & Swan, R. A. (2010). Simplified observation tool for
residents in the outpatient clinic. Journal of Graduate Medical Edu-
cation, 2, 108-110. doi:10.4300/JGME-D-09-00090.1
http://www.acgme.org/outcome/e-learn/redir_module3manual.asp
Jafarzadeh, A. (2009). Designing the OSCE method for evaluation of
practical immunology course of medical students: In comparison to
written-MCQ and oral examination. Rawal Medical Journal, 34,
219-222.
Johnson, N., & Chen, J. (2006). Medical student evaluation of teaching
quality between obstetrics and gynecology residents and faculty as
clinical preceptors in ambulatory gynecology. American Journal of
Obstetrics & Gynecology, 195, 1479-1483.
doi:10.1016/j.ajog.2006.05.038
Joyce, B. (2006). Developing an assessment system. Facilitator’s Guide.
Accreditation Council for Graduate Medical Education, 1, 15-17.
Kogan, J., Bellini, L., & Shea, J. (2003). Feasibility, reliability, and validity of the Mini-CEX in a medicine core clerkship. Academic Medicine, 78, 533-535.
McCoubrie, P. (2004). Improving the fairness of multiple-choice questions: A literature review. Medical Teacher, 26, 709-712.
doi:10.1080/01421590400013495
Michels, N. R., Driessen, E. W., Muijtjens, A. M., Van Gaal, L. F., Bossaert, L. L., & De Winter, B. Y. (2009). Portfolio assessment during medical internships: How to obtain a reliable and feasible assessment procedure? Education for Health, 22, 313.
Norcini, J., & McKinley, D. (2007). Assessment methods in medical education. Teaching and Teacher Education, 23, 239-250.
doi:10.1016/j.tate.2006.12.021
Pearce, G., & Lee, G. (2007). Marketing student perceptions of viva
voce (oral examination) as an Assessment Method. ANZMAC 2007:
Reputation, Responsibility, Relevance, 31, 8.
Epstein, R. M. (2007). Assessment in medical education. The New England Journal of Medicine, 356, 387-396.
doi:10.1056/NEJMra054784
Shumway, J. M., & Harden, R. M. (2003). AMEE education guide No. 25: The assessment of learning outcomes for the competent and reflective physician. Medical Teacher, 25, 569-584.
doi:10.1080/0142159032000151907
Singh, T., & Sharma, M. (2010). Mini-clinical examination (CEX) as a
tool for formative assessment. The National Medical Journal of India,
23, 100-102.
Sultana, C. (2006). The objective structured assessment of technical
skills and the ACGME competencies. Obstetrics and Gynecology
Clinics of North America, 33, 259-265.
Tabish, A. (2010). Assessment methods in medical education. International Journal of Health Sciences, 2, 3-7.
Wilkinson, J., & Wade, W. (2005). New methods of performance as-
sessment for trainees. RCPath Bulletin, 132, 12-15.