J. K. ZADEH ET AL.
DOPS is the best method to assess the competency “Medical procedures”, followed by OSCE.
Discussion
The evaluation of clinical competence is a major responsibil-
ity of medical educators (Tabish, 2010). Effective evaluation
not only increases students’ motivation but also helps instructors identify the strengths and weaknesses of their educational activities so that performance can be improved (Jafarzadeh, 2009). In our study, the majority of the study population (97.6%) believed that MCQ is used in the clinical setting. Although MCQs are a valid method of competence testing, they do not guarantee competence, as professional competence integrates knowledge, skills, attitudes and communication skills (McCoubrie, 2004). OSCE and logbook were the next most commonly used methods, while MSF and Portfolio are used infrequently.
As we know, a direct relationship must exist between instructional objectives and tests. Thus, tests should come directly from the objectives and focus on important and relevant content (Collins, 2006). One barrier to using the portfolio and MSF (360˚) is that all raters must be trained in using these tools. Portfolio scoring is difficult, and MSF may require a large number of evaluators to obtain a stable estimate of performance, which can increase cost (Joyce, 2006). Our data indicated that in sixty percent of cases the tools rated most suitable and most feasible were the same. This is an acceptable result, and it suggests that the educational environments are appropriate enough that clinical assessment methods for evaluating medical students can be improved.
In July 2002, the Accreditation Council for Graduate Medi-
cal Education (ACGME) began requiring residency programs
to demonstrate resident competency in six areas: patient care,
medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice (Tabish, 2010), and developed a “Toolbox” to suggest possible techniques for evaluating each competency (Cogbill & O’Sullivan, 2005), though the validity and reliability of the suggested tools have not been demonstrated for most,
and many tools may have limited feasibility because of time
constraints and other reasons (Gigante & Swan, 2010). Previous studies indicated that measuring both professional (Tabish, 2010) and medical (Ronald & Epstein, 2007) competence is extremely complex. Assessment techniques have limitations, and therefore multiple strategies are recommended (Tabish, 2010); consequently, the assessment tools selected should be practical within a residency program, so that they add valuable information about a resident’s performance and assist in making promotion and graduation decisions (Joyce,
2006). For example, a 360-degree evaluation can be used to assess interpersonal and communication skills, professional behaviors, and some aspects of patient care and systems-based practice, whereas MCQ may not be a suitable method to determine how a resident will perform with a patient (Dannefer et al., 2005), although it can assess taxonomically higher-order cognitive processing if the questions are constructed appropriately. Portfolios are often used to assess professional development (Michels, 2009). CSR evaluates the trainee’s clinical decision-making, reasoning and application of medical knowledge with real patients, and DOPS is appropriate for the competencies of patient care, professionalism, and interpersonal and communication skills (Gigante & Swan, 2010), and wherever practical skills are important (Brown & Doshi, 2006). The results of this study showed that Mini-CEX
is the most suitable and the most feasible assessment tool for
competencies “Interviewing” and “Develop & Carry out pt.
Management plan”. Mini-CEX is also the most feasible method overall, while MSF is the most suitable. Although the Mini-CEX has limited generalisability because it is restricted to one patient and one assessor, it gives raters a snapshot view (Brown & Doshi, 2006), and it is feasible to use in inpatient and outpatient medicine clerkships for formative assessment (Kogan et al., 2003). Moreover, the main strength of the mini-CEX is its ability to provide immediate, task-related feedback from a knowledgeable assessor (Singh & Sharma, 2010).
It can also be seen that Portfolio and Logbook are suitable and feasible methods to evaluate the competency “Practice-based learning”. MCQ and oral exams are suitable and feasible methods to evaluate the competency “Medical knowledge”, while for “System-based practice” MCQ is the most feasible and Portfolio the most suitable method. Finally, DOPS is the best method to assess the competency “Medical procedures”, followed by OSCE.
Conclusion
The most suitable and feasible clinical assessment tools for medical students differ considerably across domains, as many methods have been suggested for efficient evaluation. All methods of assessment have strengths and intrinsic flaws. The use of multiple observations and several different assessment methods over time can partially compensate for flaws in any
one method (Ronald & Epstein, 2007). A multi-method as-
sessment might include direct observation of the student inter-
acting with several patients at different points during the rota-
tion, a multiple-choice examination with both “key features”
and “script-concordance” items to assess clinical reasoning, an
encounter with a standardized patient followed by an oral ex-
amination to assess clinical skills in a standardized setting,
written essays that would require literature searches and syn-
thesis of the medical literature on the basic science or clinical
aspects of one or more of the diseases the student encountered,
and peer assessments to provide insights into interpersonal
skills and work habits (Ronald & Epstein, 2007). Clearly, no
single rating is able to provide the whole story about any doc-
tor’s ability to practice medicine, as this requires the demon-
stration of ongoing competence across a number of different
general and specific areas (Brown & Doshi, 2006). Multiple
assessment methods and multiple perspectives, however, pro-
vide rich data that support a resident’s ability (or inability) to
perform as a medical practitioner upon graduation. Finally, assessment results provide feedback to both the resident and the faculty that the resident is making the expected progress in achieving the knowledge, skills, and attitudes outlined by the objectives (Joyce, 2006).
Acknowledgements
The authors would like to thank all of the academic clinical experts who participated in this study for their valuable opinions, and also Dr. Soltani Arabshahi, Dr. Mohamad Ali Mohagheghi, Dr. Shahram Yazdani, Dr. Amir Hosein Emami and Dr. Kurosh Vahidshahi for their helpful guidance.
Copyright © 2012 SciRes. 949