Creative Education
Vol.06 No.08(2015), Article ID:56589,5 pages
10.4236/ce.2015.68084

Medical Students Progress in the Practice Assessment of Knowledge, Skills, and Attitudes

S. M. R. R. Passeri1, L. M. Li2, W. Nadruz Jr.3, A. M. Bicudo4

1Medical School, University of Campinas-Unicamp, Campinas, Brazil

2Department of Neurology, Medical School, University of Campinas-Unicamp, Campinas, Brazil

3Department of Internal Medicine, Medical School, University of Campinas-Unicamp, Campinas, Brazil

4Department of Pediatrics, Medical School, University of Campinas-Unicamp, Campinas, Brazil

Email: spasseri@fcm.unicamp.br

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 28 April 2015; accepted 22 May 2015; published 25 May 2015

ABSTRACT

We have developed an alternative evaluation instrument at our institution, named ACHA-Avaliação de Conhecimento, Habilidades e Atitude (Assessment of Knowledge, Skills and Attitude). We believe that the construct of this tool is more comprehensive than the OSCE (Objective Structured Clinical Examination), as it includes two additional domains: knowledge and attitude. This study presents the results of ACHA applied to medical students during internship as a tool for monitoring their progress. We selected students enrolled in the Medicine course from 2007 to 2011, including in the analysis only 5th and 6th year students who participated in four consecutive assessments during this period. There was a linear progression across the four assessments. The lowest averages were found in the first test of each year (I5Y and I6Y). The scores were separated by station (Surgery, Internal Medicine, Obstetrics/Gynecology, and Pediatrics) to evaluate student progress. There was a significant difference between assessments in all specialties, with the highest value in the II5Y assessment for Surgery (Mean = 7.43, SD = 1.59) and the lowest in the I5Y for Pediatrics (Mean = 4.49, SD = 2.33). The best correlation of score over time was observed in Internal Medicine (R2 = 0.678), while the poorest was seen in Gynecology/Obstetrics (R2 = 0.144). ACHA proved to be more than an assessment tool for evaluating student performance: it also engaged other domains of education and the learning process. Perhaps the key element was its acceptance by everyone involved (teachers and students), which prompted reflection and actions to improve the quality of both the course and the evaluation itself. We understand that it is essential for the evaluation process to be dynamic, and for that the motivation of those involved is vital.

Keywords:

Medical Education, Practice Assessment, OSCE

1. Introduction

The certification process through which medical doctors obtain their MD degree is one way of ensuring good standards of medical practice. Evaluation systems have been adopted in several countries. Nevertheless, these systems differ from place to place and vary in their construct from cognitive to skill assessment.

There is a consensus that the medical profession consists largely of skills, and therefore assessment of this domain is desirable in medical evaluation. In fact, since the publication of the OSCE (Objective Structured Clinical Examination), we have had an innovative and objective tool to evaluate the student’s medical practice (Miller, 1990; Smee, 2003; Newble, 2004; Boursicot & Trudie, 2005).

We have developed an alternative assessment instrument at our institution, named ACHA-Avaliação de Conhecimento, Habilidades e Atitude (Assessment of Knowledge, Skills and Attitude). We believe that the construct of this tool is more comprehensive than the OSCE, as it includes two additional domains: knowledge and attitude.

The ACHA method is as follows: the student walks through five stations: Surgery, Internal Medicine, Obstetrics/Gynecology, Pediatrics, and Public Health, conducting the medical care of a planned outpatient clinical case under the observation of a teacher-evaluator with a checklist of the items to be observed. The student’s knowledge, skill, and attitude regarding the simulated patient are evaluated at each station. The patients are trained for the standardized simulation, and the evaluators are briefed on the items to be observed. One hundred standard outpatient offices are available to the 220 Medical Internship students, and the assessment process involves 300 people, including evaluators, simulated patients, and support staff. At the end of each case, if time is available, the medical student receives brief feedback highlighting the positive aspects of the consultation and opportunities for improvement. After the test, the coordinator of each station discusses the case with the students, presenting the expectations for the development of the clinical cases.

Feedback is seen as a key factor and part of the evaluation-learning process, offered in private soon after the action and allowing for a constructive dialogue between student and evaluator (Collins, 2004; Gordon, 2003; Henderson & Ferguson-Smith, 2006). The evaluator is trained in how to provide feedback effectively, with clear, objective, and respectful communication aimed at motivating behavioral change when needed (Salerno et al., 2002; Kilminster & Jolly, 2000).

We have adopted the evaluation of knowledge, skills, and attitudes (ACHA) for students in the Medical Internship (5th and 6th years) since 2007 as part of our institutional educational strategy. The goal of this policy was to add a more objective evaluation model, integrating the three levels of learning (cognitive, affective, and psychomotor) with standardized items and scenarios, providing feedback to the student, and repeated over time (twice a year) to assess the student’s learning process. It is important to point out that there are regular assessments in each internship discipline; ACHA is offered as a parallel assessment tool. Participation is voluntary, yet it encompasses virtually 100% of the student body.

This study aims to present the results of our evaluation system (ACHA) applied to medical students during internship as a tool for monitoring the progress of their knowledge, skills, and attitudes.

2. Methods

This research project was submitted to the Research Ethics Committee for Human Beings at the School of Medical Sciences of the University of Campinas-Unicamp, which granted ethical approval (P-502/2011). The study was supported by the Capes Foundation, Ministry of Education of Brazil (Proc. 6999/14-0 and BEX 7101/14-7).

We selected students enrolled in the Medicine course at Unicamp from 2007 to 2011. We included in the analysis only 5th and 6th year students, and only those who participated in four consecutive assessments during this period.

The assessments considered in this study comprised four clinical stations: Surgery, Internal Medicine, Pediatrics, and Obstetrics/Gynecology, with clinical cases compatible with the 5th year curriculum. The student went through the four stations providing medical care to the simulated patient. At each station, the student’s performance was observed by an evaluator using a structured grading system that produced a final score from 0.0 to 10.0 for each station.

To describe the sample’s profile according to the study variables, descriptive statistics were calculated, and the comparison between the four sequential evaluations was performed with the Friedman test for related samples, owing to the absence of a normal distribution of the variables (Conover, 1971; Siegel & Castellan, 2006; Tukey, 1977). The statistical analysis was performed using the Statistical Analysis System for Windows, version 9.1.3.
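The analysis above was run in SAS; as a minimal sketch of the same comparison, the Friedman test for related samples can be reproduced in Python with SciPy. The scores below are simulated for illustration only (the sample size of 230 matches the study, but the values are not the study's data).

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(42)
n_students = 230  # number of students who completed all four assessments

# Hypothetical paired scores (0.0-10.0) for the four sequential assessments
i5y = rng.normal(5.5, 1.5, n_students).clip(0, 10)
ii5y = rng.normal(6.5, 1.5, n_students).clip(0, 10)
i6y = rng.normal(6.0, 1.5, n_students).clip(0, 10)
ii6y = rng.normal(7.0, 1.5, n_students).clip(0, 10)

# Friedman test for related samples: each student contributes one score per
# assessment, and the test ranks the four scores within each student, so no
# normality assumption is needed.
stat, p_value = friedmanchisquare(i5y, ii5y, i6y, ii6y)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4f}")
```

With real data, each row of the four arrays must belong to the same student, since the test is for repeated measures.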

3. Results

The study population comprised 552 students. Of those, 230 students (56% women) with a mean age of 24 years (range 21 to 32) had completed the four consecutive assessments in the period from 2007 to 2011.

Figure 1 shows the descriptive statistics of the students’ scores in the four assessments: first assessment of the 5th year (I5Y), second assessment of the 5th year (II5Y), first assessment of the 6th year (I6Y), and second assessment of the 6th year (II6Y). There is a difference between assessments (p < 0.001), with higher values in II5Y and II6Y.

There was a linear progression across the four assessments. The lowest averages were found in the first test of each year (I5Y and I6Y).

The scores were separated by station (Surgery, Internal Medicine, Obstetrics/Gynecology, and Pediatrics) to evaluate student progress by medical specialty, as shown in Figure 2. There is a significant difference between assessments in all specialties, with the highest value in the II5Y assessment for Surgery (Mean = 7.43, SD = 1.59) and the lowest in the I5Y for Pediatrics (Mean = 4.49, SD = 2.33). The best correlation of score over time was observed in Internal Medicine (R2 = 0.678), while the poorest was seen in Gynecology/Obstetrics (R2 = 0.144).
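The per-station R2 values above measure how well a straight line fits the score-over-time trend. As a minimal sketch of that computation, one can fit a least-squares line through the four assessment means and compute the coefficient of determination; the mean scores below are illustrative values, not the study's data.

```python
import numpy as np

time_points = np.array([1.0, 2.0, 3.0, 4.0])  # I5Y, II5Y, I6Y, II6Y
mean_scores = np.array([5.6, 6.4, 6.1, 7.0])  # hypothetical station means

# Least-squares line: score = slope * time + intercept
slope, intercept = np.polyfit(time_points, mean_scores, 1)
predicted = slope * time_points + intercept

# Coefficient of determination R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((mean_scores - predicted) ** 2)
ss_tot = np.sum((mean_scores - mean_scores.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.3f}, R^2 = {r_squared:.3f}")
```

A high R2 (as for Internal Medicine) indicates steady improvement across the four assessments, while a low R2 (as for Gynecology/Obstetrics) indicates that scores fluctuated rather than rising linearly.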

4. Discussion and Conclusion

There are several challenges to be faced by Medical courses when it comes to student assessment. Assessment in Medical Education tackles complex competencies, thus requiring qualitative and quantitative information from different angles of observation (Van der Vleuten & Schuwirth, 2005). The difficulty of establishing a good assessment tool increases in integrated teaching models, since their organization involves several medical specialties and, hence, many teachers. A consensus regarding an assessment tool’s suitability for an integrated discipline seems unattainable.

Figure 1. Descriptive statistics of the four assessments applied to 5th and 6th year Medical Internship students.

Figure 2. The students’ progress in the four stations.

It becomes necessary to invest in new resources beyond the traditional performance assessment tools, mainly because medical curricula change their structure over the years, not only by expanding the workload of practical medical care activities, but also by increasing the integration between specialties.

Since its implementation, ACHA, as a modern evaluation tool, has attracted our teachers, who were motivated by its use of simulation in the learning process. The dynamic of an objective evaluation with simulation keeps teachers engaged.

Another key aspect that motivates teachers to join this type of assessment is that the model allows for standardization of the patients and of the items under observation, written in a structured way in the checklist. Acceptance of this assessment tool is also high among students, motivated primarily by the feedback provided immediately after the completion of the consultation.

The medical course at our university has students with the highest academic performance. This is a bias, as the course’s admission process, based on rigorous criteria, selects highly capable young people. This dedication to study is maintained over the six years, with an average grade of 8.0 based on theoretical and oral evaluations, portfolios, and the marks routinely assigned to our students throughout the course.

The ACHA method with simulated patients yields an average student performance score lower than the average obtained in the regular official discipline assessments. This finding is interesting and prompts some reflection on the assessment tool and on the structure of the course’s curriculum.

From the perspective of the assessment tool, the fact that it is an extracurricular practice assessment with no bearing on the course’s grades may influence performance: the student is motivated to participate in the process but does not prepare adequately for the evaluation, since it will not significantly affect his or her academic life. In this sense, we believe that the excessive teaching load in the 5th and 6th years, coupled with on-call medical activities performed during night shifts and weekends, compromises the student’s performance, leaving no free time either to refine knowledge or to put effort toward this assessment.

Our limited physical structure is another factor that can influence student performance. Our 100 doctor’s offices are not equipped with two-way mirrors, forcing the teacher-evaluator to remain in the same room as the student during the assessment. The evaluator’s presence emotionally influences the student’s performance, although many students mention that this presence is “all but forgotten” after the first two minutes of the consultation.

Regarding the questions used in this practice evaluation, we are confident that all students are able to carry out the medical care within the framework of each question, because the questions are based on routine clinical medicine, and the students regularly attend these types of clinical cases during the 5th year. Therefore, we do not believe that the questionnaire is an impediment to the student’s performance.

The evaluator’s checklist is also carefully crafted; albeit detailed, with its multiple-choice alternatives, its completion is not challenging. Moreover, the evaluators receive training regarding the posture to be adopted during the consultation, scoring, and completing the checklist. Most importantly, they are instructed on how to provide feedback in a respectful manner, praising achievements and correcting oversights. We assume there is some degree of subjectivity bias on the evaluator’s side; however, we do not believe it significantly interferes with the student’s performance, in view of the objectivity constraints of the tool.

In regard to our curriculum, during the Medical Internship (5th and 6th years) the students rotate through secondary and tertiary level health care facilities. Practice activities are gradually and independently developed at the university hospital, considered one of the largest general hospitals in the state of São Paulo and a medical center of excellence, which serves over 500,000 patients a year. In this sense, considering the students’ performance on this assessment and the environment where the medical practice is developed, we can infer that the medical course may be developing highly specialized skills in the medical doctor’s training. The fact is that the 6th year students’ performance is either below or equivalent to the 5th year students’. It may be that this specialized development prevents the student from completing simple procedures. This hypothesis has prompted a brainstorm in our institution regarding curricular revision and adjustment, which we understand as part of the continuous improvement necessary to keep a dynamic and complex medicine course up to date.

In our medical school, ACHA became a regular tool in student evaluation, legitimized by students and faculty. However, its implementation required the institution’s support for appropriate physical infrastructure and the allocation of skilled human resources, both essential to carry out a practice evaluation with simulated patients.

Moreover, we felt that ACHA went beyond being merely an assessment tool for evaluating student performance; it also engaged other domains of education and the learning process. Perhaps the key element was its acceptance by everyone involved (teachers and students), which prompted reflection and actions to improve the quality of both the course and the evaluation itself. We understand that it is essential for the evaluation process to be dynamic, and for that the motivation of those involved is vital.

References

  1. Boursicot, K., & Trudie, R. (2005). How to Set up an OSCE. The Clinical Teacher, 1, 16-20. http://dx.doi.org/10.1111/j.1743-498X.2005.00053.x
  2. Collins, J. (2004). Education Techniques for Lifelong Learning: Principles of Adult Learning. Radiographics, 24, 1483-1489. http://dx.doi.org/10.1148/rg.245045020
  3. Conover, W. J. (1971). Practical Nonparametric Statistics. New York, NY: John Wiley & Sons.
  4. Gordon, J. (2003). ABC of Learning and Teaching in Medicine: One to One Teaching and Feedback. British Medical Journal, 326, 543-545. http://dx.doi.org/10.1136/bmj.326.7388.543
  5. Henderson, P., Ferguson-Smith, A. C., & Johnson, M. H. (2005). Developing Essential Professional Skills: A Framework for Teaching and Learning about Feedback. BMC Medical Education, 5, 11. http://www.biomedcentral.com/1472-6920/5/11
  6. Kilminster, S. M., & Jolly, B. C. (2000). Effective Supervision in Clinical Practice Settings: A Literature Review. Medical Education, 34, 827-840. http://dx.doi.org/10.1046/j.1365-2923.2000.00758.x
  7. Miller, G. E. (1990). The Assessment of Clinical Skills/Competence/ Performance. Academic Medicine, 65, S63-S67. http://dx.doi.org/10.1097/00001888-199009000-00045
  8. Newble, D. (2004). Techniques for Measuring Clinical Competence: Objective Structured Clinical Examinations. Medical Education, 38, 199-203. http://dx.doi.org/10.1111/j.1365-2923.2004.01755.x
  9. Salerno, S. M., O’Malley, P. G., Pangaro, L. N., Wheeler, G. A., Moores, L. K., & Jackson, J. L. (2002). Faculty Development Seminars Based on the One-Minute Preceptor Improve Feedback in the Ambulatory Setting. Journal of General Internal Medicine, 17, 779-787. http://dx.doi.org/10.1046/j.1525-1497.2002.11233.x
  10. Siegel, S., & Castellan Jr., N. J. (2006). Estatística Não-Paramétrica para Ciências do Comportamento. Porto Alegre: Artmed, 2ª Edição.
  11. Smee, S. (2003). ABC of Learning and Teaching in Medicine: Skill Based Assessment. British Medical Journal, 326, 703- 706. http://dx.doi.org/10.1136/bmj.326.7391.703
  12. Tukey, J. W. (1977). Exploratory Data Analysis. Reading, Massachusetts: Addison-Wesley.
  13. Van der Vleuten, C. P. M., & Schuwirth, L. W. T. (2005). Assessing Professional Competence: From Methods to Programmes. Medical Education, 39, 309-317. http://dx.doi.org/10.1111/j.1365-2929.2005.02094.x