Creative Education
Vol. 05, No. 21 (2014), Article ID: 51846, 9 pages
DOI: 10.4236/ce.2014.521212
Medical Students’ Knowledge of Clinical Practical Procedures: Relationship with Clinical Competence
Patricia Katowa-Mukwato, Sekelani S. Banda
Department of Medical Education Development, School of Medicine, University of Zambia, Lusaka, Zambia
Email: patriciakatowamukwato@gmail.com, ssbanda2007@gmail.com
Copyright © 2014 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
Received 8 September 2014; revised 5 October 2014; accepted 21 October 2014
ABSTRACT
Clinical competence is an attribute expected of every practicing doctor, while proficiency in procedural skills is a requirement of certifying bodies. Medical education literature documents possession of conceptual knowledge as a fundamental prerequisite for attaining competency in the performance of procedural/psychomotor skills. At the University of Zambia School of Medicine, the relationship between cognitive knowledge and competence in clinical practical skills among undergraduate medical students was investigated in a project conducted in 2013. Fifty-six (56) students from a class of 60 (93% response rate) in the final year medical class of 2012/2013 completed a Multiple Choice Question (MCQ) knowledge test administered to ascertain their level of knowledge of 14 selected clinical practical procedures. The knowledge levels of clinical practical procedures among the final year medical students were found to be inadequate, as reflected by a 39% pass rate, with students' scores falling below the Angoff-determined pass mark on most items. As expected, students were more knowledgeable about procedures in which they had been formally taught and those they were most likely to be assessed on. The correlation between knowledge and self-perceived competence was positive (Spearman's rho = 0.360), while a negative correlation was recorded between knowledge and manifest competence, i.e., objectively measured competence (Pearson's r = −0.116). The positive correlation between knowledge and self-perception of competence indicates the role of knowledge in improving self-concept about a skill, which may consequently lead to improved performance.
Keywords:
Medical Students, Knowledge, Clinical Practical Procedures, Psychomotor Skills, Clinical Competence
1. Introduction
Clinical competence has been defined as the "habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values and reflection in daily practice for the benefit of the individuals and community being served" (Epstein & Hundert, 2002). Alongside humanistic qualities, clinical competence is an attribute expected of every practicing doctor (Carr, 2004), while demonstration of proficiency and appropriate use of procedural skills before registration is a requirement of certifying bodies (Morris, Gallagher, & Ridgway, 2012).
Regarding clinical competence, the undergraduate medical education curriculum aspires to initiate the process of transforming novices into experts, although it is acknowledged that this transformation largely happens after qualification (Lai, Sivalingam, & Ramesh, 2007). To meet this aspiration, the curricula of many medical schools state the clinical practical skills in which students must demonstrate competence by the time they graduate, yet many graduate without learning these mostly common, and in some cases potentially lifesaving, skills, to the detriment of the quality of patient care (Moercke & Eika, 2002; Colberly & Goldenhar, 2006; Elango et al., 2007; Wu et al., 2008; Promes et al., 2009; Institute for Health Care Improvement, 2010).
At the University of Zambia School of Medicine (UNZA-SOM), the matter of competence in clinical practical skills for undergraduate medical students had not been studied prior to a PhD project conducted in 2013. The project investigated how medical students of the University of Zambia acquire competence in selected clinical practical skills, their knowledge level, their self-perception of competence as compared to manifest competence, and their self-rated experience with a selected set of skills. Although several aspects of clinical competence were studied, this article focuses on the clinical practical procedures knowledge level of final year medical students of the University of Zambia in the last six months of their undergraduate medical education. The article addresses one of the three research questions investigated: what is the clinical practical procedures knowledge level of final year medical students of the University of Zambia in the last six months of their undergraduate medical education?
The role of conceptual knowledge in psychomotor skills is documented in the literature (Kopta, 1971; Hamdorf & Hall, 2000; Amin & Hoon-Eng, 2003; Buckley, Manalo, & Lapitan, 2011). Hamdorf & Hall (2000) assert that individuals who are provided with a clear description in addition to a clear demonstration of a task are more likely to master a skill than those who are not. Description, which is cognitive in nature, involves an explanation of what the procedure is, when it is indicated or contraindicated, where it is performed (the anatomical structures involved), and its guiding principles, while demonstration involves the actual performance or process of undertaking the procedure.
In a study conducted by Buckley, Manalo, & Lapitan (2011) at the University of the Philippines, Manila, to assess the knowledge and practices of medical interns relating to urethral catheterization and iatrogenic urethral injury secondary to traumatic catheter insertion, slightly more than half (55.6%) of the respondents stated that they had received adequate theoretical training and 66.7% adequate practical training. Despite relatively high levels of experience, deficits were identified in detailed knowledge of correct catheterization procedures and of the risks associated with urethral injury. Those not trained by demonstration and re-demonstration methods were less likely to lubricate the urethra in line with widely accepted good practice. Compared with those who reported adequate theoretical training, those who reported minimal or no theoretical training were less likely to take a history, an aspect considered critical in identifying risk factors for urethral strictures. One limitation of the Buckley, Manalo, and Lapitan study was that a survey questionnaire was used to assess acquired competence; from a skills-training methodology point of view, direct observation is a better method than a questionnaire survey. Despite this methodological limitation, the findings of the study demonstrate, to some extent, the importance of cognitive knowledge in clinical skills performance.
The role of cognition in skill performance is further demonstrated by Miller (1990) in the Pyramid of clinical competence. The pyramid conceptualizes the essential facets of clinical competence. It illustrates four levels of demonstrated learning as shown in Figure 1.
The base represents the knowledge component of competence: the student has factual knowledge of a skill or field ("Knows"). At level two, the learner "Knows how" the skill is performed in theory (applied knowledge). At level three, the learner "Shows how" it is done in a controlled or simulated environment, and finally, at level four, the learner "Does" it in actual clinical practice. The first two levels of Miller's pyramid (Knows and Knows how) are more cognitive in nature while the last two (Shows how and Does) are psychomotor in nature, implying that for an individual learner to "Show how" a clinical procedure is done and consequently perform it ("Does"), one should possess the cognitive knowledge as a foundation for the psychomotor activity.
Figure 1. Miller's pyramid of clinical competence (adapted from Miller, 1990).
Birnbaumer (2011) clearly delineated the role of cognition in teaching and learning procedural skills in medical education.
“To teach procedures successfully, medical educators must focus on teaching both a thorough understanding of the cognitive aspects of the procedure and the ‘hands on’ component. Before picking up an instrument, learners must understand the proper indications, contraindications, alternatives, steps involved, complications, and documentation needed for its use. Teaching this cognitive component should precede the student using that instrument or device. In fact, the learner should never attempt a skill until after a successful verbal ‘walk through’ of the procedure. Many procedures are taught in the clinical environment with the teacher simultaneously demonstrating and describing the skill to the learner. To maximize acquisition of the cognitive information, however, some educators suggest that mental and manual skills should not be taught in the same session because learners tend to focus on the hands-on skill at the expense of understanding the thought processes involved. To facilitate learning the cognitive component, checklists provide an organized approach to teaching and learning the components of a procedure. These checklists should include a series of detailed, simple, sequential steps for the procedure being taught. They provide a reference for the learner to review and for the teacher to use while teaching the procedure as well as while watching the learner perform it”.
It is on the basis of such documented relationships between conceptual knowledge and skill performance that the clinical practical procedures knowledge level of final year medical students of the University of Zambia was investigated and correlated with both manifest and self-perceived competency in clinical practical procedures. It is, however, acknowledged that possession of cognitive knowledge is not the only factor underlying adequate skill performance. Other factors determine both the acquisition and development of clinical competence, including curricular model and sequencing, clinical teaching and assessment, and skill/procedure-related factors (AAMC, 2008). Others are deliberate practice, quality of clinical supervision, and feedback (Griffith et al., 1997; Ericsson, 2006; Wimmers, Schmidt, & Splinter, 2006; Duvivier et al., 2011). Discussion of these factors is beyond the scope of this article.
2. Methods
2.1. Design
The overall design of the project from which this article stems was a non-interventional, cross-sectional, correlational study that utilized a concurrent transformative strategy with a concurrent embedded mixed-methods design. The overall study was correlational because it determined relationships among four variables (knowledge, self-perceived and manifest competence, and self-rated experience with selected skills), and transformative because it was underpinned by theoretical models. The study was also concurrent embedded because both quantitative and qualitative methods were used, with the smaller qualitative component nested within the larger quantitative component; the simultaneous collection of both qualitative and quantitative data qualified it as "concurrent" embedded. The qualitative arm was used to investigate how medical students of the University of Zambia acquire competency in clinical practical procedures, while the quantitative component investigated the students' knowledge level regarding procedural skills and their levels of manifest versus self-perceived competence. This article, however, focuses on only one component of the quantitative arm of the project, namely the clinical practical procedures knowledge levels of final year medical students of the University of Zambia, although correlations with both manifest and self-perceived competence are presented.
2.2. Data Collection
A Multiple Choice Question (MCQ) knowledge test was administered to ascertain the level of knowledge of 14 selected clinical practical procedures. The knowledge test had 14 categories of questions based on the 14 selected clinical practical procedures, namely: 1) lumbar puncture; 2) cardiopulmonary resuscitation; 3) endotracheal intubation; 4) urethral catheterization; 5) nasogastric intubation; 6) gastric lavage; 7) examination of the newborn; 8) vaginal delivery; 9) vaginal examination; 10) examination of the placenta; 11) intravenous cannula insertion; 12) suturing; 13) intramuscular drug administration; 14) intravenous drug administration. Fifty-six (56) students from a class of 60 (93% response rate) in the final year medical class of 2012/2013 at the University of Zambia completed the MCQ test.
The MCQs were purposively selected from relevant published question banks for each category. The questions in the test focused mainly on indications/contraindications, equipment used, principles of performing the procedure, correct technique, volumes or ratios, correct positions/sites/landmarks, and precautions while performing the procedures. A Q-Q plot test for normality revealed a normal distribution, and Cronbach's alpha for reliability was 0.774. Using a criterion-referenced pass mark set at 60%, each answer script was manually scored for the right answer on all items and totaled into a percentage to determine the knowledge level of each student.
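For readers who wish to reproduce this kind of scoring and reliability analysis, the following is a minimal Python sketch, assuming a hypothetical 56 × 48 matrix of binary (correct/incorrect) responses; the study itself used SPSS, and the data here are simulated for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) matrix of 0/1 item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 56 students x 48 items, 1 = correct, 0 = incorrect.
# Real test data would show correlated items (and hence a higher alpha).
rng = np.random.default_rng(0)
responses = (rng.random((56, 48)) < 0.55).astype(int)

percent_scores = responses.mean(axis=1) * 100      # each student's percentage
pass_rate = (percent_scores >= 60).mean() * 100    # criterion-referenced pass rate
print(f"alpha = {cronbach_alpha(responses):.3f}, pass rate = {pass_rate:.1f}%")
```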
The 60% pass mark for the knowledge test in our study was determined using the original Angoff procedure. The Angoff method is a criterion-based method of standard setting, initially developed by Angoff in 1971. The method uses experts in the field to determine the cut-off point/score, defined as the score that a minimally competent candidate is likely to achieve (Canadian Association of Medical Technologists, 2006). Scores below the Angoff cut-off point are deemed failures. Using this method, five local experts (two from internal medicine, one from surgery, one from obstetrics and gynaecology, and one from paediatrics and child health) determined the pass mark according to the difficulty level of each question. Two of the five Angoff panelists were consultants while the other three were senior registrars; they were therefore considered experts in their medical fields. The inter-rater agreement for all questions was above 85%.
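In computational terms, the Angoff cut score is the mean, across judges, of each judge's estimated probability that a minimally competent candidate answers each item correctly, summed over items. A minimal sketch follows, with simulated ratings standing in for the panel's actual judgments:

```python
import numpy as np

# Hypothetical Angoff ratings: 5 judges x 48 items; each entry is a judge's
# estimate of the probability (0.0 - 1.0) that a minimally competent candidate
# answers the item correctly. Real panels elicit these judgments item by item.
rng = np.random.default_rng(1)
ratings = rng.uniform(0.4, 0.8, size=(5, 48))

item_expectations = ratings.mean(axis=0)   # panel's expected score per item
raw_cut_score = item_expectations.sum()    # expected raw score out of 48
percent_cut = 100 * raw_cut_score / 48     # cut score as a percentage
print(f"Angoff cut score = {percent_cut:.1f}%")
```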
2.3. Data Analysis for the Knowledge Test
Each answer script for the 56 students was manually scored for the right answer on all test items and totaled into a percentage to determine the knowledge level of each student. Following marking, responses from the MCQ knowledge test scripts were entered into SPSS version 17. The marking key was used to identify the correct answer for each question, which was entered as the correct option for the respective question in SPSS. Following data entry, frequencies were computed for correct scores on each question; this determined the performance (pass rate) of students on each individual question. A Q-Q plot test was performed, which revealed a normal distribution. Given that the knowledge test data were normally distributed, descriptive statistics (mean, standard deviation, and range) were computed.
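The per-question frequencies and descriptive statistics described above can be expressed compactly; the sketch below, again on simulated 0/1 response data, is a hypothetical illustration rather than the study's actual SPSS procedure:

```python
import numpy as np
from scipy import stats

# Hypothetical 0/1 response matrix: 56 students x 48 items.
rng = np.random.default_rng(0)
responses = (rng.random((56, 48)) < 0.55).astype(int)

# Frequencies of correct answers per question -> per-item pass rates.
item_pass_rates = responses.mean(axis=0) * 100

# Descriptive statistics on the students' percentage totals.
totals = responses.mean(axis=1) * 100
print(f"Mean = {totals.mean():.2f}, SD = {totals.std(ddof=1):.2f}, "
      f"Range = {totals.max() - totals.min():.0f}, "
      f"Skewness = {stats.skew(totals, bias=False):.3f}")
```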
Since the MCQ knowledge test tool also had questions on demographic characteristics and the clinical medical education context, responses on these two components were entered into SPSS at the same time as the knowledge test. Frequencies for the demographic variables and the clinical medical education context were then computed.
Based on the documented relationship between cognitive knowledge and practical skills performance, and given the correlational nature of the overall study, beyond simply determining students' knowledge levels, tests for association between the knowledge test scores and manifest competence (OSCE scores) were performed using the Pearson correlation coefficient (Pearson's r). To facilitate rank-order correlation of knowledge (originally measured at interval level) with self-perception of competence and self-rated experience with procedural skills (measured at ordinal level), the knowledge test scores were re-categorized from specific numeric percentage scores to an ordinal level with three categories: Fail (scores of 0% - 59%, below the pass mark), Bare Pass (scores of 60% - 80%, from the pass mark up to the mid-point between the pass mark and the maximum), and Absolute Pass (scores of 81% - 100%, above that mid-point). Following re-categorization, rank-order correlation (Spearman's rho) tests were performed between knowledge and self-perceived competence, and between knowledge and self-rated experience with procedural skills.
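The correlation steps above translate directly into standard statistical routines. A minimal sketch, assuming simulated scores in place of the study's actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for 56 students; the study's actual data came from the
# MCQ test, the final OSCE, and an ordinal self-perception rating scale.
rng = np.random.default_rng(2)
knowledge = rng.uniform(30, 95, size=56)      # knowledge test scores (%)
osce = rng.uniform(40, 90, size=56)           # manifest competence (OSCE, %)
self_perceived = rng.integers(1, 4, size=56)  # 1 = low, 2 = moderate, 3 = high

# Pearson's r between the two interval-level measures.
r, p = stats.pearsonr(knowledge, osce)

# Re-categorize knowledge (Fail < 60, Bare Pass 60-80, Absolute Pass >= 81),
# then compute Spearman's rho against the ordinal self-perception ratings.
categories = np.digitize(knowledge, bins=[60, 81])  # 0, 1, or 2
rho, p_rho = stats.spearmanr(categories, self_perceived)
print(f"Pearson r = {r:.3f} (p = {p:.3f}); "
      f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```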
3. Results
Of the 56 medical students who participated in the study, the majority, 38 (67.9%), were male and 18 (32.1%) were female. Only 5 (8.9%) of the respondents had training in other health-care-related fields prior to undergraduate medical education, while 49 (87.5%) had not. All 56 (100%) had had clerkships in all the designated medical disciplines and their sub-specialties: internal medicine, surgery, obstetrics and gynaecology, paediatrics and child health, ophthalmology, community medicine, dermatology, ENT/maxillofacial, and radiology. Major variations were noted in the numbers of students who were formally taught the 14 selected practical procedures. The least formally taught was intravenous drug administration, with only 8 (14.3%) of the students reporting having been formally taught. The procedures with the highest proportions formally taught were cardiopulmonary resuscitation and normal vaginal delivery, at 46 (82.1%) each. Others in which at least two thirds of students were formally taught were examination of the newborn, lumbar puncture, and vaginal examination. Similarly, the majority of students had no formal assessment in most of the clinical practical procedures. The least formally assessed was wound suturing, where only 7 (12.5%) of participants indicated having been formally assessed, followed by intravenous drug administration at 8 (14.3%). The most formally assessed practical procedure was lumbar puncture, where 52 (92.9%) of participants reported having been formally assessed.
Overall, the knowledge levels of clinical practical procedures among the final year medical students were found to be inadequate, as reflected by a 39% pass rate on the 48-item MCQ test. The Angoff-determined pass mark was 60%, the mean score was 53.38, the standard deviation 10.44, and the range 50. Most scores were skewed to the left, i.e., below average (skewness of −0.124). Figure 2 shows the pass/fail rate on the MCQ knowledge test, while Table 1 shows the knowledge scores (percentage) on selected questions on the 14 clinical practical procedures.
Using criterion referencing for grading, with the Angoff pass mark at 60%, only 22 (39.3%) of participants passed the knowledge test.
Table 1 shows the proportions of students with correct scores on selected multiple choice questions (MCQs) from the knowledge test in comparison to the Angoff-determined pass mark. The best performance, an 80.4% pass rate, was on a question about vaginal examination (correct interpretation of the fetal presenting part when the head is visible at the vaginal introitus), followed by knowledge of recommended CPR compressions, where 78.6% of the respondents knew the recommended compression rate for adults. The worst performance, a 12.5% pass rate, was on a question on endotracheal intubation regarding the best clinical sign that the endotracheal tube (ETT) is in the trachea and not the esophagus. Other poor performances were noted on questions regarding intramuscular injections (21% pass rate), lumbar puncture (28%), nasogastric tube insertion (30%), and urethral catheterization (36%).
Figure 2. Criterion referencing (pass/fail rate). Angoff pass mark at 60%. N = 56.
Table 1. Knowledge scores (percentage) on selected questions on the 14 practical procedures.
Angoff pass mark = proportion of students who, according to experts, are expected to get the item correct; level of difficulty = proportion of students with correct scores on an item.
Table 2 shows a negative correlation between knowledge test scores and OSCE scores (Pearson's r = −0.116, p = 0.395). Given this negative correlation, regression analysis of the two variables was not performed.
Table 3 shows comparisons in terms of knowledge, self-perception, manifest competence, and self-rated experience with the three procedures included in the OSCE. Despite 65.2% of students getting correct scores on questions on CPR, the majority (>50%) were not competent and had low self-perception, probably due to low experience with the procedure. On the other hand, despite students reporting high experience with intravenous drug administration, the majority were only barely competent, with moderate self-perception.
All the participants who had bare passes (scores of 60% - 80%) had either moderate (12; 54.5%) or high (10; 45.5%) self-perception of competence, while the three with low self-perception of competence failed the knowledge test. In addition, all the participants who passed had either moderate or high self-perception. A significant association was therefore observed between self-perceived competence and knowledge (p = 0.007), with a rank-order correlation (Spearman's rho) of 0.360.
4. Discussion
The role of cognition in skill performance is well documented in medical education literature (Kopta, 1971; Miller, 1990; Hamdorf & Hall, 2000; Amin & Hoon-Eng, 2003; Buckley, Manalo, & Lapitan, 2011). In addition, cognition (mastery of a body of knowledge) is regarded as one of the dimensions of clinical competence, as can be construed from the definitions of clinical competence by Newble et al. (1994) and Epstein & Hundert (2002). Therefore, to fully understand the concept of "clinical competence", an understanding of its cognitive attributes is indispensable. The role of cognition in skill performance is further demonstrated by Miller (1990) in the pyramid of clinical competence, which, together with two other conceptual models, underpinned the present study (Figure 1). Miller (1990) demonstrated the value of cognition in skills performance by placing the knowledge component of clinical competence at the base of the pyramid: "Knows" (possession of factual knowledge of a skill or field) and "Knows how" (applied knowledge) precede "Shows how" (performance in a controlled environment) and "Does" (performance in clinical practice). The first two levels of Miller's pyramid (Knows and Knows how) are more cognitive in nature while the last two (Shows how and Does) are psychomotor in nature. The implication is that for an individual learner to "Show how" a clinical procedure is done and consequently perform it ("Does"), one should possess the cognitive knowledge that underpins the psychomotor activity.
Table 2. Correlation coefficient (Pearson r) of overall manifest competence (OSCE scores) and knowledge of clinical practical procedures.
Table 3. Relationship of knowledge, self-perception, manifest competence, and experience with the three practical procedures included in the OSCE.
It is with the above understanding that the knowledge test was administered. Using the Angoff pass mark of 60%, only 39.3% of students passed the test (Figure 2). The mean score was 53.38, standard deviation 10.44, range 50, and skewness −0.124, with most scores skewed to the left, i.e., below average. The mean score of 53.38% was low considering that the students assessed were in their last six months of medical education and would be expected to possess adequate cognitive knowledge of common core clinical practical procedures. When the student pass rates for each question were compared to the expert (Angoff) determined pass rates, the Angoff pass mark was higher on most questions than the actual student scores (Table 1). This suggests that teachers expected their students to know more than the students actually knew.
When item analysis was performed to determine the level of difficulty of each question, there were major variations in the levels of difficulty across questions. Level of difficulty is defined as the proportion of students who answer the question correctly (Amin & Hoon-Eng, 2003); the lower the proportion, the more difficult the item. In the knowledge test administered in our study, the highest correct score of 80.4% was obtained on a question on vaginal examination (VE), followed by 78.6% on a question on cardiopulmonary resuscitation (CPR) (Table 1). The lowest correct score of 12.5% was on a question on endotracheal intubation (Table 1). Consequently, the difficulty indices for VE (0.80) and CPR (0.78) were higher than that for endotracheal intubation (0.12), meaning that students found the question on endotracheal intubation more difficult than those on VE and CPR. Several reasons can be advanced for the variations in pass rates across questions: the questions on VE and CPR may have been truly less difficult than the one on endotracheal intubation, or students may have had a better understanding of VE and CPR than of endotracheal intubation, as both VE and CPR were among the top five formally taught procedures.
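As a worked illustration of this index, the counts below are back-calculated from the reported percentages (N = 56), so they are illustrative approximations and rounding may differ slightly from Table 1:

```python
# Difficulty index = proportion of examinees answering an item correctly.
n_students = 56
correct_counts = {
    "vaginal examination": 45,      # 80.4% reported
    "CPR": 44,                      # 78.6% reported
    "endotracheal intubation": 7,   # 12.5% reported
}
for item, n_correct in correct_counts.items():
    print(f"{item}: difficulty index = {n_correct / n_students:.2f}")
```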
Wass et al. (2001) and Epstein (2007) assert that assessment drives learning. In light of this assertion, the three procedures (vaginal examination, CPR, and endotracheal intubation) were considered in terms of the number of students who had been formally assessed during the clinical years. It was established that 44.6% of students had been formally assessed on VE and 30.4% on CPR, compared to 19.6% formally assessed on endotracheal intubation. The literature alerts us that students feel overburdened by work and respond by studying only the parts of the course that are assessed (Hakstian, 1971, cited in Epstein, 2007, and Wass et al., 2001). Therefore, the likelihood of being formally assessed could have led students to study more literature related to vaginal examination and CPR than to endotracheal intubation, consequently resulting in the higher pass rates on VE and CPR.
In our study, there was also a negative correlation (Pearson's r = −0.116) between knowledge of clinical practical procedures and overall manifest competence on the seven practical procedures (Table 2). Disagreements were also recorded between knowledge and manifest competence for two of the three practical procedures included in both the knowledge test and the manifest competence test (OSCE) (Table 3). Whereas 65.2% of students had correct scores on the CPR knowledge questions, more than 50% were not competent on the CPR OSCE station (manifest competence). For nasogastric tube insertion, only 30.4% had correct scores in the knowledge test, whereas more than 50% were absolutely competent on the OSCE station. For intravenous drug administration, more than half (57.5%) of students had correct scores in the knowledge test, while more than 50% were only barely competent on the corresponding OSCE station (Table 3). From these findings, it could be concluded that knowledge of a clinical practical procedure is not related to actual performance. However, caution should be taken in applying this conclusion because, in our study, manifest competence was measured using the end-of-year final OSCE results. The OSCE being a final examination, students are likely to have studied for it, unlike the knowledge test, which was not part of the School's evaluation system. On the other hand, the OSCE, being a final examination, could have caused anxiety among students, unlike the knowledge test. These factors could have caused the variations between knowledge test and OSCE performance.
Given the assertion that high self-efficacy (self-perception) is related to high achievement in educational settings (Bandura, 1997), self-perception of competency was correlated with knowledge of clinical practical procedures (Table 4). A positive association was established between self-perception of competency and knowledge of clinical practical procedures (Spearman's rho = 0.360). Although the correlation was relatively weak, this finding still supports the assertion by Bandura (1997) that individuals with high self-perception are likely to work hard and subsequently obtain better scores in examinations and tests. This is probably why students who had high self-perception regarding clinical practical procedures obtained better scores in the knowledge test, hence the recorded association between self-perception and knowledge. Conversely, high knowledge of clinical practical procedures may have led to high self-perception.
Table 4. Cross tabulation and rank-order correlation between self-perceived competence and knowledge of core-clinical practical procedures.
5. Conclusion
From our study, we concluded that knowledge of clinical practical procedures was inadequate, as reflected by the 39% pass rate. Teachers' expectations (Angoff pass marks) were higher than actual student scores on most questions and on the MCQ knowledge test as a whole. For specific MCQ items, the pass rate was higher on items covering practical procedures that were formally taught. Students were more knowledgeable about procedures for which there was a high likelihood of being assessed, supporting the assertion that assessment drives learning: if students are expected to learn a given concept or skill, means must be found to assess it. We also concluded that self-perception of competence is related to the cognitive knowledge one possesses, as there was a positive correlation between knowledge of clinical practical procedures and self-perception. The positive correlation between knowledge and self-perception of competence indicates the role of knowledge in improving self-concept about a skill, which may consequently lead to improved performance.
Acknowledgements
We acknowledge the University of Zambia School of Medicine for the opportunity for the first author to pursue doctoral studies, and the National Institutes of Health (NIH), through the Medical Education Partnership Initiative (MEPI) programmatic award No. 1R24TW008873, entitled "Expanding Innovative Multidisciplinary Medical Education in Zambia", for financial support towards the project.
References
- AAMC (Association of American Medical Colleges) (2008). Recommendations for Clinical Skills Curricula for Undergraduate Medical Education. Washington, DC: Association of American Medical Colleges.
- Amin, Z., & Hoon-Eng, K. (2003). Basics in Medical Education. New Jersey: World Scientific Publication.
- Bandura, A. (1997). Self-Efficacy: The Exercise of Control. New York, NY: Freeman.
- Birnbaumer, D. M. (2011). Teaching Procedures: Improving “See One, Do One, Teach One”. CJEM, 13, 390-394. http://dx.doi.org/10.2310/8000.2011.110386
- Buckley, B. S., Manalo, M., & Lapitan, M. C. M. (2011). Medical Interns’ Knowledge and Training Regarding Urethral Catheter Insertion and Insertion-Related Urethral Injuries in Male Patients. BMC Medical Education, 11, 73. http://www.biomedcentral.com/1472-6920/11/73
- Canadian Association of Medical Technologists (2006). Angoff Method of Standard Setting for Licensure and Certification Examination. Canada: Canadian Association of Medical Technologists.
- Carr, S. J. (2004). Assessing Clinical Competences in Medical Senior House Officers: How and Why Should We Do It? Postgraduate Medical Journal, 80, 63-66. http://dx.doi.org/10.1136/pmj.2003.011718
- Colberly, L., & Goldenhar, L. M. (2006). Ready or Not, Here They Come: Acting Interns, Experience and Perceived Competency Performing Basic Medical Procedures. Journal of General Internal Medicine, 22, 491-494. http://dx.doi.org/10.1007/s11606-007-0107-6
- Duvivier, R. J., van Dalen, J., Muijtjens, A. M., Moulaert, V. R. M. P., Van der Vleuten, C. P. M., & Scherpbier, A. J. J. A. (2011). The Role of Deliberate Practice in Acquisition of Clinical Skills. BMC Medical Education, 11, 101. http://dx.doi.org/10.1186/1472-6920-11-101
- Elango, S., Jutti, C. R., Kandasami, P., Teng, L. C., Loh, L. C., & Motilal, T. (2007). Assessment of Basic Skills in an Undergraduate Medical Curriculum. IeJSME, 1, 41-45.
- Epstein, R. M. (2007). Assessment in Medical Education. New England Journal of Medicine, 356, 387-396. http://dx.doi.org/10.1056/NEJMra054784
- Epstein, R. M., & Hundert, E. M. (2002). Defining and Assessing Professional Competence. JAMA, 287, 226-235. http://dx.doi.org/10.1001/jama.287.2.226
- Ericsson, K. A. (Ed.) (2006). Cambridge Handbook on Expertise and Expert Performance. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511816796
- Griffith, C. H., Wilson, J. F., Haist, S. A., & Lucier, M. (1997). Relationship of How Well Attending Physicians Teach to Their Students’ Performance and Residency Choices. Academic Medicine, 72, 118-120. http://dx.doi.org/10.1097/00001888-199710001-00040
- Hakstian, R. A. (1971). The Effects of Type of Examination Anticipated on Test Preparation and Performance. Journal of Educational Research, 64, 319-324.
- Hamdorf, J. M., & Hall, J. C. (2000). Acquiring Surgical Skills. British Journal of Surgery, 87, 28-37. http://dx.doi.org/10.1046/j.1365-2168.2000.01327.x
- Institute for Health Care Improvement (2010). Improving Outcomes for High Risk and Critically Ill Patients. http://www.ihi.org/IHI/programs/collaboration/improvingOutcomesforHigh-risk
- Kopta, J. A. (1971). An Approach to Evaluation of Operative Skills. Surgery, 70, 297-303.
- Lai, N. M., Sivalingam, N., & Ramesh, J. C. (2007). Medical Students in Their Final Six Months of Training: Progress in Self-Perceived Clinical Competence, and Relationship between Experience and Confidence in Practical Skills. Singapore Medical Journal, 48, 1018-1027.
- Miller, G. (1990). The Assessment of Clinical Skills/Competence/Performance. Academic Medicine, 65, S63-S67.
- Moercke, A. M., & Eika, B. (2002). What Are the Clinical Skills Levels of Newly Graduated Physicians? Self-Assessment Study of an Intended Curriculum Identified by a Delphi Process. Medical Education, 36, 472-478. http://dx.doi.org/10.1046/j.1365-2923.2002.01208.x
- Morris, M. C., Gallagher, T. K., & Ridgway, P. F. (2012). Tools Used to Assess Medical Students Competence in Procedural Skills at the End of Primary Care Medical Degree: A Systematic Review. Medical Education Online, 17, 18398. http://dx.doi.org/10.3402/meo.v17i0.18398
- Newble, D. I., Jolly, B. C., & Wakeford, R. E. (Eds.) (1994). The Certification and Recertification of Doctors: Issues in the Assessment of Clinical Competence. Cambridge: Cambridge University Press.
- Promes, S. B., Chudgar, S. M., Grochowski, O. C., Shayne, P., Isenhour, J., Glickman, S. W., & Cairns, C. B. (2009). Gaps in Procedural Skills and Competency in Medical School Graduates. Academic Emergency Medicine, 16, S58-S62. http://dx.doi.org/10.1111/j.1553-2712.2009.00600.x
- Wass, V., Van der Vleuten, C., Shatzer, J., & Jones, R. (2001). Assessment of Clinical Competence. The Lancet, 357, 945- 949. http://dx.doi.org/10.1016/S0140-6736(00)04221-5
- Wimmers, P. F., Schmidt, H. G., & Splinter, T. A. (2006). Influence of Clerkship Experience on Clinical Competence. Medical Education, 40, 450-458. http://dx.doi.org/10.1111/j.1365-2929.2006.02447.x
- Wu, H. E., Elnicki, M., Alper, E. J., Bost, E. J., Corbett, E. C., Fagan, M. J., Mechaber, A. J., Ogden, E. P., Sebastian, L. J., & Torre, D. M. (2008). Procedural and Interpretive Skills of Medical Students: Experiences and Attitudes of Fourth-Year Students. Academic Medicine, 83, S63-S67.