Creative Education
Vol. 08, No. 11 (2017), Article ID: 79242, 12 pages
DOI: 10.4236/ce.2017.811122

The Use of Mind Maps as an Assessment Tool in a Problem Based Learning Course

Remigio Zvauya*, Shilpa Purandare, Nicola Young, Miranda Pallan

College of Medical and Dental Sciences, Institute of Clinical Sciences, Birmingham University, Birmingham, UK

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: May 27, 2017; Accepted: September 17, 2017; Published: September 20, 2017

ABSTRACT

The use of mind maps as an assessment tool is investigated. Unlike previous studies, in which maps are drawn after students have already acquired the knowledge, the mind map in the current study represents the student's knowledge structure at the beginning of the learning curve. The study compares the inter-rater reliability of two mind map scoring methods and correlates the marks from these methods with other end of year outcomes. The mind maps were scored independently by three examiners using two mind map scoring rubrics (MMR): a structural rubric and a holistic qualitative rubric. The structural MMR scoring method gave moderate inter-rater reliability, with total score ICC values of 0.71 for consistency and 0.57 for absolute agreement between the three examiners. The qualitative MMR scoring method had poor inter-rater reliability, with values of 0.33 and 0.32 for consistency and absolute agreement respectively. The concurrent validity with other end of year assessments was poor for both methods. Although the mind map scores did not correlate with other end of year assessments, it is likely that mind maps assess an aspect of the student knowledge construct not assessed by traditional assessments. The inter-rater reliability was better for the structural MMR than for the qualitative MMR.

Keywords:

Assessment, Mind Maps, PBL, Medical Course

1. Introduction

The constructivist theory proposes that meaningful learning occurs when prior knowledge and previous life experience are activated and integrated with new knowledge being constructed in context (Daley & Torre, 2010; Davies, 2011). The learner is actively engaged in the process and collaborates with colleagues in order to process, interpret and construct new knowledge. Constructivism implies that previous knowledge is stored in such a way that it can be accessed and used easily. It is therefore important that the learner is aware of, and uses, strategies that facilitate activation of prior knowledge and its integration with new knowledge being acquired (Davies, 2011; Eppler, 2006). Mind maps are multi-coloured, image-centred, visual, non-linear representations of ideas and their relationships, which can be used to activate prior knowledge and integrate new information with previous knowledge (D’Antoni et al., 2010; Noonan, 2012). Creating mind maps requires free and spontaneous thinking. In addition, associations are made between ideas, so mind maps can assist in integrating concepts across domains (D’Antoni et al., 2010; Eppler, 2006). The use of mind maps has been explored in an attempt to move towards a student-centred, cooperative learning environment (Rosciano, 2015). Mind maps have recently been used in medical education to develop critical thinking (D’Antoni et al., 2010), assist memory recall (Farrand et al., 2002) and as an assessment tool (D’Antoni et al., 2009; Evrekli et al., 2010).

Rubrics are scoring guides consisting of specific predetermined criteria used in making academic judgements when evaluating students’ work (Mertler, 2001). Two types of rubric are described in the literature: a holistic rubric, where an overall score is given without reference to individual components, and an analytic rubric, where components or individual parts of the assignment are scored and then summed to give a total score (Mertler, 2001). A few rubrics for assessing mind maps have been described in the literature (D’Antoni et al., 2009; Evrekli et al., 2010). These authors adapted methods originally used to score concept maps, which are top-down diagrams presenting information in a node-link-node format with linking descriptive propositions (West et al., 2002; Tergan et al., 2006). Both concept and mind maps promote active, meaningful learning at the metacognitive level and differ only in their structure and organisation of information (D’Antoni et al., 2009). Concept maps have been scored using structural and relational analytical methods (Daley & Torre, 2010; Kassab & Hussain, 2010; Hung & Lin, 2015; West et al., 2002). The structural method assigns a value to the hierarchical structure, concept-concept links and cross links. Relational methods, on the other hand, are based on the quality of individual links, taking into account the structure of the concept map. Structural methods have been shown to be sensitive to changes in students’ evolving knowledge (West et al., 2002).

An adaptation of the concept map scoring methods for mind map scoring is the addition of colours and figures as dimensions of the rubric. D’Antoni et al. (2010) reported good intra- and inter-rater reliability for total and subtotal scores with their mind map rubric.

The published rubrics have been used to assess the knowledge of students who use mind maps in a limited number of modules. The present study differs in that it explores the use of mind map assessment rubrics in a cohort of students for whom mind maps are an important and integral part of the learning process throughout the academic year. The mind map in the current study represents the student's knowledge structure at the beginning of the learning curve, unlike previous studies in which maps are drawn after students have already acquired the knowledge.

2. Setting

The study was carried out during the first year at a UK Medical School which admits life science graduates onto a fast-track, four-year Graduate Entry (GE) course. The first year uses problem based learning (PBL) as its method of instruction, with heavy reliance on collaborative and self-directed learning. Students are given written and practical guidance on how to generate mind maps at the beginning of the course.

There are six core modules, each with a set of problems or clinical scenarios which combine concepts from biological sciences, anatomy, ethics, public health and behavioural sciences. Students work in groups of 8 - 10 in dedicated PBL rooms to produce a mind map for each problem. The weekly development of a mind map by each PBL group is an integral part of exploring each clinical scenario. The mind map allows the students to deconstruct the scenario presented to them and explore their collective existing knowledge. The developed mind map is then used as a basis for identifying focused learning objectives for further study.

The course assessment methods include short answer questions (SAQ), multiple choice questions (MCQ), clinical, cognitive and communication skills examinations. Since mind maps are an important and integral part in the learning process in the first year, part of the cognitive assessment process involves individual students generating a mind map from a given unseen clinical scenario under examination conditions.

As mind maps are assessed in this way, there is a need for a robust scoring rubric to enable equity of scoring and to provide a vehicle for feedback to students on their use of mind maps as a learning tool during PBL. Gasaymeh (2011) discusses the need to develop rubric criteria that assist in the process of knowledge acquisition whilst facilitating students’ engagement with the learning environment. In this study mind maps generated by individual students were scored using two mind map scoring rubrics (MMR): an analytical, structural MMR and a holistic, qualitative MMR. The objectives of this study were:

1) To assess mind maps developed by individual students from a given unseen scenario using a modified analytical structural and a holistic qualitative MMR scoring method.

2) To compare the inter-rater reliability of the modified analytical structural and holistic qualitative MMR scoring methods.

3) To assess the concurrent validity of the two scoring methods with respect to the other end of year examination marks.

3. Method

3.1. Participants

The participants were first year GE medical students with previous life science degrees. There were no international students in the cohort. The average age of the cohort was 24 years. There were 19 male and 35 female students in the cohort. All GE first year medical students (n = 54) sat a cognitive assessment as part of their end of year examinations in the 2011-12 academic year.

3.2. Generation of the Mind Maps

A practice session prior to the examination was organised for students to individually draw mind maps under examination conditions. Students were already familiar with the process of developing mind maps as they used them on a weekly basis across all modules as part of the problem based learning cycle.

As part of the cognitive assessment students were given a previously unseen clinical scenario. They were instructed to draw a mind map under examination conditions on an A3 sheet of paper using different colours. The students were then required to develop 10 learning objectives with the aid of their mind map. The time given for this process was 40 minutes. Following this part of the examination students were examined in a 10 minute oral examination where they were questioned about how they developed their learning objectives and further information that they would seek to fulfil these objectives. Separate marks were awarded for the mind maps, the learning objectives and the oral examination. These were then combined to give an overall cognitive examination mark. Figure 1 shows an example of a mind map generated by a student.

3.3. Mind Map Scoring

The mind maps were scored using both methods independently by three markers, RZ, NY and SP, who were experienced in mind map generation and PBL facilitation. The markers had previously been trained in mind map scoring. For each scoring method, the examiners marked three sample mind maps independently and then met to discuss how each marker had arrived at their score. The marking criteria were further refined at this stage, and definitions were clarified and agreed among the markers.

3.4. The Analytical, Structural MMR Scoring Method

This was originally developed for concept maps (Srinivasan et al., 2008; West et al., 2002) and later adapted for mind maps (D’Antoni et al., 2009; Evrekli et al., 2010). The analytical structural MMR was adapted such that concept links near the patient at the centre (P-MC) carried more weighting than those lower down the hierarchical structure, as seen in Figure 1 and Table 1. This was because we identified that these patient-main concept (P-MC) links represented the major concepts from the clinical scenario which needed further consideration, and also assisted in the identification of lower-level concept-concept (C-C) links, as shown in Figure 1.

In the modified scheme used in this study, marks were awarded for the number of valid patient-main concept (P-MC) links identified, concept-concept (C-C) links, cross links, relationship links, colours, pictures, signs, symptoms and examples, as shown in Figure 1 and Table 1. For each mind map, sub-scores and total scores were computed.

Figure 1. Mind map from a patient with Irritable Bowel Syndrome: examples of patient-main concept (P-MC) links, concept-concept (C-C) links, relationship links, cross links, examples, figures and equations are shown.

Table 1. Mind map analytical, structural MMR scoring scheme: examples of the components on a mind map are shown in Figure 1.

The modifications to the structural MMR were made to allow for the fact that the purpose of scoring the mind map in this cohort was to assess the process of logical development of ideas from the clinical case, rather than to assess knowledge per se. Therefore less weighting was given to the accuracy of the factual information on the mind map, and more weighting to the breadth and integration of concepts arising from the clinical case.
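To make the weighted, component-based nature of the structural MMR concrete, the total score calculation can be sketched as follows. This is a minimal illustration in Python: the component names mirror those listed above, but the weights and counts shown are hypothetical placeholders rather than the actual values used in the study (which are given in Table 1).

```python
# Hypothetical weights per mind map component (marks per valid item).
# P-MC links are weighted more heavily than lower-level components,
# reflecting the modification described above; actual weights are in Table 1.
WEIGHTS = {
    "p_mc_links": 4,          # patient-main concept links
    "c_c_links": 2,           # concept-concept links
    "cross_links": 2,
    "relationship_links": 1,
    "colours": 1,
    "pictures": 1,
    "signs_symptoms": 1,
    "examples": 1,
}


def structural_score(counts):
    """Return sub-scores per component and a total structural MMR score,
    given the number of valid items of each type counted on a mind map."""
    sub_scores = {name: WEIGHTS[name] * counts.get(name, 0) for name in WEIGHTS}
    sub_scores["total"] = sum(sub_scores.values())
    return sub_scores


# Example: hypothetical counts tallied from one student's mind map.
example_counts = {"p_mc_links": 8, "c_c_links": 40, "cross_links": 5,
                  "relationship_links": 10, "colours": 6, "pictures": 3,
                  "signs_symptoms": 7, "examples": 12}
print(structural_score(example_counts))
```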

3.5. The Holistic, Qualitative MMR Scoring Method

It was evident that the structural MMR scoring method was time consuming and thus resource intensive to use as an assessment tool. We therefore developed a qualitative MMR method based on the holistic scoring method used for scoring concept maps by previous workers (McClure et al., 1999; von der Heidt, 2015). The holistic method described was modified such that more guidance was given to the marker scoring the mind map. The criteria below were used in assessing each mind map.

1) Identification of triggers in the problem: the degree to which the student is able to identify the key concepts in the problem.

2) Development of valid concept links: the ability of the students to explore their knowledge by developing concepts further.

3) Development of hierarchies: the arrangement of concepts in a logical manner, with the more fundamental concepts at the centre and more specific concepts on the periphery of the map.

4) Identification of cross links and relationship links: the ability to show the meaningful connections between different concepts (cross links) and links within a concept (relationship link).

5) Use of colours and pictures to enhance the mind map making it visually easy to follow.

These individual criteria were not given scores, but were set out as a guide for markers to divide mind maps into very good (75% - 100%), good (65% - 74%), average (54% - 64%), borderline (48% - 53%) and fail (below 48%). An overall percentage mark was then given for each mind map based on its overall quality.
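For illustration, the banding amounts to a simple look-up from an overall percentage mark to a grade band. A minimal sketch in Python, using the band boundaries stated above (the function name and structure are hypothetical and not part of the study's marking materials):

```python
def holistic_band(mark: float) -> str:
    """Map an overall percentage mark to the qualitative MMR grade bands
    described above; each band is inclusive of its lower boundary."""
    if mark >= 75:
        return "very good"
    elif mark >= 65:
        return "good"
    elif mark >= 54:
        return "average"
    elif mark >= 48:
        return "borderline"
    return "fail"


# Example: a mind map judged overall at 68% falls in the "good" band.
print(holistic_band(68))
```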

4. Data Analysis

Anonymised end of year assessment data were obtained from university records. These included mind map scores, scores from the cognitive, clinical and communication assessment examinations, and overall knowledge scores derived from the written examinations designed to assess knowledge across the different modules covered in the GE first year. Data were analysed using a statistical software package (Stata v11; StataCorp LP). Descriptive analysis was undertaken to visually inspect differences in marks between the three markers for each of the scoring rubrics. A two-way random-effects ANOVA model was used to calculate inter-rater reliability, expressed as an intraclass correlation coefficient (ICC). We calculated two different ICCs: one to assess consistency between markers (whether the markers ranked the students in the same way, but not necessarily awarding marks of similar value), and the other to assess absolute agreement (how similar the values of the marks given by each of the markers for each student were).
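As an illustration of this reliability analysis, both ICCs can be computed directly from the mean squares of the two-way ANOVA decomposition. The sketch below, in Python, follows the McGraw and Wong single-measure formulations of ICC for consistency and absolute agreement; the study itself used Stata and does not state whether single- or average-measure ICCs were reported, so this form is an assumption for illustration, and the simulated marks at the end are hypothetical.

```python
import numpy as np


def two_way_random_iccs(ratings):
    """
    Single-measure ICCs from a two-way random-effects ANOVA.
    ratings: (n_subjects, k_raters) array with no missing values.
    Returns (ICC for consistency, ICC for absolute agreement),
    following McGraw & Wong (1996): ICC(C,1) and ICC(A,1).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-student means
    col_means = ratings.mean(axis=0)   # per-marker means

    # Sums of squares for the two-way decomposition
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    icc_consistency = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)
    icc_agreement = (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )
    return icc_consistency, icc_agreement


# Example with simulated marks for 54 students and 3 markers (hypothetical data).
rng = np.random.default_rng(0)
true_scores = rng.normal(60, 10, size=(54, 1))
marks = true_scores + rng.normal(0, 8, size=(54, 3))   # add marker noise
print(two_way_random_iccs(marks))
```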

To explore agreement between the two scoring methods, Pearson’s correlation coefficients were calculated for the two MMRs for each individual marker. Mean marks for each student were then calculated from the three markers’ scores for each method, and correlation coefficients were calculated for these mean scores.

Pearson’s correlation coefficients were also calculated for mind map scores and other end of year outcomes to assess whether mind map scores correlate with other assessment outcomes.
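These correlation analyses amount to pairwise Pearson coefficients between score vectors. A minimal sketch, assuming hypothetical score arrays in place of the real mark data (all variable names and values below are illustrative only):

```python
import numpy as np
from scipy import stats

# Hypothetical per-student scores (54 students), standing in for the real data.
structural_mean = np.random.default_rng(1).normal(240, 40, 54)   # mean structural MMR score
qualitative_mean = np.random.default_rng(2).normal(61, 8, 54)    # mean qualitative MMR score
knowledge_score = np.random.default_rng(3).normal(65, 10, 54)    # overall knowledge score

# Agreement between the two MMR methods (on mean scores across markers).
r_methods, p_methods = stats.pearsonr(structural_mean, qualitative_mean)

# Concurrent validity of one MMR method against another end of year outcome.
r_validity, p_validity = stats.pearsonr(qualitative_mean, knowledge_score)

print(f"Agreement between methods: r = {r_methods:.2f}, p = {p_methods:.3f}")
print(f"Concurrent validity:       r = {r_validity:.2f}, p = {p_validity:.3f}")
```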

5. Results

Table 2 shows the mean and range values for the marks from the qualitative and structural MMR methods from the three independent markers, RZ, NY and SP. For the patient-main concept links (P-MC) using the analytical, structural MMR scoring method, the mean values were 33.3, 29.8 and 30.9 for RZ, NY and SP respectively.

The concept-concept links (C-C), examples and symptoms were grouped together as C-C-ESS during the analysis as they had equal weighting. The C-C-ESS scores for the structural MMR method had means of 169.6, 157.3 and 171.0 for RZ, NY and SP respectively.

The total scores for the analytical, structural MMR scoring method had mean values of 255.5, 212.9 and 259.3, while those for the holistic, qualitative MMR scoring method were 61.0, 63.2 and 59.3 for RZ, NY and SP respectively.

Table 3 shows the ICC values for the three examiners. The holistic, qualitative MMR scoring method gave low inter-rater agreement, with an ICC (95% CI) of 0.33 (0.16 - 0.51) for consistency and 0.32 (0.15 - 0.49) for absolute agreement. Conversely, the analytical, structural MMR total scores had ICC (95% CI) values of 0.71 (0.59 - 0.81) and 0.57 (0.25 - 0.77) for consistency and absolute agreement respectively.

P-MC links, pictures, C-C-ESS and colours had high ICC values as shown in Table 3.

The Pearson correlation coefficients for individual markers were all moderately low, with the highest value being 0.31 (p = 0.03), as shown in Table 4. The Pearson correlation coefficient for the average scores when comparing the two methods was moderate, with a value of 0.47 (p < 0.001).

Table 2. Mean and range of marks from the qualitative and structural MMR methods from the three independent markers, RZ, NY and SP (N = 54).

Table 3. ICC values for consistency and absolute agreement for qualitative marking and structural MMR scoring methods.

*ICC consistency assesses whether the markers are ranking the students in the same way, but not necessarily awarding marks of similar value; **ICC agreement assesses how similar the values of the marks given by each marker for each student are.

Table 4. Agreement between the analytical, structural MMR and holistic, qualitative MMR marking methods using Pearson’s correlation coefficient.

We also explored whether the mind map scores awarded to students as part of their end of year assessment correlated with scores achieved by students in other aspects of their end of year assessment. Pearson’s correlation coefficients were calculated for mind map scores and other end of year outcomes (overall cognitive assessment score, overall knowledge score, communication skills score, and community based medicine clinical skills score). Correlation coefficients are shown in Table 5 (correlation coefficient values can range from -1 to 1). The low correlation coefficients indicate that mind map scores obtained by both methods correlated poorly with scores in other end of year assessments. Most correlation coefficients were also non-significant, as indicated by their p values.

6. Discussion

This study investigates the use of two methods of mind map scoring as part of a cognitive assessment at the end of the first year of a graduate entry medicine PBL programme. This entails two aspects: the inter-rater reliability of the two scoring methods, and the consistency and agreement between the two methods.

Of the two methods used to score the mind maps, the analytical, structural MMR scoring method had higher inter-rater reliability than the holistic, qualitative MMR method.

Table 5. Correlations between average analytical, structural and holistic, qualitative MMR method scores and other end of year exam scores using Pearson's correlation coefficient.

The ICC for consistency (0.71) indicates that agreement between the three markers in terms of how they ranked the students was good. The ICC for absolute agreement between examiners for individual student scores was lower (0.57), but still indicates moderate agreement. These values are similar to those obtained by previous researchers who assessed mind maps generated by other cohorts of students (D’Antoni et al., 2009). The qualitative MMR method, on the other hand, gave low ICC values for both consistency and agreement, indicating that the examiners neither ranked the mind maps in the same way nor awarded similar marks to individual students. The qualitative MMR scoring method possibly presents a cognitive challenge, requiring the marker to make simultaneous evaluations of various aspects of the mind map. This is likely to make heavy demands on the marker's working memory and may result in each individual marker tackling the mind map's complexity differently, which would in turn affect examiners' consistency and agreement and hence the reliability of this scoring system. Similar results have been reported for concept maps (McClure et al., 1999). Whilst the analytical, structural MMR method gave more structure and guidance to the examiners, there were problems in defining what constitutes concepts, examples, signs and symptoms on the mind maps, and hence what constitutes concept-concept links. This could also have affected the agreement between markers for the analytical, structural MMR scoring method and hence its reliability. Marker training is important in scoring mind maps for assessments; to minimise this source of variation, the markers in this study were trained and discussed how to grade three sample mind maps prior to embarking on the scoring exercise.

The quality of a mind map is determined by the template used in its construction. Previous workers have used key words as a template when constructing maps (McClure et al., 1999). If an unconstrained template is used to construct the mind map, this may lead to variation in the nature and quality of the resultant mind maps generated by individual students. The unconstrained nature of the template used in this study, an unseen clinical scenario, may have led to some variation. Students' prior knowledge of the subject also affects the quality of the mind maps. Since students in the present study had different educational backgrounds in the life sciences, this may have affected the quality of the mind maps generated.

The low Pearson correlation coefficients between the structural and the qualitative MMR scoring methods suggest that the two methods may be assessing different aspects of the mind maps. We are not aware of studies comparing different scoring methods for mind maps in similar settings; however, such studies have been carried out for concept maps (West et al., 2002; von der Heidt, 2015). The Pearson correlation coefficients obtained by previous researchers (McClure et al., 1999) of r = 0.193 to 0.608 (p < 0.01) in studies comparing various methods of scoring concept maps are similar to those calculated in the present study with mind maps (r = 0.31, p = 0.03).

Our results indicate that mind map scores did not correlate well with other end of year outcomes, irrespective of the MMR scoring method used. However, the holistic, qualitative MMR method showed a moderate correlation with overall end of year scores, in contrast to the structural method. The mind maps were thus assessing a different aspect of the student knowledge construct from that assessed in other examinations. Although we could not find any previous reports on the concurrent validity of mind maps, our finding is in agreement with previous reports of studies with concept maps (West et al., 2002).

The use of mind maps described here differs from that described in previous studies (D’Antoni, Zipp, & Olson, 2009; D’Antoni et al., 2010; Evrekli, Inel, & Ali, 2010), in which the maps are drawn after students have already acquired the knowledge. In the present study mind maps are used to brainstorm ideas from a given clinical scenario, reactivate prior knowledge and decide which learning issues require further study. The mind map thus represents the student's knowledge structure at the beginning of the learning curve. Future research could correlate the mind map scores with learning objective scores. The development of a combined scoring rubric for mind maps and learning objectives is a potential area for further study.

7. Conclusion

Mind maps could be used as part of an overall assessment strategy in a course using a PBL instructional method. Our results indicate that the structural MMR scoring method had higher inter-rater reliability than the holistic, qualitative method. The poor correlation with other end of year assessments suggests that the mind maps could be assessing different constructs of student knowledge. However, this study is limited to one institution and a specific context, and further work on the use of mind maps in assessments in medical education is required.

Acknowledgements and Declaration of Interest

The authors report no conflict of interest during the study. We would like to thank Dr. C Taylor for reading the manuscript. The study was approved by the University Ethics Committee ERN_12-1369.

Notes on Contributors

REMIGIO ZVAUYA BSc, MSc, PhD is currently the Graduate Entry Course MB ChB Phase 1 lead and Senior PBL facilitator at the College of Medical and Dental Sciences, University of Birmingham.

SHILPA PURANDARE, MBBS, MMedSc, MRCOG, MRCGP is currently a PBL Facilitator and a Senior Clinical Tutor at the College of Medical and Dental Sciences, University of Birmingham.

NICOLA YOUNG, BSc, PhD, FHEA is currently a PBL facilitator at the College of Medical and Dental Sciences, University of Birmingham.

MIRANDA PALLAN BSc, MB ChB, PhD was a PBL facilitator and is a Senior Clinical Research Fellow in Public Health at the College of Medical and Dental Sciences, University of Birmingham.

Cite this paper

Zvauya, R., Purandare, S., Young, N., & Pallan, M. (2017). The Use of Mind Maps as an Assessment Tool in a Problem Based Learning Course. Creative Education, 8, 1782-1793. https://doi.org/10.4236/ce.2017.811122

References

1. D’Antoni, A. V., Zipp, G. P., & Olson, V. G. (2009). Interrater Reliability of the Mind Map Assessment Rubric in a Cohort of Medical Students. BMC Medical Education, 9, 19. https://doi.org/10.1186/1472-6920-9-19

2. D’Antoni, A. V., Zipp, G. P., Olson, V. G., & Cahill, T. F. (2010). Does the Mind Map Learning Strategy Facilitate Information Retrieval and Critical Thinking in Medical Students? BMC Medical Education, 10, 61. https://doi.org/10.1186/1472-6920-10-61

3. Daley, B. J., & Torre, D. M. (2010). Concept Maps in Medical Education: An Analytical Literature Review. Medical Education, 44, 440-448. https://doi.org/10.1111/j.1365-2923.2010.03628.x

4. Davies, M. (2011). Concept Mapping, Mind Mapping and Argument Mapping: What Are the Differences and Do They Matter? Higher Education, 62, 279-301. https://doi.org/10.1007/s10734-010-9387-6

5. Eppler, M. J. (2006). A Comparison between Concept Maps, Mind Maps, Conceptual Diagrams, and Visual Metaphors as Complementary Tools for Knowledge Construction and Sharing. Information Visualization, 5, 202-210. https://doi.org/10.1057/palgrave.ivs.9500131

6. Evrekli, E., Inel, D., & Ali, G. (2010). Development of a Scoring System to Assess Mind Maps. Procedia-Social and Behavioural Sciences, 2, 2330-2334. https://doi.org/10.1016/j.sbspro.2010.03.331

7. Farrand, P., Hussain, F., & Hennessy, E. (2002). The Efficacy of the “Mind Map” Study Technique. Medical Education, 36, 426-431. https://doi.org/10.1046/j.1365-2923.2002.01205.x

8. Gasaymeh, A. H. (2011). The Implications of Constructivism for Rubric Design and Use. In Higher Education International Conference (HEIC 2011). https://www.researchgate.net/profile/Al_Mothana_Gasaymeh/publication/310460039_The_implications_of_constructivism_for_rubric_design_and_use/links/582ef29908ae004f74be300f.pdf

9. Hung, C. H., & Lin, C. Y. (2015). Using Concept Mapping to Evaluate Knowledge Structure in Problem-Based Learning. BMC Medical Education, 15, 1. https://doi.org/10.1186/s12909-015-0496-x

10. Kassab, S. E., & Hussain, S. (2010). Concept Mapping Assessment in a Problem-Based Medical Curriculum. Medical Teacher, 32, 926-931. https://doi.org/10.3109/0142159X.2010.497824

11. McClure, J. R., Sonak, B., & Suen, H. K. (1999). Concept Map Assessment of Classroom Learning: Reliability, Validity, and Logistical Practicality. Journal of Research in Science Teaching, 36, 475-492. https://doi.org/10.1002/(SICI)1098-2736(199904)36:4<475::AID-TEA5>3.0.CO;2-O

12. Mertler, C. A. (2001). Designing Scoring Rubrics for Your Classroom. Practical Assessment, Research & Evaluation, 7, 1-10.

13. Noonan, M. (2012). Mind Maps: Enhancing Midwifery Education. Nurse Education Today, 33, 847-852. https://doi.org/10.1016/j.nedt.2012.02.003

14. Rosciano, A. (2015). The Effectiveness of Mind Mapping as an Active Learning Strategy among Associate Degree Nursing Students. Teaching and Learning in Nursing, 10, 93-99. https://doi.org/10.1016/j.teln.2015.01.003

15. Srinivasan, M., McElvany, M., Shay, J. M., Shavelson, R. J., & West, D. C. (2008). Measuring Knowledge Structure: Reliability of Concept Mapping Assessment in Medical Education. Academic Medicine, 83, 1196-1203. https://doi.org/10.1097/ACM.0b013e31818c6e84

16. Tergan, S. O., Graber, W., & Neumann, A. (2006). Mapping and Managing Knowledge and Information in Resource-Based Learning. Innovations in Education and Teaching International, 43, 327-336. https://doi.org/10.1080/14703290600973737

17. von der Heidt, T. (2015). Concept Maps for Assessing Change in Learning: A Study of Undergraduate Business Students in First-Year Marketing in China. Assessment and Evaluation, 40, 286-308. https://doi.org/10.1080/02602938.2014.910637

18. West, D. C., Park, J. K., Pomeroy, J. R., & Sandoval, J. (2002). Concept Mapping Assessment in Medical Education: A Comparison of Two Scoring Systems. Medical Education, 36, 820-826. https://doi.org/10.1046/j.1365-2923.2002.01292.x