Open Journal of Medical Imaging
Vol.09 No.01(2019), Article ID:92107,17 pages

Radiological Errors: Implications and Causes with a Focus on Mammographic Misdiagnosis

Mohammad Rawashdeh1*, Sarah Lewis2, Patrick Brennan2

1Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, Jordan

2Medical Image Optimisation and Perception Group (MIOPeG), and the Brain and Mind Centre, the Faculty of Health Sciences, The University of Sydney, Sydney, Australia

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

Received: February 20, 2019; Accepted: March 28, 2019; Published: March 31, 2019


Radiological diagnostic errors may have serious clinical and medico-legal implications. Previous work has reported a notable incidence of errors in radiology, many of which result from observer mistakes. The radiologists’ interaction with the image is critical, and studying the types of diagnostic errors in order to improve patient and radiologist wellbeing, reduce costs and improve the public perception of the health care system is well justified. The aim of the current review is therefore to consider the primary types of diagnostic errors in radiology, together with their causes and implications, with a focus on mammographic misdiagnosis.


Radiological Errors, Radiologists, Breast Cancer, Radiologists’ Performance

1. Introduction

Medical diagnostic errors, defined here as missed, delayed or wrong diagnoses, are the second most common medical mistake in Australia [1] [2] and globally [3] [4], behind procedure errors. Diagnostic errors often go undetected or unreported in the medical field [5]. Nonetheless, it has been documented that inaccurate diagnoses are a major cause of adverse medical events and are linked with higher morbidity when compared to other kinds of medical errors [6] [7] [8]. Around 100,000 individuals in the US are estimated to lose their lives each year as a result of medical errors [9], a higher annual mortality than that attributed to AIDS (16,516), breast cancer (42,297) or automobile accidents (43,458) [10]. Furthermore, 47% of adverse events related to diagnostic errors result in serious disability [11].

Whilst the medical and social effects of diagnostic errors are clear, there are other issues. The direct impact of errors on specialists is well reported [12], where associated legal action has been shown to be strongly linked with high stress, anxiety, guilt, self-criticism, depression and fear [13] [14]. Non-medical health practitioners are also affected, with evidence demonstrating that career satisfaction, ability to sleep, relationships with co-workers and self-esteem are all adversely altered as a consequence of a diagnostic error [9] [14] [15]. In addition, it has been shown that after involvement in a major medical error, the associated difficulties with sleeping and concentrating may in turn raise the risk of further medical errors [16]. A survey completed by 3171 health workers from a variety of disciplines in the US and Canada demonstrated the prevalence of such effects; of those practitioners involved in major medical errors, 61% experienced increased anxiety about potential future errors, 44% demonstrated reduced self-confidence in their skills as specialists, 42% had a reduced ability to sleep, 42% experienced decreased job satisfaction and 13% felt that their professional reputation was damaged [17]. These data highlight the wider implications of medical errors.

Medical errors impose an enormous economic burden on both governments and individuals. In Australia, the average compensation payout exceeded AU$100,000 in 75% of diagnostic error cases [18], and of the 75% of errors for which compensation was paid, 70% could have been prevented [19]. Similar results were shown in the US for diagnostic errors made specifically in radiology, with total compensatory payments estimated at more than US$38 billion in 2001, US$17 billion of which was associated with preventable mistakes [20] [21]. A recent analysis of malpractice claims for diagnostic errors gathered from the US National Practitioner Data Bank between 1986 and 2010 identified around 100,000 malpractice claims, with diagnostic error in radiology being the main causal agent (29%), at an average cost per claim of US$386,849 [22].

Specific radiologic procedures appear to be particularly associated with diagnostic error. Medical insurance agencies in North America [23] and the UK [24] report that most lawsuits against radiologists arose from a failure to diagnose breast cancer, lung cancer or orthopaedic fractures [24] [25]. In a 14-year Italian study, the main cause of error was associated with cancer diagnosis (43.5% of all disease states), with 60% of errors involving the breast [3] [4]. In the later years of this Italian study, the number of claims for missed breast cancer increased markedly, with claims made to insurance companies amounting to US$132 in just 178 cases [4]. According to the Physician Insurers Association of America, radiologists were the specialists most frequently sued in malpractice lawsuits involving breast cancer [26].

1.1. Errors in Missed Cancers

Early detection of most kinds of cancer provides better survival outcomes [27], and as highlighted above, misdiagnosis may result in a variety of serious consequences [28]. Since the most frequent diagnostic errors occur with neoplasms located within the breast, we will examine these conditions in greater detail.

Mammographic images are the primary diagnostic tool for the early detection of breast cancer. Early detection of breast cancer reduces mortality and can lead to more effective treatment. The 5-year survival rate is 97% for patients with local-stage cancer, but this decreases to 78% and 22% when regional spread and distant disease are reported, respectively [29] [30] [31]. Mammographic screening leading to appropriate intervention has been shown to reduce breast cancer deaths in women aged 50 - 69 by up to 30% [32]; nonetheless, the missed cancer rate remains high even with the technological advances of the last two decades. For example, 30% - 70% of breast cancers diagnosed at follow-up mammography are visible on earlier mammograms that were originally interpreted as normal [33]. In a review of 320 breast cancer cases in a screened population, 24% were missed at screening mammography, and of the missed cancers, 61% were visible in retrospect, suggesting these cancers could have been detected sooner [34]. In a study of the 40 - 49 years age group, almost 50% of cancers were missed at screening mammography, meaning that up to half of reported cancers present as interval (symptomatic) cancers [35]. A study carried out by the Medical Image Optimization and Perception Group (MIOPeG), investigating the performance of experienced readers using images where the cancer was visible and had been previously identified (and biopsy-proven), reported that a median of 44% of lesions were missed by 116 Australian and New Zealand breast imaging readers [36]. With 1,726,099 mammography studies performed in Australia in 2011, and 1 million new breast cancer cases reported each year globally, the impact of radiologic misdiagnosis on public health is a hugely important issue [16] [35] [37].

While 74% of interpretative errors in radiology are linked to cognitive and system factors [38], it is suggested that only 5% of missed cancers actually have a technical origin [34]. Understanding the radiologists’ interaction with the image is therefore critically important, and studying the types of radiologic diagnostic errors in order to improve patient and practitioner wellbeing, reduce costs and improve the public perception of the health system is well justified. The types of diagnostic errors occurring in radiology have been well described for almost four decades, and they fall into two main groups: cognitive errors (such as a missed lung nodule when interpreting a chest radiograph), which are usually associated with problems of visual perception (search, recognition, interpretation); and system errors (such as failure to suggest the next appropriate procedure or to communicate results in a timely and clinically appropriate manner), which are usually linked to problems with the health system or the context of care delivery [39] [40]. Since a large number of radiologic errors result from observer interactions with the image (perception errors), these will be the focus of the current review.

1.2. Perception Errors

Using eye-position tracking, a technique that records a radiologist’s gaze and monitors which regions of an image are looked at, it has been found that perception errors can be classified into three categories based on the length of fixation (a point where the gaze remains continuously for 100 milliseconds or more within a specified image region) and dwell time (the total time the reader spends fixating a specific location) [41] (Figure 1). These categories are search, recognition and decision errors [42] - [46].

To understand search errors, it is first necessary to be aware of the global-focal model of perception in radiology, which has been well described elsewhere [47] and is summarized here. The model is divided into two stages. Firstly, a rapid global impression takes place (a holistic acquisition of information from the entire image), in which the radiologist rapidly examines an image and compares it with the normal templates that he or she has mentally stored through prior knowledge and experience [48] [49]. Any perturbations from normality are then flagged and referred for a more detailed examination. This initial stage of image interpretation takes as little as a few hundred milliseconds [50], with Kundel and Nodine (1975) reporting that experienced radiologists could find 70% of the nodules on chest X-rays when presented with just a 200-millisecond “flash” image [51]. Other researchers, repeating the experiment with experts reading a test set of mammograms, found that radiologists correctly recognized 51% of lesions in the “flash” condition compared with 69% when unlimited time was offered [32]. It is proposed that information from the global impression is then used to direct and inform the second, analytic or detailed, stage, in which the fovea is directed to the flagged locations to collect diagnostic features from the abnormality and its background [48] [49]. Logical rules are employed in the analytic search to combine these features in meaningful ways and to determine whether they should be reported as normal or abnormal findings [48] [49].

Figure 1. (a) An observer set up to record eye positions during a search task. (b) Eye-position pattern of a radiologist searching for lesions in a mammogram. Each circle represents the location of a fixation during the detailed search; the larger the circle, the longer the time spent fixating that location. The scan path followed by the eye is indicated by the lines between fixation clusters [41].

1.3. Errors of Omission (False Negative Outcomes)

There are three types of errors of omission. The first category comprises search errors, in which the interpreting radiologist fails to fixate the lesion and hence does not report it [41] [42]. The second type is the recognition error, which occurs when an area containing an abnormality is fixated, but not for long enough (usually less than 1 second) to allow recognition of an abnormal finding at that location. Finally, decision-making errors are those in which an abnormality is detected and fixated for a long time, usually 1 second or longer, but ultimately misinterpreted as normal or benign. The 1-second fixation period appears to be an important threshold for decision making, as previous work on pulmonary nodules showed that only 10% of fixations on correctly detected lesions were shorter than 1 second, and that about 90% of pulmonary lesions were identified by approximately 1 second of fixation [51].
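The classification described above can be sketched as a small decision rule. The following Python function is purely illustrative (it is not the authors’ method or code): the function name, arguments and the 1-second cut-off applied here are drawn from the thresholds reported in the text, and a real eye-tracking pipeline would of course work with raw gaze samples rather than two summary values.

```python
# Illustrative sketch: categorising an unreported (missed) lesion into one of
# the three omission-error types, using the fixation/dwell thresholds above.

def classify_omission_error(fixated_lesion: bool, dwell_seconds: float) -> str:
    """Return the omission-error category for a lesion that was not reported.

    fixated_lesion -- whether any fixation landed on the lesion region
    dwell_seconds  -- total dwell time on the lesion region, in seconds
    """
    if not fixated_lesion:
        return "search error"        # lesion never fixated by the reader
    if dwell_seconds < 1.0:
        return "recognition error"   # fixated, but too briefly to recognise
    return "decision error"          # fixated >= 1 s, yet misinterpreted

# Example: a lesion fixated for 1.4 s but reported as benign
print(classify_omission_error(True, 1.4))  # decision error
```

The key design point, reflecting the text, is that the three categories are distinguished solely by whether the lesion was fixated at all and, if so, for how long.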

The evidence suggests that approximately 30%, 25% and 45% of missed breast and lung cancers correspond to search, recognition and decision errors, respectively. This classification of errors has been studied in chest radiographs [42], bone images [43] [44] and mammograms [4] [46]. The rate of occurrence of these error types may depend on a number of features, including lesion size, contrast and shape, border sharpness and continuity, and the experience of the reading radiologist. These factors are considered below [41] [42] [44] [52] [53].

Dwell duration at a given location, along with reader experience, has been found to be linked with decision outcome. It has been shown that more experienced readers find lesions faster [45]. For example, less experienced radiologists took 1.8 seconds to first fixate microcalcification clusters and 1.5 seconds to first fixate masses, while more experienced readers needed only 0.9 and 0.6 seconds, respectively, for the same lesion types in mammograms [45]. In addition, it was found that after the first 25 seconds of searching, the chance of reporting a false positive decision increases by 50%, indicating the important role played by the global impression in overall image perception and in directing the focal search [45]. Reference [54], however, reported that readers dwell longer on detected lesions than on missed lesions. Nonetheless, faulty search is not the dominant causal agent for errors, as it is estimated that only 30% of missed lesions in breast and lung cancer are due to improper search strategies [51] [55]. Certainly, cancers are missed not only because they are not detected by the radiologist, but also because they are not recognized as cancer [56]. Recognition and decision errors may reflect insufficient training, inexperience, prior conditioning, fatigue, poor judgment, or simply a subtle case in which the wrong decision is made [57].

2. Factors That Affect Interpretation

A number of factors can affect mammographic interpretation, including issues regarding the reader themselves and others relating to the image. Both will be considered here.

2.1. Reader Characteristics

The accuracy of mammographic image reading among individual radiologists is highly variable, and factors such as experience can affect lesion detection accuracy. Previous studies examining how reader characteristics impact performance are inconsistent [58] - [64]. Some authors have concluded that annual reading volume is not linked with performance [58] [59] [60] [61], while others suggest that sensitivity improves with greater experience reporting mammography [63], fellowship training [62], and higher reading volumes when the overall reading load remains below 1000 cases per year [63]; specificity, in general, appears to increase with higher reading volumes [63]. Other factors that appear to be occasionally relevant include the number of years certified as a radiologist, years of experience and hours spent reading per week [63] [64] (Table 1).

A more recent study, reference [36], aimed to address this confusing picture by determining whether performance patterns depend on volume-based groupings and years of experience. That paper argued that without allocating radiologists to specific groupings determined by defined reading volumes, subtle findings regarding the influence of radiologic characteristics on performance can be obscured by pooling all radiologists together. Its key findings were as follows: radiologists who read fewer than 1000 mammograms per year appear to have lower performance scores than those who read more than 1000 cases per year, and readers with annual reading volumes below 1000 demonstrate reduced performance with increasing years of reading mammograms. This inverse relationship between performance and years of reading mammograms is counter-intuitive; simply stated, it suggests that when performing a particular task at low activity, one becomes worse, not better, with time. On the other hand, readers with an annual volume of more than 5000 mammographic readings showed positive correlations between performance and the number of years qualified, the number of years reading mammograms, and the number of hours spent each week reading mammograms. Interestingly, this positive correlation is not linked to enhanced detection of cancer, but instead to an increased ability to recognize normal images. This means that the true discriminating factor that separates the highest-performing individuals from the others is the ability to recognize what is normal. For readers with an annual volume between 1000 and 5000 mammographic readings, however, performance scores were significantly related only to the number of mammographic readings per year [36].
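The volume bands discussed above can be expressed as a simple banding rule. The sketch below is hypothetical (the function name and string labels are mine, not from the cited paper); only the cut-offs of 1000 and 5000 annual readings come from the text, and the boundary handling at exactly 5000 readings is an assumption.

```python
# Illustrative only: assign a reader to one of the three annual-volume bands
# used in the groupings discussed above (cut-offs at 1000 and 5000 readings).

def reading_volume_band(annual_reads: int) -> str:
    """Band a reader by annual mammographic reading volume."""
    if annual_reads < 1000:
        return "low (<1000)"          # performance worsens with years reading
    if annual_reads <= 5000:
        return "medium (1000-5000)"   # performance tracks annual volume only
    return "high (>5000)"             # performance correlates with experience

for reads in (400, 2500, 8000):
    print(reads, "->", reading_volume_band(reads))
```

Grouping readers this way, rather than pooling them, is precisely what allowed the cited study to expose the opposing experience effects in the low- and high-volume bands.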

Numerous other factors can affect radiologists’ performance, such as reader fatigue [68], insufficient views of the anatomy, technical errors, failure to use prior images for comparison, and the relatively low prevalence of breast cancer in screening populations [69]. Furthermore, the consequences for a radiologist’s performance of having made a previous incorrect decision must also be acknowledged. For example, if a false positive error had been highlighted,

Table 1. Summary of studies examining the impact of radiologists’ characteristics on mammographic diagnostic accuracy.

then the relevant radiologist may only report lesions that are very obvious, whereas if the reported error is a false negative type, then the tendency of radiologists would be to lower the suspicion threshold for reporting a lesion [70].

2.2. Satisfaction of Search

“Satisfaction of search” (SOS) is another perceptual factor that affects radiologist performance, whereby the detection of one lesion is hindered by the successful detection of another when two lesions are visible. Estimates of SOS errors range from one-fifth to one-third of misses in general radiology and may be as high as 91% in emergency medicine [71]. Ashman et al. (2000) compared detection in plain radiographs with single versus multiple abnormalities and found a similar pattern in both groups: around one third of readers missed a second lesion when two lesions were visible; moreover, the detection rate for second and third abnormalities in the multiple-finding cases was about half that for single-lesion cases [72]. Renfrew (1992) reported that SOS accounted for almost 6% of errors [72] [73]. Berbaum and colleagues (2010) found that premature search termination is generally not the main cause of SOS; rather, faulty pattern recognition and/or faulty decision making appear to be the more likely causal agents [74]. Manning and colleagues therefore suggested that the term “satisfaction of decision” better describes this phenomenon [75].

3. Image and Lesion Features

Breast density: On the basis of mammographic appearance, breasts have two major components: fibroglandular tissue and fat. Fibroglandular tissue is a combination of fibrous connective tissue (the stroma) and glandular tissue (epithelium). Fibroglandular tissue has a higher x-ray attenuation coefficient than fat, and is therefore less transparent to x-rays. Thus, regions of fibroglandular tissue appear brighter on mammograms, and breasts with a high percentage of fibroglandular tissue are referred to as having high mammographic density [76] [77]. Breast density is an important biological factor, as the higher the density of the breast, the higher the risk of breast cancer [78]. Breast density is classified using the Breast Imaging Reporting and Data System (BI-RADS) lexicon for reporting mammographic findings and is divided into four categories: BI-RADS-A indicates a primarily fatty breast; BI-RADS-B, scattered fibroglandular densities; BI-RADS-C, a heterogeneously dense breast; and BI-RADS-D, an extremely dense breast (Figure 2) [79].
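The four-category scheme above amounts to a fixed lookup, which can be captured in a few lines. This sketch is illustrative only: the dictionary and function names are mine, and the descriptions are the short forms used in the text, not the full wording of the BI-RADS Atlas.

```python
# Minimal lookup (illustrative): BI-RADS breast-density categories A-D mapped
# to the descriptions given in the text.

BIRADS_DENSITY = {
    "A": "primarily fatty",
    "B": "scattered fibroglandular densities",
    "C": "heterogeneously dense",
    "D": "extremely dense",
}

def describe_density(category: str) -> str:
    """Return the textual description for a BI-RADS density category (A-D)."""
    return BIRADS_DENSITY[category.upper()]

print(describe_density("c"))  # heterogeneously dense
```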

Many lesions are very subtle, making them difficult to detect and distinguish from the surrounding breast tissue, and previous studies have shown that missed lesions tend to occur in denser breasts [80] [81] [82]. Mammographic sensitivity decreases from 80% - 98% in fatty breasts to 29.2% - 75% in mammographically dense breasts in both screening and diagnostic scenarios [83] - [93]. Furthermore, specificity decreases from 96.9% in fatty breasts to 89.1% in extremely dense breasts [87] [94]. If a cancer is superimposed on dense background fibroglandular tissue, it may be partly or completely masked, causing difficulty with breast cancer detection and allowing cancers to progress to an advanced stage [81] [82].

Lesion size and shape: Lesion size can affect lesion detectability, as previous reports have found that the most commonly missed cancers in mammography are very small lesions [80], especially if the lesion is visible in only a single view. Goergen et al. reviewed 146 cases using double reading; when the two readers disagreed, a third reader reviewed the case. The lesions reviewed by the third reader were shown to be smaller than those detected by the primary two readers [95].

Figure 2. Variation in breast density according to the BI-RADS lexicon, as described by three senior examiners of the American Board of Radiology: (a) fatty breast; (b) scattered fibroglandular densities; (c) heterogeneously dense; (d) extremely dense breast [79].

Lesion shape is another crucial feature that may affect lesion detectability, since radiologists commonly use shape and margin features to classify breast masses as benign or malignant, with these lesion types having different shape characteristics [96]. A common appearance of malignant breast masses is a stellate or starburst presentation, with a variable contour usually accompanied by spiculations extending from the edges of the mass. Benign breast masses, on the other hand, generally have smooth contours and a round or oval shape [96]. Reference [80] found that missed cancers were commonly irregular in shape (Figure 3). However, some caution must be taken, since the differential diagnosis, whether benign or malignant, cannot confidently be based on mass shape alone, as a number of benign lesions can be irregular in appearance [97].

Whilst a number of previous studies have considered the effect of the shape and margin of masses, and the resultant findings have been used to improve cancer detection [98] [99], the range of shape features studied is limited. This is likely due to the difficulty of precisely defining the shape of masses that stand out against the parenchymal background, compared with calcifications [100]. However, understanding the importance of shape characteristics and their impact on mass characterization can enhance our understanding of why visible cancers may be missed in mammography and thus inform future radiology

Figure 3. Four types of masses delineated by an American Board of Radiology examiner. Left to right: a round mass, an oval mass, a microlobulated malignant tumor, and a spiculated and irregular malignant tumor [80].

training programs and innovative computer-aided systems. Our group recently studied a greater array of image and lesion characteristics than is normally investigated, to further elucidate which specific features make a lesion less likely to be reported [99]. We confirmed that lesion size and shape are critically important, but in particular we showed that the appearance of spiculation is strongly related to reduced cancer detection: the more rounded the margins, the more likely we are to detect the cancer, while cancers with irregular margins or greater levels of spiculation have a lower detectability rate. Surprisingly, the authors of that paper reported that gray-level or brightness characteristics had little effect on detection compared with geometric (shape) features. This latter finding was unexpected, because poor lesion contrast is often reported as a key feature limiting diagnostic accuracy in mammography [100]. The overall conclusion, however, was that mammographic sensitivity may be adversely affected without appropriate attention to spiculation.

4. Conclusion

The present review demonstrates that radiological errors are not uncommon. The reasons for error are multifactorial, but many arise from observer interaction with the image, which depends at least in part on image characteristics and reader experience. Identification and reduction of diagnostic error may provide a measure of how efficient a healthcare system is, and should reduce mortality, morbidity and the length of hospital stays, along with the associated healthcare costs. It is therefore in clinicians’ interest, and that of their patients, to reduce risk as much as possible, recognizing that medical care is often a balance of risk and benefit. If both radiologists and patients are fully aware of these risks, the resulting expectations will be realistic.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Rawashdeh, M., Lewis, S. and Brennan, P. (2019) Radiological Errors: Implications and Causes with a Focus on Mammographic Misdiagnosis. Open Journal of Medical Imaging, 9, 1-17.


  1. 1. Harrison, B.T., Gibberd, R.W. and Hamilton, J.D. (1999) An Analysis of the Causes of Adverse Events from the Quality in Australian Health Care Study. The Medical Journal of Australia, 170, 411-415.

  2. 2. Wilson, R.M., Runciman, W.B., Gibberd, R.W., Harrison, B.T., Newby, L. and Hamilton, J.D. (1995) The Quality in Australian Health Care Study. Medical Journal of Australia, 163, 458-471.

  3. 3. Leape, L.L., Berwick, D.M. and Bates, D.W. (2002) Counting Deaths Due to Medical Errors-Reply. JAMA, 288, 2405-2405.

  4. 4. Fileni, A., Magnavita, N., Mirk, P., Iavicoli, I., Magnavita, G. and Bergamaschi, A. (2010) Radiologic Malpractice Litigation Risk in Italy: An Observational Study over a 14-Year Period. American Journal of Roentgenology, 194, 1040-1046.

  5. 5. Pinto, A., Acampora, C., Pinto, F., Kourdioukova, E., Romano, L. and Verstraete, K. (2011) Learning from Diagnostic Errors: A Good Way to Improve Education in Radiology. European Journal of Radiology, 78, 372-376.

  6. 6. Brennan, T.A., Leape, L.L., Laird, N.M., Hebert, L., Localio, A.R., Lawthers, A.G., and Hiatt, H.H. (1991) Incidence of Adverse Events and Negligence in Hospitalized Patients: Results of the Harvard Medical Practice Study I. New England Journal of Medicine, 324, 370-376.

  7. 7. Croskerry, P. (2003) The Importance of Cognitive Errors in Diagnosis and Strategies to Minimize Them. Academic Medicine, 78, 775-780.

  8. 8. Rafter, N., Hickey, A., Condell, S., Conroy, R., O’Connor, P., Vaughan, D. and Williams, D. (2014) Adverse Events in Healthcare: Learning from Mistakes. QJM: An International Journal of Medicine, 108, 273-277.

  9. 9. Levinson, D.R. and General, I. (2010) Adverse Events in Hospitals: National Incidence among Medicare Beneficiaries. Department of Health and Human Services, Office of the Inspector General, US.

  10. 10. Donaldson, M.S., Corrigan, J.M. and Kohn, L.T. (2000) To Err Is Human: Building a Safer Health System, Vol. 6, National Academies Press, Washington DC.

  11. 11. Leape, L.L., Brennan, T.A., Laird, N., Lawthers, A.G., Localio, A.R., Barnes, B.A., and Hiatt, H. (1991) The Nature of Adverse Events in Hospitalized Patients: Results of the Harvard Medical Practice Study II. New England Journal of Medicine, 324, 377-384.

  12. 12. West, C.P., Huschka, M.M., Novotny, P.J., Sloan, J.A., Kolars, J.C., Habermann, T.M. and Shanafelt, T.D. (2006) Association of Perceived Medical Errors with Resident Distress and Empathy: A Prospective Longitudinal Study. JAMA, 296, 1071-1078.

  13. 13. Gallagher, T.H., Waterman, A.D., Ebers, A.G., Fraser, V.J. and Levinson, W. (2003) Patients’ and Physicians’ Attitudes Regarding the Disclosure of Medical Errors. JAMA, 289, 1001-1007.

  14. 14. Smith, M.L. and Forster, H.P. (2000) Morally Managing Medical Mistakes. Cambridge Quarterly of Healthcare Ethics, 9, 38-53.

  15. 15. Hilfiker, D. (1984) Facing Our Mistakes.The New England Journal of Medicine, 310, 118-122.

  16. 16. Australian Institute of Health and Welfare. (2013) BreastScreen Australia Monitoring Report 2010-2011. Australian Institute of Health and Welfare, Australian, 25 October 2013, 1-98.

  17. 17. Waterman, A.D., Garbutt, J., Hazel, E., Dunagan, W.C., Levinson, W., Fraser, V.J. and Gallagher, T.H. (2007) The Emotional Impact of Medical Errors on Practicing Physicians in the United States and Canada. The Joint Commission Journal on Quality and Patient Safety, 33, 467-476.

  18. 18. Health, A.I.O. (2012) Australia’s Medical Indemnity Claims, AIHW 2010-2011.

  19. 19. Leape, L.L., Berwick, D.M. and Bates, D.W. (2002) What Practices Will Most Improve Safety? Evidence-Based Medicine Meets Patient Safety. JAMA, 288, 501-507.

  20. 20. Johnson, C.D., Krecke, K.N., Miranda, R., Roberts, C.C. and Denham, C. (2009) Developing a Radiology Quality and Safety Program: A Primer. RadioGraphics, 29, 951-959.

  21. 21. Kruskal, J.B. anderson, S., Yam, C.S. and Sosna, J. (2009) Strategies for Establishing a Comprehensive Quality and Performance Improvement Program in a Radiology Department. RadioGraphics, 29, 315-329.

  22. 22. Tehrani, A.S.S., Lee, H., Mathews, S.C., Shore, A., Makary, M.A., Pronovost, P.J. and Newman-Toker, D.E. (2013) 25-Year Summary of US Malpractice Claims for Diagnostic Errors 1986-2010: An Analysis from the National Practitioner Data Bank. BMJ Quality and Safety, 22, 672-680.

  23. 23. Brenner, R.J., Lucey, L.L., Smith, J.J. and Saunders, R. (1998) Radiology and Medical Malpractice Claims: A Report on the Practice Standards Claims Survey of the Physician Insurers Association of America and the American College of Radiology. American Journal of Roentgenology, 171, 19-22.

  24. 24. Moss, J. (1998) Radiology Review. Journal of the Medical Defence Union, 14, 18-20.

  25. 25. Robinson, P.J., Wilson, D., Coral, A., Murphy, A. and Verow, P. (1999) Variation between Experienced Observers in the Interpretation of Accident and Emergency Radiographs. The British Journal of Radiology, 72, 323-330.

  26. 26. Berlin, L. (2001) Dot Size, Lead Time, Fallibility, and Impact on Survival: Continuing Controversies in Mammography. American Journal of Roentgenology, 176, 1123-1130.

  27. 27. Jemal, A., Clegg, L.X., Ward, E., Ries, L.A., Wu, X., Jamison, P.M. and Edwards, B.K. (2004) Annual Report to the Nation on the Status of Cancer, 1975-2001, with a Special Feature Regarding Survival. Cancer: Interdisciplinary International Journal of the American Cancer Society, 101, 3-27.

  28. 28. Gandhi, T.K., Kachalia, A., Thomas, E.J., Puopolo, A.L., Yoon, C., Brennan, T.A. and Studdert, D.M. (2006) Missed and Delayed Diagnoses in the Ambulatory Setting: A Study of Closed Malpractice Claims. Annals of Internal Medicine, 145, 488-496.

  29. 29. Tabár, L., Duffy, S.W., Vitak, B., Chen, H.H. and Prevost, T.C. (1999) The Natural History of Breast Carcinoma: What Have We Learned from Screening? Cancer, 86, 449-462.<449::AID-CNCR13>3.0.CO;2-Q

  30. 30. Warwick, J., Tabàr, L., Vitak, B. and Duffy, S.W. (2004) Time-Dependent Effects on Survival in Breast Carcinoma. Cancer, 100, 1331-1336.

  31. Ganry, O., Peng, J. and Dubreuil, A. (2004) Influence of Abnormal Screens on Delays and Prognostic Indicators of Screen-Detected Breast Carcinoma. Journal of Medical Screening, 11, 28-31.

  32. Shapiro, S., Venet, W., Strax, P., Venet, L. and Roeser, R. (1982) Ten- to Fourteen-Year Effect of Screening on Breast Cancer Mortality. Journal of the National Cancer Institute, 69, 349-355.

  33. Birdwell, R.L., Ikeda, D.M., O’Shaughnessy, K.F. and Sickles, E.A. (2001) Mammographic Characteristics of 115 Missed Cancers Later Detected with Screening Mammography and the Potential Utility of Computer-Aided Detection. Radiology, 219, 192-202.

  34. Bird, R.E., Wallace, T.W. and Yankaskas, B.C. (1992) Analysis of Cancers Missed at Screening Mammography. Radiology, 184, 613-617.

  35. Bech, A.G. (2012) Breast Cancer in Australia: An Overview. Cancer Series No. 71, AIHW, Canberra.

  36. Rawashdeh, M.A., Lee, W.B., Bourne, R.M., Ryan, E.A., Pietrzyk, M.W., Reed, W.M. and Brennan, P.C. (2013) Markers of Good Performance in Mammography Depend on Number of Annual Readings. Radiology, 269, 61-67.

  37. Siegel, R., Naishadham, D. and Jemal, A. (2013) Cancer Statistics, 2013. CA: A Cancer Journal for Clinicians, 63, 11-30.

  38. Lee, C.S., Nagy, P.G., Weaver, S.J. and Newman-Toker, D.E. (2013) Cognitive and System Factors Contributing to Diagnostic Errors in Radiology. American Journal of Roentgenology, 201, 611-617.

  39. Pinto, A. and Brunese, L. (2010) Spectrum of Diagnostic Errors in Radiology. World Journal of Radiology, 2, 377-383.

  40. Graber, M.L., Franklin, N. and Gordon, R. (2005) Diagnostic Error in Internal Medicine. Archives of Internal Medicine, 165, 1493-1499.

  41. Krupinski, E.A. (2000) The Importance of Perception Research in Medical Imaging. Radiation Medicine, 18, 329-334.

  42. Kundel, H.L., Nodine, C.F. and Krupinski, E.A. (1989) Searching for Lung Nodules. Visual Dwell Indicates Locations of False-Positive and False-Negative Decisions. Investigative Radiology, 24, 472-478.

  43. Lund, P.J., Krupinski, E.A., Pereles, S. and Mockbee, B. (1997) Comparison of Conventional and Computed Radiography: Assessment of Image Quality and Reader Performance in Skeletal Extremity Trauma. Academic Radiology, 4, 570-576.

  44. Hu, C.H., Kundel, H.L., Nodine, C.F., Krupinski, E.A. and Toto, L.C. (1994) Searching for Bone Fractures: A Comparison with Pulmonary Nodule Search. Academic Radiology, 1, 25-32.

  45. Krupinski, E.A. (1996) Visual Scanning Patterns of Radiologists Searching Mammograms. Academic Radiology, 3, 137-144.

  46. Nodine, C.F., Kundel, H.L., Mello-Thoms, C., Weinstein, S.P., Orel, S.G., Sullivan, D.C. and Conant, E.F. (1999) How Experience and Training Influence Mammography Expertise. Academic Radiology, 6, 575-585.

  47. Kundel, H.L. and Nodine, C.F. (1983) A Visual Concept Shapes Image Perception. Radiology, 146, 363-368.

  48. Wolfe, J.M., Evans, K.K., Drew, T., Aizenman, A. and Josephs, E. (2016) How Do Radiologists Use the Human Search Engine? Radiation Protection Dosimetry, 169, 24-31.

  49. Kundel, H.L. (1993) Perception and Representation of Medical Images. In: Loew, M.H., Ed., Medical Imaging 1993: Image Processing, Vol. 1898, International Society for Optics and Photonics, Newport Beach, CA, United States, 2-13.

  50. Gale, A.G. and Walker, G.E. (2003) Visual Search in Breast Cancer Screening. Visual Search, 2, 231-238.

  51. Rubin, G.D., Roos, J.E., Tall, M., Harrawood, B., Bag, S., Ly, D.L. and Roy Choudhury, K. (2014) Characterizing Search, Recognition, and Decision in the Detection of Lung Nodules on CT Scans: Elucidation with Eye Tracking. Radiology, 274, 276-286.

  52. Voisin, S., Pinto, F., Morin-Ducote, G., Hudson, K.B. and Tourassi, G.D. (2013) Predicting Diagnostic Error in Radiology via Eye-Tracking and Image Analytics: Preliminary Investigation in Mammography. Medical Physics, 40, Article ID: 101906.

  53. Nodine, C.F., Krupinski, E.A. and Kundel, H.L. (1990) A Perceptually-Based Algorithm Provides Effective Visual Feedback to Radiologists Searching for Lung Nodules. Proceedings of the First Conference on Visualization in Biomedical Computing, IEEE, Atlanta, GA, 22-25 May 1990, 202-207.

  54. Krupinski, E.A., Nodine, C.F. and Kundel, H.L. (1998) Enhancing Recognition of Lesions in Radiographic Images Using Perceptual Feedback. Optical Engineering, 37, 813-819.

  55. Mello-Thoms, C., Hardesty, L., Sumkin, J., Ganott, M., Hakim, C., Britton, C. and Maitz, G. (2005) Effects of Lesion Conspicuity on Visual Search in Mammogram Reading. Academic Radiology, 12, 830-840.

  56. Evans, K.K., Birdwell, R.L. and Wolfe, J.M. (2013) If You Don’t Find it Often, You Often Don’t Find It: Why Some Cancers Are Missed in Breast Cancer Screening. PLoS ONE, 8, e64366.

  57. Hendee, W.R. (2003) Medical Imaging Physics. John Wiley & Sons, Hoboken, NJ.

  58. Haneuse, S., Buist, D.S., Miglioretti, D.L., Anderson, M.L., Carney, P.A., Onega, T. and Elmore, J.G. (2012) Mammographic Interpretive Volume and Diagnostic Mammogram Interpretation Performance in Community Practice. Radiology, 262, 69-79.

  59. Molins, E., Macià, F., Ferrer, F., Maristany, M.T. and Castells, X. (2008) Association between Radiologists’ Experience and Accuracy in Interpreting Screening Mammograms. BMC Health Services Research, 8, 91.

  60. Barlow, W.E., Chi, C., Carney, P.A., Taplin, S.H., D’Orsi, C., Cutter, G. and Elmore, J.G. (2004) Accuracy of Screening Mammography Interpretation by Characteristics of Radiologists. Journal of the National Cancer Institute, 96, 1840-1850.

  61. Miglioretti, D.L., Smith-Bindman, R., Abraham, L., Brenner, R.J., Carney, P.A., Bowles, E.J.A. and Elmore, J.G. (2007) Radiologist Characteristics Associated with Interpretive Performance of Diagnostic Mammography. Journal of the National Cancer Institute, 99, 1854-1863.

  62. Elmore, J.G., Jackson, S.L., Abraham, L., Miglioretti, D.L., Carney, P.A., Geller, B.M. and Sickles, E.A. (2009) Variability in Interpretive Performance at Screening Mammography and Radiologists’ Characteristics Associated with Accuracy. Radiology, 253, 641-651.

  63. Reed, W.M., Lee, W.B., Cawson, J.N. and Brennan, P.C. (2010) Malignancy Detection in Digital Mammograms: Important Reader Characteristics and Required Case Numbers. Academic Radiology, 17, 1409-1413.

  64. Cornford, E., Reed, J., Murphy, A., Bennett, R. and Evans, A. (2011) Optimal Screening Mammography Reading Volumes: Evidence from Real Life in the East Midlands Region of the NHS Breast Screening Programme. Clinical Radiology, 66, 103-107.

  65. Elmore, J.G., Wells, C.K. and Howard, D.H. (1998) Does Diagnostic Accuracy in Mammography Depend on Radiologists’ Experience? Journal of Women’s Health, 7, 443-449.

  66. Esserman, L., Cowley, H., Eberle, C., Kirkpatrick, A., Chang, S., Berbaum, K. and Gale, A. (2002) Improving the Accuracy of Mammography: Volume and Outcome Relationships. Journal of the National Cancer Institute, 94, 369-375.

  67. Beam, C.A., Conant, E.F. and Sickles, E.A. (2003) Association of Volume and Volume-Independent Factors with Accuracy in Screening Mammogram Interpretation. Journal of the National Cancer Institute, 95, 282-290.

  68. Krupinski, E.A. and Berbaum, K.S. (2010) Does Reader Visual Fatigue Impact Interpretation Accuracy? Medical Imaging 2010: Image Perception, Observer Performance, and Technology Assessment, Vol. 7627, San Diego, CA, 17-18 February 2010, 76270M.

  69. Majid, A.S., De Paredes, E.S., Doherty, R.D., Sharma, N.R. and Salvador, X. (2003) Missed Breast Carcinoma: Pitfalls and Pearls. RadioGraphics, 23, 881-895.

  70. Brady, A., Laoide, R.Ó., McCarthy, P. and McDermott, R. (2012) Discrepancy and Error in Radiology: Concepts, Causes and Consequences. The Ulster Medical Journal, 81, 3-9.

  71. Berbaum, K.S., Franken Jr., E.A., Caldwell, R.T. and Schartz, K.M. (2010) Satisfaction of Search in Traditional Radiographic Imaging. In: Samei, E. and Krupinski, E.A., Eds., The Handbook of Medical Image Perception and Techniques, Cambridge University Press, Cambridge, 107-138.

  72. Ashman, C.J., Yu, J.S. and Wolfman, D. (2000) Satisfaction of Search in Osteoradiology. American Journal of Roentgenology, 175, 541-544.

  73. Renfrew, D.L., Franken Jr., E.A., Berbaum, K.S., Weigelt, F.H. and Abu-Yousef, M.M. (1992) Error in Radiology: Classification and Lessons in 182 Cases Presented at a Problem Case Conference. Radiology, 183, 145-150.

  74. Berbaum, K.S., Franken Jr., E.A., Dorfman, D.D., Caldwell, R.T. and Krupinski, E.A. (2000) Role of Faulty Decision Making in the Satisfaction of Search Effect in Chest Radiography. Academic Radiology, 7, 1098-1106.

  75. Manning, D., Ethell, S. and Donovan, T. (2004) Categories of Observer Error from Eye Tracking and AFROC Data. Medical Imaging 2004: Image Perception, Observer Performance, and Technology Assessment, Vol. 5372, San Diego, CA, 4 May 2004, 90-100.

  76. Boyd, N.F., Guo, H., Martin, L.J., Sun, L., Stone, J., Fishell, E. and Yaffe, M.J. (2007) Mammographic Density and the Risk and Detection of Breast Cancer. New England Journal of Medicine, 356, 227-236.

  77. Boyd, N.F., Martin, L.J., Bronskill, M., Yaffe, M.J., Duric, N. and Minkin, S. (2010) Breast Tissue Composition and Susceptibility to Breast Cancer. Journal of the National Cancer Institute, 102, 1224-1237.

  78. Byrne, C., Schairer, C., Wolfe, J., Parekh, N., Salane, M., Brinton, L.A. and Haile, R. (1995) Mammographic Features and Breast Cancer Risk: Effects with Time, Age, and Menopause Status. Journal of the National Cancer Institute, 87, 1622-1629.

  79. ACR BI-RADS Committee (2003) Breast Imaging Reporting and Data System (BI-RADS) Atlas: Mammography, Breast Ultrasound, Magnetic Resonance Imaging. American College of Radiology, Reston, VA.

  80. National Research Council. (2001) Mammography and Beyond: Developing Technologies for the Early Detection of Breast Cancer. National Academies Press, Washington DC.

  81. Haars, G., Van Noord, P.A., Van Gils, C.H., Grobbee, D.E. and Peeters, P.H. (2005) Measurements of Breast Density: No Ratio for a Ratio. Cancer Epidemiology and Prevention Biomarkers, 14, 2634-2640.

  82. Ghosh, K., Brandt, K.R., Sellers, T.A., Reynolds, C., Scott, C.G., Maloney, S.D. and Vachon, C.M. (2008) Association of Mammographic Density with the Pathology of Subsequent Breast Cancer among Postmenopausal Women. Cancer Epidemiology and Prevention Biomarkers, 17, 872-879.

  83. Kerlikowske, K., Grady, D. and Barclay, J. (1996) Effect of Age, Breast Density, and Family History on the Sensitivity of First Screening Mammography. JAMA, 276, 33-38.

  84. Rosenberg, R.D., Hunt, W.C., Williamson, M.R., Gilliland, F.D., Wiest, P.W., Kelsey, C.A. and Linver, M.N. (1998) Effects of Age, Breast Density, Ethnicity, and Estrogen Replacement Therapy on Screening Mammographic Sensitivity and Cancer Stage at Diagnosis: Review of 183,134 Screening Mammograms in Albuquerque, New Mexico. Radiology, 209, 511-518.

  85. Mandelson, M.T., Oestreicher, N., Porter, P.L., White, D., Finder, C.A., Taplin, S.H. and White, E. (2000) Breast Density as a Predictor of Mammographic Detection: Comparison of Interval- and Screen-Detected Cancers. Journal of the National Cancer Institute, 92, 1081-1087.

  86. Kolb, T.M., Lichy, J. and Newhouse, J.H. (2002) Comparison of the Performance of Screening Mammography, Physical Examination, and Breast US and Evaluation of Factors That Influence Them: An Analysis of 27,825 Patient Evaluations. Radiology, 225, 165-175.

  87. Carney, P.A., Miglioretti, D.L., Yankaskas, B.C., Kerlikowske, K., Rosenberg, R., Rutter, C.M. and Cutter, G. (2003) Individual and Combined Effects of Age, Breast Density, and Hormone Replacement Therapy Use on the Accuracy of Screening Mammography. Annals of Internal Medicine, 138, 168-175.

  88. Berg, W.A., Gutierrez, L., NessAiver, M.S., Carter, W.B., Bhargavan, M., Lewis, R.S. and Ioffe, O.B. (2004) Diagnostic Accuracy of Mammography, Clinical Examination, US, and MR Imaging in Preoperative Assessment of Breast Cancer. Radiology, 233, 830-849.

  89. Kriege, M., Brekelmans, C.T., Obdeijn, I.M., Boetes, C., Zonderland, H.M., Muller, S.H. and Seynaeve, C. (2006) Factors Affecting Sensitivity and Specificity of Screening Mammography and MRI in Women with an Inherited Risk for Breast Cancer. Breast Cancer Research and Treatment, 100, 109-119.

  90. Osako, T., Iwase, T., Takahashi, K., Iijima, K., Miyagi, Y., Nishimura, S. and Kasumi, F. (2007) Diagnostic Mammography and Ultrasonography for Palpable and Nonpalpable Breast Cancer in Women Aged 30 to 39 Years. Breast Cancer, 14, 255-259.

  91. Osako, T., Takahashi, K., Iwase, T., Iijima, K., Miyagi, Y., Nishimura, S. and Kasumi, F. (2007) Diagnostic Ultrasonography and Mammography for Invasive and Noninvasive Breast Cancer in Women Aged 30 to 39 Years. Breast Cancer, 14, 229-233.

  92. Cawson, J.N., Nickson, C., Amos, A., Hill, G., Whan, A.B. and Kavanagh, A.M. (2009) Invasive Breast Cancers Detected by Screening Mammography: A Detailed Comparison of Computer-Aided Detection-Assisted Single Reading and Double Reading. Journal of Medical Imaging and Radiation Oncology, 53, 442-449.

  93. Cook, A.J., Elmore, J.G., Miglioretti, D.L., Sickles, E.A., Bowles, E.J.A., Cutter, G.R. and Carney, P.A. (2010) Decreased Accuracy in Interpretation of Community-Based Screening Mammography for Women with Multiple Clinical Risk Factors. Journal of Clinical Epidemiology, 63, 441-451.

  94. Lehman, C.D., White, E., Peacock, S., Drucker, M.J. and Urban, N. (1999) Effect of Age and Breast Density on Screening Mammograms with False-Positive Findings. American Journal of Roentgenology, 173, 1651-1655.

  95. Goergen, S.K., Evans, J., Cohen, G.P. and MacMillan, J.H. (1997) Characteristics of Breast Carcinomas Missed by Screening Radiologists. Radiology, 204, 131-135.

  96. Feig, S.A. (1992) Breast Masses. Mammographic and Sonographic Evaluation. Radiologic Clinics of North America, 30, 67-92.

  97. Franquet, T., De Miguel, C., Cozcolluela, R. and Donoso, L. (1993) Spiculated Lesions of the Breast: Mammographic-Pathologic Correlation. RadioGraphics, 13, 841-852.

  98. Pohlman, S., Powell, K.A., Obuchowski, N.A., Chilcote, W.A. and Grundfest-Broniatowski, S. (1996) Quantitative Classification of Breast Tumors in Digitized Mammograms. Medical Physics, 23, 1337-1345.

  99. Rangayyan, R.M., Mudigonda, N.R. and Desautels, J.L. (2000) Boundary Modelling and Shape Analysis Methods for Classification of Mammographic Masses. Medical and Biological Engineering and Computing, 38, 487-496.

  100. Yankaskas, B.C., Schell, M.J., Bird, R.E. and Desrochers, D.A. (2001) Reassessment of Breast Cancers Missed during Routine Screening Mammography: A Community-Based Study. American Journal of Roentgenology, 177, 535-541.