Abstract:
Background: Previous studies of artificial intelligence (AI) applied to dermatology have shown AI to have higher diagnostic classification accuracy than expert dermatologists; however, these studies did not adequately assess clinically realistic scenarios, such as how AI systems behave when presented with images of disease categories that are not included in the training dataset, or with images drawn from statistical distributions that are significantly shifted from the training distributions. We aimed to simulate these real-world scenarios and evaluate the effects of image source institution, diagnoses outside of the training set, and other image artifacts on classification accuracy, with the goal of informing clinicians and regulatory agencies about safety and real-world accuracy.

Methods: We designed a large dermoscopic image classification challenge to quantify the performance of machine learning algorithms for the task of skin cancer classification from dermoscopic images, and how this performance is affected by shifts in statistical distributions of data, disease categories not represented in training datasets, and imaging or lesion artifacts. Factors that might be beneficial to performance, such as clinical metadata and external training data collected by challenge participants, were also evaluated. 25 331 training images collected from two datasets (in Vienna [HAM10000] and Barcelona [BCN20000]) between Jan 1, 2000, and Dec 31, 2018, across eight skin diseases, were provided to challenge participants to design appropriate algorithms. The trained algorithms were then tested for balanced accuracy against the HAM10000 and BCN20000 test datasets and data from countries not included in the training dataset (Turkey, New Zealand, Sweden, and Argentina). Test datasets contained images of all diagnostic categories available in training plus other diagnoses not included in training data (the not-trained category). We compared the performance of the algorithms against that of 18 dermatologists in a simulated setting that reflected intended clinical use.

Findings: 64 teams submitted 129 state-of-the-art algorithm predictions on a test set of 8238 images. The best performing algorithm achieved 58·8% balanced accuracy on the BCN20000 data, which was designed to better reflect realistic clinical scenarios, compared with 82·0% balanced accuracy on HAM10000, which was used in a previously published benchmark. Shifted statistical distributions and disease categories not included in training data contributed to decreases in accuracy. Image artifacts, including hair, pen markings, ulceration, and imaging source institution, decreased accuracy in a complex manner that varied with the underlying diagnosis. When comparing algorithms with expert dermatologists (2460 ratings on 1269 images), algorithms performed better than experts in most categories, except for actinic keratoses (similar accuracy on average) and images from categories not included in training data (26% correct for experts vs 6% correct for algorithms, p<0·0001). For the top 25 submitted algorithms, 47·1% of the images from categories not included in training data were misclassified as malignant diagnoses, which would lead to a substantial number of unnecessary biopsies if current state-of-the-art AI technologies were clinically deployed.

Interpretation: We have identified specific deficiencies and safety issues in AI diagnostic systems for skin cancer that should be addressed in future diagnostic evaluation protocols to improve safety and reliability in clinical practice.

Funding: Melanoma Research Alliance and La Marató de TV3.

© 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license.
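Balanced accuracy, the headline metric of the challenge, is the unweighted mean of per-class recalls, so rare diagnoses (such as melanoma) count as much as common ones (such as nevi). A minimal sketch of the metric, assuming simple string class labels (this is not the challenge's actual evaluation code):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: each diagnostic category contributes
    equally, regardless of how many images it has in the test set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    # Average recall over the classes present in the ground truth.
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy example: always predicting the majority class scores 80% plain
# accuracy here, but only 50% balanced accuracy (melanoma recall is 0).
y_true = ["nevus"] * 8 + ["melanoma"] * 2
y_pred = ["nevus"] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

This illustrates why balanced accuracy is preferred for class-imbalanced diagnostic tasks: a classifier cannot inflate its score by ignoring the rare malignant classes.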
Authors:
Combalia, M.
Codella, N.
Rotemberg, V.
Carrera, C.
Dusza, S.
Gutman, D.
Helba, B.
Kittler, H.
Kurtansky, N.R.
Liopyris, K.
Marchetti, M.A.
Podlipnik, S.
Puig, S.
Rinner, C.
Tschandl, P.
Weber, J.
Halpern, A.
Malvehy, J.
Keywords:
actinic keratosis; area under the curve; Argentina; article; artificial intelligence; basal cell carcinoma; cancer classification; dermatofibroma; dermatologist; dermoscopy; diagnostic accuracy; diagnostic imaging; diagnostic test accuracy study; epiluminescence microscopy; human; image analysis; image artifact; image processing; imaging algorithm; intermethod comparison; Italy; keratosis; machine learning; melanoma; New Zealand; pathology; patient safety; prediction; procedures; quantitative analysis; receiver operating characteristic; reproducibility of results; sensitivity and specificity; simulation; skin biopsy; skin cancer; skin neoplasms; Spain; squamous cell carcinoma; statistical analysis; statistical distribution; Sweden; technology; Turkey; unnecessary procedure