An artificial intelligence framework for automatic segmentation and volumetry of vestibular schwannomas from contrast-enhanced T1-weighted and high-resolution T2-weighted MRI

  • 1 Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London;
  • 2 Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London;
  • 3 School of Biomedical Engineering & Imaging Sciences, King’s College London, United Kingdom;
  • 4 School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China;
  • 5 Queen Square Radiosurgery Centre (Gamma Knife) and
  • 6 Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London;
  • 7 The Ear Institute, University College London; and
  • 8 The Royal National Throat, Nose and Ear Hospital, London, United Kingdom

OBJECTIVE

Automatic segmentation of vestibular schwannomas (VSs) from MRI could significantly improve clinical workflow and assist in patient management. Accurate tumor segmentation and volumetric measurements provide the best indicators to detect subtle VS growth, but current techniques are labor intensive and dedicated software is not readily available within the clinical setting. The authors aim to develop a novel artificial intelligence (AI) framework to be embedded in the clinical routine for automatic delineation and volumetry of VS.

METHODS

Imaging data (contrast-enhanced T1-weighted [ceT1] and high-resolution T2-weighted [hrT2] MR images) from all patients meeting the study’s inclusion/exclusion criteria who had a single sporadic VS treated with Gamma Knife stereotactic radiosurgery were used to create a model. The authors developed a novel AI framework based on a 2.5D convolutional neural network (CNN) to exploit the different in-plane and through-plane resolutions encountered in standard clinical imaging protocols. They used a computational attention module to enable the CNN to focus on the small VS target and proposed supervision of the attention map for more accurate segmentation. The manually segmented target tumor volume (also tested for interobserver variability) was used as the ground truth for training and evaluation of the CNN. The authors quantitatively measured the Dice score, average symmetric surface distance (ASSD), and relative volume error (RVE) of the automatic segmentation results in comparison to manual segmentations to assess the model’s accuracy.
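The three evaluation metrics named above have standard definitions for binary segmentation masks. The following is a minimal illustrative sketch (not the authors' implementation) of how Dice, ASSD, and RVE can be computed from 3D boolean arrays using NumPy and SciPy; the voxel spacing and function names are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def dice_score(pred, gt):
    # Dice = 2|A ∩ B| / (|A| + |B|), reported as a percentage
    inter = np.logical_and(pred, gt).sum()
    return 200.0 * inter / (pred.sum() + gt.sum())

def relative_volume_error(pred, gt, voxel_volume=1.0):
    # RVE = |V_pred − V_gt| / V_gt, reported as a percentage
    v_pred = pred.sum() * voxel_volume
    v_gt = gt.sum() * voxel_volume
    return 100.0 * abs(v_pred - v_gt) / v_gt

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    # Average symmetric surface distance: mean distance from each surface
    # voxel of one mask to the nearest surface voxel of the other, in mm.
    def surface(mask):
        eroded = ndimage.binary_erosion(mask)
        return np.logical_and(mask, np.logical_not(eroded))
    s_pred, s_gt = surface(pred), surface(gt)
    # Distance transform of the complement of a surface gives, at every
    # voxel, the distance to that surface (scaled by voxel spacing).
    d_gt = ndimage.distance_transform_edt(np.logical_not(s_gt), sampling=spacing)
    d_pred = ndimage.distance_transform_edt(np.logical_not(s_pred), sampling=spacing)
    dists = np.concatenate([d_gt[s_pred], d_pred[s_gt]])
    return dists.mean()
```

In practice, anisotropic voxel spacing (large slice thickness relative to in-plane resolution, as in the clinical protocols described above) must be passed to the distance transform so ASSD is reported in millimeters rather than voxels.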

RESULTS

Imaging data from all eligible patients (n = 243) were randomly split into 3 nonoverlapping groups for training (n = 177), hyperparameter tuning (n = 20), and testing (n = 46). Dice, ASSD, and RVE scores were measured on the testing set for the respective input data types as follows: ceT1 93.43%, 0.203 mm, 6.96%; hrT2 88.25%, 0.416 mm, 9.77%; combined ceT1/hrT2 93.68%, 0.199 mm, 7.03%. Given a margin of 5% for the Dice score, the automated method was shown to achieve statistically equivalent performance in comparison to an annotator using ceT1 images alone (p = 4e−13) and combined ceT1/hrT2 images (p = 7e−18) as inputs.
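The equivalence claim above asks whether the model's Dice scores fall within a 5% margin of the human annotator's, rather than whether they differ. One generic way to test this on paired per-case scores is the two one-sided tests (TOST) procedure; the sketch below is an illustrative paired-t TOST, not the specific test used in the study:

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    # Two one-sided tests (TOST) for paired equivalence:
    # H0: |mean(x − y)| >= margin  vs  H1: |mean(x − y)| < margin.
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # tests mean(d) > −margin
    t_upper = (d.mean() - margin) / se   # tests mean(d) < +margin
    p_lower = 1.0 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    # Equivalence is declared when the larger of the two p values is small.
    return max(p_lower, p_upper)
```

With a 5-point Dice margin, a small p value from such a test supports the claim that the automated and manual measurements are interchangeable within that margin.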

CONCLUSIONS

The authors developed a robust AI framework for automatically delineating and calculating VS tumor volume and achieved excellent results, equivalent to those of an independent human annotator. This promising AI technology has the potential to improve the management of patients with VS and may extend to other brain tumors.

ABBREVIATIONS AI = artificial intelligence; ASSD = average symmetric surface distance; ceT1 = contrast-enhanced T1-weighted; CNN = convolutional neural network; DL = deep learning; GK = Gamma Knife; HDL = hardness-weighted Dice loss; hrT2 = high-resolution T2-weighted; RVE = relative volume error; SpvA = supervised attention module; SRS = stereotactic radiosurgery; VS = vestibular schwannoma.


Contributor Notes

Correspondence Jonathan Shapey: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, London, United Kingdom. j.shapey@ucl.ac.uk.

INCLUDE WHEN CITING Published online December 6, 2019; DOI: 10.3171/2019.9.JNS191949.

J.S. and G.W. contributed equally to the study.

Disclosures This work was supported by Wellcome Trust (203145Z/16/Z, 203148/Z/16/Z, WT106882) and EPSRC (NS/A000050/1, NS/A000049/1) funding. Reuben Dorent reports the following nonfinancial relationships: a personal relationship with Richard Dorent and academic competitions with King’s College London and CentraleSupélec. Sebastien Ourselin reports a financial relationship with Medtronic. Tom Vercauteren is supported by a Medtronic/Royal Academy of Engineering Research Chair (RCSRF1819\7\34) and reports direct stock ownership in Mauna Kea Technologies.

