Fully automatic brain tumor segmentation for 3D evaluation in augmented reality

Tim Fick, MD; Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
Jesse A. M. van Doormaal, BSc; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
Lazar Tosic, MD; Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland
Renate J. van Zoest, MD, MSc; Department of Neurology and Neurosurgery, Curaçao Medical Center, Willemstad, Curaçao
Jene W. Meulstee, MSc, PhD; Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
Eelco W. Hoving, MD, PhD; Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
Tristan P. C. van Doormaal, MD, PhD; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland

OBJECTIVE

For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this algorithm into an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validated the accuracy and efficiency of this automatic segmentation algorithm for brain tumors by comparing its output with a manually segmented ground truth set.

METHODS

Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile of Hausdorff distance (HD95).

RESULTS

The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningioma (mean DSC 0.89 and median 0.92) showed greater DSC than metastasis (mean 0.84 and median 0.85). Automatic segmentation was more accurate for supratentorial metastasis (mean DSC 0.86 and median 0.87; mean HD95 3.62 mm and median 3.11 mm) than for infratentorial metastasis (mean DSC 0.82 and median 0.81; mean HD95 5.26 mm and median 4.72 mm).

CONCLUSIONS

The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporating other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of the augmented reality workflow.

ABBREVIATIONS

AR = augmented reality; ASSD = average symmetric surface distance; DSC = Sørensen-Dice similarity coefficient; HD95 = 95th percentile of Hausdorff distance; HMD = head-mounted display.

Augmented reality (AR) is a technology that superimposes interactive virtual 3D models onto the user's view of the real world. In recent years, interest in this technology has increased, especially since major innovations in head-mounted displays (HMDs).1 AR-HMDs facilitate a new way to evaluate medical images and could generate better understanding of anatomical structures. The stereoscopic view allows the user to appreciate virtual models in true 3D, and hand, head, and eye tracking create new ways to interact with and manipulate those models. Studies have shown that AR is particularly effective for visualizing complex pathology or anatomy.2 In addition, 3D visualizations of the relevant anatomical and pathological structures may enhance the planning and preparation of surgical interventions.3 However, current workflows for visualizing 3D models with an AR-HMD require multiple manual steps, making the process too cumbersome to integrate into daily clinical practice.

Several steps need to be taken to create 3D models and make them suitable for use with AR-HMD. DICOM images need to be exported from the hospital PACS and saved in a secure environment. 3D models need to be created from DICOM images with segmentation, which is the process of delineating the anatomical structures of interest in medical images. After segmentation, these models need to be optimized for visualization and simplified to account for limited processing power. Furthermore, the models need to be converted into different file formats in order to make them applicable for different viewing modalities.
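To make these steps concrete, the sketch below illustrates the mask-to-model portion of such a pipeline in Python. SimpleITK, scikit-image, and trimesh are assumed stand-ins chosen for illustration, the directory and file names are hypothetical, and the thresholding step is a placeholder rather than a real segmentation:

```python
import SimpleITK as sitk
import numpy as np
from skimage.measure import marching_cubes
import trimesh

# 1) Read a DICOM series into a 3D volume (hypothetical directory name).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("t1_dicom_series/"))
image = reader.Execute()
volume = sitk.GetArrayFromImage(image)      # axes: (slice, row, col)
spacing = image.GetSpacing()[::-1]          # reverse to match the array axes

# 2) Segmentation would produce a binary mask here; this intensity
#    threshold is only a placeholder, not the study's algorithm.
mask = volume > np.percentile(volume, 99)

# 3) Extract a triangle mesh on the mask boundary, scaled to millimeters.
verts, faces, _, _ = marching_cubes(mask.astype(np.uint8), level=0.5,
                                    spacing=spacing)

# 4) Export in a format a 3D/AR viewer can load.
trimesh.Trimesh(vertices=verts, faces=faces).export("tumor_model.obj")
```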

The segmentation process is crucial for ensuring the quality of 3D models but can be time consuming, especially when performed manually or semiautomatically, as is still required for use in clinical neuronavigation systems.4,5 Recent studies have presented several automatic segmentation solutions.6–9 However, these algorithms often require separate software that must run actively on a local computer, obligating the user to keep that computer running throughout the process and demanding substantial processing power that is not always available in the workplace. A cloud-based solution obviates these requirements and is preferred in a dynamic hospital environment.

To make this technique more accessible to surgeons, we built a fully automatic workflow that uses DICOM images to create 3D models suitable for AR-HMDs and is integrated with an automatic segmentation algorithm that segments skin, brain, ventricles, and tumor. Accurate tumor segmentation is vital for use in clinical settings, but this can be challenging for automatic algorithms owing to large variations in shape, size, and voxel intensity among tumors. In this study, we validated the accuracy of the segmentation algorithm for a large variety of tumors by using pairwise comparisons of the automatically segmented data with a ground truth data set of the same imaging sequences that had been segmented manually.

Methods

Data Set

For this study, we included 50 anonymized MR images of patients with brain tumors who underwent operations between 2019 and 2020 at the neurosurgery department of University Medical Center Utrecht. The following inclusion criteria were applied: 1) axial T1-weighted MRI data with contrast series available for 100 slices or more; 2) tumor volume of 5 cm3 or greater; 3) presence of tumor contrast enhancement; and 4) intracranial tumor location. Tumors in the pituitary region were excluded from this study.

Ground Truth Segmentation

Ground truth segmentations were created using medical segmentation software (3D Slicer, Massachusetts Institute of Technology) and by manually delineating the tumor in each axial slice. No automatic or semiautomatic functions were used. A protocol was written to ensure that every segmentation was done in a uniform manner. Segmentations were exported as binary data arrays.
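3D Slicer commonly exports segmentations as NRRD label maps; below is a minimal sketch of turning such an export into a binary data array, assuming the pynrrd package and a hypothetical filename:

```python
import nrrd  # pynrrd, assumed here for illustration
import numpy as np

# Read a label map exported from 3D Slicer and binarize it: voxels
# labeled as tumor become True, all other voxels become False.
labels, header = nrrd.read("ground_truth_tumor.nrrd")
ground_truth = labels > 0
np.save("ground_truth_tumor.npy", ground_truth)
```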

Automatic Segmentation and AR Workflow

Automatic segmentation was performed with an automatic cloud-based workflow (AugSafe, Augmedit) that automatically generates 3D models from DICOM images that are compatible for visualization with AR-HMDs. The complete workflow is shown in Fig. 1.

FIG. 1.

Flow diagram showing an overview of the complete workflow from acquisition of DICOM images to 3D visualization.

First, MR images in DICOM format were imported into the cloud environment, which can be accessed through a web-based interface on a computer or through an application on the AR-HMD (HoloLens, Microsoft).

Second, the automatic segmentation process, which was embedded within the cloud environment, was initialized. The images were transferred to an external server running the segmentation algorithm (Disior), which automatically segmented the skin, brain, ventricle, and tumor surfaces. The segmentation algorithm created image-specific thresholds and then set up spheres with adaptive meshing in order to expand and capture the radiological boundaries of the target tissues. The expanding spheres enabled robust handling of noisy and/or poor-contrast regions in the image. This method has previously been used for orbital volume calculations.10–13
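The Disior algorithm itself is proprietary, so the sketch below is only a conceptual analogue, not the study's implementation: scikit-image's morphological geodesic active contour similarly grows a spherical level set outward under a balloon force until it settles on image boundaries. All names and parameters are from scikit-image, and the seed and radius are illustrative:

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def grow_sphere(volume, seed, radius=3, iterations=200):
    """Expand a spherical seed until it locks onto image boundaries.

    Conceptual analogue of an expanding-sphere segmentation; not the
    proprietary algorithm used in the study.
    """
    # Edge-stopping map: values drop near strong gradients (tissue borders).
    gimage = inverse_gaussian_gradient(volume.astype(float))

    # Initial level set: a small sphere centered on the seed voxel.
    zz, yy, xx = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    init = ((zz - seed[0]) ** 2 + (yy - seed[1]) ** 2
            + (xx - seed[2]) ** 2) <= radius ** 2

    # balloon=1 pushes the contour outward wherever the image offers
    # little guidance, mimicking the expanding sphere described above.
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init.astype(np.int8),
        smoothing=1, balloon=1)
```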

Third, the resulting 3D models were saved in the cloud, and each individual anatomical structure was simplified to account for the limited processing power of current AR-HMDs while maintaining performance and preserving the relevant anatomical detail. Furthermore, visualization properties, such as color and transparency, were automatically optimized.
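Such simplification is commonly done with quadric edge-collapse decimation; below is a sketch using Open3D, which is an assumed tool (the workflow's actual tooling is not described), and the 20,000-face budget is an arbitrary illustrative figure:

```python
import open3d as o3d

# Reduce the triangle count of a full-resolution model so that a mobile
# AR headset can render it smoothly (hypothetical filenames).
mesh = o3d.io.read_triangle_mesh("brain_model.obj")
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
simplified.compute_vertex_normals()  # recompute normals for correct shading
o3d.io.write_triangle_mesh("brain_model_simplified.obj", simplified)
```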

Lastly, the 3D models were converted to different file formats and saved in the cloud to allow viewing with different modalities, such as the integrated 3D viewer of the web-based interface (Fig. 2 upper) or a holographic scene on an AR-HMD (Fig. 2 lower). The 3D models can be viewed within the original DICOM image, and each individual 3D model can be manipulated and viewed from different angles. The quality of segmentation can be visually verified by selecting an anatomical structure and inspecting the outline of the segmentation on the corresponding DICOM slice (Fig. 3). Other devices can view the 3D models simply by scanning an automatically generated QR (Quick Response) code. Each included scan was sequentially segmented using this workflow, and the segmentations were exported as binary data arrays.
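Generating such a code is simple in principle; here is a sketch with the qrcode package (an assumption, since the platform's actual mechanism is not described) and a hypothetical scene URL:

```python
import qrcode  # assumed package for illustration

# Encode the (hypothetical) URL of a stored holographic scene so another
# device can open the same 3D models by scanning the displayed code.
img = qrcode.make("https://example.com/scenes/case-001")
img.save("scene_qr.png")
```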

FIG. 2.

3D models generated with automatic segmentation and shown through an embedded 3D viewer in a web-based interface (upper) and an AR-HMD (lower).

FIG. 3.

Axial MR image showing the circumference of an automatically segmented tumor.

Outcome Measures

To evaluate the accuracy of the segmentation algorithm, the ground truth and automatically segmented data were compared using both volumetric analysis statistics and surface analysis statistics.14 Binary data arrays of the manual and automatic segmentations were imported into a custom MATLAB script (MathWorks) where all calculations were performed.

For volumetric analysis, we used the Sørensen-Dice similarity coefficient (DSC). This statistic can be described using the formula

$$\mathrm{DSC}(AS, GT) = \frac{2\,|AS \cap GT|}{|AS| + |GT|},$$

which calculates the quotient of similarity between two volumetric sets with a value between 0 and 1. In this formula, AS represents the automatically segmented set and GT represents the ground truth set. A DSC of 1 means perfect segmentation, whereas a DSC of 0 means no overlap at all.
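The study computed its metrics in a custom MATLAB script; as a sketch only (not the authors' script), the DSC of two exported binary arrays could be computed like this:

```python
import numpy as np

def dice(as_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Sørensen-Dice similarity coefficient of two binary volumes."""
    as_mask = as_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    intersection = np.logical_and(as_mask, gt_mask).sum()
    return 2.0 * intersection / (as_mask.sum() + gt_mask.sum())
```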
For surface analysis, we compared corresponding data using the average symmetric surface distance (ASSD), which can be described using the following formula:

$$\mathrm{ASSD}(B_{AS}, B_{GT}) = \frac{\sum_{x \in B_{AS}} d(x, B_{GT}) + \sum_{y \in B_{GT}} d(y, B_{AS})}{|B_{AS}| + |B_{GT}|}.$$

This formula calculates the average distance between two surface sets in millimeters. In this formula, $B_{AS}$ represents the boundary of the automatically segmented set, $B_{GT}$ represents the boundary of the ground truth set, and $d$ represents the Euclidean distance between elements.15 Furthermore, the 95th percentile of Hausdorff distance (HD95) was used to describe surface data. Similar to conventional HD, this metric describes the maximum minimal distance between sets in millimeters, but the impact of a small subset of outliers is eliminated by using the 95th percentile of distances. This statistic can be described by selecting the Kth ranked distance using the following formula:

$$h_{95}(AS, GT) = \underset{x \in AS}{K^{\mathrm{th}}} \min_{y \in GT} d(x, y).$$

In this formula, $h_{95}(AS, GT)$ is the Kth ranked minimum distance with K set at the 95th percentile, AS represents the boundary points of the automatically segmented set, and GT represents the boundary points of the ground truth set. HD95 is then defined with the following formula:

$$\mathrm{HD95}(AS, GT) = \max\bigl(h_{95}(AS, GT),\, h_{95}(GT, AS)\bigr).$$
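The surface metrics can be sketched the same way, here approximating each boundary by the centers of its surface voxels scaled by the scan's voxel spacing and using SciPy k-d trees for nearest-neighbor distances (again a sketch, not the authors' MATLAB implementation):

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def boundary_points(mask, spacing):
    """Coordinates (mm) of the surface voxels of a binary mask."""
    mask = mask.astype(bool)
    surface = mask & ~binary_erosion(mask)
    return np.argwhere(surface) * np.asarray(spacing)

def surface_distances(as_mask, gt_mask, spacing=(1.0, 1.0, 1.0)):
    """Directed nearest-neighbor distances between the two boundaries."""
    b_as = boundary_points(as_mask, spacing)
    b_gt = boundary_points(gt_mask, spacing)
    d_as_to_gt = cKDTree(b_gt).query(b_as)[0]
    d_gt_to_as = cKDTree(b_as).query(b_gt)[0]
    return d_as_to_gt, d_gt_to_as

def assd(as_mask, gt_mask, spacing=(1.0, 1.0, 1.0)):
    d1, d2 = surface_distances(as_mask, gt_mask, spacing)
    return (d1.sum() + d2.sum()) / (len(d1) + len(d2))

def hd95(as_mask, gt_mask, spacing=(1.0, 1.0, 1.0)):
    d1, d2 = surface_distances(as_mask, gt_mask, spacing)
    return max(np.percentile(d1, 95), np.percentile(d2, 95))
```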

The segmentation time of each automatic run was recorded to evaluate the efficiency of the algorithm. Statistical analysis was performed with SPSS version 26.0 (IBM Corp.). Normality tests showed that the data were skewed for all outcome measures; therefore, nonparametric tests were used to compare groups. Linear regression was used to relate tumor volume to DSC, HD95, and ASSD.
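For illustration, the equivalent tests are available in SciPy; the arrays below hold made-up placeholder values, not the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder per-case DSC values by pathology (illustrative only).
dsc_meningioma   = np.array([0.92, 0.88, 0.95, 0.81, 0.90])
dsc_metastasis   = np.array([0.85, 0.84, 0.79, 0.86, 0.82])
dsc_glioblastoma = np.array([0.90, 0.87, 0.85, 0.88, 0.91])

# Kruskal-Wallis H-test for an effect of pathology on DSC.
h_stat, p_kw = stats.kruskal(dsc_meningioma, dsc_metastasis, dsc_glioblastoma)

# Post hoc pairwise Mann-Whitney U-test; Bonferroni-adjusted alpha = 0.05 / 3.
u_stat, p_mw = stats.mannwhitneyu(dsc_metastasis, dsc_meningioma)

# Linear regression of DSC on tumor volume (cm^3), placeholder volumes.
volumes = np.array([48.4, 21.5, 52.4, 30.0, 12.8])
reg = stats.linregress(volumes, dsc_meningioma)

print(f"H = {h_stat:.2f}, p = {p_kw:.3f}; U = {u_stat:.1f}, p = {p_mw:.3f}; "
      f"slope = {reg.slope:.4f}, R^2 = {reg.rvalue ** 2:.3f}")
```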

Results

Fifty scans were included in the analysis. The mean (range) number of slices per sequence was 269 (140–380), and the mean (range) tumor volume was 39.4 (5.1–113.3) cm3. All 50 scans were computed successfully by the segmentation algorithm. The resulting data from our analysis are summarized in Table 1. The mean ± SD computation time of the segmentation algorithm was 753 ± 128 seconds, compared with a mean ± SD manual segmentation time of 6212 ± 2039 seconds. The mean (95% CI) difference was 5459 (4886–6032) seconds. The mean ± SD DSC was 0.87 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm.

TABLE 1.

Accuracy and efficiency results of automatic segmentation

Characteristic | DSC | ASSD (mm) | HD95 (mm) | Vol (cm3) | Segmentation Time (sec)
Overall (n = 50) | 0.87 ± 0.07 | 1.31 ± 0.63 | 4.80 ± 3.18 | 39.4 ± 33.1 | 753 ± 128
Meningioma, all (n = 15) | 0.89 ± 0.08 | 1.19 ± 0.73 | 4.61 ± 2.84 | 48.4 ± 34.5 |
Meningioma, supratentorial (n = 14) | 0.88 ± 0.08 | 1.23 ± 0.74 | 4.81 ± 2.84 | 48.6 ± 35.8 |
Meningioma, infratentorial (n = 1) | 0.95 | 0.67 | 1.88 | 43.1 |
Metastasis, all (n = 19) | 0.84 ± 0.07 | 1.36 ± 0.65 | 4.40 ± 1.77 | 21.5 ± 18.3 |
Metastasis, supratentorial (n = 10) | 0.86 ± 0.09 | 1.30 ± 0.85 | 3.62 ± 1.58 | 26.0 ± 19.8 |
Metastasis, infratentorial (n = 9) | 0.82 ± 0.04 | 1.43 ± 0.35 | 5.26 ± 1.61 | 16.5 ± 10.1 |
Glioblastoma (n = 16) | 0.88 ± 0.06 | 1.36 ± 0.53 | 5.46 ± 4.60 | 52.4 ± 38.4 |

Values are shown as mean or mean ± SD.

The Kruskal-Wallis test showed that pathology significantly affected DSC (H [2] = 7.87, p = 0.02). Post hoc Mann-Whitney U-tests with a Bonferroni-adjusted alpha level of 0.017 (0.05/3) were used to compare all pairs of groups. Metastasis (mean 0.84 and median 0.85) had lower DSC than meningioma (mean 0.89 and median 0.92), and this difference was statistically significant (U [n(metastasis) = 19, n(meningioma) = 15] = 72, z = −2.45, p = 0.01). No statistically significant differences in HD95 (p = 0.68) and ASSD (p = 0.24) were found among the included pathologies.

Automatic segmentation was more accurate for supratentorial metastasis (mean DSC 0.86 and median 0.87; mean HD95 3.62 mm and median 3.11 mm) than for infratentorial metastasis (mean DSC 0.82 and median 0.81; mean HD95 5.26 mm and median 4.72 mm). The Mann-Whitney U-test showed that these differences were statistically significant for DSC (U [n(supratentorial) = 10, n(infratentorial) = 9] = 21, z = −1.96, p = 0.05) and HD95 (U [n(supratentorial) = 10, n(infratentorial) = 9] = 14.5, z = −2.49, p = 0.01).

When we compared tumor volume and the corresponding DSC using linear regression, we found a significant regression equation (F [1, 48] = 10.985, p = 0.002) with an R2 of 0.186. The predicted DSC was equal to 0.832 + 0.001 per cubic centimeter of tumor volume, implying that, on average, larger tumors have higher DSC. No significant associations were found between tumor volume and HD95 (p = 0.56) or ASSD (p = 0.96).

Figure 4 presents the distances between the surface points of the two compared sets across all tumors. This representation accounts for a total of 1,110,917 points, of which 58.0% had a distance less than 1 mm and 79.1% had a distance less than 2 mm. Furthermore, frequency declined rapidly with increasing distance: only 3.5% of points had a distance greater than 5 mm, distances greater than 10 mm were rare, and the maximum outlier was 30.28 mm. Outliers were mostly caused by small subsets of included voxels that deviated largely from the actual tumor, even though the tumor itself was segmented accurately (Fig. 5).
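As a sketch of how such a pooled distance distribution can be summarized (random placeholder distances stand in for the study's 1,110,917 real surface points):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder stand-in for the pooled surface distances (mm) of all cases.
distances = rng.rayleigh(scale=1.2, size=1_000_000)

print(f"{(distances < 1).mean():.1%} of points < 1 mm, "
      f"{(distances < 2).mean():.1%} < 2 mm, "
      f"{(distances > 5).mean():.1%} > 5 mm, "
      f"max = {distances.max():.2f} mm")
```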

FIG. 4.

Histogram showing the distances between all points included in the two compared surface sets with all tumors.

FIG. 5.

Heatmap (left) and 3D model (right) of tumor segmentation. Small subsets of outliers (encircled) unrelated to the tumor are shown, but the rest of the segmentation is accurate.

Additional qualitative impressions are presented to illustrate characteristic difficulties with automatic segmentation (Fig. 6). When the tumor is not clearly enhanced, the border between the tumor and brain may not be accurately recognized. This can result in segmentation where the tumor is only partially included (Fig. 6 left). Other difficulties arise when the tumor is surrounded by structures with similar intensities. These structures are sometimes falsely recognized as part of the tumor (Fig. 6 right).

FIG. 6.

Qualitative impressions of difficulties during segmentation. Left: Coronal MR image showing a low-intensity difference between the tumor and brain. The circumference of the segmented tumor shows that the border is not accurately recognized. Right: Sagittal MR image showing the circumference of a segmented tumor that includes a part of the ambient cistern.

Discussion

In this study, we evaluated the accuracy of a fully automatic segmentation algorithm for brain tumors. In recent studies, deep learning techniques, such as convolutional16–26 or recurrent neural networks,27 have mostly been used for automatic brain tumor segmentation, with DSC varying between 0.70 and 0.91. A recent meta-analysis of deep learning approaches for automatic segmentation of brain tumors showed a pooled average DSC of 0.88 for high-grade glioma.28 Two studies reported ASSD26,29 and two reported HD95.26,30 ASSD ranged between 1.17 and 1.86 mm, and HD95 ranged between 3.47 and 6.18 mm. These findings are comparable to our results: a mean DSC of 0.87, ASSD of 1.31 mm, and HD95 of 4.80 mm.

Our segmentation algorithm uses a novel approach for automatic segmentation. The algorithm works with training data indirectly, which means that it is not trained to recognize certain structures in a specific training set and does not depend on a specific number of voxels. A baseline for multiple parameters, such as gray scale, shape, size, and curvature, is formed from a relatively small data set for each structure. Accuracy is improved by visually checking the outcome and further optimizing these parameters. This means that, compared with deep learning techniques, a smaller training set is needed and our algorithm is more robust for handling various image resolutions and variations in pathologies.

In our results, infratentorial metastasis had significantly lower DSC than supratentorial metastasis, and metastasis had significantly lower DSC than meningioma. Furthermore, our results suggest that smaller tumors have lower DSC on average, which may have been a factor between these groups. Moreover, metastasis can show a large variety of shapes and intensities inside the tumor, whereas meningioma is often more homogeneous and well defined, which enables the algorithm to define the radiological boundaries of the tumor more accurately. Automatic segmentation also showed higher HD95 for infratentorial metastasis than for supratentorial metastasis. We noticed that structures surrounding infratentorial tumors, such as cisterns or sinuses, could be falsely recognized as part of the tumor by the segmentation algorithm, possibly explaining these differences in outliers. Nonetheless, these groups are small, and the results should be interpreted with caution. In this respect, it would be interesting to include infratentorial intrinsic brain tumors, which are more frequently seen in pediatric patients.

Automatic segmentation was significantly faster than manual segmentation, even though manual segmentation only included tumor, whereas automatic segmentation included tumor, ventricles, brain, and skin. In this study, we validated segmentation accuracy for tumor because this structure has the most variation and therefore is the most challenging to recognize. Furthermore, multiple aspects of providing care, such as preoperative evaluation, assessment of the surgical approach, and guidance during surgery, require evaluation of tumor structure. However, each segmented structure has its own technical challenges and clinical implications, and accuracy is key for usability. Therefore, we are also currently working to validate accuracy for other structures, such as ventricles.

The currently described segmentation algorithm is incorporated into a cloud environment, which facilitates a fully automatic workflow from acquisition of DICOM images to visualization with an AR-HMD or web-based viewer. This workflow offers several advantages. First, it requires no user input, which eliminates the need for neurosurgeons or specialized personnel to allocate time for 3D segmentation. Second, the algorithm runs on an external server from the cloud, obviating the need for high processing power on the local computer, which is not always available in the workplace. Furthermore, this means the system keeps running in the back end even when the local computer is turned off. Third, visualization properties (e.g., simplification, color, transparency) are optimized, and the 3D models are automatically stored and converted into different file formats. This means that the surgeon can view the models at any time using different modalities without further processing. Fourth, the cloud environment uses Azure-based security services (Microsoft) and is secured according to relevant International Organization for Standardization criteria; therefore, the segmentation algorithm can be connected to hospital PACS, which facilitates immediate connection after medical imaging is performed. Lastly, the segmentation algorithm offers parallel processing, which means that multiple images can be processed simultaneously, thereby making it more efficient.

The generated 3D models can support multiple aspects of preoperative care. Information recall by patients has been shown to improve after preoperative consultation with 3D or AR models.31 Therefore, 3D models can help patients better understand their disease and support the decision-making process.32 Furthermore, preoperative evaluation of 3D models can improve anatomical understanding of surgeons and ultimately patient outcomes.33, 34

Limitations

The current segmentation algorithm has several limitations. First, it has to be optimized for use during AR neuronavigation. Ultimately, we want to integrate this workflow into our previously described AR neuronavigation system.35, 36 However, accuracy of neuronavigation systems is key, and every segmentation error accounts for an additional inaccuracy. Therefore, we will have to combine automatic segmentation with manual 3D fine-tuning to make this algorithm applicable in such a system.

Second, the described segmentation algorithm is limited to contrast-enhanced tumors with a minimum volume of 5 cm3 on a T1-weighted sequence. To expand the indications for use of the AR workflow, we plan to design a segmentation algorithm that incorporates T1-weighted MR images without contrast, T2-weighted MR images, and FLAIR sequences. Specific sequences can be automatically detected, and the segmentation algorithm can define the borders of the same anatomical structures on the basis of the different thresholds adapted to the presented sequence. Moreover, segmentation currently includes the whole tumor. Combining different sequences enables the surgeon to distinguish between contrast-enhancing, cystic, and necrotic parts of different tumors. Furthermore, a larger data set is needed to make the segmentation algorithm more sensitive to smaller tumors.

Lastly, pituitary tumors were excluded from this study. These tumors are surrounded by several anatomical structures that have similar intensities as the tumor, which makes it difficult to distinguish different structures. To solve this issue, we plan to use other segmented structures as borders for tumor segmentation. We are also currently developing skull and vessel segmentation algorithms that could be used to prevent the segmented tumor from crossing the borders of these structures.

Conclusions

The currently described, cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in the evaluation of 3D models of contrast-enhancing intracranial lesions with an AR-HMD. The algorithm still has limits regarding tumor volume and applicable pathologies. The next steps involve incorporation of other sequences and 3D fine-tuning to expand the scope of the AR workflow.

Acknowledgments

This project has received funding from the Eurostars-2 joint program, with co-funding from the European Union Horizon 2020 research and innovation program (114221 SAPIENS3D).

Disclosures

Dr. van Doormaal is the founder and CMO of Augmedit bv. Dr. Meulstee is a senior product developer for Augmedit bv.

Author Contributions

Conception and design: all authors. Acquisition of data: Fick, Tosic, van Zoest. Analysis and interpretation of data: Fick. Drafting the article: Fick. Critically revising the article: JAM van Doormaal, Tosic, van Zoest, Meulstee, Hoving, TPC van Doormaal. Approved the final version of the manuscript on behalf of all authors: Fick. Statistical analysis: Fick. Administrative/technical/material support: JAM van Doormaal. Study supervision: Hoving, TPC van Doormaal.

References

1. Pelargos PE, Nagasawa DT, Lagman C, Tenn S, Demos JV, Lee SJ, et al. Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery. J Clin Neurosci. 2017;35:1–4.
2. Swennen GRJ, Mollemans W, Schutyser F. Three-dimensional treatment planning of orthognathic surgery in the era of virtual imaging. J Oral Maxillofac Surg. 2009;67(10):2080–2092.
3. Preim B, Botha C. Visual Computing for Medicine: Theory, Algorithms, and Applications. 2nd ed. Morgan Kaufmann; 2014:648–661.
4. Li Y, Chen X, Wang N, Zhang W, Li D, Zhang L, et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J Neurosurg. 2019;131(5):1599–1606.
5. McJunkin JL, Jiramongkolchai P, Chung W, Southworth M, Durakovic N, Buchman CA, Silva JR. Development of a mixed reality platform for lateral skull base anatomy. Otol Neurotol. 2018;39(10):e1137–e1142.
6. Zhang L, Wang X, Yang D, Sanford T, Harmon S, Turkbey B, et al. Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation. IEEE Trans Med Imaging. 2020;39(7):2531–2540.
7. Ma J, Ma HT, Li H, Ye C, Wu D, Tang X, et al. A fast atlas pre-selection procedure for multi-atlas based brain segmentation. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2015:3053–3056.
8. Li J, Yu ZL, Gu Z, Liu H, Li Y. MMAN: multi-modality aggregation network for brain segmentation from MR images. Neurocomputing. 2019;358:10–19.
9. Mendrik AM, Vincken KL, Kuijf HJ, Breeuwer M, Bouvy WH, de Bresser J, et al. MRBrainS challenge: online evaluation framework for brain image segmentation in 3T MRI scans. Comput Intell Neurosci. 2015;2015:813696.
10. Sigron GR, Rüedi N, Chammartin F, Meyer S, Msallem B, Kunz C, Thieringer FM. Three-dimensional analysis of isolated orbital floor fractures pre- and post-reconstruction with standard titanium meshes and "hybrid" patient-specific implants. J Clin Med. 2020;9(5):1579.
11. Saloniemi M, Lehtinen V, Snäll J. Computer-aided fracture size measurement in orbital fractures—an alternative to manual evaluation. Craniomaxillofac Trauma Reconstr. Published online October 7, 2020. doi:10.1177/1943387520962691
12. Snäll J, Narjus-Sterba M, Toivari M, Wilkman T, Thorén H. Does postoperative orbital volume predict postoperative globe malposition after blow-out fracture reconstruction? A 6-month clinical follow-up study. Oral Maxillofac Surg. 2019;23(1):27–34.
13. Kärkkäinen M, Wilkman T, Mesimäki K, Snäll J. Primary reconstruction of orbital fractures using patient-specific titanium milled implants: the Helsinki protocol. Br J Oral Maxillofac Surg. 2018;56(9):791–796.
14. Fenster A, Chiu B. Evaluation of segmentation algorithms for medical imaging. In: 2005 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2005:7186–7189.
15. Yeghiazaryan V, Voiculescu I. An overview of current evaluation methods used in medical image segmentation. No. RR-15-08. Department of Computer Science, University of Oxford; 2015. Accessed June 10, 2021. https://www.cs.ox.ac.uk/files/7732/CS-RR-15-08.pdf
16. Wang G, Li W, Ourselin S, Vercauteren T. Automatic brain tumor segmentation based on cascaded convolutional neural networks with uncertainty estimation. Front Comput Neurosci. 2019;13:56.
17. Naceur MB, Saouli R, Akil M, Kachouri R. Fully automatic brain tumor segmentation using end-to-end incremental deep neural networks in MRI images. Comput Methods Programs Biomed. 2018;166:39–49.
18. Sultan HH, Salem NM, Al-Atabany W. Multi-classification of brain tumor images using deep neural network. IEEE Access. 2019;7:69215–69225.
19. Cui S, Mao L, Jiang J, Liu C, Xiong S. Automatic semantic segmentation of brain gliomas from MRI images using a deep cascaded neural network. J Healthc Eng. 2018;2018:4940593.
20. Kamnitsas K, Ledig C, Newcombe VFJ, Simpson JP, Kane AD, Menon DK, et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal. 2017;36:61–78.
21. Zhuge Y, Krauze AV, Ning H, Cheng JY, Arora BC, Camphausen K, Miller RW. Brain tumor segmentation using holistically nested neural networks in MRI images. Med Phys. 2017;44(10):5234–5243.
22. Chen H, Qin Z, Ding Y, Tian L, Qin Z. Brain tumor segmentation with deep convolutional symmetric neural network. Neurocomputing. 2020;392:305–313.
23. Hussain S, Anwar SM, Majid M. Brain tumor segmentation using cascaded deep convolutional neural network. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2017:1998–2001.
24. Hussain S, Anwar SM, Majid M. Segmentation of glioma tumors in brain using deep convolutional neural network. Neurocomputing. 2018;282:248–261.
25. Yang T, Song J, Li L. A deep learning model integrating SK-TPCNN and random forests for brain tumor segmentation in MRI. Biocybern Biomed Eng. 2019;39(3):613–623.
26. Li H, Li A, Wang M. A novel end-to-end brain tumor segmentation method using improved fully convolutional networks. Comput Biol Med. 2019;108:150–160.
27. Zhao X, Wu Y, Song G, Li Z, Zhang Y, Fan Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal. 2018;43:98–111.
28. Badrigilan S, Nabavi S, Abin AA, Rostampour N, Abedi I, Shirvani A, Ebrahimi Moghaddam M. Deep learning approaches for automated classification and segmentation of head and neck cancers and brain tumors in magnetic resonance images: a meta-analysis study. Int J CARS. 2021;16(4):529–542.
29. Alqazzaz S, Sun X, Yang X, Nokes L. Automated brain tumor segmentation on multi-modal MR image using SegNet. Comput Vis Media. 2019;5(2):209–219.
30. Wu Y, Zhao Z, Wu W, Lin Y, Wang M. Automatic glioma segmentation based on adaptive superpixel. BMC Med Imaging. 2019;19(1):73.
31. Sezer S, Piai V, Kessels RPC, Ter Laan M. Information recall in pre-operative consultation for glioma surgery using actual size three-dimensional models. J Clin Med. 2020;9(11):3660.
32. van de Belt TH, Nijmeijer H, Grim D, Engelen LJLPG, Vreeken R, van Gelder MMHJ, Ter Laan M. Patient-specific actual-size three-dimensional printed models for patient education in glioma treatment: first experiences. World Neurosurg. 2018;117:e99–e105.
33. Wellens LM, Meulstee J, van de Ven CP, Terwisscha van Scheltinga CEJ, Littooij AS, van den Heuvel-Eibrink MM, et al. Comparison of 3-dimensional and augmented reality kidney models with conventional imaging data in the preoperative assessment of children with Wilms tumors. JAMA Netw Open. 2019;2(4):e192633.
34. Stadie AT, Kockro RA. Mono-stereo-autostereo: the evolution of 3-dimensional neurosurgical planning. Neurosurgery. 2013;72(suppl 1):63–77.
35. van Doormaal TPC, van Doormaal JAM, Mensink T. Clinical accuracy of holographic navigation using point-based registration on augmented-reality glasses. Oper Neurosurg (Hagerstown). 2019;17(6):588–593.
36. Fick T, van Doormaal JAM, Hoving EW, Regli L, van Doormaal TPC. Holographic patient tracking after bed movement for augmented reality neuronavigation using a head-mounted display. Acta Neurochir (Wien). 2021;163(4):879–884.