Augmented reality (AR) is a technology that allows interaction with virtual 3D models superimposed onto the user's view of the real world. In recent years, interest in this technology has increased, especially following major innovations in head-mounted displays (HMDs).1 AR-HMDs facilitate a new way to evaluate medical images and could improve understanding of anatomical structures. The stereoscopic view allows the user to appreciate virtual models in true 3D. In addition, hand, head, and eye tracking create new ways to interact with and manipulate 3D models. Studies have shown that AR is particularly effective for visualizing complex pathology or anatomy.2 Furthermore, 3D visualizations of the relevant anatomical and pathological structures may enhance planning and preparation of surgical interventions.3 However, current workflows for visualizing 3D models with an AR-HMD require multiple manual steps, which makes the process cumbersome and difficult to integrate into daily clinical practice.
Several steps are needed to create 3D models and make them suitable for use with an AR-HMD. DICOM images need to be exported from the hospital PACS and saved in a secure environment. 3D models are then created from the DICOM images with segmentation, the process of delineating the anatomical structures of interest in medical images. After segmentation, the models need to be optimized for visualization and simplified to account for the limited processing power of AR-HMDs. Finally, the models need to be converted into different file formats to make them compatible with different viewing modalities.
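As a rough illustration of the first of these steps, the following minimal Python sketch (not the authors' cloud-based implementation) assembles an exported axial DICOM series into a voxel volume with pydicom; the directory layout and file extension are assumptions.

```python
# Minimal sketch: stack an exported axial DICOM series into a voxel volume.
# Assumes one series per directory, stored as .dcm files (an assumption).
from pathlib import Path

import numpy as np
import pydicom

def load_dicom_volume(series_dir: str):
    """Return a (slices, rows, cols) float32 volume plus voxel spacing in mm."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort by position along the slice axis so the stack is in anatomical order.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    dz = abs(float(slices[1].ImagePositionPatient[2]) -
             float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    return volume, (dz, dy, dx)
```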
The segmentation process is crucial for ensuring the quality of 3D models but can be time consuming, especially when performed manually or semiautomatically, which is still required for use in clinical neuronavigation systems.4, 5 Recent studies have presented several automatic segmentation solutions.6–9 However, these algorithms often require separate software that must run actively on a local computer, which obligates the user to keep the computer running during processing and demands substantial processing power that is not always available in the workplace. A cloud-based solution obviates these requirements and is preferred in a dynamic hospital environment.
To make this technique more accessible to surgeons, we built a fully automatic workflow that uses DICOM images to create 3D models suitable for AR-HMDs and is integrated with an automatic segmentation algorithm that segments skin, brain, ventricles, and tumor. Accurate tumor segmentation is vital for use in clinical settings, but this can be challenging for automatic algorithms owing to large variations in shape, size, and voxel intensity among tumors. In this study, we validated the accuracy of the segmentation algorithm for a large variety of tumors by using pairwise comparisons of the automatically segmented data with a ground truth data set of the same imaging sequences that had been segmented manually.
Methods
Data Set
For this study, we included 50 anonymized MR images of patients with brain tumors who underwent surgery between 2019 and 2020 at the neurosurgery department of University Medical Center Utrecht. The following minimal inclusion criteria were met: 1) availability of an axial contrast-enhanced T1-weighted MRI series with 100 slices or more; 2) tumor volume of 5 cm3 or greater; 3) presence of tumor contrast enhancement; and 4) intracranial tumor location. Tumors in the pituitary region were excluded from this study.
Ground Truth Segmentation
Ground truth segmentations were created using medical segmentation software (3D Slicer, Massachusetts Institute of Technology) and by manually delineating the tumor in each axial slice. No automatic or semiautomatic functions were used. A protocol was written to ensure that every segmentation was done in a uniform manner. Segmentations were exported as binary data arrays.
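As a minimal sketch of the export step, assuming the manual segmentation is saved from 3D Slicer as a NIfTI labelmap (the file name is hypothetical), the labelmap can be binarized and stored as the kind of binary array used in the later comparisons:

```python
import nibabel as nib
import numpy as np

labelmap = nib.load("tumor_ground_truth.nii.gz")  # hypothetical export path
mask = labelmap.get_fdata() > 0                   # any nonzero label -> tumor voxel
np.save("tumor_ground_truth.npy", mask)           # boolean array, one entry per voxel
```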
Automatic Segmentation and AR Workflow
Automatic segmentation was performed with a cloud-based workflow (AugSafe, Augmedit) that automatically generates 3D models from DICOM images that are suitable for visualization with AR-HMDs. The complete workflow is shown in Fig. 1.
Fig. 1. Flow diagram showing an overview of the complete workflow from acquisition of DICOM images to 3D visualization.
First, MR images in DICOM format were imported into the cloud environment, which can be accessed through a web-based interface on a computer or through an application on the AR-HMD (HoloLens, Microsoft).
Second, the automatic segmentation process, which was embedded within the cloud environment, was initialized. The images were transferred to an external server running the segmentation algorithm (Disior), which automatically segmented skin, brain, ventricle, and tumor surfaces. The segmentation algorithm created image-specific thresholds and then set up spheres with adaptive meshing that expand to capture the radiological boundaries of the target tissues. This expansion approach enabled robust handling of noisy and/or poor-contrast regions in the image. The method has previously been used for orbital volume calculations.10–13
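The vendor algorithm itself is proprietary; purely as a simplified analogue of the "image-specific threshold plus expansion from within the target" idea (not the adaptive-mesh sphere expansion itself), a tolerance-based flood fill from a seed voxel can be sketched as follows, here on a synthetic volume:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import flood

# Synthetic stand-in volume: a bright "tumor" blob on a dim, noisy background.
rng = np.random.default_rng(0)
volume = rng.normal(100, 10, (64, 64, 64)).astype(np.float32)
zz, yy, xx = np.mgrid[:64, :64, :64]
volume[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2] += 120

def grow_from_seed(vol, seed, tolerance_frac=0.2):
    """Flood-fill voxels whose intensity stays within an image-specific tolerance of the seed."""
    tol = tolerance_frac * (vol.max() - vol.min())     # threshold derived from this image
    mask = flood(vol, seed, tolerance=tol)             # expand outward from the seed
    return ndimage.binary_closing(mask, iterations=2)  # smooth the captured boundary

tumor_mask = grow_from_seed(volume, seed=(32, 32, 32))
```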
Third, the resulting 3D models were saved in the cloud, and every individual anatomical structure was simplified to match the limited processing power of current AR-HMDs while maintaining optimal performance and preserving all relevant anatomical details. Furthermore, visualization properties, such as color and transparency, were automatically optimized.
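As one way to illustrate this simplification step (the authors' exact tooling is not described), quadric decimation in Open3D reduces the triangle count to a budget an AR-HMD can render; the file names and triangle budget are assumptions:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("tumor_full.ply")  # hypothetical input mesh
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=20_000)
simplified.compute_vertex_normals()  # recompute normals for correct shading
o3d.io.write_triangle_mesh("tumor_simplified.ply", simplified)
```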
Lastly, the 3D models were converted to different file formats and saved in the cloud to allow viewing with different modalities, such as the integrated 3D viewer of a web-based interface (Fig. 2 upper) or a holographic scene on an AR-HMD (Fig. 2 lower). The 3D models can be viewed within the original DICOM image, and each individual 3D model can be manipulated and viewed from different angles. The quality of segmentation can be visually verified by selecting an anatomical structure and inspecting the outline of the segmentation on the corresponding DICOM slice (Fig. 3). Other devices can be used to view the 3D models simply by scanning an automatically generated QR (Quick Response) code. Each included scan was sequentially segmented using this workflow, and segmentations were exported as binary data arrays.
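A minimal sketch of the conversion and sharing steps, assuming trimesh for format conversion (GLB is a format commonly used by HoloLens viewers) and the qrcode package for the share code; the viewer URL is hypothetical:

```python
import qrcode
import trimesh

mesh = trimesh.load("tumor_simplified.ply")
mesh.export("tumor.glb")  # binary glTF, widely used for AR viewers
mesh.export("tumor.obj")  # broadly supported fallback format

# QR code pointing to a (hypothetical) web viewer for this case.
qrcode.make("https://viewer.example.org/case/1234").save("share_qr.png")
```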
Fig. 2. 3D models generated with automatic segmentation and shown through an embedded 3D viewer in a web-based interface (upper) and an AR-HMD (lower).
Fig. 3. Axial MR image showing the circumference of an automatically segmented tumor.
Outcome Measures
To evaluate the accuracy of the segmentation algorithm, the ground truth and automatically segmented data were compared using both volumetric analysis statistics, namely the Dice similarity coefficient (DSC), and surface analysis statistics, namely the average symmetric surface distance (ASSD) and 95th percentile Hausdorff distance (HD95).14, 15 Binary data arrays of the manual and automatic segmentations were imported into a custom MATLAB script (MathWorks), in which all calculations were performed.
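The calculations were performed in MATLAB; for illustration, the three metrics can be sketched in Python with NumPy/SciPy under one common convention: DSC = 2|A∩B|/(|A|+|B|), ASSD averages surface-to-surface distances in both directions, and HD95 takes the larger of the two directed 95th percentile distances.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def directed_surface_distances(a, b, spacing):
    """Distances from each surface voxel of a to the nearest surface voxel of b."""
    a_surf = a & ~ndimage.binary_erosion(a)
    b_surf = b & ~ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    return dist_to_b[a_surf]

def assd_hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    a, b = a.astype(bool), b.astype(bool)
    d_ab = directed_surface_distances(a, b, spacing)
    d_ba = directed_surface_distances(b, a, spacing)
    assd = np.concatenate([d_ab, d_ba]).mean()
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    return assd, hd95

# Tiny self-check on two overlapping synthetic spheres (1 x 0.5 x 0.5 mm voxels).
zz, yy, xx = np.mgrid[:48, :48, :48]
gt = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 15 ** 2
auto = (zz - 24) ** 2 + (yy - 25) ** 2 + (xx - 26) ** 2 < 14 ** 2
print(dice(gt, auto), assd_hd95(gt, auto, spacing=(1.0, 0.5, 0.5)))
```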
The computation time of each automatic segmentation was recorded to evaluate the efficiency of the algorithm. Statistical analysis was performed with SPSS version 26.0 (IBM Corp.). The distribution of data was checked using normality tests, which showed that the data were skewed for all outcome measures. Therefore, nonparametric tests were used to compare groups. Linear regression was used to compare tumor volume with DSC, HD95, and ASSD.
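The authors used SPSS; a minimal SciPy sketch of the same tests is shown below, with per-group DSC values simulated from the means and SDs in Table 1 purely as placeholders for the real per-case data.

```python
import numpy as np
from scipy import stats

# Simulated placeholder data shaped like the reported groups.
rng = np.random.default_rng(0)
dsc_meningioma = rng.normal(0.89, 0.08, 15).clip(0, 1)
dsc_metastasis = rng.normal(0.84, 0.07, 19).clip(0, 1)
dsc_glioblastoma = rng.normal(0.88, 0.06, 16).clip(0, 1)

h, p = stats.kruskal(dsc_meningioma, dsc_metastasis, dsc_glioblastoma)
u, p_pair = stats.mannwhitneyu(dsc_metastasis, dsc_meningioma,
                               alternative="two-sided")
alpha = 0.05 / 3  # Bonferroni-adjusted threshold for three pairwise tests

# Regression of DSC on tumor volume (placeholder data shaped like the fit).
volumes = rng.uniform(5, 115, 50)
dscs = 0.832 + 0.001 * volumes + rng.normal(0, 0.05, 50)
fit = stats.linregress(volumes, dscs)  # fit.slope, fit.intercept, fit.rvalue ** 2
print(h, p, u, p_pair, fit.rvalue ** 2)
```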
Results
Fifty scans were included in the analysis. The mean (range) number of slices per sequence was 269 (140–380), and the mean (range) tumor volume was 39.4 (5.1–113.3) cm3. All 50 scans were computed successfully by the segmentation algorithm. The resulting data from our analysis are summarized in Table 1. The mean ± SD computation time of the segmentation algorithm was 753 ± 128 seconds, compared with a mean ± SD manual segmentation time of 6212 ± 2039 seconds. The mean (95% CI) difference was 5459 (4886–6032) seconds. The mean ± SD DSC was 0.87 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm.
Accuracy and efficiency results of automatic segmentation
Characteristic | DSC | ASSD (mm) | HD95 (mm) | Vol (cm3) | Segmentation Time (sec)
--- | --- | --- | --- | --- | ---
Overall (n = 50) | 0.87 ± 0.07 | 1.31 ± 0.63 | 4.80 ± 3.18 | 39.4 ± 33.1 | 753 ± 128
Pathology | | | | |
Meningioma | | | | |
All (n = 15) | 0.89 ± 0.08 | 1.19 ± 0.73 | 4.61 ± 2.84 | 48.4 ± 34.5 |
Supratentorial (n = 14) | 0.88 ± 0.08 | 1.23 ± 0.74 | 4.81 ± 2.84 | 48.6 ± 35.8 |
Infratentorial (n = 1) | 0.95 | 0.67 | 1.88 | 43.1 |
Metastasis | | | | |
All (n = 19) | 0.84 ± 0.07 | 1.36 ± 0.65 | 4.40 ± 1.77 | 21.5 ± 18.3 |
Supratentorial (n = 10) | 0.86 ± 0.09 | 1.30 ± 0.85 | 3.62 ± 1.58 | 26.0 ± 19.8 |
Infratentorial (n = 9) | 0.82 ± 0.04 | 1.43 ± 0.35 | 5.26 ± 1.61 | 16.5 ± 10.1 |
Glioblastoma (n = 16) | 0.88 ± 0.06 | 1.36 ± 0.53 | 5.46 ± 4.60 | 52.4 ± 38.4 |
Values are shown as mean or mean ± SD.
The Kruskal-Wallis test showed that pathology significantly affected DSC (H(2) = 7.87, p = 0.02). Post hoc Mann-Whitney U-tests with a Bonferroni-adjusted alpha level of 0.017 (0.05/3) were used to compare all pairs of groups. Metastasis (mean 0.84 and median 0.85) had a significantly lower DSC than meningioma (mean 0.89 and median 0.92) (U = 72, z = −2.45, p = 0.01; n = 19 metastases vs n = 15 meningiomas). No statistically significant differences in HD95 (p = 0.68) and ASSD (p = 0.24) were found among the included pathologies.
Automatic segmentation of supratentorial metastases was more accurate than that of infratentorial metastases in terms of both DSC (mean 0.86 and median 0.87 vs mean 0.82 and median 0.81) and HD95 (mean 3.62 mm and median 3.11 mm vs mean 5.26 mm and median 4.72 mm). The Mann-Whitney U-test showed that these differences were statistically significant for DSC (U = 21, z = −1.96, p = 0.05; n = 10 supratentorial vs n = 9 infratentorial) and HD95 (U = 14.5, z = −2.49, p = 0.01).
When we compared tumor volume with the corresponding DSC using linear regression, we found a significant regression equation (F(1, 48) = 10.985, p = 0.002) with R2 = 0.186: the predicted DSC was equal to 0.832 + 0.001 × tumor volume in cubic centimeters. This finding implies that, on average, a larger tumor has a higher DSC. No significant associations were found between tumor volume and HD95 (p = 0.56) or ASSD (p = 0.96).
Figure 4 presents the distances between the surface points of the two compared sets for all tumors. This representation accounts for a total of 1,110,917 points, of which 58.0% had a difference less than 1 mm and 79.1% had a difference less than 2 mm. Furthermore, the frequency declined rapidly with increasing distance: only 3.5% of points had a distance greater than 5 mm, distances greater than 10 mm were rare, and the maximum outlier was 30.28 mm. Outliers were mostly caused by subsets of included voxels that deviated largely from the actual tumor, even though the tumor itself was segmented accurately (Fig. 5).
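For illustration, the reported fractions follow directly from the pooled surface distances; the array below is simulated, standing in for the 1,110,917 real point distances produced by the metric computation sketched in Outcome Measures.

```python
import numpy as np

# Simulated stand-in for the pooled surface distances across all 50 cases.
rng = np.random.default_rng(0)
all_d = np.abs(rng.normal(0.0, 1.6, 1_110_917))

for cutoff_mm in (1, 2, 5, 10):
    print(f"< {cutoff_mm} mm: {100 * np.mean(all_d < cutoff_mm):.1f}%")
print(f"maximum outlier: {all_d.max():.2f} mm")
```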
Fig. 4. Histogram showing the distances between all points included in the two compared surface sets with all tumors.
Fig. 5. Heatmap (left) and 3D model (right) of tumor segmentation. Small subsets of outliers (encircled) unrelated to the tumor are shown, but the rest of the segmentation is accurate.
Additional qualitative impressions are presented to illustrate characteristic difficulties with automatic segmentation (Fig. 6). When the tumor does not enhance clearly, the border between the tumor and brain may not be accurately recognized, which can result in segmentations that include the tumor only partially (Fig. 6 left). Other difficulties arise when the tumor is surrounded by structures with similar intensities; these structures are sometimes falsely recognized as part of the tumor (Fig. 6 right).
Fig. 6. Qualitative impressions of difficulties during segmentation. Left: Coronal MR image showing a low-intensity difference between the tumor and brain. The circumference of the segmented tumor shows that the border is not accurately recognized. Right: Sagittal MR image showing the circumference of a segmented tumor that includes a part of the ambient cistern.
Discussion
In this study, we evaluated the accuracy of a fully automatic segmentation algorithm for brain tumors. In recent studies, deep learning techniques, such as convolutional16–26 or recurrent neural networks,27 were mostly used for automatic brain tumor segmentation, with DSC varying between 0.70 and 0.91. A recent meta-analysis of deep learning approaches for automatic segmentation of brain tumors showed a pooled average DSC of 0.88 for high-grade glioma.28 Two studies reported ASSD,26, 29 and two studies reported HD95.26, 30 ASSD ranged between 1.17 and 1.86 mm, and HD95 ranged between 3.47 and 6.18 mm. These findings are comparable to our results: we showed a mean DSC of 0.87, ASSD of 1.31 mm, and HD95 of 4.80 mm.
Our segmentation algorithm uses a novel approach to automatic segmentation. The algorithm uses training data only indirectly: it is not trained to recognize certain structures in a specific training set and does not depend on a specific number of voxels. Instead, a baseline for multiple parameters, such as gray scale, shape, size, and curvature, is formed from a relatively small data set for each structure. Accuracy is improved by visually checking the outcome and further optimizing these parameters. Consequently, compared with deep learning techniques, a smaller training set is needed, and the algorithm is more robust in handling various image resolutions and variations in pathologies.
In our results, infratentorial metastases had a significantly lower DSC than supratentorial metastases, and metastases overall had a significantly lower DSC than meningiomas. Furthermore, our results suggest that smaller tumors have a lower DSC on average, which may partly explain the differences between these groups. Moreover, metastases can show a large variety of shapes and intensities inside the tumor, whereas meningiomas are often more homogeneous and well defined, which enables the algorithm to more accurately define the radiological boundaries of the tumor. Automatic segmentation also showed decreased HD95 accuracy for infratentorial metastases compared with supratentorial metastases. We noticed that structures surrounding infratentorial tumors, such as cisterns or sinuses, could be falsely recognized as part of the tumor by the segmentation algorithm, possibly leading to these differences in outliers. Nonetheless, these groups are small, and the results should be interpreted with caution. In this respect, it would be interesting to include infratentorial intrinsic brain tumors, which are more frequently seen in pediatric patients.
Automatic segmentation was significantly faster than manual segmentation, even though manual segmentation only included tumor, whereas automatic segmentation included tumor, ventricles, brain, and skin. In this study, we validated segmentation accuracy for tumor because this structure has the most variation and therefore is the most challenging to recognize. Furthermore, multiple aspects of providing care, such as preoperative evaluation, assessment of the surgical approach, and guidance during surgery, require evaluation of tumor structure. However, each segmented structure has its own technical challenges and clinical implications, and accuracy is key for usability. Therefore, we are also currently working to validate accuracy for other structures, such as ventricles.
The currently described segmentation algorithm is incorporated into a cloud environment, which facilitates a fully automatic workflow from acquisition of DICOM images to visualization with an AR-HMD or web-based viewer. This workflow offers several advantages. First, it requires no user input, which eliminates the need for neurosurgeons or specialized personnel to allocate time for 3D segmentation. Second, the algorithm runs on an external server accessed through the cloud, which obviates the need for high processing power on the local computer, something that is not always available in the workplace; the system also keeps running in the back end when the local computer is turned off. Third, visualization properties (e.g., simplification, color, transparency) are optimized, and the 3D models are automatically stored and converted into different file formats, meaning that the surgeon can view the models at any time using different modalities without further processing. Fourth, the cloud environment uses Azure-based security services (Microsoft) and is secured according to relevant International Organization for Standardization criteria; therefore, the segmentation algorithm can be connected to the hospital PACS, which facilitates immediate connection after medical imaging is performed. Lastly, the segmentation algorithm offers parallel processing, which means that multiple images can be processed simultaneously, further increasing efficiency.
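As a minimal sketch of the parallel-processing idea (not the vendor's implementation), several cases can be submitted to a worker pool so that uploads are segmented concurrently; segment_case is a hypothetical stand-in for the cloud segmentation call.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def segment_case(case_id: str) -> str:
    # Hypothetical stand-in: upload the DICOM series, trigger segmentation,
    # and poll until the 3D models are ready.
    return case_id

cases = ["case-001", "case-002", "case-003"]
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(segment_case, c) for c in cases]
    for future in as_completed(futures):
        print(f"finished {future.result()}")
```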
The generated 3D models can support multiple aspects of preoperative care. Information recall by patients has been shown to improve after preoperative consultation with 3D or AR models.31 Therefore, 3D models can help patients better understand their disease and support the decision-making process.32 Furthermore, preoperative evaluation of 3D models can improve anatomical understanding of surgeons and ultimately patient outcomes.33, 34
Limitations
The current segmentation algorithm has several limitations. First, it has to be optimized for use during AR neuronavigation. Ultimately, we want to integrate this workflow into our previously described AR neuronavigation system.35, 36 However, the accuracy of neuronavigation systems is key, and every segmentation error adds to the overall inaccuracy. Therefore, we will have to combine automatic segmentation with manual 3D fine-tuning to make this algorithm applicable in such a system.
Second, the described segmentation algorithm is limited to contrast-enhancing tumors with a minimum volume of 5 cm3 on a T1-weighted sequence. To expand the indications for use of the AR workflow, we plan to design a segmentation algorithm that incorporates T1-weighted MR images without contrast, T2-weighted MR images, and FLAIR sequences. Specific sequences can be automatically detected, and the segmentation algorithm can define the borders of the same anatomical structures on the basis of different thresholds adapted to the presented sequence. Moreover, segmentation currently includes the whole tumor; combining different sequences would enable the surgeon to distinguish between contrast-enhancing, cystic, and necrotic parts of different tumors. Furthermore, a larger data set is needed to make the segmentation algorithm more sensitive to smaller tumors.
Lastly, pituitary tumors were excluded from this study. These tumors are surrounded by several anatomical structures with intensities similar to those of the tumor, which makes it difficult to distinguish the different structures. To solve this issue, we plan to use other segmented structures as borders for tumor segmentation. We are also currently developing skull and vessel segmentation algorithms that could be used to prevent the segmented tumor from crossing the borders of these structures.
Conclusions
The currently described cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in the evaluation of 3D models of contrast-enhancing intracranial lesions with an AR-HMD. The algorithm still has limits regarding tumor volume and applicable pathologies. The next steps involve incorporating other sequences and 3D fine-tuning to expand the scope of the AR workflow.
Acknowledgments
This project has received funding from the Eurostars-2 joint program, with co-funding from the European Union Horizon 2020 research and innovation program (114221 SAPIENS3D).
Disclosures
Dr. van Doormaal is the founder and CMO of Augmedit bv. Dr. Meulstee is a senior product developer for Augmedit bv.
Author Contributions
Conception and design: all authors. Acquisition of data: Fick, Tosic, van Zoest. Analysis and interpretation of data: Fick. Drafting the article: Fick. Critically revising the article: JAM van Doormaal, Tosic, van Zoest, Meulstee, Hoving, TPC van Doormaal. Approved the final version of the manuscript on behalf of all authors: Fick. Statistical analysis: Fick. Administrative/technical/material support: JAM van Doormaal. Study supervision: Hoving, TPC van Doormaal.
References
1. Pelargos PE, Nagasawa DT, Lagman C, Tenn S, Demos JV, Lee SJ, et al. Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery. J Clin Neurosci. 2017;35:1–4.
2. Swennen GRJ, Mollemans W, Schutyser F. Three-dimensional treatment planning of orthognathic surgery in the era of virtual imaging. J Oral Maxillofac Surg. 2009;67(10):2080–2092.
3. Preim B, Botha C. Visual Computing for Medicine: Theory, Algorithms, and Applications. 2nd ed. Morgan Kaufmann; 2014:648–661.
4. Li Y, Chen X, Wang N, Zhang W, Li D, Zhang L, et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J Neurosurg. 2019;131(5):1599–1606.
5. McJunkin JL, Jiramongkolchai P, Chung W, Southworth M, Durakovic N, Buchman CA, Silva JR. Development of a mixed reality platform for lateral skull base anatomy. Otol Neurotol. 2018;39(10):e1137–e1142.
6. Zhang L, Wang X, Yang D, Sanford T, Harmon S, Turkbey B, et al. Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation. IEEE Trans Med Imaging. 2020;39(7):2531–2540.
7. Ma J, Ma HT, Li H, Ye C, Wu D, Tang X, et al. A fast atlas pre-selection procedure for multi-atlas based brain segmentation. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2015:3053–3056.
8. Li J, Yu ZL, Gu Z, Liu H, Li Y. MMAN: multi-modality aggregation network for brain segmentation from MR images. Neurocomputing. 2019;358:10–19.
9. Mendrik AM, Vincken KL, Kuijf HJ, Breeuwer M, Bouvy WH, de Bresser J, et al. MRBrainS challenge: online evaluation framework for brain image segmentation in 3T MRI scans. Comput Intell Neurosci. 2015;2015:813696.
10. Sigron GR, Rüedi N, Chammartin F, Meyer S, Msallem B, Kunz C, Thieringer FM. Three-dimensional analysis of isolated orbital floor fractures pre- and post-reconstruction with standard titanium meshes and "hybrid" patient-specific implants. J Clin Med. 2020;9(5):1579.
11. Saloniemi M, Lehtinen V, Snäll J. Computer-aided fracture size measurement in orbital fractures—an alternative to manual evaluation. Craniomaxillofac Trauma Reconstr. Published online October 7, 2020. doi:10.1177/1943387520962691
12. Snäll J, Narjus-Sterba M, Toivari M, Wilkman T, Thorén H. Does postoperative orbital volume predict postoperative globe malposition after blow-out fracture reconstruction? A 6-month clinical follow-up study. Oral Maxillofac Surg. 2019;23(1):27–34.
13. Kärkkäinen M, Wilkman T, Mesimäki K, Snäll J. Primary reconstruction of orbital fractures using patient-specific titanium milled implants: the Helsinki protocol. Br J Oral Maxillofac Surg. 2018;56(9):791–796.
14. Fenster A, Chiu B. Evaluation of segmentation algorithms for medical imaging. In: 2005 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2005:7186–7189.
15. Yeghiazaryan V, Voiculescu I. An overview of current evaluation methods used in medical image segmentation. No. RR-15-08. Department of Computer Science, University of Oxford; 2015. Accessed June 10, 2021. https://www.cs.ox.ac.uk/files/7732/CS-RR-15-08.pdf
16. Wang G, Li W, Ourselin S, Vercauteren T. Automatic brain tumor segmentation based on cascaded convolutional neural networks with uncertainty estimation. Front Comput Neurosci. 2019;13:56.
17. Naceur MB, Saouli R, Akil M, Kachouri R. Fully automatic brain tumor segmentation using end-to-end incremental deep neural networks in MRI images. Comput Methods Programs Biomed. 2018;166:39–49.
18. Sultan HH, Salem NM, Al-Atabany W. Multi-classification of brain tumor images using deep neural network. IEEE Access. 2019;7:69215–69225.
19. Cui S, Mao L, Jiang J, Liu C, Xiong S. Automatic semantic segmentation of brain gliomas from MRI images using a deep cascaded neural network. J Healthc Eng. 2018;2018:4940593.
20. Kamnitsas K, Ledig C, Newcombe VFJ, Simpson JP, Kane AD, Menon DK, et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal. 2017;36:61–78.
21. Zhuge Y, Krauze AV, Ning H, Cheng JY, Arora BC, Camphausen K, Miller RW. Brain tumor segmentation using holistically nested neural networks in MRI images. Med Phys. 2017;44(10):5234–5243.
22. Chen H, Qin Z, Ding Y, Tian L, Qin Z. Brain tumor segmentation with deep convolutional symmetric neural network. Neurocomputing. 2020;392:305–313.
23. Hussain S, Anwar SM, Majid M. Brain tumor segmentation using cascaded deep convolutional neural network. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2017:1998–2001.
24. Hussain S, Anwar SM, Majid M. Segmentation of glioma tumors in brain using deep convolutional neural network. Neurocomputing. 2018;282:248–261.
25. Yang T, Song J, Li L. A deep learning model integrating SK-TPCNN and random forests for brain tumor segmentation in MRI. Biocybern Biomed Eng. 2019;39(3):613–623.
26. Li H, Li A, Wang M. A novel end-to-end brain tumor segmentation method using improved fully convolutional networks. Comput Biol Med. 2019;108:150–160.
27. Zhao X, Wu Y, Song G, Li Z, Zhang Y, Fan Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal. 2018;43:98–111.
28. Badrigilan S, Nabavi S, Abin AA, Rostampour N, Abedi I, Shirvani A, Ebrahimi Moghaddam M. Deep learning approaches for automated classification and segmentation of head and neck cancers and brain tumors in magnetic resonance images: a meta-analysis study. Int J CARS. 2021;16(4):529–542.
29. Alqazzaz S, Sun X, Yang X, Nokes L. Automated brain tumor segmentation on multi-modal MR image using SegNet. Comput Vis Media. 2019;5(2):209–219.
30. Wu Y, Zhao Z, Wu W, Lin Y, Wang M. Automatic glioma segmentation based on adaptive superpixel. BMC Med Imaging. 2019;19(1):73.
31. Sezer S, Piai V, Kessels RPC, Ter Laan M. Information recall in pre-operative consultation for glioma surgery using actual size three-dimensional models. J Clin Med. 2020;9(11):3660.
32. van de Belt TH, Nijmeijer H, Grim D, Engelen LJLPG, Vreeken R, van Gelder MMHJ, Ter Laan M. Patient-specific actual-size three-dimensional printed models for patient education in glioma treatment: first experiences. World Neurosurg. 2018;117:e99–e105.
33. Wellens LM, Meulstee J, van de Ven CP, Terwisscha van Scheltinga CEJ, Littooij AS, van den Heuvel-Eibrink MM, et al. Comparison of 3-dimensional and augmented reality kidney models with conventional imaging data in the preoperative assessment of children with Wilms tumors. JAMA Netw Open. 2019;2(4):e192633.
34. Stadie AT, Kockro RA. Mono-stereo-autostereo: the evolution of 3-dimensional neurosurgical planning. Neurosurgery. 2013;72(suppl 1):63–77.
35. van Doormaal TPC, van Doormaal JAM, Mensink T. Clinical accuracy of holographic navigation using point-based registration on augmented-reality glasses. Oper Neurosurg (Hagerstown). 2019;17(6):588–593.
36. Fick T, van Doormaal JAM, Hoving EW, Regli L, van Doormaal TPC. Holographic patient tracking after bed movement for augmented reality neuronavigation using a head-mounted display. Acta Neurochir (Wien). 2021;163(4):879–884.