Neurosurgery is a specialized field in which errors cannot be tolerated. The ideal way to pursue perfection in surgery is to study anatomy and practice surgical procedures on cadaver specimens. However, the shortage of donors and the difficulty of processing and preserving cadavers make them less accessible.1
Education using virtual reality (VR) has gradually been introduced in the field of neurosurgery.2 The shift to remote learning during the COVID-19 pandemic has further accelerated this trend. With VR, education can continue unconstrained by time and place, and by interacting with a 3D model in an immersive environment, trainees can understand 3D positional relationships of the surgical anatomy that are difficult to comprehend with existing 2D educational resources. VR-assisted surgical planning with a patient-specific model provides a preview of the surgical approach and allows the surgeon to rehearse the surgery through simulation.
Until now, 3D models have been created by segmenting CT or MRI data and then applying surface or volume rendering.3 Because medical images carry no natural color, the 3D models derived from them also lack color and must be textured manually to give a realistic impression.4 Moreover, owing to the limited resolution of CT and MRI, structures such as fine blood vessels, cranial nerves, and arachnoid membranes cannot be reproduced.
To overcome this challenge, we created a 3D model from a silicone-injected cadaver by using a photogrammetric method.5,6 Through this method, a 3D model with the same color as the real specimen could be obtained, and this was combined with the 3D data obtained from CT scans to create a complete head model. The 3D models we have made so far have been uploaded to www.neurosurgery3d.com and can be viewed for free.
Here, we introduce a method of transferring a 3D model obtained from actual photographs of a cadaver to a VR platform and give examples of its use. We interacted with a photographic 3D model through a head-mounted display (HMD) and used it to perform a virtual surgery, which was demonstrated to neurosurgery residents. We collected feedback from the residents and confirmed the educational effects of this new technology. All the software used to make this 3D model and to perform virtual surgery, as described in this paper, is available free of cost.
Methods
Cadaver Preparation and Taking Photographs
The cadaveric study committee and the institutional review board of Yonsei University College of Medicine approved this study. One adult cadaver head provided by the Surgical Anatomy Education Center of Yonsei University College of Medicine was used.
The preparation method for the specimen has been described previously.7,8 The cadaver head was dissected and photographed layer by layer (Fig. 1). First, the calvaria was removed by craniotomy. Then, the dura mater was removed, the cerebral hemisphere was taken out, the tentorium was resected, the brainstem was removed, the wall of the cavernous sinus was peeled away, anterior clinoidectomy was performed on the right side, and finally, anterior and posterior petrosectomy were performed on the right side. At every step, a photogrammetric 3D scan was performed. We have described the detailed method of photogrammetric 3D scanning of anatomical specimens elsewhere (our unpublished data, 2021). Briefly, more than 100 photographs of each model are taken from as many different angles as possible so that no blind spots remain. The photographs are imported into the photogrammetry software Autodesk ReCap Photo (Autodesk, Inc.), which is available free of cost for educational purposes (Fig. 2). The reconstructed 3D models are then trimmed and exported as .fbx files.
Materials used to fabricate the photographic 3D model. The cadaver specimen was dissected layer by layer and photographed from various angles so that no blind spots remained. More than 100 photographs were taken for each model. A: The calvaria has been removed. B: The dura has been removed. C: Cerebral hemisphere. D: Skull base dura exposed with falx cerebri and tentorium. E: Dura of the right temporal base has been peeled away. F: Brainstem and cerebellum.
Photographic 3D model generation. Multiple photos were uploaded for 3D mesh generation, using Autodesk ReCap Photo software (upper). Colored 3D mesh models were generated and trimmed (lower).
Segmentation and 3D Reconstruction
CT scanning of the cadaver head was performed. The medical images were imported into a free medical image processing software program, 3D Slicer (http://www.slicer.org/; Harvard University).9 First the soft tissue and then the skull were segmented using a thresholding method (Fig. 3). The segmented 3D models were exported as .stl files.
The skull and skin 3D models were segmented from the CT scan with the 3D Slicer software.
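The thresholding-and-export step can also be scripted. We used the 3D Slicer interface; purely as an illustration of the underlying operation, the following minimal sketch uses SimpleITK, scikit-image, and trimesh instead, with a hypothetical file name and the HU > 500 bone threshold cited in Fig. 5.

```python
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

# Load the CT volume (file name is hypothetical)
ct = sitk.ReadImage("cadaver_head_ct.nrrd")
hu = sitk.GetArrayFromImage(ct)  # HU values; array axes are (z, y, x)

# Threshold bone; HU > 500 follows the bone threshold cited in Fig. 5
bone = (hu > 500).astype(np.uint8)

# Extract a surface mesh with marching cubes, scaled to physical voxel spacing
spacing = ct.GetSpacing()[::-1]  # SimpleITK reports (x, y, z); array is (z, y, x)
verts, faces, _, _ = measure.marching_cubes(bone, level=0.5, spacing=spacing)

# Export the segmented surface as an .stl file, as in the workflow above
trimesh.Trimesh(vertices=verts, faces=faces).export("skull.stl")
```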
Alignment of 3D Models
The models were imported into a free 3D modeling software program, Blender (www.blender.org). The models were aligned by their overlapping surfaces using an iterative closest point algorithm.10 The aligned models were exported in their original formats.
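The alignment itself was performed in Blender; as a sketch of the iterative closest point step alone, the snippet below uses the open-source Open3D library, with assumed file names and an assumed 2-mm correspondence threshold.

```python
import numpy as np
import open3d as o3d

# Load the photogrammetric mesh and the CT-derived skull mesh (hypothetical names)
source_mesh = o3d.io.read_triangle_mesh("photogrammetry_model.obj")
target_mesh = o3d.io.read_triangle_mesh("ct_skull.stl")

# ICP in Open3D operates on point clouds, so sample points from each surface
source = source_mesh.sample_points_uniformly(number_of_points=50000)
target = target_mesh.sample_points_uniformly(number_of_points=50000)

# Point-to-point ICP; 2.0 assumes the meshes are expressed in millimeters
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=2.0,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Apply the recovered rigid transform to bring the photo model into the CT frame
source_mesh.transform(result.transformation)
o3d.io.write_triangle_mesh("photogrammetry_aligned.obj", source_mesh)
```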
Importing 3D Models Into VR Application
VIDEO 1. Photograph-integrated VR demonstration. The methods of making photographic 3D models and performing virtual surgery are demonstrated. Copyright Tae Hoon Roh. Published with permission.
The models were imported into the VR application Adobe Medium, either as a reference mesh (noneditable triangles) or as clay (an editable chunk). Sheetlike structures such as the skull base, tentorium, and dura mater were imported as meshes, whereas solid structures such as the cerebrum, cerebellum, and brainstem were imported as clay. The 3D models derived from CT scans (skull and skin), which do not contain natural colors, were painted with specific colors in the application.
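A minimal sketch of this import rule, with illustrative layer names and assumed RGB paint values (the actual assignment was done interactively in the application):

```python
# Sheetlike structures are imported as reference meshes (noneditable),
# solid structures as clay (editable); the names here are illustrative
IMPORT_MODE = {
    "skull_base": "mesh", "tentorium": "mesh", "dura_mater": "mesh",
    "cerebrum": "clay", "cerebellum": "clay", "brainstem": "clay",
}

# CT-derived models carry no natural color, so a color is assigned at import
# (RGB values are assumptions, not the ones used in the study)
PAINT_COLOR = {"skull": (0.93, 0.89, 0.80), "skin": (0.87, 0.68, 0.58)}
```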
Virtual Surgery With Photographic 3D Models
Operators put on the HMD and interacted with the models by using Oculus Touch controllers. Each 3D model was placed in an independent layer when imported (Fig. 4). With the clay tool, clay was added to or erased from the selected layer. The skin incision was made by using the erase function of the clay tool. The craniotomy was mimicked by selecting the skull layer and erasing it with the clay tool. When the skin and the skull were virtually resected, the underlying dura mater was exposed with its natural color. By turning off the visibility of the dura layer, the underlying brain was seen. Brain retraction was mimicked by moving the brain away from the operative field with the move or flatten tools. The cavernous sinus was visualized by switching off the visibility of the temporal dura.
Segmented models and photographic models were imported into the VR application Adobe Medium (upper). Each model was imported into a separate layer. Visibility and editability can be toggled in the control panel. Virtual surgery was performed with the HMD and controllers (lower).
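The internals of Medium's clay tool are proprietary; conceptually, however, an erase clears the voxels under the brush. A toy sketch of such a spherical erase on a boolean voxel layer (array sizes and coordinates are arbitrary):

```python
import numpy as np

def erase_sphere(clay: np.ndarray, center: tuple, radius: float) -> np.ndarray:
    """Toy clay-tool erase: clear voxels that fall within the brush sphere."""
    z, y, x = np.ogrid[:clay.shape[0], :clay.shape[1], :clay.shape[2]]
    cz, cy, cx = center
    brush = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    return clay & ~brush

# e.g., mimicking a burr hole: erase a spherical bite from a toy "skull" layer
skull = np.ones((64, 64, 64), dtype=bool)
skull = erase_sphere(skull, center=(32, 32, 10), radius=6)
```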
The virtual surgery was demonstrated to the neurosurgical residents of Ajou University Hospital and Yonsei University Severance Hospital. A pterional approach, a lateral suboccipital approach, and anterior and posterior petrosal approaches were performed. The residents then experienced hands-on virtual surgery, after which they completed a feedback survey. To clarify the difference between the conventional and photographic models, representative images of each model were presented (Fig. 5).
Comparison of a conventional 3D model, generated through segmentation of the CT scan, and a photographic 3D model. Left: A skull base model segmented from the CT scan. Bony structures are well segmented with a threshold of HU > 500; however, dural structures and nerves entering the foramina are difficult to visualize on a CT scan. Right: A skull base model made by the photogrammetric method preserves the real color of the specimen.
Statistical Analysis
Descriptive statistics were analyzed using Google Sheets software. Data were computed and analyzed using the IBM SPSS statistical package, version 25.0 (IBM Corp.). Data are reported as the mean and SD. Cronbach's alpha was calculated to assess the internal consistency and reliability of the measurements. Wilcoxon's signed-rank test was used for nonparametric comparisons.
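For reference, both measures are straightforward to compute; the sketch below shows Cronbach's alpha from its definition and a paired Wilcoxon signed-rank test with SciPy, applied to made-up example scores (not the study data).

```python
import numpy as np
from scipy.stats import wilcoxon

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up paired ratings (1-5): each respondent scores both model types
segmented = np.array([3, 2, 4, 3, 2, 4, 3, 3])
photographic = np.array([4, 4, 5, 4, 3, 5, 4, 5])
stat, p = wilcoxon(segmented, photographic)  # paired nonparametric comparison
```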
Results
Surgical Approaches
Various surgical approaches were successfully performed with the 3D models (Fig. 6). The pterion was identified by the suture lines on the skull. During the pterional approach, the sylvian vein and fissure were identified. After retraction of the base of the frontal lobe, the optic nerve, right internal carotid artery, and the oculomotor nerve entering the cavernous sinus were identified, as was the falciform ligament. When the dura mater was removed, the anterior clinoid process was identified and drilled out.
Examples of various surgical approaches performed with virtual cadavers. A: Pterional craniotomy was performed. B: Internal view seen through the pterional craniotomy. The optic nerve entering the optic canal, oculomotor nerve, internal carotid artery (ICA), anterior cerebral artery (ACA), and posterior communicating artery (P.Com.A.) are visualized. The arachnoid membrane of the oculomotor cistern is also visible. C: Anterior petrosal approach. The dura of the temporal base was peeled away (visibility turned off). The anterior clinoid process was drilled out. The cavernous ICA was exposed. The Kawase triangle was drilled out. D: Posterior petrosal approach. Mastoidectomy was performed to expose the semicircular canals. The sigmoid and transverse sinuses were identified by their color. CN = cranial nerve; n. = nerve; T = temporal lobe.
The lateral suboccipital approach was performed on the right side. The asterion was also identifiable on the skull. The craniectomy was extended to partially expose the transverse and sigmoid sinuses. The dura was opened, and the 5th–12th cranial nerves were identified after medial retraction of the cerebellum. The posterior inferior cerebellar artery, anterior inferior cerebellar artery, superior cerebellar artery, basilar artery, and vertebral artery were identified. On the ventral side of the cerebellum, the flocculus and the choroid plexus exiting the foramen of Luschka were observed.
Then, the temporal craniotomy was extended to the base of the temporal bone. On the temporal skull base, the foramen spinosum, arcuate eminence, facial hiatus, and greater superficial petrosal nerve (GSPN) were identified. The lateral wall of the cavernous sinus was peeled away. The three branches of the trigeminal nerve and the internal carotid artery passing through the cavernous sinus were visualized. Lateral anterior petrosectomy was successfully performed. The posterior fossa dura was identified and opened. The ventral side of the brainstem and the 2nd–12th cranial nerves were visualized.
Finally, mastoidectomy was performed as part of the posterior petrosal approach. The lateral, posterior, and superior semicircular canals were identified. Upon drilling out the mastoid bone, the fallopian canal, superior petrosal sinus, and jugular bulb were exposed. After removal of the posterior fossa dura and the tentorium, the ventrolateral side of the brainstem was accessed.
Feedback From the Participants
Overall, 31 neurosurgery residents completed the survey. The questionnaire items and the answers are summarized in Table 1. The photographic 3D model scored higher than the medical image–derived 3D model (4.3 ± 0.8 vs 3.2 ± 1.1, p = 0.001). In addition, 30 (96.8%) respondents preferred the photographic 3D model for virtual surgery.
Responses for questionnaires about photographic 3D models used in a VR program
Question | 1: Strongly Disagree | 2: Disagree | 3: Neutral | 4: Agree | 5: Strongly Agree
---|---|---|---|---|---
The segmented 3D model reproduced precise anatomy well. | 2 (6.5) | 7 (22.6) | 6 (19.4) | 14 (45.2) |
The photographic 3D model reproduced precise anatomy well. | 1 (3.2) | 4 (12.9) | 11 (35.5) | |
Do you think performing virtual surgery will help in performing actual surgery? | 3 (9.7) | 1 (3.2) | 10 (32.3) | |
Do you think performing virtual surgery will help in studying a new surgical approach? | 1 (3.2) | 1 (3.2) | 11 (35.5) | |
Do you think showing photographic 3D models is better for virtual surgery? | 1 (3.2) | 11 (35.5) | | |
Values in parentheses represent percentage of responses.
Additional comments included: “the photographic 3D models are fantastic,” “the resolution of the photographic 3D models is blurry,” “I would like to select other surgical instruments like drills,” “I think a good operative video is more helpful than this,” and “I want stable view like in the microscope.”
Discussion
Due to the worldwide coronavirus pandemic, the hours dedicated to the education and training of residents have decreased.11 Residents' opportunities to attend surgeries have also been delayed, creating a pressing need for educational materials that can substitute for actual patients. Amid these demands, education using VR is booming,12 but virtual surgery has rarely been reproduced in detail.
Methods of 3D reconstruction of anatomical structures from medical images are divided into surface-rendering and volume-rendering techniques.13 Surface rendering displays a specific structure after segmentation.14 Although segmentation takes considerable effort and time, surface rendering is commonly used because it is compatible with limited computing resources. The limitation of this method is that it can reproduce only what is visible on CT or MRI. Because the voxel size of CT or MRI as used in clinical practice is about 1 mm, a structure smaller than this cannot be segmented from a medical image. It is also difficult to segment a structure whose density or signal intensity is not well distinguished from its surroundings.
In volume rendering, by contrast, a CT or MRI volume is displayed in 3D space with its continuous intensity values. By adjusting the level and width of the display window, internal structures can be revealed. Volume rendering allows more lifelike visualization than surface rendering.15 Its disadvantage is the heavy demand on computing power: even a computer with the latest specifications can be overwhelmed when displaying a volume-rendered model.
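The level/width adjustment mentioned above is a simple linear mapping of intensities to display values; a minimal sketch follows (the window values are assumptions):

```python
import numpy as np

def window_level(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map HU values into [0, 1] display intensities for a given window."""
    lo = level - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

# e.g., an assumed bone window: level 300 HU, width 1500 HU
# display = window_level(ct_volume, level=300.0, width=1500.0)
```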
VR has been implemented for surgical planning and anatomy education in the field of neurosurgery.12,16–18 The Dextroscope was the first commercialized system capable of simulating neurosurgery in VR.19,20 By integrating volume- and surface-rendering techniques, the surgery could be accurately rehearsed. However, because special hardware was required and computing power was limited at the time, the high entry barrier kept it from being widely adopted.
Long after the Dextroscope was invented, the era of the metaverse began and VR equipment became affordable. Many applications have been developed for surgical simulation, such as Surgical Theater and ImmersiveTouch.2,12 However, none of these used photographs of a real human body.
The limitation of the existing 3D models is that only what can be seen on CT or MRI can be reproduced. For example, the GSPN emerging from the facial hiatus or the perforating arteries running from the basilar artery to the brainstem are difficult to depict.
As described in the Neurosurgical Atlas by Aaron Cohen-Gadol (www.neurosurgicalatlas.com),4,21 the predicted surgical path can be drawn through precise 3D modeling, which is very useful for educational purposes, but it cannot replace actual photographs. Moreover, the 3D modeling and texturing are done not by neurosurgeons directly but by 3D graphics experts, which is why anatomically accurate models are difficult to find.
To overcome this challenge, we created a 3D structure by using photogrammetry and confirmed that it is consistent with the 3D model reconstructed from CT. In VR, the GSPN of the temporal base could be identified, and the internal structure of the cavernous sinus could also be seen. To our knowledge, no other VR program displays these structures in their actual color.
3D modeling using photogrammetry has the advantage of showing, in their actual color, structures that are difficult to demonstrate with CT or MRI. Structures optimally suited to photogrammetric 3D modeling include the brain surface, dura, bone surface, tentorium, falx cerebri, cranial nerves, and choroid plexus.6 On the other hand, structures such as the skull that can be easily segmented from CT are better modeled from medical images. Therefore, integrating both methods could overcome the shortcomings of each.
Additionally, we implemented these concepts with freely available software. Even highly advanced equipment such as the Dextroscope is of little use if it is too expensive to access. VR could be helpful for surgeons; however, it is still difficult for clinicians to bear the cost of VR equipment because the patient does not pay for it. It is also possible to perform patient-specific preoperative simulations by applying our method to the patient's medical images. The core concept of this method is to create a separate layer for each segment, such as skin, bone, and brain, so that only the selected layer can be modified during simulation.
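A minimal sketch of this layering concept, with illustrative names (the actual application manages its layers internally):

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """One anatomical segment in the simulation (names are illustrative)."""
    name: str
    visible: bool = True
    editable: bool = False

# One layer per segment; only the selected layer accepts clay edits
layers = {name: Layer(name) for name in ("skin", "skull", "dura", "brain")}

def select_for_editing(target: str) -> None:
    """Lock every layer except the one currently being operated on."""
    for layer in layers.values():
        layer.editable = (layer.name == target)

select_for_editing("skull")  # e.g., before mimicking a craniotomy
```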
The neurosurgical trainees who participated in the VR experience generally responded favorably. Most respondents said that being able to see a picture in 3 dimensions improved their understanding of the anatomy. Negative responses mostly cited the blurriness and lower resolution of the picture when seen through VR; this aspect should improve as computer performance improves. In addition, there were requests for simulating the use of specific surgical instruments. We are planning to develop a dedicated application for neurosurgery simulation based on the concepts discussed here.
Limitations of This Study
A limitation of VR using an HMD is the lack of haptic feedback, which, in addition to anatomical knowledge, is important for honing surgical skills. A physical model obtained by 3D printing may serve as an alternative, but it has the disadvantage that direct dissection is not possible owing to the limitations of the materials used for 3D printing.
A limitation of our 3D model is that it contains no information inside the volume. The 3D model obtained by photogrammetry is a surface mesh, not volume data; it describes only a surface enclosing empty space and has zero volume. When this mesh is imported into Adobe Medium and converted into the clay format, the interior becomes solid and acquires a volume, but in this process the resolution decreases substantially. Also, because there is no color information for the interior, a cut surface shows an arbitrary color different from the real one. Our current workaround is to create a model with multiple layers, allow only the outer layer to be modified, and cut it away to reveal the inner structure; this method, however, limits the degrees of freedom of the simulation. In the future, it should be possible to create a surgical simulator with internal information by using cross-sectioned images of the human body, as in the Visible Human Project.22
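The surface-to-solid conversion and its resolution cost can be reproduced with the trimesh library; a sketch under an assumed file name and an assumed 1-mm voxel pitch:

```python
import trimesh

# Load a photogrammetric surface mesh (file name is hypothetical)
mesh = trimesh.load("brainstem.obj")

# Voxelize the hollow surface and fill the interior to make it solid,
# analogous to Medium's clay conversion; a coarser pitch means lower resolution
vox = mesh.voxelized(pitch=1.0).fill()
print(len(mesh.vertices), "surface vertices ->", int(vox.matrix.sum()), "solid voxels")
```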
As of now, the HMD must be linked to a computer equipped with a high-end CPU and GPU to run these applications; the hardware performance of stand-alone HMDs cannot yet keep up. However, as computing performance improves, virtual dissection will become feasible on inexpensive HMDs.
Future Directions
Adobe Medium, the application we used, was originally developed for sculpting, so its ability to reproduce a surgery is limited. This can be overcome through the development of dedicated applications in the future; this study was conducted to introduce a potential direction for virtual surgery. A dedicated application should include features that allow the operator to manipulate the patient's position during surgery, virtual tools that mimic surgical instruments, and virtual microscopes and endoscopes. It could incorporate features that cause bleeding when an artery is accidentally touched or sound an alarm on inadvertently touching a nerve. With physical properties incorporated, retraction of the brain or soft tissue, incision of the skin, and cutting of the dura could be imitated.
The development of dedicated hardware with haptic feedback is also expected to be useful. If a VR machine is built so that the object is observed through lenses like those of a surgical microscope, with interfaces such as handles and foot and mouth switches, it will help simulate surgery performed under the microscope. As an intermediate step, augmented reality, in which a photographic 3D model is projected onto a 3D-printed skull and observed through a surgical microscope, could be implemented.23
The ultimate direction for virtual surgical simulators is to import patient data such as MR or CT scans and convert them into 3D data, without any additional work, so that virtual simulation can be performed immediately.
Conclusions
We have introduced a technique that involves using pictures of a dissected cadaver as a resource to simulate VR surgery. Neurosurgery residents found this technique of neurosurgical simulation with photographic 3D models to be helpful. It could be used to further develop resources that could aid the education of neurosurgeons and medical students. We have also described the simulation of a virtual surgery that was performed using freely available tools. The technique described here can also be applied for patient-specific surgical simulation with the patient's medical images. We have presented a direction for the use of VR for neurosurgical simulation. We believe that it will enhance surgical performance, ultimately contributing to improved patient outcomes.
Acknowledgments
This work was supported by a National Research Foundation of Korea (NRF) grant (NRF-2019R1G1A1011569) and the Bio & Medical Technology Development Program of the NRF (NRF-2020M3A9E8024890) funded by the Korean government (Ministry of Science and ICT) (Dr. Roh).
We deeply appreciate Mr. Jun Ho Kim and Mr. Jong Ho Bang in the Surgical Anatomy Education Center of Yonsei University College of Medicine for their technical support. We also thank our mentor, Professor Emeritus Kyu Sung Lee.
Disclosures
The authors report no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper.
Author Contributions
Conception and design: Roh, EH Kim, Hong. Acquisition of data: Roh, Oh, Jang, Choi. Analysis and interpretation of data: Roh, Oh, EH Kim. Drafting the article: Roh. Critically revising the article: EH Kim. Reviewed submitted version of manuscript: all authors. Administrative/technical/material support: Roh, Hong. Study supervision: SH Kim, Roh.
Supplemental Information
Videos
Video 1. https://vimeo.com/559935408.
References
1. Habicht JL, Kiessling C, Winkelmann A. Bodies for anatomy education in medical schools: an overview of the sources of cadavers worldwide. Acad Med. 2018;93(9):1293–1300.
2. Davids J, Manivannan S, Darzi A, Giannarou S, Ashrafian H, Marcus HJ. Simulation for skills training in neurosurgery: a systematic review, meta-analysis, and analysis of progressive scholarly acceptance. Neurosurg Rev. Published online September 18, 2020. doi:10.1007/s10143-020-01378-0
3. Bücking TM, Hill ER, Robertson JL, et al. From medical imaging data to 3D printed anatomical models. PLoS One. 2017;12(5):e0178540.
4. Teton ZE, Freedman RS, Tomlinson SB, Linzey JR, Onyewuenyi A, Khahera AS, et al. The Neurosurgical Atlas: advancing neurosurgical education in the digital age. Neurosurg Focus. 2020;48(3):E17.
5. Dirven R, Hilgers FJM, Plooij JM, Maal TJ, Bergé SJ, Verkerke GJ, Marres HA. 3D stereophotogrammetry for the assessment of tracheostoma anatomy. Acta Otolaryngol. 2008;128(11):1248–1254.
6. Petriceks AH, Peterson AS, Angeles M, Brown WP, Srivastava S. Photogrammetry of human specimens: an innovation in anatomy education. J Med Educ Curric Dev. 2018;5:2382120518799356.
7. Kim EH, Yoo J, Jung IH, et al. Endoscopic transorbital approach to the insular region: cadaveric feasibility study and clinical application (SevEN-005). J Neurosurg. Published online January 22, 2021. doi:10.3171/2020.8.JNS202255
8. Lim J, Roh TH, Kim W, et al. Biportal endoscopic transorbital approach: a quantitative anatomical study and clinical application. Acta Neurochir (Wien). 2020;162(9):2119–2128.
9. Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin JC, Pujol S, et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30(9):1323–1341.
10. He Y, Liang B, Yang J, Li S, He J. An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features. Sensors (Basel). 2017;17(8):E1862.
11. Bambakidis NC, Tomei KL. Editorial. Impact of COVID-19 on neurosurgery resident training and education. J Neurosurg. 2020;133(2):10–11.
12. Zhao J, Xu X, Jiang H, Ding Y. The effectiveness of virtual reality-based technology on anatomy teaching: a meta-analysis of randomized controlled studies. BMC Med Educ. 2020;20(2):127.
13. Udupa JK, Hung HM, Chuang KS. Surface and volume rendering in three-dimensional imaging: a comparison. J Digit Imaging. 1991;4(3):159–168.
14. Pham DL, Xu C, Prince JL. Current methods in medical image segmentation. Annu Rev Biomed Eng. 2000;2(2):315–337.
15. Eid M, De Cecco CN, Nance JW Jr, Caruso D, Albrecht MH, Spandorfer AJ, et al. Cinematic rendering in CT: a novel, lifelike 3D visualization technique. AJR Am J Roentgenol. 2017;209(2):370–379.
16. Banerjee PP, Luciano CJ, Lemole GM Jr, Charbel FT, Oh MY. Accuracy of ventriculostomy catheter placement using a head- and hand-tracked high-resolution virtual reality simulator with haptic feedback. J Neurosurg. 2007;107(3):515–521.
17. Henn JS, Lemole GM Jr, Ferreira MAT, Gonzalez LF, Schornak M, Preul MC, Spetzler R. Interactive stereoscopic virtual reality: a new tool for neurosurgical education. Technical note. J Neurosurg. 2002;96(2):144–149.
18. Lemole GM Jr, Banerjee PP, Luciano C, Neckrysh S, Charbel FT. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery. 2007;61(2):142–149.
19. Kockro RA, Hwang PYK. Virtual temporal bone: an interactive 3-dimensional learning aid for cranial base surgery. Neurosurgery. 2009;64(5)(suppl 2):216–230.
20. Stadie AT, Kockro RA, Reisch R, Tropine A, Boor S, Stoeter P, Perneczky A. Virtual reality system for planning minimally invasive neurosurgery. Technical note. J Neurosurg. 2008;108(2):382–394.
21. Morone PJ, Shah KJ, Hendricks BK, Cohen-Gadol AA. Virtual, 3-dimensional temporal bone model and its educational value for neurosurgical trainees. World Neurosurg. 2019;122:e1412–e1415.
22. Chung BS, Chung MS, Park JS. Six walls of the cavernous sinus identified by sectioned images and three-dimensional models: anatomic report. World Neurosurg. 2015;84(2):337–344.
23. Haemmerli J, Davidovic A, Meling TR, Chavaz L, Schaller K, Bijlenga P. Evaluation of the precision of operative augmented reality compared to standard neuronavigation using a 3D-printed skull. Neurosurg Focus. 2021;50(2):E17.