Neurosurgical Focus — Browse (items 81–90 of 4,829)
Free access

Accuracy of routine external ventricular drain placement following a mixed reality–guided twist-drill craniostomy

Sangjun Eom, Tiffany S. Ma, Neha Vutakuri, Tianyi Hu, Aden P. Haskell-Mendoza, David A. W. Sykes, Maria Gorlatova, and Joshua Jackson

OBJECTIVE

The traditional freehand placement of an external ventricular drain (EVD) relies on empirical craniometric landmarks to guide the craniostomy and subsequent passage of the EVD catheter. The diameter and trajectory of the craniostomy physically limit the possible trajectories that can be achieved during the passage of the catheter. In this study, the authors implemented a mixed reality–guided craniostomy procedure to evaluate the benefit of an optimally drilled craniostomy to the accurate placement of the catheter.

METHODS

Optical marker–based tracking using an OptiTrack system was used to register the brain ventricular hologram and drilling guidance for craniostomy using a HoloLens 2 mixed reality headset. A patient-specific 3D-printed skull phantom embedded with intracranial camera sensors was developed to automatically calculate the EVD accuracy for evaluation. User trials consisted of one blind and one mixed reality–assisted craniostomy followed by a routine, unguided EVD catheter placement for each of two different drill bit sizes.
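Registering tracked optical markers to a hologram, as described above, rests on paired-point rigid registration. The sketch below shows the underlying Kabsch/SVD method on hypothetical marker coordinates; it is an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    paired fiducial points src -> dst via the Kabsch/SVD method."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical marker positions (mm): apply a known pose, then recover it.
rng = np.random.default_rng(0)
markers = rng.uniform(-50, 50, size=(6, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
observed = markers @ R_true.T + t_true

R, t = rigid_register(markers, observed)
# Mean fiducial registration error; essentially zero for noise-free points.
fre = np.linalg.norm(markers @ R.T + t - observed, axis=1).mean()
print(round(fre, 6))
```

In practice the observed marker positions carry measurement noise, so the residual fiducial registration error is nonzero and is itself a useful accuracy metric.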

RESULTS

A total of 49 participants were included in the study (mean age 23.4 years, 59.2% female). The mean distance from the catheter target improved from 18.6 ± 12.5 mm to 12.7 ± 11.3 mm (p = 0.0008) using mixed reality guidance for trials with a large drill bit and from 19.3 ± 12.7 mm to 10.1 ± 8.4 mm with a small drill bit (p < 0.0001). Accuracy using mixed reality was improved using a smaller diameter drill bit compared with a larger bit (p = 0.039). Overall, the majority of the participants were positive about the helpfulness of mixed reality guidance and the overall mixed reality experience.

CONCLUSIONS

Appropriate indications and use cases for the application of mixed reality guidance to neurosurgical procedures remain an area of active inquiry. While prior studies have demonstrated the benefit of mixed reality–guided catheter placement using predrilled craniostomies, the authors demonstrate that real-time quantitative and visual feedback of a mixed reality–guided craniostomy procedure can independently improve procedural accuracy and represents an important tool for trainee education and eventual clinical implementation.

Free access

Assessing views and attitudes toward the use of extended reality and its implications in neurosurgical education: a survey of neurosurgical trainees

Nithin Gupta, Nikki M. Barrington, Nicholas Panico, Nolan J. Brown, Rohin Singh, Redi Rahmani, and Randy S. D’Amico

OBJECTIVE

Extended reality (XR) systems, including augmented reality (AR), virtual reality (VR), and mixed reality, have rapidly emerged as new technologies capable of changing the way neurosurgeons prepare for cases. Thus, the authors sought to evaluate the perspectives of neurosurgical trainees on the integration of these technologies into neurosurgical education.

METHODS

A 20-question cross-sectional survey was administered to neurosurgical residents and fellows to evaluate perceptions of the use of XR in neurosurgical training. Respondents evaluated each statement using a modified Likert scale (1–5).

RESULTS

One hundred sixteen responses were recorded, with 59.5% of participants completing more than 90% of the questions. Approximately 59% of participants reported having institutional access to XR technologies. The majority of XR users (72%) believed it was effective for simulating surgical situations, compared with only 41% for those who did not have access to XR. Most respondents (61%) agreed that XR could become a standard in neurosurgical education and a cost-effective training tool (60%). Creating patient-specific anatomical XR models was considered relatively easy by 56% of respondents. Those with XR access reported finding it easier to create intraoperative models (58%) than those without access. A significant percentage (79%) agreed on the need for technical skill training outside the operating room (OR), especially among those without XR access (82%). There was general agreement (60%) regarding the specific need for XR. XR was perceived as effectively simulating stress in the OR. Regarding clinical outcomes, 61% believed XR improved efficiency and safety and 48% agreed it enhanced resection margins. Major barriers to XR integration included lack of ample training hours and/or time to use XR amid daily clinical obligations (63%).

CONCLUSIONS

The data presented in this study indicate that there is broad agreement among neurosurgical trainees that XR holds potential as a training modality in neurosurgical education. Moreover, trainees who have access to XR technologies tend to hold more positive perceptions regarding the benefits of XR in their training. This finding suggests that the availability of XR resources can positively influence trainees’ attitudes and beliefs regarding the utility of these technologies in their education and training.

Free access

Combined use of 3D printing and mixed reality technology for neurosurgical training: getting ready for brain surgery

Sebastian Jeising, Shufang Liu, Timo Blaszczyk, Marion Rapp, Thomas Beez, Jan F. Cornelius, Michael Schwerter, and Michael Sabel

OBJECTIVE

Learning surgical skills is an essential part of neurosurgical training. Ideally, these skills are acquired to a sufficient extent in an ex vivo setting. The authors previously described an in vitro brain tumor model, consisting of a cadaveric animal brain injected with fluorescent agar-agar, for acquiring a wide range of basic neuro-oncological skills. This model focused on haptic skills such as safe tissue ablation technique and the training of fluorescence-based resection. As important didactic technologies such as mixed reality and 3D printing become more widely available, the authors developed an accessible training model that integrates these haptic aspects into a mixed reality setup.

METHODS

The anatomical structures of a brain tumor patient were segmented from medical imaging data to create a digital twin of the case. Bony structures were 3D printed and combined with the in vitro brain tumor model. The segmented structures were visualized in mixed reality headsets, and the congruence of the printed and the virtual objects allowed them to be spatially superimposed. In this way, users of the system were able to train on the entire treatment process from surgery planning to instrument preparation and execution of the surgery.

RESULTS

Mixed reality visualization in the combined model facilitated model (patient) positioning as well as planning of the craniotomy and extent of resection according to case-dependent specifications. The advanced physical model allowed brain tumor surgery training including skin incision; craniotomy; dural opening; fluorescence-guided tumor resection; and dura, bone, and skin closure.

CONCLUSIONS

Combining mixed reality visualization with the corresponding 3D printed physical hands-on model allowed advanced training of sequential brain tumor resection skills. Three-dimensional printing technology facilitates the production of a precise, reproducible, and worldwide accessible brain tumor surgery model. The described brain tumor resection model advances important aspects of skills training for neurosurgical residents (e.g., locating the lesion, head position planning, skull trepanation, dura opening, tissue ablation techniques, fluorescence-guided resection, and closure). Mixed reality enriches the model with important structures that are difficult to model (e.g., vessels and fiber tracts) and advanced interaction concepts (e.g., craniotomy simulations). Finally, this concept demonstrates a bridging technology toward intraoperative application of mixed reality.

Free access

Creation of a microsurgical neuroanatomy laboratory and virtual operating room: a preliminary study

Gökberk Erol, Abuzer Güngör, Umut Tan Sevgi, Beste Gülsuna, Yücel Doğruel, Hakan Emmez, and Uğur Türe

OBJECTIVE

A comprehensive understanding of microsurgical neuroanatomy, familiarity with the operating room environment, patient positioning in relation to the surgery, and knowledge of surgical approaches are crucial in neurosurgical education. However, challenges such as limited patient exposure, heightened patient safety concerns, a decreased availability of surgical cases during training, and difficulties in accessing cadavers and laboratories have adversely impacted this education. Three-dimensional (3D) models and augmented reality (AR) applications can be utilized to depict the cortical and white matter anatomy of the brain, create virtual models of patient surgical positions, and simulate the operating room and neuroanatomy laboratory environment. Herein, using a single application, the authors aimed to demonstrate the creation of 3D models of anatomical cadaver dissections, surgical approaches, patient surgical positions, and operating room and laboratory designs as alternative educational materials for neurosurgical training.

METHODS

A 3D modeling application (Scaniverse) was employed to generate 3D models of cadaveric brain specimens and surgical approaches using photogrammetry. It was also used to create virtual representations of the operating room and laboratory environment, as well as the surgical positions of patients, by utilizing light detection and ranging (LiDAR) sensor technology for accurate spatial mapping. These virtual models were then presented in AR for educational purposes.

RESULTS

Virtual representations in three dimensions were created to depict cadaver specimens, surgical approaches, patient surgical positions, and the operating room and laboratory environment. These models offer the flexibility of rotation and movement in various planes for improved visualization and understanding. The operating room and laboratory environment were rendered in three dimensions to create a simulation that could be navigated using AR and mixed reality technology. Realistic cadaveric models with intricate details were showcased on internet-based platforms and AR platforms for enhanced visualization and learning.

CONCLUSIONS

The utilization of this cost-effective, straightforward, and readily available approach to generate 3D models has the potential to enhance neuroanatomical and neurosurgical education. These digital models can be easily stored and shared via the internet, making them accessible to neurosurgeons worldwide for educational purposes.

Open access

An evaluation of physical and augmented patient-specific intracranial aneurysm simulators on microsurgical clipping performance and skills: a randomized controlled study

Philippe Dodier, Lorenzo Civilla, Ammar Mallouhi, Lukas Haider, Anna Cho, Philip Lederer, Wei-Te Wang, Christian Dorfer, Arthur Hosmann, Karl Rössler, Markus Königshofer, Ewald Unger, Maria-Chiara Palumbo, Alberto Redaelli, Josa M. Frischer, and Francesco Moscato

OBJECTIVE

In the era of flow diversion, there is an increasing demand to train neurosurgeons outside the operating room in safely performing clipping of unruptured intracranial aneurysms. This study introduces a clip training simulation platform for residents and aspiring cerebrovascular neurosurgeons, with the aim of visualizing peri-aneurysm anatomy and training virtual clip applications on the matching physical aneurysm cases.

METHODS

Novel, cost-efficient techniques allow the fabrication of realistic aneurysm phantom models and the additional integration of holographic augmented reality (AR) simulations. Specialists preselected suitable and unsuitable clips for each of the 5 patient-specific models, which were then used in a standardized protocol involving 9 resident participants. Participants underwent four sessions of clip applications on the models, receiving no interim training (control), a video review session (video), or a video review session and holographic clip simulation training (video + AR) between sessions 2 and 3. The study evaluated objective microsurgical skills, which included clip selection, number of clip applications, active simulation time, wrist tremor analysis during simulations, and occlusion efficacy. Aneurysm occlusions of the reference sessions were assessed by indocyanine green videoangiography, as well as conventional and photon-counting CT scans.

RESULTS

A total of 180 clipping procedures were performed without technical complications. The measurements of the active simulation times showed a 39% improvement for all participants. A median of 2 clip application attempts per case was required during the final session, with significant improvement observed in experienced residents (postgraduate year 5 or 6). Wrist tremor improved by 29% overall. The objectively assessed aneurysm occlusion rate (Raymond-Roy class 1) improved from 76% to 80% overall, even reaching 93% in the extensively trained cohort (video + AR) (p = 0.046).

CONCLUSIONS

The authors introduce a newly developed simulator training platform combining physical and holographic aneurysm clipping simulators. The development of exchangeable, aneurysm-comprising housings allows objective radio-anatomical evaluation through conventional and photon-counting CT scans. Measurable performance metrics serve to objectively document improvements in microsurgical skills and surgical confidence. Moreover, the different training levels enable a training program tailored to the cerebrovascular trainees’ levels of experience and needs.

Free access

IMAGINER: improving accuracy with a mixed reality navigation system during placement of external ventricular drains. A feasibility study

Ronny Grunert, Dirk Winkler, Johannes Wach, Fabian Kropla, Sebastian Scholz, Martin Vychopen, and Erdem Güresir

OBJECTIVE

The placement of a ventricular catheter, that is, an external ventricular drain (EVD), is a common and essential neurosurgical procedure. In addition, it is one of the first procedures performed by inexperienced neurosurgeons. With or without surgical experience, the placement of an EVD according to anatomical landmarks only can be difficult, with the potential risk for inaccurate catheter placement. Repeated corrections can lead to avoidable complications. The use of mixed reality could be a helpful guide and improve the accuracy of drain placement, especially in patients with acute pathology leading to the displacement of anatomical structures. Using a human cadaveric model in this feasibility study, the authors aimed to evaluate the accuracy of EVD placement by comparing two techniques: mixed reality and freehand placement.

METHODS

Twenty medical students performed the EVD placement procedure with a Cushing’s ventricular cannula on the right and left sides of the ventricular system. The cannula was placed according to landmarks on one side and with the assistance of mixed reality (Microsoft HoloLens 2) on the other side. With mixed reality, a planned trajectory was displayed in the field of view that guides the placement of the cannula. Subsequently, the actual position of the cannula was assessed with the help of a CT scan with a 1-mm slice thickness. The bony structure as well as the left and right cannula positions were registered to the CT scan with the planned target point before the placement procedure. CloudCompare software was applied for registration and evaluation of accuracy.

RESULTS

EVD placement using mixed reality was easily performed by all medical students. The predefined target point (inside the lateral ventricle) was reached with both techniques. However, the scattering radius of the target point reached through the use of mixed reality (12 mm) was reduced by approximately 54% compared with the puncture without mixed reality (26 mm), which represents a doubling of the puncture accuracy.
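The reported reduction and the "doubling of accuracy" follow directly from the two scatter radii given in the abstract, as a quick calculation shows:

```python
# Scatter radius of the target point with and without mixed reality (mm),
# taken from the abstract.
radius_freehand = 26.0
radius_mixed_reality = 12.0

# Relative reduction in scatter radius: 1 - 12/26 = 14/26.
reduction = 1 - radius_mixed_reality / radius_freehand
# Ratio of the two radii, i.e., the factor by which accuracy improved.
accuracy_gain = radius_freehand / radius_mixed_reality

print(f"{reduction:.1%}")       # prints 53.8%
print(f"{accuracy_gain:.2f}x")  # prints 2.17x, i.e., roughly a doubling
```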

CONCLUSIONS

This feasibility study specifically showed that the integration and use of mixed reality helps to achieve more than double the accuracy in the placement of ventricular catheters. Because of the easy availability of these new tools and their intuitive handling, we see great potential for mixed reality to improve accuracy.

Free access

Impact of augmented reality fiber tractography on the extent of resection and functional outcome of primary motor area tumors

Sabino Luzzi, Anna Simoncelli, and Renato Galzio

OBJECTIVE

This study aimed to evaluate the impact of augmented reality intraoperative fiber tractography (AR-iFT) on extent of resection (EOR), motor functional outcome, and survival of patients with primary motor area (M1) intra-axial malignant tumors.

METHODS

Data obtained from patients who underwent AR-iFT for M1 primary tumors were retrospectively analyzed and compared with those from a control group who underwent unaugmented reality intraoperative fiber tractography (unAR-iFT). A full asleep procedure with electrical stimulation mapping and fluorescein guidance was performed in both groups. The Neurological Assessment in Neuro-Oncology (NANO), Medical Research Council (MRC), and House-Brackmann grading systems were used for neurological, motor, and facial nerve assessment, respectively. Three-month postoperative NANO and MRC scores were used as outcome measures of the safety of the technique, whereas EOR and survival curves were related to its cytoreductive efficacy. In this study, p < 0.05 indicated statistical significance.

RESULTS

This study included 34 and 31 patients in the AR-iFT and unAR-iFT groups, respectively. The intraoperative seizure rate, 3-month postoperative NANO score, and 1-week and 1-month MRC scores were significantly (p < 0.05) different and in favor of the AR-iFT group. However, no difference was observed in the rate of complications. Glioma had incidence rates of 58.9% and 51.7% in the study and control groups, respectively, with no statistical difference. Metastasis had a slightly higher incidence rate in the control group, without statistical significance. The gross-total resection and near-total resection rates and the progression-free survival (PFS) rate were higher in the study group. Overall survival was not affected by the technique.

CONCLUSIONS

AR-iFT proved to be feasible, effective, and safe during surgery for M1 tumors and positively affected the EOR, intraoperative seizure rate, motor outcome, and PFS. Integration with electrical stimulation mapping is critical to achieve constant anatomo-functional intraoperative feedback. The accuracy of AR-iFT is intrinsically limited by diffusion tensor–based techniques, parallax error, and fiber tract crowding. Further studies are warranted to definitively validate the benefits of augmented reality navigation in this surgical scenario.

Free access

Improving mixed-reality neuronavigation with blue-green light: a comparative multimodal laboratory study

Salvatore Marrone, Gianluca Scalia, Lidia Strigari, Sruthi Ranganathan, Mario Travali, Rosario Maugeri, Roberta Costanzo, Lara Brunasso, Lapo Bonosi, Salvatore Cicero, Domenico Gerardo Iacopino, Maurizio Salvati, and Giuseppe Emanuele Umana

OBJECTIVE

This study aimed to rigorously assess the accuracy of mixed-reality neuronavigation (MRN) in comparison with magnetic neuronavigation (MN) through a comprehensive phantom-based experiment. It introduces a novel dimension by examining the influence of blue-green light (BGL) on MRN accuracy, a previously unexplored avenue in this domain.

METHODS

Twenty-nine phantoms, each marked with 5–6 fiducials, underwent CT scans as part of the navigation protocol. A 3D model was then superimposed onto a 3D-printed plaster skull using a semiautomatic registration process. The accuracy of both navigation techniques was evaluated by pinpointing specific markers on the plaster surface. Precise measurements were then taken using digital calipers, with navigation conducted under three distinct lighting conditions: indirect white light (referred to as no light [NL]), direct white light (WL), and BGL. Two operators with distinct levels of experience, one senior and one junior, were enlisted to ensure a comprehensive analysis. The study was structured into two distinct experiments (experiment 1 [MN] and experiment 2 [MRN]) conducted by the two operators. Data analysis focused on calculating average and median values within subgroups, considering variables such as the type of lighting, precision, and recording time.
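The subgroup averaging described here amounts to a simple aggregation of error measurements keyed by operator and lighting condition. A minimal sketch, with condition labels from the abstract but entirely hypothetical numbers:

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical placement errors (mm) keyed by (operator, lighting condition);
# NL = indirect white light, WL = direct white light, BGL = blue-green light.
measurements = [
    ("senior", "NL", 2.1), ("senior", "WL", 1.8), ("senior", "BGL", 1.2),
    ("senior", "BGL", 1.4), ("junior", "NL", 3.0), ("junior", "WL", 2.6),
    ("junior", "BGL", 1.9), ("junior", "BGL", 2.1),
]

# Group the raw errors by (operator, lighting) subgroup.
by_group = defaultdict(list)
for operator, light, err in measurements:
    by_group[(operator, light)].append(err)

# Average and median per subgroup, as in the described analysis.
summary = {k: (round(mean(v), 2), round(median(v), 2))
           for k, v in sorted(by_group.items())}
for (operator, light), (avg, med) in summary.items():
    print(f"{operator}/{light}: mean {avg} mm, median {med} mm")
```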

RESULTS

In experiment 1, no statistically significant differences emerged between the two operators. However, in experiment 2, notable disparities became apparent, with the senior operator recording longer times but achieving higher precision. Most significantly, BGL consistently demonstrated a capacity to enhance accuracy in MRN across both experiments.

CONCLUSIONS

This study demonstrated the substantial positive influence of BGL on MRN accuracy, with important implications for the design and implementation of mixed-reality systems. It also emphasized that integrating BGL into mixed-reality environments could markedly improve user experience and performance. Further research is essential to validate these findings in real-world settings and explore the broader potential of BGL in a variety of mixed-reality applications.

Free access

Innovations in craniovertebral junction training: harnessing the power of mixed reality and head-mounted displays

Akshay Ganeshkumar, Varidh Katiyar, Prachi Singh, Ravi Sharma, Amol Raheja, Kanwaljeet Garg, Shashwat Mishra, Vivek Tandon, Ajay Garg, Franco Servadei, and Shashank Sharad Kale

OBJECTIVE

The objective of this study was to analyze the potential and convenience of using mixed reality as a teaching tool for craniovertebral junction (CVJ) anomaly pathoanatomy.

METHODS

CT and CT angiography images of 2 patients with CVJ anomalies were used to construct mixed reality models in the HoloMedicine application on the HoloLens 2 headset, resulting in four viewing stations. Twenty-two participants were randomly allocated into two groups, with each participant rotating through all stations for 90 seconds, each in a different order based on their group. At every station, objective questions evaluating the understanding of CVJ pathoanatomy were answered. At the end, subjective opinion on the user experience of mixed reality was provided using a 5-point Likert scale. The objective performance of the two viewing modes was compared, and a correlation between performance and participant experience was sought. Subjective feedback was compiled and correlated with experience.

RESULTS

In both groups, there was a significant improvement in median (interquartile range [IQR]) objective performance with mixed reality compared with DICOM: 1) group A: case 1, median 6 (IQR 6–7) versus 5 (IQR 3–6), p = 0.009; case 2, median 6 (IQR 6–7) versus 5 (IQR 3–6), p = 0.02; 2) group B: case 1, median 6 (IQR 5–7) versus 4 (IQR 2–5), p = 0.04; case 2, median 6 (IQR 6–7) versus 5 (IQR 3–7), p = 0.03. There was significantly higher improvement in less experienced participants in both groups for both cases: 1) group A: case 1, r = −0.8665, p = 0.0005; case 2, r = −0.8002, p = 0.03; 2) group B: case 1, r = −0.6977, p = 0.01; case 2, r = −0.7417, p = 0.009. Subjectively, mixed reality was easy to use, with less disorientation due to the visible background, and it was believed to be a useful teaching tool.
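Median and interquartile-range summaries of the kind reported above can be computed with the Python standard library; the scores below are illustrative, not the study data:

```python
import statistics

# Hypothetical objective scores for one case, DICOM vs mixed reality viewing.
dicom_scores = [3, 4, 5, 5, 5, 6, 6, 3, 4, 5, 6]
mixed_reality_scores = [5, 6, 6, 6, 7, 7, 6, 6, 5, 7, 6]

def median_iqr(scores):
    """Return (median, (Q1, Q3)) using quartile cut points."""
    q1, q2, q3 = statistics.quantiles(scores, n=4)
    return q2, (q1, q3)

for label, scores in [("DICOM", dicom_scores),
                      ("mixed reality", mixed_reality_scores)]:
    med, (q1, q3) = median_iqr(scores)
    print(f"{label}: median {med} (IQR {q1}-{q3})")
```

Note that `statistics.quantiles` defaults to the "exclusive" method; different quartile conventions can shift the IQR bounds slightly for small samples.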

CONCLUSIONS

Mixed reality is an effective teaching tool for CVJ pathoanatomy, particularly for young neurosurgeons and trainees. The versatility of mixed reality and the intuitiveness of the user experience offer many potential applications, including training, intraoperative guidance, patient counseling, and individualized medicine; consequently, mixed reality has the potential to transform neurosurgery.

Free access

Intraoperative mixed-reality spinal neuronavigation system: a novel navigation technique for spinal intradural pathologies

Kadri Emre Caliskan, Gorkem Yavas, and Mehmet Sedat Cagli

OBJECTIVE

The objective of this study was to assess the intraoperative accuracy and feasibility of 3D-printed marker-based mixed-reality neurosurgical navigation for spinal intradural pathologies.

METHODS

The authors produced 3D segmentations of spinal intradural tumors with neighboring structures by using combined CT and MRI, and preoperative registration of pathology and markers was successfully performed. A patient-specific, surgeon-facilitated application for mobile devices was built, and a mixed-reality light detection and ranging (LIDAR) camera on a mobile device was employed for cost-effective, high-accuracy spinal neuronavigation.

RESULTS

Mobile device LIDAR cameras can successfully overlay images of virtual tumor segmentations according to the position of a 3D-printed marker. The surgeon can visualize and manipulate 3D segmentations of the pathology intraoperatively in 3D.

CONCLUSIONS

A 3D-printed marker-based mixed-reality spinal neuronavigation technique was performed in spinal intradural pathology procedures and has potential to be clinically feasible and easy to use for surgeons, as well as being time-saving, cost-effective, and highly precise for spinal surgical procedures.