Restricted access

Joseph Piatt

OBJECTIVE

Hydrocephalus is a chronic, treatable, but in most cases incurable condition characterized by long periods of stability punctuated by crises. Patients in crisis usually seek care in an emergency department (ED). How patients with hydrocephalus use EDs has received almost no epidemiological study.

METHODS

Data were taken from the National Emergency Department Survey for 2018. Visits by patients with hydrocephalus were identified by diagnostic codes. Neurosurgical visits were identified by codes for imaging of the brain or skull or by neurosurgical procedure codes. Visits and dispositions for neurosurgical and unspecified complaints were characterized by demographic factors using methods appropriate for complex survey designs. Associations among demographic factors were assessed using latent class analysis.

RESULTS

There were an estimated 204,785 ED visits by patients with hydrocephalus in the United States in 2018. Roughly 80% of patients with hydrocephalus who visited EDs were adults or elders. By a ratio of 2:1, patients with hydrocephalus visited EDs more often for unspecified reasons than for neurosurgical reasons. Patients with neurosurgical complaints had more costly ED visits, and if they were admitted they had longer and more costly hospitalizations than did patients with unspecified complaints. Only 1 in 3 patients with hydrocephalus who visited an ED was sent home, regardless of whether the complaint was neurosurgical. Neurosurgical visits ended in transfer to another acute care facility more than 3 times as often as unspecified visits. Odds of transfer were more strongly associated with geography, specifically with proximity to a teaching hospital, than with personal or community wealth.

CONCLUSIONS

Patients with hydrocephalus make heavy use of EDs, and they make more visits for reasons unrelated to their hydrocephalus than for neurosurgical reasons. Transfer to another acute care facility, which is much more common after neurosurgical visits, is both an adverse clinical outcome and a system inefficiency that might be minimized by proactive case management and coordination of care.

Restricted access

Sung Ho Lee, Won-Sang Cho, Hee Chang Lee, Hansan Oh, Jin Woo Bae, Young Hoon Choi, Jin Chul Paeng, Joonhyung Gil, Kangmin Kim, Hyun-Seung Kang, and Jeong Eun Kim

OBJECTIVE

Little is known about the relationship between postoperative changes in cerebral perfusion and the ivy sign representing leptomeningeal collateral burden in moyamoya disease (MMD). This study aimed to investigate the usefulness of the ivy sign in evaluating cerebral perfusion status following bypass surgery in patients with adult MMD.

METHODS

Two hundred thirty-three hemispheres in 192 patients with adult MMD undergoing combined bypass between 2010 and 2018 were retrospectively enrolled. The ivy sign was represented as the ivy score on FLAIR MRI in each territory of the anterior, middle, and posterior cerebral arteries. Ivy scores, as well as clinical status and hemodynamic state on SPECT, were semiquantitatively compared between the preoperative and 6-month postoperative assessments.

RESULTS

Clinical status improved at 6 months after surgery (p < 0.01). On average, ivy scores in the whole and individual territories were decreased at 6 months (all p values < 0.01). Cerebral blood flow (CBF) improved postoperatively in the individual vascular territories (all p values ≤ 0.03) except the posterior cerebral artery territory (PCAt), and cerebrovascular reserve (CVR) likewise improved (all p values ≤ 0.04) except in the PCAt. Postoperative changes in ivy scores and CBF were inversely correlated in all territories (p ≤ 0.02) except the PCAt. Furthermore, changes in ivy scores and CVR were correlated only in the posterior half of the middle cerebral artery territory (p = 0.01).

CONCLUSIONS

The ivy sign decreased significantly after bypass surgery, correlating well with postoperative hemodynamic improvement in the anterior circulation territories. The ivy sign appears to be a useful radiological marker for postoperative follow-up of cerebral perfusion status.

Free access

Vishwa Bharathi Gaonkar, Manbachan Singh, Sanjeev Srivastava, Pawan Goyal, Sanjay K. Rajan, and Aditya Gupta

Restricted access

Yilong Zheng, Jai Prashanth Rao, and Kai Rui Wan

Restricted access

Derya Karatas, Jaime L. Martínez Santos, Saygı Uygur, Ahmet Dagtekin, Zeliha Kurtoglu Olgunus, Emel Avci, and Mustafa K. Baskaya

OBJECTIVE

Opening the roof of the interhemispheric microsurgical corridor to access various neurooncological or neurovascular lesions can be demanding because of the multiple bridging veins that drain into the sinus with their highly variable, location-specific anatomy. The objective of this study was to propose a new classification system for these parasagittal bridging veins, which are herein described as being arranged in 3 configurations with 4 drainage routes.

METHODS

Twenty adult cadaveric heads (40 hemispheres) were examined. From this examination, the authors describe 3 types of configurations of the parasagittal bridging veins relative to specific anatomical landmarks (coronal suture, postcentral sulcus) and their drainage routes into the superior sagittal sinus, convexity dura, lacunae, and falx. They also quantify the relative incidence and extent of these anatomical variations and provide several preoperative, postoperative, and microneurosurgical clinical case study examples.

RESULTS

The authors describe 3 anatomical configurations for venous drainage, which improves on the 2 types that have been previously described. In type 1, a single vein joins the sinus; in type 2, 2 or more contiguous veins join at the same point; and in type 3, a venous complex joins at the same point. Anterior to the coronal suture, the most common configuration was type 1 dural drainage, occurring in 57% of hemispheres. Between the coronal suture and the postcentral sulcus, most veins (including 73% of superior anastomotic veins of Trolard) drain first into venous lacunae, which are larger and more numerous in this region. Posterior to the postcentral sulcus, the most common drainage route was through the falx.

CONCLUSIONS

The authors propose a systematic classification for the parasagittal venous network. Using anatomical landmarks, they define 3 venous configurations and 4 drainage routes. Analysis of these configurations with respect to surgical approaches indicates 2 highly risky interhemispheric fissure surgical routes. The risks are attributable to the presence of large lacunae that receive multiple veins (type 2) or venous complexes (type 3), configurations that limit a surgeon's working space and freedom of movement and thus predispose to inadvertent avulsions, bleeding, and venous thrombosis.

Restricted access

Sarthak Mohanty, Christopher Lai, Christopher Mikhail, Gabriella Greisberg, Fthimnir M. Hassan, Stephen R. Stephan, Zeeshan M. Sardar, Ronald A. Lehman Jr., and Lawrence G. Lenke

OBJECTIVE

The aim of this study was to discern whether patients with a cranial sagittal vertical axis to the hip (CrSVA-H) > 2 cm at 2 years postoperatively exhibit significantly worse patient-reported outcomes (PROs) and clinical outcomes compared with patients with CrSVA-H < 2 cm.

METHODS

This was a retrospective, 1:1 propensity score–matched (PSM) study of patients who underwent posterior spinal fusion for adult spinal deformity. All patients had a baseline sagittal imbalance of CrSVA-H > 30 mm. Two-year patient-reported and clinical outcomes were assessed in unmatched and PSM cohorts, including Scoliosis Research Society–22r (SRS-22r) and Oswestry Disability Index scores as well as reoperation rates. The study compared two cohorts based on 2-year alignment: CrSVA-H < 20 mm (aligned cohort) vs CrSVA-H > 20 mm (malaligned cohort). For the matched cohorts, binary outcome comparisons were carried out using the McNemar test, while continuous outcomes used the Wilcoxon rank-sum test. For unmatched cohorts, categorical variables were compared using chi-square/Fisher’s tests, while continuous outcomes were compared using Welch’s t-test.
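The matching-then-testing workflow described above can be sketched in Python. This is a minimal illustration on synthetic data, assuming logistic-regression propensity scores and greedy 1:1 nearest-neighbor matching; the variable names and numbers are illustrative, not drawn from the study:

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: covariates, group label (1 = malaligned), continuous outcome
n = 400
X = rng.normal(size=(n, 3))               # e.g., age, baseline CrSVA-H, levels fused
group = np.repeat([0, 1], [250, 150])     # 250 controls, 150 "treated"
outcome = X[:, 1] * 0.5 + group * 0.8 + rng.normal(size=n)

# 1) Propensity scores: logistic model of group membership on covariates
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

# 2) Greedy 1:1 nearest-neighbor matching on the propensity score
treated = np.flatnonzero(group == 1)
controls = list(np.flatnonzero(group == 0))
pairs = []
for t in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    controls.remove(j)                    # matching without replacement

# 3) Matched pairs call for a paired test: Wilcoxon signed-rank for a
#    continuous outcome (McNemar would be the analog for binary outcomes)
a = outcome[[t for t, _ in pairs]]
b = outcome[[c for _, c in pairs]]
stat, p = wilcoxon(a, b)
print(f"{len(pairs)} matched pairs, Wilcoxon p = {p:.4f}")
```

Matching makes the pairs dependent, which is why the abstract's paired tests (McNemar, Wilcoxon signed-rank) are used in the matched cohort while unpaired tests (chi-square/Fisher's, Welch's t) are reserved for the unmatched cohorts.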

RESULTS

A total of 156 patients with mean age of 63.7 (SEM 1.09) years underwent posterior spinal fusion spanning a mean of 13.5 (0.32) levels. At baseline, the mean pelvic incidence minus lumbar lordosis mismatch was 19.1° (2.01°), the T1 pelvic angle was 26.6° (1.20°), and the CrSVA-H was 74.9 (4.33) mm. The mean CrSVA-H improved from 74.9 mm to 29.2 mm (p < 0.0001). At the 2-year follow-up, 129 (78%) of 164 patients achieved CrSVA-H < 2 cm (aligned cohort). Patients who had CrSVA-H > 2 cm (malaligned cohort) at the 2-year follow-up had worse preoperative CrSVA-H (p < 0.0001). After performing PSM, 27 matched pairs were generated. In the PSM cohort, the aligned and malaligned cohorts demonstrated comparable preoperative PROs. However, at the 2-year postoperative follow-up, the malaligned cohort reported worse outcomes in SRS-22r function (p = 0.0275), pain (p = 0.0012), and mean total score (p = 0.0109). Moreover, when patients were stratified based on their magnitude of improvement in CrSVA-H (< 50% vs > 50%), patients with > 50% improvement in CrSVA-H had superior outcomes in SRS-22r function (p = 0.0336), pain (p = 0.0446), and mean total score (p = 0.0416). Finally, patients in the malaligned cohort had a higher 2-year reoperation rate (22% vs 7%; p = 0.0412) compared with patients in the aligned cohort.

CONCLUSIONS

Among patients who present with forward sagittal imbalance (CrSVA-H > 30 mm), patients with CrSVA-H exceeding 20 mm at the 2-year postoperative follow-up have inferior PROs and higher reoperation rates.

Restricted access

Leonardo Domínguez, Claudio Rivas-Palacios, Mario M. Barbosa, Maria Andrea Escobar, Elvira Puello Florez, and Ezequiel García-Ballestas

OBJECTIVE

Surgery is the cornerstone of craniosynostosis treatment. In this study, two widely accepted techniques are described: endoscope-assisted surgery (EAS) and open surgery (OS). The authors compared the perioperative and reconstructive outcomes of EAS and OS in children ≤ 6 months of age treated at the Napoleón Franco Pareja Children’s Hospital (Cartagena, Colombia).

METHODS

According to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement, patients meeting defined criteria who underwent surgery to correct craniosynostosis between June 1996 and June 2022 were retrospectively enrolled. Demographic data, perioperative outcomes, and follow-up were obtained from their medical records. Student t-tests were used to test significance. Cronbach's α was used to assess interobserver agreement in estimated blood loss (EBL). Spearman's correlation coefficient and the coefficient of determination were used to establish associations between the outcomes of interest, and the odds ratio was used to estimate the risk of blood product transfusion.
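Cronbach's α, used here for interobserver agreement, can be computed directly from its definition: α = k/(k − 1) × (1 − Σ(per-rater variances)/variance of totals), where k is the number of raters. A minimal sketch, with purely illustrative EBL values (not data from the study):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x raters) array:
    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()   # sum of per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of row totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Two raters' EBL estimates (mL) for the same 5 cases -- illustrative values only
ebl = np.array([[100, 110], [250, 240], [80, 95], [300, 310], [150, 140]])
print(round(cronbach_alpha(ebl), 3))
```

Values near 1 indicate that the raters' estimates rise and fall together, i.e., the "high interobserver agreement" reported in the results.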

RESULTS

A total of 74 patients met the inclusion criteria; 24 (32.4%) belonged to the OS group and 50 (67.6%) to the EAS group. There was high interobserver agreement in quantifying the EBL. EBL and transfusion of blood products were lower, and surgical time and hospital stay were shorter, in the EAS group. Surgical time was positively correlated with EBL. There were no differences between the two groups in the percentage of cranial index correction at 12 months of follow-up.

CONCLUSIONS

Surgical correction of craniosynostosis in children aged ≤ 6 months by EAS was associated with a significant decrease in EBL, transfusion requirements, surgical time, and hospital stay compared with OS. The results of cranial deformity correction in patients with scaphocephaly and acrocephaly were equivalent in both study groups.

Restricted access

Lea Baumgart, Sebastian Ille, Jan S. Kirschke, Bernhard Meyer, and Sandro M. Krieg

OBJECTIVE

Multiple solutions for navigation-guided pedicle screw placement are currently available. Intraoperative imaging techniques are invaluable for spinal surgery, but often there is little attention paid to patient radiation exposure. This study aimed to compare the applied radiation doses of sliding gantry CT (SGCT)– and mobile cone-beam CT (CBCT)–based pedicle screw placement for spinal instrumentation.

METHODS

The authors retrospectively analyzed 183 and 54 patients who underwent SGCT- or standard CBCT-based pedicle screw placement, respectively, for spinal instrumentation at their department between June 2019 and January 2020. SGCT uses an automated radiation dose adjustment.

RESULTS

Baseline characteristics, including the number of screws per patient and the number of instrumented levels, did not significantly differ between the two groups. Although the accuracy of screw placement according to the Gertzbein-Robbins classification did not differ between the two groups, more screws had to be revised intraoperatively in the CBCT group (SGCT 2.7% vs CBCT 6.0%, p = 0.0036). Mean (± SD) radiation doses for the first (SGCT 484.0 ± 201.1 vs CBCT 687.4 ± 188.5 mGy·cm, p < 0.0001), second (SGCT 515.8 ± 216.3 vs CBCT 658.3 ± 220.1 mGy·cm, p < 0.0001), third (SGCT 531.3 ± 237.5 vs CBCT 641.6 ± 177.3 mGy·cm, p = 0.0140), and total (SGCT 1216.9 ± 699.3 vs CBCT 2000.3 ± 921.0 mGy·cm, p < 0.0001) scans were significantly lower for SGCT. This was also true for radiation doses per scanned level (SGCT 461.9 ± 429.3 vs CBCT 1004.1 ± 905.1 mGy·cm, p < 0.0001) and radiation doses per screw (SGCT 172.6 ± 110.1 vs CBCT 349.6 ± 273.4 mGy·cm, p < 0.0001).

CONCLUSIONS

The applied radiation doses were significantly lower using SGCT for navigated pedicle screw placement in spinal instrumentation. A modern CT scanner on a sliding gantry leads to lower radiation doses, especially through automated 3D radiation dose adjustment.

Restricted access

Keita Shibahashi, Hiroyuki Ohbe, Hiroki Matsui, and Hideo Yasunaga

OBJECTIVE

Intracranial pressure (ICP) monitoring is recommended for the management of severe traumatic brain injury (TBI). The clinical benefit of ICP monitoring remains controversial, however, with randomized controlled trials showing negative results. Therefore, this study investigated the real-world impact of ICP monitoring in managing severe TBI.

METHODS

This observational study used the Japanese Diagnosis Procedure Combination inpatient database, a nationwide inpatient database, from July 1, 2010, to March 31, 2020. The study included patients aged 18 years or older who were admitted to an intensive care or high-dependency unit with a diagnosis of severe TBI. Patients who died or were discharged on the day of admission were excluded. Between-hospital differences in ICP monitoring were quantified using the median odds ratio (MOR). A one-to-one propensity score matching (PSM) analysis was conducted to compare patients who initiated ICP monitoring on the admission day with those who did not. Outcomes in the matched cohort were compared using mixed-effects linear regression analysis. Linear regression analysis was used to estimate interactions between ICP monitoring and the subgroups.
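The median odds ratio quantifies between-hospital variation: for a random-intercept logistic model with between-hospital variance σ², MOR = exp(√(2σ²) × Φ⁻¹(0.75)). A minimal sketch of this formula (the variance value below is illustrative, chosen only to show the scale of an MOR near the one reported, not taken from the study):

```python
from math import exp, sqrt
from statistics import NormalDist

def median_odds_ratio(sigma2: float) -> float:
    """MOR = exp(sqrt(2 * sigma^2) * Phi^-1(0.75)) for a random-intercept
    logistic model with between-cluster (hospital) variance sigma^2."""
    return exp(sqrt(2 * sigma2) * NormalDist().inv_cdf(0.75))

# An MOR of 1 means no between-hospital variation; larger values mean a
# patient's odds of receiving ICP monitoring depend heavily on the hospital.
print(median_odds_ratio(0.0))               # no variation -> MOR of exactly 1
print(round(median_odds_ratio(3.7), 1))     # a variance near 3.7 gives MOR ~6.3
```

Interpreted as in the abstract: for two otherwise identical patients at randomly chosen hospitals, the MOR is the median odds ratio of receiving ICP monitoring between the higher- and lower-propensity hospital.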

RESULTS

The analysis included 31,660 eligible patients from 765 hospitals. There was considerable variability in the use of ICP monitoring across hospitals (MOR 6.3, 95% confidence interval [CI] 5.7–7.1), with ICP monitoring used in 2165 patients (6.8%). PSM resulted in 1907 matched pairs with highly balanced covariates. ICP monitoring was associated with significantly lower in-hospital mortality (31.9% vs 39.1%, within-hospital difference −7.2%, 95% CI −10.3% to −4.2%) and longer length of hospital stay (median 35 vs 28 days, within-hospital difference 6.5 days, 95% CI 2.6–10.3). There was no significant difference in the proportion of patients with unfavorable outcomes (Barthel index < 60 or death) at discharge (80.3% vs 77.8%, within-hospital difference 2.1%, 95% CI −0.6% to 5.0%). Subgroup analyses demonstrated a quantitative interaction between ICP monitoring and the Japan Coma Scale (JCS) score for in-hospital mortality, with a greater risk reduction with higher JCS score (p = 0.033).

CONCLUSIONS

ICP monitoring was associated with lower in-hospital mortality in the real-world management of severe TBI. The results suggest that active ICP monitoring is associated with improved outcomes after TBI, while the indication for monitoring might be limited to the most severely ill patients.

Free access

Dooman Arefan, Matthew Pease, Shawn R. Eagle, David O. Okonkwo, and Shandong Wu

OBJECTIVE

An estimated 1.5 million people die every year worldwide from traumatic brain injury (TBI). Physicians are relatively poor at predicting long-term outcomes early in patients with severe TBI. Machine learning (ML) has shown promise at improving prediction models across a variety of neurological diseases. The authors sought to explore the following: 1) how various ML models performed compared to standard logistic regression techniques, and 2) if properly calibrated ML models could accurately predict outcomes up to 2 years posttrauma.

METHODS

A secondary analysis of a prospectively collected database of patients with severe TBI treated at a single level 1 trauma center between November 2002 and December 2018 was performed. Neurological outcomes were assessed at 3, 6, 12, and 24 months postinjury with the Glasgow Outcome Scale. The authors used ML models, including support vector machine, neural network, decision tree, and naïve Bayes classifiers, to predict outcome across all 4 time points by using clinical information available on admission, and they compared performance to a logistic regression model. The authors attempted to predict unfavorable versus favorable outcomes (Glasgow Outcome Scale scores of 1–3 vs 4–5), as well as mortality. Model performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) with 95% confidence interval and balanced accuracy.
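The model-comparison setup described above can be sketched with scikit-learn. This is a hedged illustration on synthetic data, not the authors' pipeline; hyperparameters are library defaults and the feature set is a stand-in for admission variables:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for admission features and a binary unfavorable-outcome label
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The model families named in the abstract, plus the logistic regression baseline
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}

# Fit each model and score it by AUC on held-out data
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```

In practice, as in the study, each model would be refit and scored once per follow-up time point, with confidence intervals (e.g., by bootstrapping) to support the statistical comparisons.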

RESULTS

Of the 599 patients in the database, the authors included 501, 537, 469, and 395 patients at 3, 6, 12, and 24 months posttrauma, respectively. Across all time points, the AUCs ranged from 0.71 to 0.85 for mortality and from 0.62 to 0.82 for unfavorable outcomes with the various modeling strategies. Decision tree models performed worse than all other modeling approaches at multiple time points for both unfavorable outcomes and mortality. There were no statistically significant differences between any other models. After proper calibration, the models showed little variation (0.02–0.05) across time points.

CONCLUSIONS

The ML models tested herein performed comparably to logistic regression techniques for prognostication in TBI. The TBI prognostication models could predict outcomes beyond 6 months, out to 2 years postinjury.