Minimally invasive endoscopic repair of refractory lateral skull base cerebrospinal fluid rhinorrhea: case report and review of the literature
Brandon Lucke-Wold, Erik C. Brown, Justin S. Cetas, Aclan Dogan, Sachin Gupta, Timothy E. Hullar, Timothy L. Smith, and Jeremy N. Ciporen
Cerebrospinal fluid (CSF) leaks occur in approximately 10% of patients undergoing a translabyrinthine, retrosigmoid, or middle fossa approach for vestibular schwannoma resection. CSF rhinorrhea also results from trauma, neoplasms, and congenital defects. Repair can be highly difficult and may require repeated microsurgical revisions; a revision rate of 10% of cases is often cited. Beyond the associated morbidity, these revisions are costly and burdensome to the health care system. In this case-based theoretical analysis, the authors summarize the literature on endoscopic endonasal techniques to obliterate the eustachian tube (ET) and compare endoscopic endonasal with open approaches for repair. On the basis of this analysis, they recommend endoscopic endonasal ET obliteration (EEETO) as a first- or second-line technique for repairing CSF rhinorrhea from a lateral skull base source that is refractory to spontaneous healing and CSF diversion. They present a case in which EEETO resolved refractory CSF rhinorrhea over a 10-month follow-up after CSF diversion, wound reexploration, revised packing of the ET via a lateral microscopic translabyrinthine approach, and a vascularized flap had all failed. They further summarize the literature on studies describing various iterations of EEETO. Because of its minimally invasive nature, EEETO imposes less morbidity and less risk on the patient. It can readily be incorporated into treatment algorithms once CSF diversion (for example, a lumbar drain) has failed and before open surgical repair is considered. Additional studies are warranted to confirm the outcome and cost-saving benefits of EEETO, as the data to date have been largely anecdotal, albeit promising. The summaries and technical notes described in this paper may serve as a resource for skull base teams faced with similarly challenging and otherwise refractory CSF leaks from a lateral skull base source.
Timing of surgery in traumatic brachial plexus injury: a systematic review
Enrico Martin, Joeky T. Senders, Aislyn C. DiRisio, Timothy R. Smith, and Marike L. D. Broekman
Ideal timeframes for operating on traumatic stretch and blunt brachial plexus injuries remain a topic of debate: spontaneous recovery may still occur early on, whereas long delays are believed to result in poorer functional outcomes. The goal of this review is to assess the optimal timeframe for surgical intervention in traumatic brachial plexus injuries.
A systematic search was performed in January 2017 in PubMed and Embase databases according to the PRISMA guidelines. Search terms related to “brachial plexus injury” and “timing” were used. Obstetric plexus palsies were excluded. Qualitative synthesis was performed on all studies. Timing of operation and motor outcome were collected from individual patient data. Patients were categorized into 5 delay groups (0–3, 3–6, 6–9, 9–12, and > 12 months). Median delays were calculated for Medical Research Council (MRC) muscle grade ≥ 3 and ≥ 4 recoveries.
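The delay-group categorization and median-delay analysis described above can be sketched as follows. The patient tuples are hypothetical, and the boundary handling (a delay of exactly 3 months falling into the 0–3 group) is an assumption, since the review does not state how boundary values were assigned:

```python
from statistics import median

# Hypothetical individual patient data: (delay to surgery in months,
# best recovered MRC muscle grade). Not the review's actual data.
patients = [
    (2, 4), (3, 3), (4, 3), (5, 4), (8, 3),
    (4, 4), (6, 2), (7, 2), (10, 2), (13, 1),
]

def delay_group(months):
    """Assign a patient to one of the five delay categories."""
    if months <= 3:
        return "0-3"
    if months <= 6:
        return "3-6"
    if months <= 9:
        return "6-9"
    if months <= 12:
        return "9-12"
    return ">12"

# Median delay among patients recovering to MRC grade >= 3,
# mirroring the review's summary statistic.
good_delays = sorted(d for d, grade in patients if grade >= 3)
median_good = median(good_delays)
```

The same grouping would be repeated for each MRC cutoff of interest, and interquartile ranges read off the sorted delays.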
Forty-three studies were included after full-text screening. Most articles showed significantly better motor outcomes with delays to surgery of less than 6 months, with some studies specifying even shorter delays. Pain and quality of life scores were also significantly better with shorter delays. Nerve reconstructions performed after long time intervals, even more than 12 months, can still be useful. All papers reporting individual-level patient data described a combined total of 569 patients; 65.5% of all patients underwent operations within 6 months and 27.4% within 3 months. The highest proportion of patients recovering to ≥ MRC grade 3 (89.7%) was observed in the group operated on within 3 months. This proportion decreased with longer delays, to only 35.7% with delays > 12 months. A median delay of 4 months (IQR 3–6 months) was observed for recovery to ≥ MRC grade 3, compared with a median delay of 7 months (IQR 5–11 months) for recovery below MRC grade 3.
The results of this systematic review show that in stretch and blunt injuries of the brachial plexus, the optimal time to surgery is shorter than 6 months. In general, a delay of up to 3 months appears appropriate: although recovery is better in patients operated on earlier, the decision to operate must be weighed against the potential for spontaneous recovery.
Incidence and risk factors for acute kidney injury after spine surgery using the RIFLE classification
Bhiken I. Naik, Douglas A. Colquhoun, William E. McKinney, Andrew Bryant Smith, Brian Titus, Timothy L. McMurry, Jacob Raphael, and Marcel E. Durieux
Earlier definitions of acute renal failure are not sensitive in identifying milder forms of acute kidney injury (AKI). The authors hypothesized that applying the RIFLE criteria for acute renal failure (Risk of renal dysfunction, Injury to the kidney, Failure of kidney function, Loss of kidney function, and End-stage kidney disease) to thoracic and lumbar spine surgery would reveal a higher incidence of AKI than previously reported. They also developed a model to predict the postoperative glomerular filtration rate (GFR).
A hospital data repository was used to identify patients undergoing thoracic and/or lumbar spine surgery over a 5-year period (2006–2011). The lowest GFR in the first week after surgery was used to identify and categorize kidney injury if present. Risk factors were identified and a model was developed to predict postoperative GFR based on the defined risk factors.
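As a rough illustration of the categorization step, the RIFLE GFR criteria are commonly summarized as a decline from baseline of more than 25% for risk, 50% for injury, and 75% for failure. The sketch below uses those commonly cited thresholds, which are an assumption here rather than a detail taken from the paper:

```python
def rifle_category(baseline_gfr, lowest_postop_gfr):
    """Classify AKI severity from the relative GFR decline (RIFLE R/I/F).

    Thresholds follow the commonly cited RIFLE GFR criteria: a >25%
    decline for risk, >50% for injury, >75% for failure; otherwise no AKI.
    """
    decline = (baseline_gfr - lowest_postop_gfr) / baseline_gfr
    if decline > 0.75:
        return "failure"
    if decline > 0.50:
        return "injury"
    if decline > 0.25:
        return "risk"
    return "no AKI"
```

In the study design above, `lowest_postop_gfr` would be the lowest GFR observed in the first postoperative week.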
A total of 726 patients were identified over the study period. The incidence of AKI was 3.9% (n = 28) based on the RIFLE classification, with 23 patients in the risk category and 5 in the injury category. No patient was classified into the failure category or required renal replacement therapy. The baseline GFRs in the non-AKI and AKI groups were 80 and 79.8 ml/min, respectively. After univariate analysis, only hypertension was associated with postoperative AKI (p = 0.02). A model was developed to predict the postoperative GFR. This model accounted for 64.4% of the variation in the postoperative GFRs (r² = 0.644).
The incidence of AKI in spine surgery is higher than previously reported, with all affected patients classified into either the risk or the injury RIFLE category. Because these categories have previously been shown to be associated with poor long-term outcomes, early recognition, management, and follow-up of these patients are important.
Predicting nonroutine discharge after elective spine surgery: external validation of machine learning algorithms
Presented at the 2019 AANS/CNS Joint Section on Disorders of the Spine and Peripheral Nerves
Brittany M. Stopa, Faith C. Robertson, Aditya V. Karhade, Melissa Chua, Marike L. D. Broekman, Joseph H. Schwab, Timothy R. Smith, and William B. Gormley
Nonroutine discharge after elective spine surgery increases healthcare costs, negatively impacts patient satisfaction, and exposes patients to additional hospital-acquired complications. Therefore, prediction of nonroutine discharge in this population may improve clinical management. The authors previously developed a machine learning algorithm from national data that predicts risk of nonhome discharge for patients undergoing surgery for lumbar disc disorders. In this paper the authors externally validate their algorithm in an independent institutional population of neurosurgical spine patients.
Medical records from elective inpatient surgery for lumbar disc herniation or degeneration in the Transitional Care Program at Brigham and Women’s Hospital (2013–2015) were retrospectively reviewed. Variables included age, sex, BMI, American Society of Anesthesiologists (ASA) class, preoperative functional status, number of fusion levels, comorbidities, preoperative laboratory values, and discharge disposition. Nonroutine discharge was defined as postoperative discharge to any setting other than home. The discrimination (c-statistic), calibration, and positive and negative predictive values (PPVs and NPVs) of the algorithm were assessed in the institutional sample.
Overall, 144 patients underwent elective inpatient surgery for lumbar disc disorders with a nonroutine discharge rate of 6.9% (n = 10). The median patient age was 50 years and 45.1% of patients were female. Most patients were ASA class II (66.0%), had 1 or 2 levels fused (80.6%), and had no diabetes (91.7%). The median hematocrit level was 41.2%. The neural network algorithm generalized well to the institutional data, with a c-statistic (area under the receiver operating characteristic curve) of 0.89, calibration slope of 1.09, and calibration intercept of −0.08. At a threshold of 0.25, the PPV was 0.50 and the NPV was 0.97.
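The validation metrics above can be sketched in plain Python. This is an illustrative computation on toy data, not the authors' pipeline; the calibration slope and intercept, which require refitting a logistic model on the logit of the predicted risks, are omitted:

```python
def c_statistic(y_true, y_prob):
    """C-statistic (area under the ROC curve): the probability that a
    randomly chosen positive case receives a higher predicted risk than
    a randomly chosen negative case, counting ties as half."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ppv_npv(y_true, y_prob, threshold):
    """Positive and negative predictive values at a decision threshold."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    tn = sum(1 for y, p in zip(y_true, y_prob) if p < threshold and y == 0)
    fn = sum(1 for y, p in zip(y_true, y_prob) if p < threshold and y == 1)
    return tp / (tp + fp), tn / (tn + fn)
```

With the algorithm's predicted risks and observed discharge dispositions in hand, these two functions reproduce the c-statistic and the threshold-dependent PPV/NPV reported above.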
This institutional external validation of a previously developed machine learning algorithm suggests a reliable method for identifying patients with lumbar disc disorder at risk for nonroutine discharge. Performance in the institutional cohort was comparable to performance in the derivation cohort and represents an improved predictive value over clinician intuition. This finding substantiates initial use of this algorithm in clinical practice. This tool may be used by multidisciplinary teams of case managers and spine surgeons to strategically invest additional time and resources into postoperative plans for this population.
Transitional care services: a quality and safety process improvement program in neurosurgery
Faith C. Robertson, Jessica L. Logsdon, Hormuzdiyar H. Dasenbrock, Sandra C. Yan, Siobhan M. Raftery, Timothy R. Smith, and William B. Gormley
Readmissions increasingly serve as a metric of hospital performance, inviting quality improvement initiatives in both medicine and surgery. However, few readmission reduction programs have targeted surgical patient populations. The objective of this study was to establish a transitional care program (TCP) with the goal of decreasing length of stay (LOS), improving discharge efficiency, and reducing readmissions of neurosurgical patients by optimizing patient education and postdischarge surveillance.
Patients undergoing elective cranial or spinal neurosurgery performed by one of 5 participating surgeons at a quaternary care hospital were enrolled in a multifaceted intervention. A preadmission overview and establishment of an anticipated discharge date were both intended to set patient expectations for a shorter hospitalization. At discharge, in-hospital prescription filling was provided to facilitate medication compliance. Extended discharge appointments with a neurosurgery TCP-trained nurse emphasized postoperative activity, medications, incisional care, nutrition, signs that merit return to medical attention, and follow-up appointments. Finally, patients received a surveillance phone call 48 hours after discharge. Controls were patients eligible for the TCP who were treated during the same period but not enrolled because of staff limitations; they were matched to enrolled patients by sex, age, and operation type, the key confounding variables. Multivariable logistic regression was used to evaluate the association of TCP enrollment with discharge time and readmission, and linear regression to evaluate its association with LOS. Covariates included the matching criteria and Charlson Comorbidity Index scores.
Between 2013 and 2015, 416 patients were enrolled in the program and matched to controls. The median patient age was 55 years (interquartile range 44.5–65 years); 58.4% were male. The majority of enrolled patients underwent spine surgery (59.4%, vs 40.6% undergoing cranial surgery). Hospitalizations averaged 62.1 hours for TCP patients versus 79.6 hours for controls (a 16.40% reduction, 95% CI 9.30%–23.49%; p < 0.001). The intervention was associated with a higher proportion of morning discharges (OR 3.13, 95% CI 2.27–4.30; p < 0.001), intended to free beds for afternoon admissions and improve patient flow, and with fewer 30-day readmissions (2.5% vs 5.8%; OR 2.43, 95% CI 1.14–5.27; p = 0.02).
This neurosurgical TCP was associated with a significantly shorter LOS, earlier discharge, and reduced 30-day readmission after elective neurosurgery. These results underscore the importance of patient education and surveillance after hospital discharge.
Sources of error in comparing functional magnetic resonance imaging and invasive electrophysiological recordings
Derek L. G. Hill, Andrew D. Castellano Smith, Andrew Simmons, Calvin R. Maurer Jr., Timothy C. S. Cox, Robert Elwes, Michael Brammer, David J. Hawkes, and Charles E. Polkey
Object. Several authors have recently reported studies in which they aim to validate functional magnetic resonance (fMR) imaging against the accepted gold standard of invasive electrophysiological monitoring. The authors have conducted a similar study, and in this paper they identify and quantify two characteristics of these data that can make such a comparison problematic.
Methods. Eight patients in whom surgery for epilepsy was performed and five healthy volunteers underwent fMR imaging to localize the part of the sensorimotor cortex responsible for hand movement. In the patient group subdural electrode mats were subsequently implanted to identify eloquent regions of the brain and the epileptogenic zone. The fMR imaging data were processed to correct for motion during the study and then registered with a postimplantation computerized tomography (CT) scan on which the electrodes were visible. The motion during imaging in the two groups studied, and the deformation of the brain between the preoperative images and postoperative scans were measured.
Results. The patients who underwent epilepsy surgery moved significantly more during fMR imaging experiments than healthy volunteers performing the same motor task. This motion had a particularly increased out-of-plane component and was significantly more correlated with the stimulus than in the volunteers. This motion was especially increased when the patients were performing a task on the side affected by the lesion. The additional motion is hard to correct and substantially degrades the quality of the resulting fMR images, making it a much less reliable technique for use in these patients than in others. Also, the authors found that after electrode implantation, the brain surface can shift more than 10 mm relative to the skull compared with its preoperative location, substantially degrading the accuracy of the comparison of electrophysiological measurements made in the deformed brain and fMR studies obtained preoperatively.
Conclusions. These two findings indicate that studies of this sort are currently of limited use for validating fMR imaging and should be interpreted with care. Additional image analysis research is necessary to solve the problems caused by patients' motion and brain deformation.
Predicting leptomeningeal disease spread after resection of brain metastases using machine learning
Ishaan Ashwini Tewarie, Alexander W. Senko, Charissa A. C. Jessurun, Abigail Tianai Zhang, Alexander F. C. Hulsbergen, Luis Rendon, Jack McNulty, Marike L. D. Broekman, Luke C. Peng, Timothy R. Smith, and John G. Phillips
The incidence of leptomeningeal disease (LMD) has increased as treatments for brain metastases (BMs) have improved and patients with metastatic disease live longer. Individual studies investigating LMD and its risk factors after surgery for BMs have had limited sample sizes, ranging from 200 to 400 patients at risk for LMD, which allows only the use of conventional biostatistics. Here, the authors used machine learning techniques to enhance LMD prediction in a cohort of surgically treated BMs.
A conditional survival forest, a Cox proportional hazards model, an extreme gradient boosting (XGBoost) classifier, an extra trees classifier, and logistic regression were trained. Patients were divided into training and test sets at an 80:20 ratio, and a synthetic minority oversampling technique (SMOTE) was applied during training to handle the inherent class imbalance. Fivefold cross-validation was used on the training set for hyperparameter optimization. Patients eligible for study inclusion were adults who had consecutively undergone neurosurgical BM treatment, had been admitted to Brigham and Women’s Hospital from January 2007 through December 2019, and had a minimum of 1 month of follow-up after neurosurgical treatment.
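The oversampling step can be sketched in a minimal form. This is an illustrative pure-Python SMOTE, interpolating between minority-class nearest neighbors, not the authors' implementation, and the feature data in the usage example are hypothetical:

```python
import random

def smote(minority, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: synthesize minority-class points by linear
    interpolation between a sample and one of its k nearest minority
    neighbors (squared Euclidean distance)."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sq_dist(base, p),
        )[:k]
        nb = rng.choice(neighbors)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Each synthetic point lies on the segment between two real minority samples, so the oversampled class keeps its original feature range rather than duplicating existing cases verbatim.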
A total of 1054 surgically treated BM patients were included in this analysis. LMD occurred in 168 patients (15.9%) at a median of 7.05 months after BM diagnosis. Discrimination of LMD occurrence was best with the XGBoost algorithm (area under the curve = 0.83), and time to LMD was prognosticated equally well by the conditional survival forest and the Cox proportional hazards model (C-index = 0.76). The most important feature for both LMD classification and regression was BM proximity to the CSF space, followed by a cerebellar BM location. Lymph node metastasis of the primary tumor at BM diagnosis and a cerebellar BM location were the strongest risk factors for both LMD occurrence and time to LMD.
The outcomes of LMD patients in the BM population are predictable using SMOTE and machine learning. Lymph node metastasis of the primary tumor at BM diagnosis and a cerebellar BM location were the strongest LMD risk factors.
International practice variation in postoperative imaging of chronic subdural hematoma patients
Alexander F. C. Hulsbergen, Sandra C. Yan, Brittany M. Stopa, Aislyn DiRisio, Joeky T. Senders, Max J. van Essen, Stéphanie M. E. van der Burgt, Timothy R. Smith, William B. Gormley, and Marike L. D. Broekman
The value of CT scanning after burr hole surgery in chronic subdural hematoma (CSDH) patients is unclear, and practice differs between countries. At the Brigham and Women’s Hospital (BWH) in Boston, Massachusetts, neurosurgeons frequently order routine postoperative CT scans, while the University Medical Center Utrecht (UMCU) in the Netherlands does not have this policy. The aim of this study was to compare the use of postoperative CT scans in CSDH patients between these hospitals and to evaluate whether there are differences in clinical outcomes.
The authors collected data from both centers for 391 age- and sex-matched CSDH patients treated with burr hole surgery between January 1, 2002, and July 1, 2016, and compared the number of postoperative scans up to 6 weeks after surgery, the need for reintervention, and postoperative neurological condition.
BWH patients underwent a median of 4 postoperative scans (interquartile range [IQR] 2–5), whereas UMCU patients underwent a median of 0 scans (IQR 0–1; p < 0.001). There was no significant difference in the number of reoperations (20 in the BWH vs 27 in the UMCU; p = 0.34). All reinterventions were preceded by clinical decline, and no recurrences were detected on scans performed in asymptomatic patients. Patients’ neurological condition was not worse in the UMCU than in the BWH (p = 0.43).
While BWH patients underwent more scans than UMCU patients, there were no differences in clinical outcomes. The results of this study suggest that there is little benefit to routine scanning in asymptomatic patients who have undergone surgical treatment of uncomplicated CSDH and highlight opportunities to make practice more efficient.
A national stratification of the global macroeconomic burden of central nervous system cancer
Jakob V. E. Gerstl, Alexander G. Yearley, John L. Kilgallon, Philipp Lassarén, Faith C. Robertson, Vendela Herdell, Andy Y. Wang, David J. Segar, Joshua D. Bernstock, Edward R. Laws, Kavitha Ranganathan, and Timothy R. Smith
Country-by-country estimates of the macroeconomic disease burden of central nervous system (CNS) cancers are important when determining the allocation of resources related to neuro-oncology. Accordingly, in this study the authors investigated macroeconomic losses related to CNS cancer in 173 countries and identified pertinent epidemiological trends.
Data for CNS cancer incidence, mortality, and disability-adjusted life years (DALYs) were collected from the Global Burden of Disease 2019 database. Gross domestic product data were combined with DALY data to estimate economic losses using a value of lost welfare approach.
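The value-of-lost-welfare approach combines the two data sources above as a simple product. In the sketch below, DALYs are monetized at the value of a statistical life year (VSLY), taken here as a multiple of GDP per capita; the multiplier and the toy figures are assumptions for illustration, not parameters from the paper:

```python
def welfare_loss(dalys, gdp_per_capita, vsly_multiple=1.0):
    """Value-of-lost-welfare sketch: DALYs monetized at a value of a
    statistical life year (VSLY) taken as a multiple of GDP per capita.
    The multiple is an illustrative assumption."""
    return dalys * gdp_per_capita * vsly_multiple

def loss_as_share_of_gdp(dalys, population, gdp_per_capita, vsly_multiple=1.0):
    """Express the welfare loss as a fraction of total GDP,
    as in the country-level percentages reported below."""
    total_gdp = population * gdp_per_capita
    return welfare_loss(dalys, gdp_per_capita, vsly_multiple) / total_gdp
```

Running the same calculation per country with that country's DALY burden and GDP yields the cross-country percentages reported in the results.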
The mortality-to-incidence ratio of CNS cancer in 2019 was 0.60 in high-income regions, compared with 0.82 in Sub-Saharan Africa and 0.87 in Central Europe, Eastern Europe, and Central Asia. Welfare losses varied across both high- and low-income countries: losses attributable to CNS cancer represented 0.07% of the gross domestic product in Japan versus 0.23% in Germany, and, among low- and middle-income countries, 0.20% in Iraq versus 0.04% in Angola. Globally, the 2019 DALY rate for CNS cancer equaled that for prostate cancer (112 per 100,000 person-years) despite a 75% lower incidence rate; welfare losses attributable to CNS cancer totaled 182 billion US dollars.
Macroeconomic losses vary across high- and low-income settings and appear to be region specific. These differences may be explained by differences in regional access to screening and diagnosis, population-level genetic predispositions, and environmental risk factors. Mortality-to-incidence ratios are higher in low- and middle-income countries than in high-income countries, highlighting possible gaps in treatment access. Quantification of macroeconomic losses related to CNS cancer can help to justify the spending of finite resources to improve outcomes for neuro-oncological patients globally.