Patients who undergo craniotomy for brain tumor resection are prone to seizures, which can have debilitating medical, neurological, and psychosocial effects. A controversial issue in neurosurgery is the common practice of administering perioperative anticonvulsant prophylaxis to these patients despite a paucity of supporting data in the literature. The foreseeable benefits of this strategy must be balanced against potential adverse effects and interactions with critical medications such as chemotherapeutic agents and corticosteroids. Multiple disparate meta-analyses have been published on this topic but have not been translated into clinical practice; instead, personal preference frequently determines practice patterns in this area of management. Therefore, to select the current best available evidence to guide clinical decision making, the literature was evaluated to identify meta-analyses that investigated the efficacy and/or safety of anticonvulsant prophylaxis in this patient population. Six meta-analyses published between 1996 and 2011 were included in the present study. The Quality of Reporting of Meta-analyses and Oxman-Guyatt methodological quality assessment tools were used to score these meta-analyses, and the Jadad decision algorithm was applied to determine the highest-quality meta-analysis. According to this analysis, 2 meta-analyses were deemed the current best available evidence, both of which conclude that prophylactic treatment does not improve seizure control in these patients. Therefore, this management strategy should not be routinely used.
Eli T. Sayegh, Shayan Fakurnejad, Taemin Oh, Orin Bloch and Andrew T. Parsa
Taemin Oh, Daniel T. Nagasawa, Brendan M. Fong, Andy Trang, Quinton Gopen, Andrew T. Parsa and Isaac Yang
Unfavorable outcomes such as facial paralysis and deafness were once common complications following resection of acoustic neuromas. The implementation of intraoperative neuromonitoring during acoustic neuroma surgery, however, reflects a growing emphasis on quality of life and the preservation of neurological function, and a modern review demonstrates a great degree of recent success in this regard. In facial nerve monitoring, the use of modern electromyography along with improvements in microneurosurgery has significantly improved preservation rates. Recent studies have evaluated the use of video monitoring as an adjunctive tool to further improve outcomes for patients undergoing surgery. Vestibulocochlear nerve monitoring has also been extensively studied, with the most popular techniques including brainstem auditory evoked potential monitoring, electrocochleography, and direct compound nerve action potential monitoring. Among these, direct recording remains the most promising and preferred monitoring method for functional acoustic preservation. However, hearing is preserved at a lower rate than postoperative facial nerve function. Here, the authors analyze the major intraoperative neuromonitoring techniques available for acoustic neuroma resection.
Corinna C. Zygourakis, Taemin Oh, Matthew Z. Sun, Igor Barani, James G. Kahn and Andrew T. Parsa
Vestibular schwannomas (VSs) are managed in 3 ways: observation (“wait and scan”); Gamma Knife surgery (GKS); or microsurgery. Whereas there is considerable literature regarding which management approach is superior, there are only a few studies addressing the cost of treating VSs, and there are no cost-utility analyses in the US to date.
In this study, the authors used the University of California at San Francisco medical record and hospital accounting databases to determine total hospital charges and costs for 33 patients who underwent open surgery, 42 patients who had GKS, and 12 patients who were observed between 2010 and 2013. The authors then performed decision-tree analysis to determine which treatment paradigm produces the highest quality-adjusted life years and to calculate the incremental cost-effectiveness ratio, depending on the patient's age at VS diagnosis.
The average total hospital cost over a 3-year period for surgically treated patients was $80,074 (± $49,678) versus $9737 (± $5522) for patients receiving radiosurgery and $1746 (± $2792) for patients who were observed. When modeling the most debilitating symptoms and worst outcomes of VSs (vertigo and death) at different ages at diagnosis, radiation dominates observation (it is both less costly and more effective) at all ages up to 70 years. Surgery is cost-effective when compared with radiation (incremental cost-effectiveness ratio < $150,000 per QALY) at younger ages at diagnosis (< 45 years old).
In this model, surgery is a cost-effective alternative to radiation when VS is diagnosed in patients at < 45 years. For patients ≥ 45 years, radiation is the most cost-effective treatment option.
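The cost-utility comparison above can be sketched as a small calculation. This is an illustrative sketch only: the QALY figures below are hypothetical placeholders, not the study's actual decision-tree outputs; the cost inputs and the $150,000-per-QALY threshold come from the text.

```python
# Sketch of an incremental cost-effectiveness ratio (ICER) comparison.
# Costs are from the abstract; QALY values are illustrative assumptions.

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """ICER of strategy A versus strategy B: extra cost per QALY gained."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

WTP_THRESHOLD = 150_000  # willingness-to-pay per QALY used in the study

# Hypothetical young patient: surgery costs more but yields more QALYs
ratio = icer(cost_a=80_074, qaly_a=22.0, cost_b=9_737, qaly_b=21.5)
print(f"ICER = ${ratio:,.0f} per QALY")
print("surgery cost-effective" if ratio < WTP_THRESHOLD else "radiation preferred")
```

A strategy "dominates" another when it is both cheaper and more effective, in which case no ICER needs to be computed.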
Matthew Z. Sun, Taemin Oh, Michael E. Ivan, Aaron J. Clark, Michael Safaee, Eli T. Sayegh, Gurvinder Kaur, Andrew T. Parsa and Orin Bloch
There are few and conflicting reports on the effects of delayed initiation of chemoradiotherapy on the survival of patients with glioblastoma. The standard of care for newly diagnosed glioblastoma is concurrent radiotherapy and temozolomide chemotherapy after maximal safe resection; however, the optimal timing of such therapy is poorly defined. Given the lack of consensus in the literature, the authors performed a retrospective analysis of The Cancer Genome Atlas (TCGA) database to investigate the effect of time from surgery to initiation of therapy on survival in newly diagnosed glioblastoma.
Patients with primary glioblastoma diagnosed since 2005 and treated according to the standard of care were identified from TCGA database. Kaplan-Meier and multivariate Cox regression analyses were used to compare overall survival (OS) and progression-free survival (PFS) between groups stratified by postoperative delay to initiation of radiation treatment.
There were 218 patients with newly diagnosed glioblastoma with known time to initiation of radiotherapy identified in the database. The median duration until therapy was 27 days. Delay to radiotherapy longer than the median was not associated with worse PFS (HR = 0.918, p = 0.680) or OS (HR = 1.135, p = 0.595) in multivariate analysis when controlling for age, sex, KPS score, and adjuvant chemotherapy. Patients in the highest and lowest quartiles for delay to therapy (≤ 20 days vs ≥ 36 days) did not statistically differ in PFS (p = 0.667) or OS (p = 0.124). The small subset of patients with particularly long delays (> 42 days) demonstrated worse OS (HR = 1.835, p = 0.019), but not PFS (p = 0.74).
Modest delay in initiation of postoperative chemotherapy and radiation does not appear to be associated with worse PFS or OS in patients with newly diagnosed glioblastoma, while significant delay longer than 6 weeks may be associated with worse OS.
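The survival comparison described above relies on the Kaplan-Meier product-limit estimator applied to groups stratified by delay to radiotherapy. A minimal pure-Python sketch of that estimator follows; the follow-up times and event indicators are illustrative, not TCGA data, and tie handling is simplified.

```python
# Minimal Kaplan-Meier estimator sketch (illustrative data, simplified:
# assumes distinct event times, so ties are not pooled).

def kaplan_meier(times, events):
    """Return (time, survival probability) steps for right-censored data.
    times: follow-up in days; events: 1 = event observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    for t, e in data:
        if e:  # the curve drops only at observed events
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1  # censored patients still leave the risk set
    return curve

# Hypothetical cohort, e.g. patients above the 27-day median delay
print(kaplan_meier([120, 300, 450, 500], [1, 1, 0, 1]))
```

In the study itself, group curves built this way were compared with log-rank tests and multivariate Cox regression to adjust for age, sex, KPS score, and adjuvant chemotherapy.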
Jason S. Hauptman, Andrew Dadour, Taemin Oh, Christine B. Baca, Barbara G. Vickrey, Stefanie D. Vassar, Raman Sankar, Noriko Salamon, Harry V. Vinters and Gary W. Mathern
Low income, government insurance, and minority status are associated with delayed treatment for neurosurgery patients. Less is known about the influence of referral location and how socioeconomic factors and referral patterns evolve over time. For pediatric epilepsy surgery patients at the University of California, Los Angeles (UCLA), this study determined how referral location and sociodemographic features have evolved over 25 years.
Children undergoing epilepsy neurosurgery at UCLA (453 patients) were classified by location of residence and compared with clinical epilepsy and sociodemographic factors.
From 1986 to 2010, referrals from Southern California increased (+33%) and referrals from outside of California decreased (−19%). Over the same period, the number of patients with preferred provider organization (PPO) and health maintenance organization (HMO) insurance increased (+148% and +69%, respectively) and indemnity insurance decreased (−96%). Likewise, the number of Hispanics (+117%) and Asians (+100%) increased and Caucasians/whites decreased (−24%). The number of insurance companies decreased from 52 carriers per 100 surgical patients in 1986–1990 to 19 per 100 in 2006–2010. Patients living in the Eastern US had a younger age at surgery (−46%), shorter intervals from seizure onset to referral for evaluation (−28%) and from presurgical evaluation to surgery (−61%) compared with patients from Southern California. The interval from seizure onset to evaluation was shorter (−33%) for patients from Los Angeles County compared with those living in non-California Western US states.
Referral locations evolved over 25 years at UCLA, with more cases coming from local regions; the percentage of minority patients also increased. The interval from seizure onset to surgery was shortest for patients living farthest from UCLA but still within the US. Geographic location and race/ethnicity were not associated with differences in becoming seizure free after epilepsy surgery in children.
Taemin Oh, Justin K. Scheer, Justin S. Smith, Richard Hostin, Chessie Robinson, Jeffrey L. Gum, Frank Schwab, Robert A. Hart, Virginie Lafage, Douglas C. Burton, Shay Bess, Themistocles Protopsaltis, Eric O. Klineberg, Christopher I. Shaffrey, Christopher P. Ames and the International Spine Study Group
Patients with adult spinal deformity (ASD) experience significant quality of life improvements after surgery. Treatment, however, is expensive and complication rates are high. Predictive analytics has the potential to use many variables to make accurate predictions in large data sets. A validated minimum clinically important difference (MCID) model has the potential to assist in patient selection, thereby improving outcomes and, potentially, cost-effectiveness.
The present study was a retrospective analysis of a multi-institutional database of patients with ASD. Inclusion criteria were as follows: age ≥ 18 years, radiographic evidence of ASD, 2-year follow-up, and preoperative Oswestry Disability Index (ODI) > 15. Forty-six variables were used for model training: demographic data, radiographic parameters, surgical variables, and results on the health-related quality of life questionnaire. Patients were grouped as reaching a 2-year ODI MCID (+MCID) or not (−MCID). An ensemble of 5 different bootstrapped decision trees was constructed using the C5.0 algorithm. Internal validation was performed via 70:30 data split for training/testing. Model accuracy and area under the curve (AUC) were calculated. The mean quality-adjusted life years (QALYs) and QALYs gained at 2 years were calculated and discounted at 3.5% per year. The QALYs were compared between patients in the +MCID and −MCID groups.
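The 3.5%-per-year discounting step described above can be sketched as a short calculation. The discount rate comes from the text; the yearly utility values below are illustrative assumptions, not study data.

```python
# Sketch of discounting QALYs at 3.5% per year: utility accrued in
# year t is divided by (1 + rate)**t, with year 0 undiscounted.

def discounted_qalys(utilities, rate=0.035):
    """Sum yearly utility values, discounting each year back to year 0."""
    return sum(u / (1 + rate) ** t for t, u in enumerate(utilities))

# Two postoperative years of utility for a hypothetical patient
print(round(discounted_qalys([0.7, 0.8]), 4))
```

Discounting reflects the standard health-economic convention that benefits realized sooner are valued more highly than identical benefits realized later.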
A total of 234 patients met inclusion criteria (+MCID 129, −MCID 105). Sixty-nine patients (29.5%) were included for model testing. Predicted versus actual results were 50 versus 40 for +MCID and 19 versus 29 for −MCID (i.e., 10 patients were misclassified). Model accuracy was 85.5%, with 0.96 AUC. Predicted results showed that patients in the +MCID group had significantly greater 2-year mean QALYs (p = 0.0057) and QALYs gained (p = 0.0002).
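The reported test-set accuracy follows directly from the counts in the paragraph above: 69 patients in the held-out 30% split, 10 of them misclassified. A one-line sketch confirms the arithmetic; the counts are from the text, and the helper itself is generic.

```python
# Reproducing the reported test-set accuracy: 69 patients, 10 misclassified.

def accuracy(n_total, n_misclassified):
    """Fraction of cases classified correctly."""
    return (n_total - n_misclassified) / n_total

print(f"{accuracy(69, 10):.1%}")  # 85.5%
```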
A successful model with 85.5% accuracy and 0.96 AUC was constructed to predict which patients would reach ODI MCID. The patients in the +MCID group had significantly higher mean 2-year QALYs and QALYs gained. This study provides proof of concept for using predictive modeling techniques to optimize patient selection in complex spine surgery.
Taemin Oh, Justin K. Scheer, Robert Eastlack, Justin S. Smith, Virginie Lafage, Themistocles S. Protopsaltis, Eric Klineberg, Peter G. Passias, Vedat Deviren, Richard Hostin, Munish Gupta, Shay Bess, Frank Schwab, Christopher I. Shaffrey and Christopher P. Ames
Alignment changes in the cervical spine that occur following surgical correction for thoracic deformity remain poorly understood. The purpose of this study was to evaluate such changes in a cohort of adults with thoracic deformity treated surgically.
The authors conducted a multicenter retrospective analysis of consecutive patients with thoracic deformity. Inclusion criteria for this study were as follows: corrective osteotomy for thoracic deformity, upper-most instrumented vertebra (UIV) between T-1 and T-4, lower-most instrumented vertebra (LIV) at or above L-5 (LIV ≥ L-5) or at the ilium (LIV-ilium), and a minimum radiographic follow-up of 2 years. Sagittal radiographic parameters were assessed preoperatively as well as at 3 months and 2 years postoperatively, including the C-7 sagittal vertical axis (SVA), C2–7 cervical lordosis (CL), C2–7 SVA, T-1 slope (T1S), T1S minus CL (T1S-CL), T2–12 thoracic kyphosis (TK), apical TK, lumbar lordosis (LL), pelvic incidence (PI), PI-LL, pelvic tilt (PT), and sacral slope (SS).
Fifty-seven patients with a mean age of 49.1 ± 14.6 years met the study inclusion criteria. The preoperative prevalence of increased CL (CL > 15°) was 48.9%. Both 3-month and 2-year apical TK improved from baseline (p < 0.05). At the 2-year follow-up, only the C2–7 SVA increased significantly from baseline (p = 0.01), whereas LL decreased from baseline (p < 0.01). The prevalence of increased CL was 35.3% at 3 months and 47.8% at 2 years, which did not represent a significant change. Postoperative cervical alignment changes were not significantly different from preoperative values regardless of the LIV (LIV ≥ L-5 or LIV-ilium, p > 0.05 for both). In a subset of patients with a maximum TK ≥ 60° (35 patients) and 3-column osteotomy (38 patients), no significant postoperative cervical changes were seen.
Increased CL is common in adult spinal deformity patients with thoracic deformities and, unlike after lumbar corrective surgery, does not appear to normalize after thoracic corrective surgery. Cervical sagittal malalignment (C2–7 SVA) also increases postoperatively. Surgeons should be aware that spontaneous cervical alignment normalization might not occur following thoracic deformity correction.
Hansen Deng, Andrew K. Chan, Simon G. Ammanuel, Alvin Y. Chan, Taemin Oh, Henry C. Skrehot, Caleb S. Edwards, Sravani Kondapavulur, Amy D. Nichols, Catherine Liu, John K. Yue, Sanjay S. Dhall, Aaron J. Clark, Dean Chou, Christopher P. Ames and Praveen V. Mummaneni
Surgical site infection (SSI) following spine surgery causes major morbidity and greatly impedes functional recovery. In the modern era of advanced operative techniques and improved perioperative care, SSI remains a problematic complication that may be reduced with institutional practices. The objectives of this study were to 1) characterize the SSI rate and microbial etiology following spine surgery for various thoracolumbar diseases, and 2) identify risk factors that were associated with SSI despite current perioperative management.
All patients treated with thoracic or lumbar spine operations on the neurosurgery service at the University of California, San Francisco from April 2012 to April 2016 were formally reviewed for SSI using the National Healthcare Safety Network (NHSN) guidelines. Preoperative risk variables included age, sex, BMI, smoking, diabetes mellitus (DM), coronary artery disease (CAD), ambulatory status, history of malignancy, use of preoperative chlorhexidine gluconate (CHG) showers, and the American Society of Anesthesiologists (ASA) classification. Operative variables included surgical pathology, resident involvement, spine level and surgical technique, instrumentation, antibiotic and steroid use, estimated blood loss (EBL), and operative time. Multivariable logistic regression was used to evaluate predictors for SSI. Odds ratios and 95% confidence intervals were reported.
In total, 2252 consecutive patients underwent thoracolumbar spine surgery. The mean patient age was 58.6 ± 13.8 years and 49.6% were male. The mean hospital length of stay was 6.6 ± 7.4 days. Sixty percent of patients had degenerative conditions, and 51.9% underwent fusions. Sixty percent of patients utilized presurgery CHG showers. The mean operative duration was 3.7 ± 2 hours, and the mean EBL was 467 ± 829 ml. Compared to nonfusion patients, fusion patients were older (mean 60.1 ± 12.7 vs 57.1 ± 14.7 years, p < 0.001), were more likely to have an ASA classification > II (48.0% vs 36.0%, p < 0.001), and experienced longer operative times (252.3 ± 120.9 minutes vs 191.1 ± 110.2 minutes, p < 0.001). Eleven patients had deep SSI (0.49%), and the most common causative organisms were methicillin-sensitive Staphylococcus aureus and methicillin-resistant S. aureus. CAD (p = 0.003), DM (p = 0.050), and male sex (p = 0.006) were predictors of increased odds of SSI, whereas presurgery CHG showers (p = 0.001) were associated with decreased odds of SSI.
This institutional experience over a 4-year period revealed that the overall rate of SSI by the NHSN criteria was low at 0.49% following thoracolumbar surgery. This was attributable to the implementation of presurgery optimization, and intraoperative and postoperative measures to prevent SSI across the authors’ institution. Despite these prevention measures, a history of CAD or DM and male sex were risk factors associated with increased SSI, whereas presurgery CHG shower utilization decreased SSI risk.
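The multivariable logistic regression described above reports risk factors as odds ratios with 95% confidence intervals, which are obtained by exponentiating fitted coefficients. A small sketch of that final step follows; the coefficient and standard error are illustrative assumptions, not the study's fitted values.

```python
# Sketch: converting a logistic regression coefficient into an odds
# ratio with a 95% CI. Inputs are hypothetical, not study estimates.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic coefficient (and its Wald interval)
    into an odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

or_, lo, hi = odds_ratio_ci(beta=1.1, se=0.4)  # hypothetical CAD effect
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An interval excluding 1.0 corresponds to a statistically significant association at the 5% level, matching the p-value convention used in the abstract.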
Justin K. Scheer, Taemin Oh, Justin S. Smith, Christopher I. Shaffrey, Alan H. Daniels, Daniel M. Sciubba, D. Kojo Hamilton, Themistocles S. Protopsaltis, Peter G. Passias, Robert A. Hart, Douglas C. Burton, Shay Bess, Renaud Lafage, Virginie Lafage, Frank Schwab, Eric O. Klineberg, Christopher P. Ames and the International Spine Study Group
Pseudarthrosis can occur following adult spinal deformity (ASD) surgery and can lead to instrumentation failure, recurrent pain, and ultimately revision surgery. In addition, it is one of the most expensive complications of ASD surgery. Risk factors contributing to pseudarthrosis in ASD have been described; however, a preoperative model predicting the development of pseudarthrosis does not exist. The goal of this study was to create a preoperative predictive model for pseudarthrosis based on demographic, radiographic, and surgical factors.
A retrospective review of a prospectively maintained, multicenter ASD database was conducted. Study inclusion criteria consisted of adult patients (age ≥ 18 years) with spinal deformity who underwent surgery for ASD. From among 82 variables assessed, 21 were used for model building after applying collinearity testing, redundancy, and univariable predictor importance ≥ 0.90. Variables included demographic data along with comorbidities, modifiable surgical variables, baseline coronal and sagittal radiographic parameters, and baseline scores for health-related quality of life measures. Patient groups were determined according to their Lenke radiographic fusion type at the 2-year follow-up: bilateral or unilateral fusion (union) or pseudarthrosis (nonunion). A decision tree was constructed, and internal validation was accomplished via bootstrapped training and testing data sets. Accuracy and the area under the receiver operating characteristic curve (AUC) were calculated to evaluate the model.
A total of 336 patients were included in the study (nonunion: 105, union: 231). The model was 91.3% accurate with an AUC of 0.94. From 82 initial variables, the top 21 covered a wide range of areas including preoperative alignment, comorbidities, patient demographics, and surgical use of graft material.
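The AUC reported above is equivalent to the probability that a randomly chosen nonunion patient receives a higher predicted risk than a randomly chosen union patient (the Mann-Whitney formulation). A pure-Python sketch of that computation follows; the scores and labels are illustrative, not model outputs.

```python
# Rank-based AUC sketch (Mann-Whitney formulation; ties count as 0.5).
# Scores and labels below are illustrative assumptions.

def auc(scores, labels):
    """Probability a random positive case outranks a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted nonunion probabilities and true outcomes
print(auc([0.9, 0.4, 0.5, 0.2, 0.6], [1, 1, 0, 0, 1]))
```

An AUC of 0.94, as reported, means the model ranks a true nonunion above a true union in roughly 94% of such pairs.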
A model for predicting the development of pseudarthrosis at the 2-year follow-up was successfully created. This model is the first of its kind for complex predictive analytics in the development of pseudarthrosis for patients with ASD undergoing surgical correction and can aid in clinical decision-making for potential preventative strategies.