Browse

Showing 1–10 of 102 items for:

  • All content
  • By Author: Theodore, Nicholas
Restricted access

Adham M. Khalafallah, Adrian E. Jimenez, Nathan A. Shlobin, Collin J. Larkin, Debraj Mukherjee, Corinna C. Zygourakis, Sheng-Fu Lo, Daniel M. Sciubba, Ali Bydon, Timothy F. Witham, Nader S. Dahdaleh, and Nicholas Theodore

OBJECTIVE

Although fellowship training is becoming increasingly common in neurosurgery, it is unclear which factors predict an academic career trajectory among spinal neurosurgeons. In this study, the authors sought to identify predictors associated with academic career placement among fellowship-trained spinal neurosurgeons.

METHODS

Demographic data and bibliometric information on neurosurgeons who completed a residency program accredited by the Accreditation Council for Graduate Medical Education between 1983 and 2019 were gathered, and those who completed a spine fellowship were identified. Employment was denoted as academic if the hospital where a neurosurgeon worked was affiliated with a neurosurgical residency program; all other positions were denoted as nonacademic. A logistic regression model was used for multivariate statistical analysis.
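
The sketch below illustrates the kind of multivariable logistic regression described above, modeling academic versus nonacademic placement on the predictors named in this abstract. It is not the authors' code: the data are simulated and the variable names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): multivariable logistic regression of
# academic vs nonacademic placement on the predictors reported in the abstract.
# All data below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "research_months": rng.integers(0, 25, n),      # protected research time in residency
    "h_index": rng.poisson(6, n),                   # h-index at end of residency
    "multiple_fellowships": rng.integers(0, 2, n),  # >1 clinical fellowship (0/1)
    "top5_program": rng.integers(0, 2, n),          # trained at a top-5 fellowship producer (0/1)
})
# Simulate an outcome loosely consistent with the reported direction of effects.
logit_p = (-2 + 0.03 * df.research_months + 0.11 * df.h_index
           + 0.75 * df.multiple_fellowships + 0.7 * df.top5_program)
df["academic"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit(
    "academic ~ research_months + h_index + multiple_fellowships + top5_program",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are odds ratios, analogous to the ORs reported above;
# with real data one would also report 95% CIs.
print(np.exp(fit.params).round(2))
print(fit.pvalues.round(3))
```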

RESULTS

A total of 376 fellowship-trained spinal neurosurgeons were identified, of whom 140 (37.2%) held academic positions. The top 5 programs that graduated the most fellows in the cohort were Cleveland Clinic, The Johns Hopkins Hospital, University of Miami, Barrow Neurological Institute, and Northwestern University. On multivariate analysis, increased protected research time during residency (OR 1.03, p = 0.044), a higher h-index during residency (OR 1.12, p < 0.001), completing more than one clinical fellowship (OR 2.16, p = 0.024), and attending any of the top 5 programs that graduated the most fellows (OR 2.01, p = 0.0069) were independently associated with an academic career trajectory.

CONCLUSIONS

Increased protected research time during residency, a higher h-index during residency, completing more than one clinical fellowship, and attending one of the 5 programs graduating the most fellowship-trained spinal neurosurgeons independently predicted an academic career. These results may be useful in identifying and advising trainees interested in academic spine neurosurgery.

Restricted access

Srujan Kopparapu, Daniel Lubelski, Zach Pennington, Majid Khan, Nicholas Theodore, and Daniel Sciubba

OBJECTIVE

Percutaneous vertebroplasty (PV) and balloon kyphoplasty (BK) are two minimally invasive techniques used to treat mechanical pain secondary to spinal compression fractures. A concern for both procedures is the radiation exposure incurred by operators and patients. The authors conducted a systematic review of the available literature to examine differences between PV and BK in both interventionalist and patient radiation exposure.

METHODS

The authors conducted a search of the PubMed, Ovid Medline, Cochrane Reviews, Embase, and Web of Science databases according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Full-text articles in English describing one of the primary endpoints in ≥ 5 unique patients treated with PV or BK of the mobile spine were included. Estimates of mean operative time, radiation exposure, and fluoroscopy duration were reported as weighted averages. Additionally, annual occupational dose limits provided by the United States Nuclear Regulatory Commission (USNRC) were used to determine the number needed to harm (NNH).

RESULTS

The meta-analysis included 27 articles. For PV, the mean fluoroscopy times were 4.9 ± 3.3 minutes per level without protective measures and 5.2 ± 3.4 minutes with protective measures. The mean operator radiation exposures per level in mrem were 4.6 ± 5.4 at the eye, 7.8 ± 8.7 at the neck, 22.7 ± 62.4 at the torso, and 49.2 ± 62.2 at the hand without protective equipment and 0.3 ± 0.1 at the torso and 95.5 ± 162.5 at the hand with protection. The mean fluoroscopy times per level for BK were 6.1 ± 2.5 minutes without protective measures and 6.0 ± 3.2 minutes with such measures. The mean exposures were 31.3 ± 39.3, 19.7 ± 4.6, 31.8 ± 34.2, and 174.4 ± 117.3 mrem at the eye, neck, torso, and hand, respectively, without protection, and 1, 9.2 ± 26.2, and 187.7 ± 100.4 mrem at the neck, torso, and hand, respectively, with protective equipment. For protected procedures, radiation to the hand was the limiting factor, and the NNH estimates were 524 ± 891 and 266 ± 142 for PV and BK, respectively. Patient exposure, as measured by flank-mounted dosimeters, entrance skin dose, and dose area product, was lower with PV than with BK (p < 0.01).
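
The NNH point estimates above follow from dividing an annual occupational dose limit by the per-procedure hand dose. A minimal sketch of that arithmetic, assuming the USNRC annual extremity (shallow-dose equivalent) limit of 50 rem (50,000 mrem) and the protected hand exposures reported in this meta-analysis:

```python
# Sketch of the NNH arithmetic implied above: annual dose limit / per-procedure hand dose.
# Assumes the USNRC annual extremity limit of 50 rem = 50,000 mrem; hand doses are the
# protected per-level values reported in this meta-analysis.
USNRC_EXTREMITY_LIMIT_MREM = 50_000

hand_dose_per_procedure = {"PV": 95.5, "BK": 187.7}  # mrem, with protective equipment

for procedure, dose in hand_dose_per_procedure.items():
    nnh = USNRC_EXTREMITY_LIMIT_MREM / dose
    print(f"{procedure}: ~{nnh:.0f} procedures per year before exceeding the limit")
# Output: PV ~524, BK ~266, matching the NNH point estimates reported above.
```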

CONCLUSIONS

Operator radiation exposure is significantly decreased by the use of protective equipment. Radiation exposure to both the operator and the patient is lower for PV than for BK. NNH estimates suggest that hand exposure is the factor limiting the number of procedures an operator can safely perform: approximately 524 PV or 266 BK procedures per year before the annual threshold set by the USNRC is surpassed.

Open access

Kee D. Kim, K. Stuart Lee, Domagoj Coric, Jason J. Chang, James S. Harrop, Nicholas Theodore, and Richard M. Toselli

OBJECTIVE

The aim of this study was to evaluate whether the investigational Neuro-Spinal Scaffold (NSS), a highly porous bioresorbable polymer device, demonstrates probable benefit for safety and neurological recovery in patients with complete (AIS grade A) T2–12 spinal cord injury (SCI) when implanted ≤ 96 hours postinjury.

METHODS

This was a prospective, open-label, multicenter, single-arm study in patients with a visible contusion on MRI. The NSS was implanted into the epicenter of the postirrigation intramedullary spinal cord contusion cavity with the intention of providing structural support to the injured spinal cord parenchyma. The primary efficacy endpoint was the proportion of patients who had an improvement of ≥ 1 AIS grade (i.e., conversion from complete paraplegia to incomplete paraplegia) at the 6-month follow-up visit. The objective performance criterion preset for the study was an AIS grade conversion rate of ≥ 25%. Secondary endpoints included change in neurological level of injury (NLI). This analysis reports on data through 6-month follow-up assessments.

RESULTS

Nineteen patients underwent NSS implantation. There were 3 early withdrawals due to death, which were all determined by investigators to be unrelated to the NSS or the implantation procedure. Seven of 16 patients (43.8%) who completed the 6-month follow-up visit had conversion of neurological status (AIS grade A to grade B [n = 5] or C [n = 2]). Five patients showed improvement in NLI of 1 to 2 levels compared with preimplantation assessment, 3 patients showed no change, and 8 patients showed deterioration of 1 to 4 levels. There were no unanticipated or serious adverse device effects or serious adverse events related to the NSS or the implantation procedure as determined by investigators.
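
For context, one way to formally compare the observed conversion rate with the preset 25% objective performance criterion is an exact binomial test. The abstract does not state which statistical comparison, if any, was performed, so the sketch below is purely illustrative.

```python
# Illustrative only: exact binomial test of the observed 6-month conversion rate
# (7 of 16 completers) against the preset 25% objective performance criterion.
# The abstract does not specify the study's statistical comparison.
from scipy.stats import binomtest

result = binomtest(k=7, n=16, p=0.25, alternative="greater")
print(f"Observed rate: {7/16:.1%}")                 # 43.8%
print(f"One-sided exact p-value: {result.pvalue:.3f}")
```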

CONCLUSIONS

In this first-in-human study, implantation of the NSS within the spinal cord appeared to be safe in the setting of surgical decompression and stabilization for complete (AIS grade A) thoracic SCI. It was associated with a 6-month AIS grade conversion rate that exceeded historical controls. The INSPIRE study data demonstrate that the potential benefits of the NSS outweigh the risks in this patient population and support further clinical investigation in a randomized controlled trial.

Clinical trial registration no.: NCT02138110 (clinicaltrials.gov)

Restricted access

Robert Young, Ethan Cottrill, Zach Pennington, Jeff Ehresman, A. Karim Ahmed, Timothy Kim, Bowen Jiang, Daniel Lubelski, Alex M. Zhu, Katherine S. Wright, Donna Gavin, Alyson Russo, Marie N. Hanna, Ali Bydon, Timothy F. Witham, Corinna Zygourakis, and Nicholas Theodore

OBJECTIVE

Enhanced Recovery After Surgery (ERAS) protocols have rapidly gained popularity in multiple surgical specialties and are recognized for their potential to improve patient outcomes and decrease hospitalization costs. However, they have only recently been applied to spinal surgery. The goal of the present work was to describe the development, implementation, and impact of an Enhanced Recovery After Spine Surgery (ERASS) protocol for patients undergoing elective spine procedures at an academic community hospital.

METHODS

A multidisciplinary team, drawing on prior publications and spine surgery best practices, collaborated to develop an ERASS protocol. Patients undergoing elective cervical or lumbar procedures were prospectively enrolled at a single tertiary care center; pre-, intra-, and postoperative care was standardized across the cohort using order sets in the electronic medical record. Protocol efficacy was evaluated by comparing enrolled patients with a historic cohort of age- and procedure-matched controls. The primary study outcomes were quantity of opiate use in morphine milligram equivalents (MMEs) on postoperative day (POD) 1 and length of stay. Secondary outcomes included frequency and duration of indwelling urinary catheter use, discharge disposition, 30-day readmission and reoperation rates, and complication rates. Multivariable linear regression was used to determine whether ERASS protocol use was independently predictive of opiate use on POD 1.
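
A minimal sketch of the kind of multivariable linear regression described above, using simulated data; the covariates shown (age, opioid-naive status, procedure region) are illustrative assumptions, not the authors' model specification.

```python
# Minimal sketch (simulated data, illustrative covariates): multivariable linear
# regression testing whether ERASS enrollment independently predicts POD 1 opiate use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 240
df = pd.DataFrame({
    "erass": rng.integers(0, 2, n),          # 1 = ERASS cohort, 0 = historic control
    "age": rng.normal(60, 12, n).round(),
    "opioid_naive": rng.integers(0, 2, n),   # hypothetical covariate
    "lumbar": rng.integers(0, 2, n),         # 1 = lumbar, 0 = cervical (hypothetical)
})
# Simulated outcome: POD 1 morphine milligram equivalents.
df["mme_pod1"] = (40 - 8 * df.erass - 10 * df.opioid_naive
                  + rng.normal(0, 15, n)).clip(lower=0)

fit = smf.ols("mme_pod1 ~ erass + age + opioid_naive + lumbar", data=df).fit()
# The coefficient on `erass` plays the role of the reported beta (e.g., beta = -7.32).
print(fit.params.round(2))
print(fit.pvalues.round(4))
```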

RESULTS

In total, 97 patients were included in the study cohort and were compared with a historic cohort of 146 patients. The patients in the ERASS group had lower POD 1 opiate use than the control group (26 ± 33 vs 42 ± 40 MMEs, p < 0.001), driven largely by differences in opiate-naive patients (16 ± 21 vs 38 ± 36 MMEs, p < 0.001). Additionally, patients in the ERASS group had shorter hospitalizations than patients in the control group (51 ± 30 vs 62 ± 49 hours, p = 0.047). On multivariable regression, implementation of the ERASS protocol was independently predictive of lower POD 1 opiate consumption (β = −7.32, p < 0.001). There were no significant differences in any of the secondary outcomes.

CONCLUSIONS

The authors found that the development and implementation of a comprehensive ERASS protocol led to a modest reduction in postoperative opiate consumption and hospital length of stay in patients undergoing elective cervical or lumbar procedures. As suggested by these results and those of other groups, the implementation of ERASS protocols may reduce care costs and improve patient outcomes after spine surgery.

Restricted access

Zach Pennington, Ethan Cottrill, Daniel Lubelski, Jeff Ehresman, Nicholas Theodore, and Daniel M. Sciubba

OBJECTIVE

Spine surgery has been identified as a significant source of healthcare expenditures in the United States. Prolonged hospitalization has been cited as one source of increased spending, and there has been a drive from providers and payors alike to decrease inpatient stays. One strategy currently being explored is the use of Enhanced Recovery After Surgery (ERAS) protocols. Here, the authors review the literature on adult spine ERAS protocols, focusing on clinical benefits and cost reductions. They also conduct a quantitative meta-analysis examining the following: 1) length of stay (LOS), 2) complication rate, 3) wound infection rate, 4) 30-day readmission rate, and 5) 30-day reoperation rate.

METHODS

Using the PRISMA guidelines, a search of the PubMed/Medline, Web of Science, Cochrane Reviews, Embase, CINAHL, and OVID Medline databases was conducted to identify all full-text articles in the English-language literature describing ERAS protocol implementation for adult spine surgery. A quantitative meta-analysis using random-effects modeling was performed for the identified clinical outcomes using studies that directly compared ERAS protocols with conventional care.
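
A minimal sketch of random-effects pooling of study-level mean differences in LOS, using the DerSimonian-Laird estimator; the study summaries below are placeholders rather than the values extracted for this review.

```python
# Sketch: pooling study-level mean differences in length of stay (ERAS minus control)
# under a random-effects (DerSimonian-Laird) model. All study data are placeholders.
import numpy as np

# Per-study group summaries: mean LOS (days), SD, and sample size (hypothetical).
eras    = [(3.1, 1.2, 40), (4.0, 2.0, 60), (2.8, 1.0, 35)]
control = [(4.4, 1.5, 42), (5.3, 2.4, 58), (4.6, 1.8, 30)]

md, var = [], []
for (m1, s1, n1), (m2, s2, n2) in zip(eras, control):
    md.append(m1 - m2)                     # mean difference in days
    var.append(s1**2 / n1 + s2**2 / n2)    # variance of the mean difference
md, var = np.array(md), np.array(var)

w = 1 / var                                 # fixed-effect weights
q = np.sum(w * (md - np.sum(w * md) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(md) - 1)) / c)    # between-study variance (DerSimonian-Laird)

w_re = 1 / (var + tau2)
pooled = np.sum(w_re * md) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"Pooled mean difference: {pooled:.2f} days "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```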

RESULTS

Of 950 articles reviewed, 34 were included in the qualitative analysis and 20 were included in the quantitative analysis. The most common protocol types were general spine surgery protocols and protocols for lumbar spine surgery patients. The most frequently cited benefits of ERAS protocols were shorter LOS (n = 12), lower postoperative pain scores (n = 6), and decreased complication rates (n = 4). The meta-analysis demonstrated shorter LOS for the general spine surgery (mean difference −1.22 days [95% CI −1.98 to −0.47]) and lumbar spine ERAS protocols (−1.53 days [95% CI −2.89 to −0.16]). Neither general nor lumbar spine protocols led to a significant difference in complication rates. Insufficient data existed to perform a meta-analysis of the differences in costs or postoperative narcotic use.

CONCLUSIONS

Present data suggest that ERAS protocol implementation may reduce hospitalization time among adult spine surgery patients and may lead to reductions in complication rates when applied to specific populations. However, additional controlled trials are necessary to validate these early findings in larger populations and to generate high-quality evidence capable of supporting practice guidelines.

Free access

Bowen Jiang, Zach Pennington, Alex Zhu, Stavros Matsoukas, A. Karim Ahmed, Jeff Ehresman, Smruti Mahapatra, Ethan Cottrill, Hailey Sheppell, Amir Manbachi, Neil Crawford, and Nicholas Theodore

OBJECTIVE

Robotic spine surgery systems are increasingly used in the US market. As this technology gains traction, however, it is necessary to identify mechanisms that assess its effectiveness and allow for its continued improvement. One such mechanism is the development of a new 3D grading system that can serve as the foundation for error-based learning in robotic systems. Herein the authors attempted 1) to define a system for providing accuracy data along all three pedicle screw placement axes, that is, the cephalocaudal, mediolateral, and screw long axes; and 2) to use the grading system to evaluate the mean accuracy of thoracolumbar pedicle screws placed using a single commercially available robotic system.

METHODS

The authors retrospectively reviewed a prospectively maintained, IRB-approved database of patients at a single tertiary care center who had undergone instrumented fusion of the thoracic or lumbosacral spine using robotic assistance. Patients with preoperatively planned screw trajectories and postoperative CT studies were included in the final analysis. Screw accuracy was measured as the net deviation of the planned trajectory from the actual screw trajectory in the mediolateral, cephalocaudal, and screw long axes.
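
A minimal sketch of how per-axis, net linear, and net angular deviations between a planned and an actual screw trajectory might be computed; the coordinate frame and example values are assumptions, not taken from the authors' workflow.

```python
# Sketch of deviation metrics for one screw: per-axis offsets of the actual screw tip
# from the plan, their Euclidean norm (net linear deviation), and the angle between
# planned and actual screw direction vectors (net angular deviation).
# Coordinates are illustrative; axis conventions (ML, CC, long axis) are assumptions.
import numpy as np

planned_tip = np.array([12.0, -4.0, 35.0])   # mm, in a screw-aligned frame
actual_tip = np.array([13.1, -2.9, 37.2])
planned_dir = np.array([0.10, 0.05, 0.99])   # trajectory direction vectors
actual_dir = np.array([0.13, 0.01, 0.99])

offset = actual_tip - planned_tip            # [mediolateral, cephalocaudal, long-axis]
net_linear = np.linalg.norm(offset)

cos_angle = np.dot(planned_dir, actual_dir) / (
    np.linalg.norm(planned_dir) * np.linalg.norm(actual_dir))
net_angular = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(f"Per-axis deviation (mm): {np.abs(offset).round(2)}")
print(f"Net linear deviation: {net_linear:.1f} mm, net angular deviation: {net_angular:.1f} deg")
```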

RESULTS

The authors identified 47 patients, 51% male, whose pedicles had been instrumented with a total of 254 screws (63 thoracic, 191 lumbosacral). The patients had a mean age of 61.1 years and a mean BMI of 30.0 kg/m². The mean screw tip accuracies were 1.3 ± 1.3 mm, 1.2 ± 1.1 mm, and 2.6 ± 2.2 mm in the mediolateral, cephalocaudal, and screw long axes, respectively, for a net linear deviation of 3.6 ± 2.3 mm and net angular deviation of 3.6° ± 2.8°. According to the Gertzbein-Robbins grading system, 184 screws (72%) were classified as grade A and 70 screws (28%) as grade B. Placement of 100% of the screws was clinically acceptable.

CONCLUSIONS

The accuracy of the discussed robotic spine system is similar to that described for other surgical systems. Additionally, the authors outline a new method of grading screw placement accuracy that measures deviation in all three relevant axes. This grading system could provide the error signal necessary for unsupervised machine learning by robotic systems, which would in turn support continued improvement in instrumentation placement accuracy.

Free access

Zach Pennington, Bowen Jiang, Erick M. Westbroek, Ethan Cottrill, Benjamin Greenberg, Philippe Gailloud, Jean-Paul Wolinsky, Ying Wei Lum, and Nicholas Theodore

OBJECTIVE

Myelopathy selectively involving the lower extremities can occur secondary to spondylotic changes, tumor, vascular malformations, or thoracolumbar cord ischemia. Vascular causes of myelopathy are rarely described. An uncommon etiology within this category is diaphragmatic crus syndrome, in which compression of an intersegmental artery supplying the cord leads to myelopathy. The authors present the operative technique for treating this syndrome, describing their experience with 3 patients treated for acute-onset lower-extremity myelopathy secondary to hypoperfusion of the anterior spinal artery.

METHODS

All patients had compression of a lumbar intersegmental artery supplying the cord; the compression was caused by the diaphragmatic crus. This compression probably produced the patients’ symptoms by decreasing blood flow through the artery of Adamkiewicz, causing lumbosacral ischemia.

RESULTS

All patients underwent surgery to transect the offending diaphragmatic crus. Each patient experienced substantial symptom improvement, and 2 patients made a full neurological recovery before discharge.

CONCLUSIONS

Diaphragmatic crus syndrome is a rare or under-recognized cause of ischemic myelopathy. Patients present with episodic acute-on-chronic lower-extremity paraparesis, gait instability, and numbness. Angiography confirms compression of an intersegmental artery that gives rise to a dominant radiculomedullary artery. Transecting the offending diaphragmatic crus can produce complete resolution of neurological symptoms.

Free access

Tyler S. Cole, Kaith K. Almefty, Jakub Godzik, Amy H. Muma, Randall J. Hlubek, Eduardo Martinez-del-Campo, Nicholas Theodore, U. Kumar Kakarla, and Jay D. Turner

OBJECTIVE

Cervical spondylotic myelopathy (CSM) is the primary cause of adult spinal cord dysfunction. Diminished hand strength and reduced dexterity associated with CSM contribute to disability. Here, the authors investigated the impact of CSM severity on hand function using quantitative testing and evaluated the response to surgical intervention.

METHODS

Thirty-three patients undergoing surgical treatment of CSM were prospectively enrolled in the study. An occupational therapist conducted 3 functional hand tests: 1) palmar dynamometry to measure grip strength, 2) hydraulic pinch gauge test to measure pinch strength, and 3) 9-hole peg test (9-HPT) to evaluate upper extremity dexterity. Tests were performed preoperatively and 6–8 weeks postoperatively. Test results were expressed as 1) a percentile relative to age- and sex-stratified norms and 2) achievement of a minimum clinically important (MCI) difference. Patients were stratified into groups (mild, moderate, and severe myelopathy) based on their modified Japanese Orthopaedic Association (mJOA) score. The severity of stenosis on preoperative MRI was graded by 3 independent physicians using the Kang classification.
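
A minimal sketch of expressing a raw test score as a percentile of an age- and sex-stratified norm, assuming approximately normal normative data; the normative mean and SD are placeholders, not published reference values.

```python
# Sketch: expressing a raw grip-strength value as a percentile of an age- and
# sex-stratified normative distribution, assuming the norms are roughly normal.
# Normative mean/SD below are placeholders, not published reference values.
from scipy.stats import norm

grip_kg = 28.0                    # patient's measured grip strength
norm_mean, norm_sd = 38.0, 7.5    # hypothetical norm for the patient's age/sex stratum

percentile = norm.cdf(grip_kg, loc=norm_mean, scale=norm_sd) * 100
print(f"Grip strength is at percentile {percentile:.1f} of the normative distribution")
```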

RESULTS

The primary presenting symptoms were neck pain (33%), numbness (21%), imbalance (12%), and upper extremity weakness (12%). Among the 33 patients, 61% (20) underwent anterior-approach decompression, with a mean (SD) of 2.9 (1.5) levels treated. At baseline, patients with moderate and low mJOA scores (indicating more severe myelopathy) had lower preoperative pinch (p < 0.001) and grip (p = 0.01) strength than those with high mJOA scores/mild myelopathy. Postoperative improvement was observed in all hand function domains except pinch strength in the nondominant hand, with MCI differences at 6 weeks ranging from 33% of patients in dominant-hand strength tests to 73% of patients in nondominant-hand dexterity tests. Patients with moderate baseline mJOA scores were more likely to have MCI improvement in dominant grip strength (58.3%) than those with low mJOA scores/severe myelopathy (30%) and high mJOA scores/mild myelopathy (9%, p = 0.04). Dominant-hand dexterity on the 9-HPT ranged from below the 1st percentile in patients with cord signal change to the 15.9th percentile in patients with subarachnoid effacement only (p = 0.03).

CONCLUSIONS

Patients with CSM achieved significant improvement in strength and dexterity postoperatively. Baseline strength measures correlated best with the preoperative mJOA score; baseline dexterity correlated best with the severity of stenosis on MRI. The majority of patients experienced MCI improvements in dexterity. Baseline pinch strength correlated with postoperative mJOA MCI improvement, and patients with moderate baseline mJOA scores were the most likely to have improvement in dominant grip strength postoperatively.

Restricted access

Zach Pennington, Daniel Lubelski, Erick M. Westbroek, A. Karim Ahmed, Jeff Ehresman, Matthew L. Goodwin, Sheng-Fu Lo, Timothy F. Witham, Ali Bydon, Nicholas Theodore, and Daniel M. Sciubba

OBJECTIVE

Postoperative C5 palsy affects 7%–12% of patients who undergo posterior cervical decompression for degenerative cervical spine pathologies. Minimal evidence exists regarding the natural history of recovery and the variables that affect it. The authors investigated pre- and postoperative variables that predict recovery and recovery time among patients with postoperative C5 palsy.

METHODS

The authors included patients who underwent posterior cervical decompression at a tertiary referral center between 2004 and 2018 and who experienced postoperative C5 palsy. All patients had preoperative MR images and full records, including the operative note, postoperative course, and clinical presentation. Kaplan-Meier survival analysis was used to evaluate both time to complete recovery and time to a new neurological baseline (defined by deltoid strength on manual motor testing of the affected side) as a function of clinical symptoms, surgical maneuvers, and the severity of postoperative deficits.
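
A minimal sketch of a Kaplan-Meier estimate of time to complete recovery with censoring at last follow-up, here using the lifelines library on fabricated follow-up data; this is one possible implementation, not the authors' analysis code.

```python
# Minimal sketch (fabricated follow-up data): Kaplan-Meier estimate of time to
# complete recovery after postoperative C5 palsy, censoring patients who had not
# fully recovered by last follow-up.
from lifelines import KaplanMeierFitter

months_to_recovery = [2, 5, 7, 3, 12, 18, 6, 9, 24, 4]  # time to recovery or last follow-up
recovered = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]              # 1 = complete recovery observed

kmf = KaplanMeierFitter()
kmf.fit(months_to_recovery, event_observed=recovered, label="complete recovery")
print(kmf.median_survival_time_)        # median time to recovery, if reached
print(kmf.survival_function_.head())    # proportion not yet recovered over time
```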

RESULTS

Seventy-seven patients were included, with an average age of 64 years. The mean follow-up period was 17.7 months. The mean postoperative C5 strength was grade 2.7/5, and the mean time to first motor examination with documented C5 palsy was 3.5 days. Sixteen patients (21%) had bilateral deficits, and 9 (12%) had new-onset biceps weakness; 36% of patients had undergone C4–5 foraminotomy of the affected root, and 17% had presented with radicular pain in the dermatome of the affected root. On univariable analysis, patients’ reporting of numbness or tingling (p = 0.02) and a baseline deficit (p < 0.001) were the only predictors of time to recovery. Patients with grade 4+/5 weakness had significantly shorter times to recovery than patients with grade 4/5 weakness (p = 0.001) or ≤ grade 3/5 weakness (p < 0.001). There was no difference between those with grade 4/5 weakness and those with ≤ grade 3/5 weakness. Patients with postoperative strength < grade 3/5 had a < 50% chance of achieving complete recovery.

CONCLUSIONS

The timing and odds of recovery following C5 palsy were best predicted by the magnitude of the postoperative deficit. The use of C4–5 foraminotomy did not predict the time to or likelihood of recovery.

Free access

Ethan Cottrill, Zach Pennington, A. Karim Ahmed, Daniel Lubelski, Matthew L. Goodwin, Alexander Perdomo-Pantoja, Erick M. Westbroek, Nicholas Theodore, Timothy Witham, and Daniel Sciubba

OBJECTIVE

Nonunion is a common complication of spinal fusion surgeries. Electrical stimulation technologies (ESTs)—namely, direct current stimulation (DCS), capacitive coupling stimulation (CCS), and inductive coupling stimulation (ICS)—have been suggested to improve fusion rates. However, the evidence to support their use is based solely on small trials. Here, the authors report the results of meta-analyses of the preclinical and clinical data from the literature to provide estimates of the overall effect of these therapies at large and in subgroups.

METHODS

A systematic review of the English-language literature was performed using PubMed, Embase, and Web of Science databases. The query of these databases was designed to include all preclinical and clinical studies examining ESTs for spinal fusion. The primary endpoint was the fusion rate at the last follow-up. Meta-analyses were performed using a Freeman-Tukey double arcsine transformation followed by random-effects modeling.
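
A minimal sketch of the Freeman-Tukey double arcsine transformation of study-level fusion proportions followed by DerSimonian-Laird random-effects pooling; the event counts are placeholders, and the simple back-transform shown is an approximation (exact inversion also uses a mean sample size).

```python
# Sketch: Freeman-Tukey double arcsine transformation of study-level fusion
# proportions, then DerSimonian-Laird random-effects pooling on the transformed
# scale. Event counts below are placeholders, not the studies in this review.
import numpy as np

fused = np.array([18, 40, 25, 60])   # fused patients per study (hypothetical)
n = np.array([20, 50, 30, 80])       # patients per study (hypothetical)

# One common parameterization: yi = 0.5*[asin(sqrt(x/(n+1))) + asin(sqrt((x+1)/(n+1)))],
# with sampling variance vi = 1/(4n + 2).
yi = 0.5 * (np.arcsin(np.sqrt(fused / (n + 1))) + np.arcsin(np.sqrt((fused + 1) / (n + 1))))
vi = 1 / (4 * n + 2)

# DerSimonian-Laird estimate of between-study variance, then random-effects weights.
w = 1 / vi
q = np.sum(w * (yi - np.sum(w * yi) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(yi) - 1)) / c)
w_re = 1 / (vi + tau2)
pooled = np.sum(w_re * yi) / np.sum(w_re)

# Approximate back-transform to a proportion (exact inversion uses a mean sample size).
print(f"Pooled fusion proportion ≈ {np.sin(pooled) ** 2:.3f}")
```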

RESULTS

A total of 33 articles (17 preclinical, 16 clinical) were identified, of which 11 preclinical studies (257 animals) and 13 clinical studies (2144 patients) were included in the meta-analysis. In preclinical studies, fusion rates were higher among EST-treated animals (OR 4.79, p < 0.001). Clinical studies similarly showed ESTs to increase fusion rates (OR 2.26, p < 0.001). Of the EST modalities, only DCS improved fusion rates in both preclinical (OR 5.64, p < 0.001) and clinical (OR 2.13, p = 0.03) populations; ICS improved fusion in clinical studies only (OR 2.45, p = 0.014). CCS was not effective at increasing fusion, although only one clinical study was identified. A subanalysis of the clinical studies found that ESTs increased fusion rates in the following populations: patients with difficult-to-fuse spines, those who smoke, and those who underwent multilevel fusions.

CONCLUSIONS

The authors found that electrical stimulation devices may produce clinically significant increases in arthrodesis rates among patients undergoing spinal fusion. They further found that the pro-arthrodesis effects seen in preclinical studies carry over to clinical populations, suggesting that findings in animal studies are translatable. Additional research is needed to analyze the cost-effectiveness of these devices.