Search Results

Showing 71–80 of 110 items for

  • Author or Editor: Christopher Michael
  • User-accessible content
Free access

Andrew K. Chan, Erica F. Bisson, Mohamad Bydon, Steven D. Glassman, Kevin T. Foley, Eric A. Potts, Christopher I. Shaffrey, Mark E. Shaffrey, Domagoj Coric, John J. Knightly, Paul Park, Michael Y. Wang, Kai-Ming Fu, Jonathan R. Slotkin, Anthony L. Asher, Michael S. Virk, Panagiotis Kerezoudis, Mohammed Ali Alvi, Jian Guan, Regis W. Haid and Praveen V. Mummaneni

OBJECTIVE

The optimal minimally invasive surgery (MIS) approach for grade 1 lumbar spondylolisthesis is not clearly elucidated. In this study, the authors compared the 24-month patient-reported outcomes (PROs) after MIS transforaminal lumbar interbody fusion (TLIF) and MIS decompression for degenerative lumbar spondylolisthesis.

METHODS

A total of 608 patients from 12 high-enrolling sites participating in the Quality Outcomes Database (QOD) lumbar spondylolisthesis module underwent single-level surgery for degenerative grade 1 lumbar spondylolisthesis, of whom 143 underwent MIS (72 MIS TLIF [50.3%] and 71 MIS decompression [49.7%]). Surgeries were classified as MIS if there was utilization of percutaneous screw fixation and placement of a Wiltse plane MIS intervertebral body graft (MIS TLIF) or if there was a tubular decompression (MIS decompression). Parameters obtained at baseline through at least 24 months of follow-up were collected. PROs included the Oswestry Disability Index (ODI), numeric rating scale (NRS) for back pain, NRS for leg pain, EuroQol-5D (EQ-5D) questionnaire, and North American Spine Society (NASS) satisfaction questionnaire. Multivariate models were constructed to adjust for patient characteristics, surgical variables, and baseline PRO values.

RESULTS

The MIS cohort had a mean age of 67.1 ± 11.3 years (MIS TLIF 62.1 years vs MIS decompression 72.3 years) and included 79 (55.2%) women (MIS TLIF 55.6% vs MIS decompression 54.9%). The proportion reaching the 24-month follow-up did not differ significantly between the cohorts (MIS TLIF 83.3% and MIS decompression 84.5%, p = 0.85). MIS TLIF was associated with greater blood loss (mean 108.8 vs 33.0 ml, p < 0.001), longer operative time (mean 228.2 vs 101.8 minutes, p < 0.001), and longer length of hospitalization (mean 2.9 vs 0.7 days, p < 0.001). MIS TLIF was associated with a significantly lower reoperation rate (1.4% vs 14.1%, p = 0.004). Both cohorts demonstrated significant improvements in ODI, NRS back pain, NRS leg pain, and EQ-5D at 24 months (p < 0.001, all comparisons relative to baseline). In multivariate analyses, MIS TLIF—as opposed to MIS decompression alone—was associated with superior ODI change (β = −7.59, 95% CI −14.96 to −0.23; p = 0.04), NRS back pain change (β = −1.54, 95% CI −2.78 to −0.30; p = 0.02), and NASS satisfaction (OR 0.32, 95% CI 0.12–0.82; p = 0.02).

CONCLUSIONS

For symptomatic, single-level degenerative spondylolisthesis, MIS TLIF was associated with a lower reoperation rate and superior outcomes for disability, back pain, and patient satisfaction compared with posterior MIS decompression alone. This finding may aid surgical decision-making when considering MIS for degenerative lumbar spondylolisthesis.

Full access

Randy S. Bell, Corey M. Mossop, Michael S. Dirks, Frederick L. Stephens, Lisa Mulligan, Robert Ecker, Christopher J. Neal, Anand Kumar, Teodoro Tigno and Rocco A. Armonda

Object

Decompressive craniectomy has defined this era of damage-control wartime neurosurgery. Injuries that in previous conflicts were treated in an expectant manner are now aggressively decompressed at the far-forward Combat Support Hospital and transferred to Walter Reed Army Medical Center (WRAMC) and National Naval Medical Center (NNMC) in Bethesda for definitive care. The purpose of this paper is to examine the baseline characteristics of those injured warriors who received decompressive craniectomies. The importance of this procedure will be emphasized and guidance provided to current and future neurosurgeons deployed in theater.

Methods

The authors retrospectively searched a database for all soldiers injured in Operations Iraqi Freedom and Enduring Freedom between April 2003 and October 2008 at WRAMC and NNMC. Criteria for inclusion in this study included either a closed or penetrating head injury suffered during combat operations in either Iraq or Afghanistan with subsequent neurosurgical evaluation at NNMC or WRAMC. Exclusion criteria included all cases in which primary demographic data could not be verified. Primary outcome data included the type and mechanism of injury, Glasgow Coma Scale (GCS) score and injury severity score (ISS) at admission, and Glasgow Outcome Scale (GOS) score at discharge, 6 months, and 1–2 years.

Results

Four hundred eight patients presented with head injury during the study period. In this population, a total of 188 decompressive craniectomies were performed (154 for penetrating head injury, 22 for closed head injury, and 12 for unknown injury mechanism). Patients who underwent decompressive craniectomies in the combat theater had significantly lower initial GCS scores (7.7 ± 4.2 vs 10.8 ± 4.0, p < 0.05) and higher ISSs (32.5 ± 9.4 vs 26.8 ± 11.8, p < 0.05) than those who did not. When comparing the GOS scores at hospital discharge, 6 months, and 1–2 years after discharge, those receiving decompressive craniectomies had significantly lower scores (3.0 ± 0.9 vs 3.7 ± 0.9, 3.5 ± 1.2 vs 4.0 ± 1.0, and 3.7 ± 1.2 vs 4.4 ± 0.9, respectively) than those who did not undergo decompressive craniectomies. That said, intragroup analysis indicated consistent improvement for those with craniectomy with time, allowing them, on average, to participate in and improve from rehabilitation (p < 0.05). Overall, 83% of those for whom follow-up data are available achieved a 1-year GOS score of greater than 3.

Conclusions

This study of early decompressive craniectomy in a military population that sustained severe penetrating and closed head injuries is among the largest to date in either the civilian or military literature. The findings suggest that patients who underwent decompressive craniectomy had worse injuries than those who received craniotomy and, while not achieving the same outcomes as those with lesser injuries, improved with time. The authors recommend hemicraniectomy for damage control to protect patients from the effects of brain swelling during the long overseas transport to definitive care; it should be performed with foresight regarding future complications and reconstructive surgical procedures.

Full access

Gregory D. Schroeder, Christopher K. Kepler, John D. Koerner, Jens R. Chapman, Carlo Bellabarba, F. Cumhur Oner, Max Reinhold, Marcel F. Dvorak, Bizhan Aarabi, Luiz Vialle, Michael G. Fehlings, Shanmuganathan Rajasekaran, Frank Kandziora, Klaus J. Schnake and Alexander R. Vaccaro

OBJECT

The aim of this study was to determine if the ability of a surgeon to correctly classify A3 (burst fractures with a single endplate involved) and A4 (burst fractures with both endplates involved) fractures is affected by either the region or the experience of the surgeon.

METHODS

A survey was sent to 100 AOSpine members from all 6 AO regions of the world (North America, South America, Europe, Africa, Asia, and the Middle East) who had no prior knowledge of the new AOSpine Thoracolumbar Spine Injury Classification System. Respondents were asked to classify 25 cases, including 6 thoracolumbar burst fractures (A3 or A4). This study focuses on the effect of region and experience on surgeons’ ability to properly classify these 2 controversial fracture variants.

RESULTS

All 100 surveyed surgeons completed the survey, and no significant regional (p > 0.50) or experiential (p > 0.21) variability in the ability to correctly classify burst fractures was identified; however, surgeons from all regions and with all levels of experience were more likely to correctly classify A3 fractures than A4 fractures (p < 0.01). Further analysis demonstrated that no region predisposed surgeons to increasing their assessment of severity of burst fractures.

CONCLUSIONS

A3 and A4 fractures are the 2 most difficult fractures to classify correctly, but this difficulty is not affected by the region or experience of the surgeon; therefore, regional variation in the treatment of thoracolumbar burst fractures (A3 and A4) is not due to differing radiographic interpretation of the fractures.

Full access

Paige J. Ostahowski, Nithya Kannan, Mark S. Wainwright, Qian Qiu, Richard B. Mink, Jonathan I. Groner, Michael J. Bell, Christopher C. Giza, Douglas F. Zatzick, Richard G. Ellenbogen, Linda Ng Boyle, Pamela H. Mitchell, Monica S. Vavilala and for the PEGASUS (Pediatric Guideline Adherence and Outcomes) Study

OBJECTIVE

Posttraumatic seizure is a major complication following traumatic brain injury (TBI). The aim of this study was to determine the variation in seizure prophylaxis in select pediatric trauma centers. The authors hypothesized that there would be wide variation in seizure prophylaxis selection and use, within and between pediatric trauma centers.

METHODS

In this retrospective multicenter cohort study including 5 regional pediatric trauma centers affiliated with academic medical centers, the authors examined data from 236 children (age < 18 years) with severe TBI (admission Glasgow Coma Scale score ≤ 8, ICD-9 diagnosis codes of 800.0–801.9, 803.0–804.9, 850.0–854.1, 959.01, 950.1–950.3, 995.55, maximum head Abbreviated Injury Scale score ≥ 3) who received tracheal intubation for ≥ 48 hours in the ICU between 2007 and 2011.

RESULTS

Of 236 patients, 187 (79%) received seizure prophylaxis. In 2 of the 5 centers, 100% of the patients received seizure prophylaxis medication. Use of seizure prophylaxis was associated with younger patient age (p < 0.001), inflicted TBI (p < 0.001), subdural hematoma (p = 0.02), cerebral infarction (p < 0.001), and use of electroencephalography (p = 0.023), but not with a higher Injury Severity Score. In 63% of the cases in which seizure prophylaxis was used, the first medication was given within 24 hours of injury, and 50% of the patients received the first dose in the prehospital or emergency department setting. Initial seizure prophylaxis was most commonly with fosphenytoin (47%), followed by phenytoin (40%).

CONCLUSIONS

While fosphenytoin was the most commonly used medication for seizure prophylaxis, there was large variation within and between trauma centers with respect to timing and choice of seizure prophylaxis in severe pediatric TBI. The heterogeneity in seizure prophylaxis use may explain the previously observed lack of relationship between seizure prophylaxis and outcomes.

Full access

Michael P. Kelly, Lukas P. Zebala, Han Jo Kim, Daniel M. Sciubba, Justin S. Smith, Christopher I. Shaffrey, Shay Bess, Eric Klineberg, Gregory Mundis Jr., Douglas Burton, Robert Hart, Alex Soroceanu, Frank Schwab, Virginie Lafage and International Spine Study Group

OBJECT

The goal of this study was to examine the effectiveness of preoperative autologous blood donation (PABD) in adult spinal deformity (ASD) surgery.

METHODS

Patients undergoing single-stay ASD reconstructions were identified in a multicenter database. Patients were divided into groups according to PABD (either PABD or NoPABD). Propensity weighting was used to create matched cohorts of PABD and NoPABD patients. Allogeneic (ALLO) exposure, autologous (AUTO) wastage (unused AUTO), and complication rates were compared between groups.

RESULTS

Four hundred twenty-eight patients were identified as meeting eligibility criteria. Sixty patients were treated with PABD, of whom 50 were matched to 50 patients who were not treated with PABD (NoPABD). Nearly one-third of patients in the PABD group (18/60, 30%) did not receive any autologous transfusion, and their donated blood was wasted. In 6 of these cases (6/60, 10%), patients received ALLO blood transfusions without AUTO. In 9 cases (9/60, 15%), patients received ALLO and AUTO blood transfusions. Overall rates of transfusion of any type were similar between groups (PABD 70% [42/60], NoPABD 75% [275/368], p = 0.438). Major and minor in-hospital complications were similar between groups (Major PABD 10% [6/60], NoPABD 12% [43/368], p = 0.537; Minor PABD 30% [18/60], NoPABD 24% [87/368], p = 0.499). When controlling for potential confounders, PABD patients were more likely to receive some transfusion (OR 15.1, 95% CI 2.1–106.7). However, no relationship between PABD and ALLO blood exposure was observed, refuting the concept that PABD is protective against ALLO blood exposure. In the matched cohorts, PABD patients were more likely to sustain a major perioperative cardiac complication (PABD 8/50 [16%], NoPABD 1/50 [2%], p = 0.046). No differences in rates of infection or wound-healing complications were observed between cohorts.

CONCLUSIONS

Preoperative autologous blood donation was associated with a higher probability of perioperative transfusion of any type in patients with ASD. No protective effect of PABD against ALLO blood exposure was observed, and no increased risk of perioperative infectious complications was observed in patients exposed to ALLO blood only. The benefit of PABD in patients with ASD remains undefined.

Full access

Isaac O. Karikari, Christopher L. Gilchrist, Liufang Jing, David A. Alcorta, Jun Chen, William J. Richardson, Mostafa A. Gabr, Richard D. Bell, Michael J. Kelley, Carlos A. Bagley and Lori A. Setton

Object

Chordoma cells can generate solid-like tumors in xenograft models that express some molecular characteristics of the parent tumor, including positivity for brachyury and cytokeratins. However, there is a dearth of molecular markers that relate to chordoma tumor growth, as well as the cell lines needed to advance treatment. The objective in this study was to isolate a novel primary chordoma cell source and analyze the characteristics of tumor growth in a mouse xenograft model for comparison with the established U-CH1 and U-CH2b cell lines.

Methods

Primary cells from a sacral chordoma, called “DVC-4,” were cultured alongside U-CH1 and U-CH2b cells for more than 20 passages and characterized for expression of CD24 and brachyury. While brachyury is believed essential for driving tumor formation, CD24 is associated with healthy nucleus pulposus cells. Each cell type was subcutaneously implanted in NOD/SCID/IL2Rγnull mice. The percentage of solid tumors formed, time to maximum tumor size, and immunostaining scores for CD24 and brachyury (intensity scores of 0–3, heterogeneity scores of 0–1) were reported and evaluated to test differences across groups.

Results

The DVC-4 cells retained chordoma-like morphology in culture and exhibited CD24 and brachyury expression profiles in vitro that were similar to those for U-CH1 and U-CH2b. Both U-CH1 and DVC-4 cells grew tumors at rates that were faster than those for U-CH2b cells. Gross tumor developed at nearly every site (95%) injected with U-CH1 and at most sites (75%) injected with DVC-4. In contrast, U-CH2b cells produced grossly visible tumors in less than 50% of injected sites. Brachyury staining was similar among tumors derived from all 3 cell types and was intensely positive (scores of 2–3) in a majority of tissue sections. Differences in the pattern and intensity of staining for CD24, however, were noted among the 3 types of cell-derived tumors (p < 0.05, chi-square test), with evidence of intense and uniform staining in a majority of U-CH1 tumor sections (score of 3) and more than half of the DVC-4 tumor sections (scores of 2–3). By comparison, a majority of sections from U-CH2b cells stained modestly for CD24 (scores of 1–2) with a predominantly heterogeneous staining pattern.

Conclusions

This is the first report on xenografts generated from U-CH2b cells in which a low tumorigenicity was discovered despite evidence of chordoma-like characteristics in vitro. For tumors derived from a primary chordoma cell and U-CH1 cell line, similarly intense staining for CD24 was observed, which may correspond to their similar potential to grow tumors. In contrast, U-CH2b tumors stained less intensely for CD24. These results emphasize that many markers, including CD24, may be useful in distinguishing among chordoma cell types and their tumorigenicity in vivo.

Full access

Amit Jain, Hamid Hassanzadeh, Varun Puvanesarajah, Eric O. Klineberg, Daniel M. Sciubba, Michael P. Kelly, D. Kojo Hamilton, Virginie Lafage, Aaron J. Buckland, Peter G. Passias, Themistocles S. Protopsaltis, Renaud Lafage, Justin S. Smith, Christopher I. Shaffrey, Khaled M. Kebaish and the International Spine Study Group

OBJECTIVE

Using 2 complication-reporting methods, the authors investigated the incidence of major medical complications and mortality in elderly patients after surgery for adult spinal deformity (ASD) during a 2-year follow-up period.

METHODS

The authors queried a multicenter, prospective, surgeon-maintained database (SMD) to identify patients 65 years or older who underwent surgical correction of ASD from 2008 through 2014 and had a minimum 2 years of follow-up (n = 153). They also queried a Centers for Medicare & Medicaid Services claims database (MCD) for patients 65 years or older who underwent fusion of 8 or more vertebral levels from 2005 through 2012 (n = 3366). They calculated cumulative rates of the following complications during the first 6 weeks after surgery: cerebrovascular accident, congestive heart failure, deep venous thrombosis, myocardial infarction, pneumonia, and pulmonary embolism. Significance was set at p < 0.05.

RESULTS

During the perioperative period, rates of major medical complications were 5.9% for pneumonia, 4.1% for deep venous thrombosis, 3.2% for pulmonary embolism, 2.1% for cerebrovascular accident, 1.8% for myocardial infarction, and 1.0% for congestive heart failure. Mortality rates were 0.9% at 6 weeks and 1.8% at 2 years. When comparing the SMD with the MCD, there were no significant differences in the perioperative rates of major medical complications except pneumonia. Furthermore, there were no significant intergroup differences in the mortality rates at 6 weeks or 2 years. The SMD provided greater detail with respect to deformity characteristics and surgical variables than the MCD.

CONCLUSIONS

The incidence of most major medical complications in the elderly after surgery for ASD was similar between the SMD and the MCD and ranged from 1% for congestive heart failure to 5.9% for pneumonia. These complications data can be valuable for preoperative patient counseling and informed consent.

Full access

Christopher D. Wilson, Sam Safavi-Abbasi, Hai Sun, M. Yashar S. Kalani, Yan D. Zhao, Michael R. Levitt, Ricardo A. Hanel, Eric Sauvageau, Timothy B. Mapstone, Felipe C. Albuquerque, Cameron G. McDougall, Peter Nakaji and Robert F. Spetzler

OBJECTIVE

Aneurysmal subarachnoid hemorrhage (aSAH) may be complicated by hydrocephalus in 6.5%–67% of cases. Some patients with aSAH develop shunt dependency, which is often managed by ventriculoperitoneal shunt placement. The objectives of this study were to review published risk factors for shunt dependency in patients with aSAH, determine the level of evidence for each factor, and calculate the magnitude of each risk factor to better guide patient management.

METHODS

The authors searched PubMed and MEDLINE databases for Level A and Level B articles published through December 31, 2014, that describe factors affecting shunt dependency after aSAH and performed a systematic review and meta-analysis, stratifying the existing data according to level of evidence.

RESULTS

On the basis of the results of the meta-analysis, risk factors for shunt dependency included high Fisher grade (OR 7.74, 95% CI 4.47–13.41), acute hydrocephalus (OR 5.67, 95% CI 3.96–8.12), in-hospital complications (OR 4.91, 95% CI 2.79–8.64), presence of intraventricular blood (OR 3.93, 95% CI 2.80–5.52), high Hunt and Hess Scale score (OR 3.25, 95% CI 2.51–4.21), rehemorrhage (OR 2.21, 95% CI 1.24–3.95), posterior circulation location of the aneurysm (OR 1.85, 95% CI 1.35–2.53), and age ≥ 60 years (OR 1.81, 95% CI 1.50–2.19). The only risk factor included in the meta-analysis that did not reach statistical significance was female sex (OR 1.13, 95% CI 0.77–1.65).

CONCLUSIONS

The authors identified several risk factors for shunt dependency in aSAH patients that help predict which patients are likely to require a permanent shunt. Although some of these risk factors are not independent of each other, this information assists clinicians in identifying at-risk patients and managing their treatment.

Full access

Chad W. Washington, Colin P. Derdeyn, Rajat Dhar, Eric J. Arias, Michael R. Chicoine, DeWitte T. Cross, Ralph G. Dacey Jr., Byung Hee Han, Christopher J. Moran, Keith M. Rich, Ananth K. Vellimana and Gregory J. Zipfel

OBJECT

Studies show that phosphodiesterase-V (PDE-V) inhibition reduces cerebral vasospasm (CVS) and improves outcomes after experimental subarachnoid hemorrhage (SAH). This study was performed to investigate the safety and effect of sildenafil (an FDA-approved PDE-V inhibitor) on angiographic CVS in SAH patients.

METHODS

A 2-phase, prospective, nonrandomized human trial was implemented. Subarachnoid hemorrhage patients underwent angiography on Day 7 to assess for CVS. Those with CVS were given 10 mg of intravenous sildenafil in the first phase of the study and 30 mg in the second phase. In both phases, angiography was repeated 30 minutes after infusion. Safety was assessed by monitoring neurological examination findings and vital signs and by watching for adverse reactions. For angiographic assessment, pre- and post-sildenafil images were graded in a blinded fashion as showing “improvement” or “no improvement” in CVS. Unblinded measurements were made on pre- and post-sildenafil angiograms.

RESULTS

Twelve patients received sildenafil: 5 received 10 mg and 7 received 30 mg. There were no adverse reactions and no adverse effect on heart rate or intracranial pressure. Sildenafil resulted in a transient decline in mean arterial pressure (mean 17%), with a return to baseline after an average of 18 minutes. Eight patients (67%) had a positive angiographic response to sildenafil: 3 (60%) in the low-dose group and 5 (71%) in the high-dose group. The largest degree of vessel dilation averaged 0.8 mm (range 0–2.1 mm), corresponding to an average increase in vessel diameter of 62% (range 0%–200%).

CONCLUSIONS

The results from this Phase I safety and proof-of-concept trial assessing the use of intravenous sildenafil in patients with CVS show that sildenafil is safe and well tolerated in the setting of SAH. Furthermore, the angiographic data suggest that sildenafil has a positive impact on human CVS.

Free access

Nancy McLaughlin, Michael A. Burke, Nisheeta P. Setlur, Douglas R. Niedzwiecki, Alan L. Kaplan, Christopher Saigal, Aman Mahajan, Neil A. Martin and Robert S. Kaplan

Object

To date, health care providers have devoted significant efforts to improve performance regarding patient safety and quality of care. To address the lagging involvement of health care providers in the cost component of the value equation, UCLA Health piloted the implementation of time-driven activity-based costing (TDABC). Here, the authors describe the implementation experiment, share lessons learned across the care continuum, and report how TDABC has actively engaged health care providers in costing activities and care redesign.

Methods

After the selection of pilots in neurosurgery and urology and the creation of the TDABC team, multidisciplinary process mapping sessions, capacity-cost calculations, and model integration were coordinated and offered to engage care providers at each phase.

Results

Reviewing the maps for the entire episode of care revealed the varying types of personnel involved in the delivery of care: 63 for the neurosurgery pilot and 61 for the urology pilot. Across both pilots, the average capacity cost rates for care coordinators, nurses, residents, and faculty were $0.70 (range $0.63–$0.75), $1.55 (range $1.28–$2.04), $0.58 (range $0.56–$0.62), and $3.54 (range $2.29–$4.52), respectively. After calculating the costs for material, equipment, and space, the TDABC model enabled the linking of each specific step of the care cycle (who performed the step and its duration) with its associated costs. Both pilots identified important opportunities to redesign care delivery in a cost-conscious fashion.

Conclusions

The experimentation and implementation phases of the TDABC model have succeeded in engaging health care providers in process assessment and costing activities. The TDABC model proved to be a catalyzing agent for cost-conscious care redesign.