Shyam S. Rao, David Y. Chung, Zoe Wolcott, Faheem Sheriff, Ayaz M. Khawaja, Hang Lee, Mary M. Guanci, Thabele M. Leslie-Mazwi, W. Taylor Kimberly, Aman B. Patel and Guy A. Rordorf

OBJECTIVE

There is variability and uncertainty about the optimal approach to the management and discontinuation of an external ventricular drain (EVD) after subarachnoid hemorrhage (SAH). Evidence from single-center randomized trials suggests that intermittent CSF drainage and rapid EVD weans are safe and associated with shorter ICU length of stay (LOS) and fewer EVD complications. However, a recent survey revealed that most neurocritical care units across the United States employ continuous CSF drainage with a gradual wean strategy. Therefore, the authors sought to determine the optimal EVD management approach at their institution.

METHODS

The authors reviewed records of 200 patients admitted to their institution from 2010 to 2016 with aneurysmal SAH requiring an EVD. In 2014, the neurocritical care unit of the authors’ institution revised the internal EVD management guidelines from a continuous CSF drainage with gradual wean approach (continuous/gradual) to an intermittent CSF drainage with rapid EVD wean approach (intermittent/rapid). The authors performed a retrospective multivariable analysis to compare outcomes before and after the guideline change.
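
To make the analytic idea concrete, the sketch below shows one way such a before-and-after multivariable comparison could be set up in Python with statsmodels; the file name, column names, and confounders are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch of a pre/post-guideline multivariable comparison.
# Column names and confounders are illustrative, not the authors' model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("evd_cohort.csv")  # assumed: one row per patient

# Outcome: VP shunt placement (1/0); exposure: intermittent/rapid era (1/0);
# adjusted for assumed confounders such as age and admission grade.
X = sm.add_constant(df[["intermittent_rapid_era", "age", "admission_grade"]])
y = df["vp_shunt"]

fit = sm.Logit(y, X).fit()
odds_ratios = np.exp(fit.params)   # e.g., OR for the guideline-change era
conf_int = np.exp(fit.conf_int())  # 95% CIs on the OR scale
print(odds_ratios, conf_int, sep="\n")
```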

RESULTS

The authors observed a significant reduction in ventriculoperitoneal (VP) shunt rates after changing to an intermittent CSF drainage with rapid EVD wean approach (13% intermittent/rapid vs 35% continuous/gradual, OR 0.21, p = 0.001). There was no increase in delayed VP shunt placement at 3 months (9.3% vs 8.6%, univariate p = 0.41). The intermittent/rapid EVD approach was also associated with a shorter mean EVD duration (10.2 vs 15.6 days, p < 0.001), shorter ICU LOS (14.2 vs 16.9 days, p = 0.001), shorter hospital LOS (18.2 vs 23.7 days, p < 0.0001), and lower incidence of a nonfunctioning EVD (15% vs 30%, OR 0.29, p = 0.006). The authors found no significant differences in the rates of symptomatic vasospasm (24.6% vs 20.2%, p = 0.52) or ventriculostomy-associated infections (1.3% vs 8.8%, OR 0.30, p = 0.315) between the 2 groups.

CONCLUSIONS

An intermittent CSF drainage with rapid EVD wean approach is associated with fewer VP shunt placements, fewer complications, and shorter LOS compared to a continuous CSF drainage with gradual EVD wean approach. There is a critical need for prospective multicenter studies to determine if the authors’ experience is generalizable to other centers.

Toru Serizawa, Masaaki Yamamoto, Yoshinori Higuchi, Yasunori Sato, Takashi Shuto, Atsuya Akabane, Hidefumi Jokura, Shoji Yomo, Osamu Nagano, Jun Kawagishi and Kazuhiro Yamanaka

OBJECTIVE

The Japanese Leksell Gamma Knife (JLGK)0901 study demonstrated that Gamma Knife radiosurgery (GKRS) in patients with 5–10 brain metastases (BMs) was noninferior to GKRS in patients with 2–4 BMs with respect to overall survival and other secondary endpoints. However, the difference in local tumor progression between patients with 2–4 BMs and those with 5–10 BMs has not been sufficiently examined for this data set. Thus, the authors reappraised this issue using the updated JLGK0901 data set, which includes detailed observation via enhanced MRI, and applied competing risk analysis and propensity score matching.

METHODS

This was a prospective observational study of 1194 patients harboring 1–10 BMs treated with GKRS alone. Patients were categorized into groups A (single BM, 455 cases), B (2–4 BMs, 531 cases), and C (5–10 BMs, 208 cases). Local tumor progression was defined as a 20% increase in the maximum diameter of the enhanced lesion as compared to its smallest documented maximum diameter on enhanced MRI. The authors compared cumulative incidence differences determined by competing risk analysis and also conducted propensity score matching.
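
As an illustration of the competing-risk framing, the sketch below estimates the cumulative incidence of local tumor progression with death treated as a competing event, using lifelines' Aalen-Johansen estimator; the file, column names, and event coding are assumptions, and the Fine-Gray regression reported in the results would require a dedicated competing-risks regression implementation.

```python
# Illustrative competing-risk cumulative incidence of local tumor progression.
# File, column names, and event coding are assumptions, not the study data.
import pandas as pd
from lifelines import AalenJohansenFitter

df = pd.read_csv("jlgk0901_lesions.csv")  # assumed: one row per treated lesion
# event_code: 0 = censored, 1 = local tumor progression, 2 = death (competing)

def progressed(current_max_diameter, smallest_max_diameter):
    """Progression definition: >= 20% increase over the smallest documented
    maximum diameter on enhanced MRI."""
    return current_max_diameter >= 1.2 * smallest_max_diameter

ajf = AalenJohansenFitter()
ajf.fit(df["months_from_gkrs"], df["event_code"], event_of_interest=1)
print(ajf.cumulative_density_)  # cumulative incidence curve for progression
```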

RESULTS

Local tumor progression was observed in 212 patients (17.8% overall, groups A/B/C: 93/89/30 patients). Cumulative incidences of local tumor progression in groups A, B, and C were 15.2%, 10.6%, and 8.7% at 1 year after GKRS; 20.1%, 16.9%, and 13.5% at 3 years; and 21.4%, 17.4%, and not available at 5 years, respectively. There were no significant differences in local tumor progression between groups B and C. Local tumor progression was classified as tumor recurrence in 139 patients (groups A/B/C: 68/53/18 patients), radiation necrosis in 67 (24/31/12), and mixed/undetermined lesions in 6 (1/5/0). There were no significant differences in tumor recurrence or radiation necrosis between groups B and C. Multivariate analysis using the Fine-Gray proportional hazards model revealed age < 65 years, neurological symptoms, tumor volume ≥ 1 cm³, and prescription dose < 22 Gy to be significant poor prognostic factors for local tumor progression. In the subset of 558 case-matched patients (186 in each group), there were no significant differences between groups B and C in local tumor progression, nor in tumor recurrence or radiation necrosis.

CONCLUSIONS

Local tumor progression incidences did not differ between groups B and C. This study showed that, with the doses prescribed according to the JLGK0901 study protocol, local tumor control after GKRS without whole-brain radiation therapy in patients with 5–10 BMs was satisfactory and not inferior to that in patients with a single BM or 2–4 BMs.

Clinical trial registration no.: UMIN000001812 (umin.ac.jp)

Ori Barzilai, Lily McLaughlin, Eric Lis, Yoshiya Yamada, Mark H. Bilsky and Ilya Laufer

OBJECTIVE

As patients with metastatic cancer live longer, an increased emphasis is placed on long-term therapeutic outcomes. The current study evaluates outcomes of long-term cancer survivors following surgery for spinal metastases.

METHODS

The study population included patients surgically treated at a tertiary cancer center between January 2010 and December 2015 who survived at least 24 months postoperatively. A retrospective chart and imaging review was performed to collect data regarding patient demographics; tumor histology; type and extent of spinal intervention; radiation data, including treatment dose and field; long-term sequelae, including local tumor control; and reoperations, repeat irradiation, or postoperative kyphoplasty at a previously treated level.

RESULTS

Eighty-eight patients were identified, of whom 44 were male, with a mean age of 61 years. The mean clinical follow-up for the cohort was 44.6 months (range 24.2–88.3 months). Open posterolateral decompression and stabilization was performed in 67 patients and percutaneous minimally invasive surgery in 21. In the total cohort, 84% received postoperative adjuvant radiation and 27% were operated on for progression following radiation. Posttreatment local tumor progression was identified at the index treatment level in 10 patients (11%), and 5 additional patients had a marginal failure; all of these patients were treated with repeat irradiation, and 5 required a reoperation. In total, at least 1 additional surgical intervention was performed at the index level in 20 (23%) of the 88 patients: 11 for hardware failure, 5 for progression of disease, 3 for wound complications, and 1 for postoperative hematoma. Most reoperations (85%) were performed more than 3 months after the index surgery. Wound infections or dehiscence requiring additional surgical intervention occurred in 3 patients, in all cases more than a year postoperatively. Kyphoplasty at a previously operated level was performed in 3 cases because of progressive fractures.

CONCLUSIONS

Durable tumor control can be achieved in long-term cancer survivors surgically treated for symptomatic spinal metastases with limited complications. Complications observed after long-term follow-up include local tumor recurrence/progression, marginal tumor control failures, early or late hardware complications, late wound complications, and progressive spinal instability or deformity.

Nils H. Ulrich, Jakob M. Burgstaller, Isaac Gravestock, Giuseppe Pichierri, Maria M. Wertli, Johann Steurer, Mazda Farshad and François Porchet

OBJECTIVE

In this retrospective analysis of a prospective multicenter cohort study, the authors assessed which surgical approach, 1) unilateral laminotomy with bilateral spinal canal decompression (ULBD; also called “over the top”) or 2) standard open bilateral decompression (SOBD), achieves better clinical outcomes at long-term follow-up. The optimal surgical approach (ULBD vs SOBD) to treat lumbar spinal stenosis remains controversial.

METHODS

The main outcomes of this study were changes in a spinal stenosis measure (SSM) symptoms score, SSM function score, and quality of life (sum score of the 3-level version of the EQ-5D tool [EQ-5D-3L]) over time. These outcome parameters were measured at baseline and at 12-, 24-, and 36-month follow-ups. To obtain an unbiased estimate of the effect of ULBD compared to SOBD, the authors used matching techniques relying on propensity scores, which were calculated with a logistic regression model that included relevant confounders. Additional outcomes of interest were raw changes in the main outcomes and in the Roland and Morris Disability Questionnaire from baseline to 12, 24, and 36 months.
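
A minimal sketch of propensity score matching under this general recipe is shown below (1:1 nearest-neighbor matching on the estimated score, with replacement for simplicity); the confounders and column names are placeholders rather than the study's actual model.

```python
# Minimal propensity score matching sketch; names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("lss_cohort.csv")  # assumed: one row per patient
confounders = ["age", "sex", "bmi", "baseline_ssm_symptoms"]  # illustrative

# 1) Propensity score: estimated probability of receiving ULBD.
ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["ulbd"])
df["ps"] = ps_model.predict_proba(df[confounders])[:, 1]

# 2) Match each ULBD patient to the SOBD patient with the closest score
#    (with replacement here, for brevity).
treated = df[df["ulbd"] == 1]
control = df[df["ulbd"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]
```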

RESULTS

For this study, 277 patients met the inclusion criteria. One hundred forty-nine patients were treated by ULBD, and 128 were treated by SOBD. After propensity score matching, 128 patients were left in each group. In the matched cohort, the mean (95% CI) estimated differences between ULBD and SOBD for change in SSM symptoms score from baseline to 12 months were −0.04 (−0.25 to 0.17), to 24 months −0.07 (−0.29 to 0.15), and to 36 months −0.04 (−0.28 to 0.21). For change in SSM function score, the estimated differences from baseline to 12 months were 0.06 (−0.08 to 0.21), to 24 months 0.08 (−0.07 to 0.22), and to 36 months 0.01 (−0.16 to 0.17). Differences in changes between groups in EQ-5D-3L sum scores were estimated to be −0.32 (−4.04 to 3.40), −0.89 (−4.76 to 2.98), and −2.71 (−7.16 to 1.74) from baseline to 12, 24, and 36 months, respectively. None of the group differences between ULBD and SOBD were statistically significant.

CONCLUSIONS

Both surgical techniques, ULBD and SOBD, may provide effective treatment options for patients with degenerative lumbar spinal stenosis (DLSS). The authors further determined that patient outcomes for the technically more challenging ULBD do not appear to be superior to those for SOBD, even after 3 years of follow-up.

Jawad M. Khalifeh, Christopher F. Dibble, Ammar H. Hawasli and Wilson Z. Ray

OBJECTIVE

The Patient-Reported Outcomes Measurement Information System (PROMIS) is an adaptive, self-reported outcomes assessment tool that uses item response theory and computer adaptive testing to efficiently and precisely evaluate symptoms and perceived health status. Efforts to implement and report PROMIS outcomes in spine clinical practice remain limited. The objective of this retrospective cohort study was to evaluate the performance and psychometric properties of the PROMIS physical function (PF) and pain interference (PI) domains among patients undergoing spine surgery.

METHODS

The authors identified all patients who underwent spine surgery at their institution between 2016 and 2018 and who had retrievable PROMIS data. Descriptive statistics were calculated to summarize demographics, operative characteristics, and patient-reported outcomes. Assessments were evaluated preoperatively and postoperatively within 2 months (early), within 6 months (intermediate), and up to 2 years (late). Pairwise change scores were calculated to evaluate within-subject differences and construct responsiveness over time. Pearson’s correlation coefficients were used to evaluate the association between the PROMIS PF and PI domains. Subgroup analysis was performed based on the primary diagnoses of cervical radiculopathy, cervical myelopathy, or lumbar degenerative disease.
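
For illustration, the snippet below computes within-subject change scores and the PF-PI Pearson correlation with scipy; the file layout and column names are assumptions, not the study's data structure.

```python
# Sketch of change scores and the PF-PI correlation; names are assumptions.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("promis_visits.csv")  # assumed: one row per patient per window

pre = df[df["window"] == "preop"].set_index("patient_id")
late = df[df["window"] == "late"].set_index("patient_id")

# Within-subject change from the preoperative score to the late window.
paired = pre.join(late, lsuffix="_pre", rsuffix="_late", how="inner")
pf_change = paired["pf_tscore_late"] - paired["pf_tscore_pre"]
pi_change = paired["pi_tscore_late"] - paired["pi_tscore_pre"]

# Baseline correlation between physical function and pain interference.
r, p = pearsonr(pre["pf_tscore"], pre["pi_tscore"])
print(f"baseline PF-PI Pearson r = {r:.2f} (p = {p:.3g})")
```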

RESULTS

A total of 2770 patients (1395 males, 50.4%) were included in the analysis. The mean age at the time of surgery was 57.3 ± 14.4 years. Mean postoperative follow-up duration was 7.6 ± 6.2 months. Preoperatively, patients scored an average of 15.1 ± 7.4 points below the normative population (mean 50 ± 10 points) in PF, and 15.8 ± 6.8 points above the mean in PI. PROMIS PF required a mean of 4.1 ± 0.6 questions and a median of 40 seconds (interquartile range [IQR] 29–58 seconds) to complete, similar to PI (mean 4.3 ± 1.1 questions and median 38 seconds [IQR 27–59 seconds]). Patients experienced clinically meaningful improvements in PF and PI, which were sustained throughout the postoperative course. PROMIS instruments were able to capture anticipated changes in PF and PI, although to a lesser degree in PF early postoperatively. There was a strong negative correlation between PROMIS PF and PI scores at baseline (Pearson’s r = −0.72) and during follow-up appointments (early, intermediate, and late |r| > 0.6 for each). Subgroup analysis demonstrated results within the diagnostic groups similar to those in the overall cohort. However, the burden of PF limitations and PI was greater in the lumbar spine disease subgroup than in patients with cervical radiculopathy and myelopathy.

CONCLUSIONS

Patients receiving care at a tertiary spine surgery outpatient clinic experience significant overall disability and PI, as measured by PROMIS PF and PI computer adaptive tests. PROMIS PF and PI health domains are strongly correlated, responsive to changes over time, and facilitate time-efficient evaluations of perceived health status outcomes in patients undergoing spine surgery.

Georgios A. Maragkos, Luis C. Ascanio, Mohamed M. Salem, Sricharan Gopakumar, Santiago Gomez-Paz, Alejandro Enriquez-Marulanda, Abhi Jain, Clemens M. Schirmer, Paul M. Foreman, Christoph J. Griessenauer, Peter Kan, Christopher S. Ogilvy and Ajith J. Thomas

OBJECTIVE

The Pipeline embolization device (PED) is a routine choice for the endovascular treatment of select intracranial aneurysms. Its success is based on high rates of aneurysm occlusion, followed by a near-zero probability of recanalization once occlusion has occurred. Identification of patient factors predictive of incomplete occlusion at the last angiographic follow-up is therefore critical.

METHODS

A multicenter retrospective cohort analysis was conducted on consecutive patients treated with a PED for unruptured aneurysms in 3 academic institutions in the US. Patients with angiographic follow-up were selected to identify the factors associated with incomplete occlusion.

RESULTS

Among the 3 participating institutions, a total of 523 PED placement procedures were identified. Of these, 284 procedures for 316 aneurysms had radiographic follow-up and were included in this analysis (median age 58 years; female-to-male ratio 4.2:1). Complete occlusion (100% occlusion) was noted in 76.6% of aneurysms, whereas incomplete occlusion (≤ 99% occlusion) at last follow-up was identified in 23.4%. After accounting for factor collinearity and confounding, multivariable analysis identified older age (> 70 years; OR 4.46, 95% CI 2.30–8.65, p < 0.001); higher maximal diameter (≥ 15 mm; OR 3.29, 95% CI 1.43–7.55, p = 0.005); and fusiform morphology (OR 2.89, 95% CI 1.06–7.85, p = 0.038) as independently associated with higher rates of incomplete occlusion at last follow-up. Thromboembolic complications were noted in 1.4% of procedures and hemorrhagic complications in 0.7%.
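
The sketch below illustrates one way to screen for factor collinearity (via variance inflation factors) and then express a fitted logistic model as ORs with 95% CIs, in the spirit of the analysis above; the predictors and file are placeholders, not the study's exact covariates.

```python
# Illustrative collinearity screen and OR extraction; names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("ped_cohort.csv")  # assumed: one row per treated aneurysm
predictors = ["age_over_70", "diameter_ge_15mm", "fusiform", "neck_width"]

X = sm.add_constant(df[predictors])
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)  # VIF values well above ~5 suggest problematic collinearity

fit = sm.Logit(df["incomplete_occlusion"], X).fit()
odds_ratios = np.exp(fit.params)
ci_95 = np.exp(fit.conf_int())  # CIs reported on the OR scale
```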

CONCLUSIONS

Incomplete aneurysm occlusion following placement of a PED was independently associated with age > 70 years, aneurysm diameter ≥ 15 mm, and fusiform morphology. Such predictive factors can be used to guide individualized treatment selection and counseling in patients undergoing cerebrovascular neurosurgery.

Nasser Mohammed, Dale Ding, Yi-Chieh Hung, Zhiyuan Xu, Cheng-Chia Lee, Hideyuki Kano, Roberto Martínez-Álvarez, Nuria Martínez-Moreno, David Mathieu, Mikulas Kosak, Christopher P. Cifarelli, Gennadiy A. Katsevman, L. Dade Lunsford, Mary Lee Vance and Jason P. Sheehan

OBJECTIVE

The role of primary stereotactic radiosurgery (SRS) in patients with medically refractory acromegaly who are not operative candidates or who refuse resection is poorly understood. The aim of this multicenter, matched cohort study was to compare the outcomes of primary versus postoperative SRS for acromegaly.

METHODS

The authors reviewed an International Radiosurgery Research Foundation database of 398 patients with acromegaly who underwent SRS and categorized them into primary or postoperative cohorts. Patients in the primary SRS cohort were matched, in a 1:2 ratio, to those in the postoperative SRS cohort, and the outcomes of the 2 matched cohorts were compared.

RESULTS

The study cohort comprised 78 patients (median follow-up 66.4 months), including 26 and 52 in the matched primary and postoperative SRS cohorts, respectively. In the primary SRS cohort, the actuarial endocrine remission rates at 2 and 5 years were 20% and 42%, respectively. The Cox proportional hazards model showed that a lower pre-SRS insulin-like growth factor–1 level was predictive of initial endocrine remission (p = 0.03), whereas a lower SRS margin dose was predictive of biochemical recurrence after initial remission (p = 0.01). There were no differences in the rates of radiological tumor control (p = 0.34), initial endocrine remission (p = 0.23), biochemical recurrence after initial remission (p = 0.33), recurrence-free survival (p = 0.32), or hypopituitarism (p = 0.67) between the 2 matched cohorts.
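
As a sketch of how such a time-to-remission analysis can be set up, the snippet below fits a Cox proportional hazards model with lifelines; the covariates, units, and file are assumptions rather than the study's actual variables.

```python
# Hedged sketch of a Cox model for time to endocrine remission.
# Covariate names, units, and the file are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("srs_acromegaly.csv")  # assumed: one row per patient
cols = [
    "months_to_remission",  # time to remission or censoring
    "remission",            # 1 = remission observed, 0 = censored
    "pre_srs_igf1",         # pre-SRS IGF-1 level
    "margin_dose_gy",       # SRS margin dose
    "primary_srs",          # 1 = primary SRS, 0 = postoperative SRS
]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="months_to_remission", event_col="remission")
cph.print_summary()  # hazard ratios with 95% CIs and p values
```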

CONCLUSIONS

Primary SRS has a reasonable benefit-to-risk profile for patients with acromegaly in whom resection is not possible, and its outcomes are similar to those of endocrinologically comparable patients who undergo postoperative SRS. SRS with medical therapy in the latent period can be used as an alternative to surgery in selected patients who cannot or do not wish to undergo resection.

Hyoung-Sub Kim, Jong Beom Lee, Jong Hyeok Park, Ho Jin Lee, Jung Jae Lee, Shumayou Dutta, Il Sup Kim and Jae Taek Hong

OBJECTIVE

Little is known about the risk factors for postoperative subaxial cervical kyphosis following craniovertebral junction (CVJ) fixation. The object of this study was to evaluate postoperative changes in cervical alignment and to identify the risk factors for postoperative kyphotic change in the subaxial cervical spine after CVJ fixation.

METHODS

One hundred fifteen patients were retrospectively analyzed for postoperative subaxial kyphosis after CVJ fixation. Relations between subaxial kyphosis and radiological risk factors, including segmental angles and ranges of motion (ROMs) at C0–1, C1–2, and C2–7, and clinical factors, such as age, sex, etiology, occipital fixation, extensor muscle resection at C2, additional C1–2 posterior wiring, and subaxial laminoplasty, were investigated. Univariate and multivariate logistic regression analyses were conducted to identify the risk factors for postoperative kyphotic changes in the subaxial cervical spine.
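
A minimal sketch of this univariate-then-multivariate workflow is shown below, using the statsmodels formula API; the candidate variables, outcome coding, and the p < 0.10 screening threshold are assumptions for illustration.

```python
# Sketch of univariate screening followed by a multivariate logistic model.
# Variable names, coding, and the 0.10 screening cutoff are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cvj_cohort.csv")  # assumed: one row per patient
candidates = ["rom_c0_c1", "laminoplasty", "age", "occipital_fixation"]

kept = []
for var in candidates:
    uni = smf.logit(f"subaxial_kyphosis ~ {var}", data=df).fit(disp=0)
    if uni.pvalues[var] < 0.10:
        kept.append(var)

multi = smf.logit("subaxial_kyphosis ~ " + " + ".join(kept), data=df).fit(disp=0)
print(multi.summary())
```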

RESULTS

The C2–7 angle change was more than −10° in 30 (26.1%) of the 115 patients. Risk factor analysis showed that CVJ fixation combined with subaxial laminoplasty (OR 9.336, 95% CI 1.484–58.734, p = 0.017) and a small ROM at the C0–1 segment (OR 0.836, 95% CI 0.757–0.923, p < 0.01) were related to postoperative subaxial kyphotic change. On the other hand, age, sex, resection of the C2 extensor muscle, rheumatoid arthritis, additional C1–2 posterior wiring, and postoperative segmental angles were not risk factors for postoperative subaxial kyphosis.

CONCLUSIONS

Subaxial alignment change is not uncommon after CVJ fixation. Muscle detachment at the C2 spinous process was not a risk factor for kyphotic change. The study findings suggest that a small ROM at the C0–1 segment, with or without occipital fixation, and combined subaxial laminoplasty are risk factors for subaxial kyphotic change.

James C. Dickerson, Katherine L. Harriel, Robert J. Dambrino IV, Lorne I. Taylor, Jordan A. Rimes, Ryan W. Chapman, Andrew S. Desrosiers, Jason E. Tullis and Chad W. Washington

OBJECTIVE

Deep vein thrombosis (DVT) is a major focus of patient safety indicators and a common cause of morbidity and mortality. Many practices have employed lower-extremity screening ultrasonography in addition to chemoprophylaxis and the use of sequential compression devices in an effort to reduce poor outcomes. However, the role of screening in directly decreasing pulmonary emboli (PEs) and mortality is unclear. At the University of Mississippi Medical Center, a policy change provided the opportunity to compare independent groups: patients treated under a prior paradigm of weekly screening ultrasonography versus a post–policy change group in which weekly surveillance was no longer performed.

METHODS

A total of 2532 consecutive cases were reviewed, with a 4-month washout period around the time of the policy change. Criteria for inclusion were admission to the neurosurgical service or consultation for ≥ 72 hours and hospitalization for ≥ 72 hours. Patients with a known diagnosis of DVT on admission or previous inferior vena cava (IVC) filter placement were excluded. The primary outcome examined was the rate of PE diagnosis, with secondary outcomes of all-cause mortality at discharge, DVT diagnosis rate, and IVC filter placement rate. A p value < 0.05 was considered significant.
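
For the primary comparison of event rates between independent groups, a simple 2x2 test is sufficient; the sketch below uses scipy, with placeholder counts rather than the study's data.

```python
# Minimal 2x2 comparison of PE rates between eras; counts are placeholders.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: screening era, no-screening era; columns: PE, no PE (hypothetical).
table = [[10, 475],
         [10, 494]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when cell counts are small
print(f"chi-square p = {p_chi2:.2f}, Fisher exact p = {p_fisher:.2f}")
```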

RESULTS

A total of 485 patients met the criteria for the pre–policy change group and 504 for the post–policy change group. Data are presented as screening (pre–policy change) versus no screening (post–policy change). There was no difference in the PE rate (2% in both groups, p = 0.72) or in all-cause mortality at discharge (7% vs 6%, p = 0.49). There were, however, significant differences in the lower-extremity DVT rate (10% vs 3%, p < 0.01) and the IVC filter placement rate (6% vs 2%, p < 0.01).

CONCLUSIONS

Based on these data, screening Doppler ultrasound examinations, in conjunction with standard-of-practice techniques to prevent thromboembolism, do not appear to confer a benefit to patients. While the screening group had significantly higher rates of DVT diagnosis and IVC filter placement, the screening, additional diagnoses, and subsequent interventions did not appear to improve patient outcomes. Ultimately, this makes DVT screening difficult to justify.