Browse

Showing 1–10 of 210 items for:

  • All content
  • By Author: Mummaneni, Praveen V.
Restricted access

Praveen V. Mummaneni, Mohamad Bydon, John J. Knightly, Mohammed Ali Alvi, Yagiz U. Yolcu, Andrew K. Chan, Kevin T. Foley, Jonathan R. Slotkin, Eric A. Potts, Mark E. Shaffrey, Christopher I. Shaffrey, Kai-Ming Fu, Michael Y. Wang, Paul Park, Cheerag D. Upadhyaya, Anthony L. Asher, Luis Tumialan, and Erica F. Bisson

OBJECTIVE

Optimizing patient discharge after surgery has been shown to impact patient recovery and hospital/physician workflow and to reduce healthcare costs. In the current study, the authors sought to identify risk factors for nonroutine discharge after surgery for cervical myelopathy by using a national spine registry.

METHODS

The Quality Outcomes Database cervical module was queried for patients who had undergone surgery for cervical myelopathy between 2016 and 2018. Nonroutine discharge was defined as discharge to postacute care (rehabilitation), nonacute care, or another acute care hospital. A multivariable logistic regression predictive model was created using an array of demographic, clinical, operative, and patient-reported outcome characteristics.
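A minimal sketch of the kind of multivariable logistic regression model described above, using synthetic data and hypothetical covariates (age ≥ 65 years, nonindependent ambulation, baseline EQ-5D) rather than the registry data; exponentiating the fitted coefficients yields adjusted odds ratios:

```python
# Illustrative only: a multivariable logistic regression for a binary
# "nonroutine discharge" outcome, fit on synthetic data (not the QOD registry).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical covariates
age65 = rng.integers(0, 2, n)            # 1 if age >= 65 years
ambulation = rng.integers(0, 2, n)       # 1 if nonindependent ambulation
eq5d = rng.uniform(0.2, 1.0, n)          # baseline quality of life

# Synthetic outcome: higher odds with age/ambulation, lower with better EQ-5D
logit = -2.5 + 1.2 * age65 + 0.9 * ambulation - 1.5 * eq5d
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age65, ambulation, eq5d])
model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])     # e^beta = adjusted odds ratio
print(dict(zip(["age>=65", "nonindep_ambulation", "eq5d"], odds_ratios.round(2))))
```

A real registry analysis would additionally handle missing data, categorical encodings, and a much larger covariate set.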

RESULTS

Of the 1114 patients identified, 11.2% (n = 125) had a nonroutine discharge. On univariate analysis, patients with a nonroutine discharge were more likely to be older (age ≥ 65 years, 70.4% vs 35.8%, p < 0.001), African American (24.8% vs 13.9%, p = 0.007), and on Medicare (75.2% vs 35.1%, p < 0.001). Among the patients younger than 65 years of age, those who had a nonroutine discharge were more likely to be unemployed (70.3% vs 36.9%, p < 0.001). Overall, patients with a nonroutine discharge were more likely to present with a motor deficit (73.6% vs 58.7%, p = 0.001) and more likely to have nonindependent ambulation (50.4% vs 14.0%, p < 0.001) at presentation. On multivariable logistic regression, factors associated with higher odds of a nonroutine discharge included African American race (vs White, OR 2.76, 95% CI 1.38–5.51, p = 0.004), Medicare coverage (vs private insurance, OR 2.14, 95% CI 1.00–4.65, p = 0.04), nonindependent ambulation at presentation (OR 2.17, 95% CI 1.17–4.02, p = 0.01), severe myelopathy at baseline (modified Japanese Orthopaedic Association score 0–11 vs moderate 12–14, OR 2.0, 95% CI 1.07–3.73, p = 0.01), and a posterior surgical approach (OR 11.6, 95% CI 2.12–48, p = 0.004). Factors associated with lower odds of a nonroutine discharge included fewer operated levels (1 vs 2–3 levels, OR 0.3, 95% CI 0.1–0.96, p = 0.009) and higher baseline quality of life (EQ-5D score, OR 0.43, 95% CI 0.25–0.73, p = 0.001). On predictor importance analysis, baseline quality of life (EQ-5D score) was the most important single predictor of a nonroutine discharge (Wald χ² = 9.8, p = 0.001); however, after the variables were grouped into distinct categories, socioeconomic and demographic characteristics (age, race, gender, insurance status, employment status) emerged as the most significant drivers of nonroutine discharge (28.4% of total predictor importance).

CONCLUSIONS

The study results indicate that socioeconomic and demographic characteristics including age, race, gender, insurance, and employment may be the most significant drivers of a nonroutine discharge after surgery for cervical myelopathy.

Free access

Enrique Vargas, Dennis T. Lockney, Praveen V. Mummaneni, Alexander F. Haddad, Joshua Rivera, Xiao Tan, Alysha Jamieson, Yasmine Mahmoudieh, Sigurd Berven, Steve E. Braunstein, and Dean Chou

OBJECTIVE

Within the Spine Instability Neoplastic Score (SINS) classification, tumor-related potential spinal instability (SINS 7–12) may not have a clear treatment approach. The authors aimed to examine the proportion of patients in this indeterminate zone who later required surgical stabilization after initial nonoperative management. By studying this patient population, they sought to determine if a clear SINS cutoff existed whereby the spine is potentially unstable due to a lesion and would be more likely to require stabilization.

METHODS

Records from patients treated at the University of California, San Francisco, for metastatic spine disease from 2005 to 2019 were retrospectively reviewed. Seventy-five patients with tumor-related potential spinal instability (SINS 7–12) who were initially treated nonoperatively were included. All patients had at least a 1-year follow-up with complete medical records. A univariate chi-square test and Student t-test were used to compare categorical and continuous outcomes, respectively, between patients who ultimately underwent surgery and those who did not. A backward likelihood multivariate binary logistic regression model was used to investigate the relationship between clinical characteristics and surgical intervention. Recursive partitioning analysis (RPA) and single-variable logistic regression were performed as a function of SINS.
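Recursive partitioning of a single score into a binary cutoff, as described above, can be sketched with a depth-1 decision tree. The data below are synthetic and the recovered split is illustrative only, not the study's result:

```python
# Illustrative stand-in for recursive partitioning analysis (RPA): a depth-1
# decision tree recovers a single split point on a hypothetical SINS-like score.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
sins = rng.integers(7, 13, 300)            # potential-instability zone, SINS 7-12
p_surgery = np.where(sins > 10, 0.6, 0.1)  # synthetic: higher surgery risk above 10
surgery = rng.random(300) < p_surgery

tree = DecisionTreeClassifier(max_depth=1).fit(sins.reshape(-1, 1), surgery)
cutoff = tree.tree_.threshold[0]           # split point chosen by the tree
print(round(cutoff, 1))
```

Deeper trees partition recursively on further variables; a single split suffices to show how an optimal score cutoff emerges from the data.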

RESULTS

Seventy-five patients with a total of 292 spinal metastatic sites were included in this study; 26 (34.7%) patients underwent surgical intervention, and 49 (65.3%) did not. There was no difference in age, sex, comorbidities, or lesion location between the groups. However, there were more patients with a SINS of 12 in the surgery group (55.2%) than in the no surgery group (44.8%) (p = 0.003). On multivariate analysis, SINS > 11 (OR 8.09, CI 1.96–33.4, p = 0.004) and Karnofsky Performance Scale (KPS) score < 60 (OR 0.94, CI 0.89–0.98, p = 0.008) were associated with an increased risk of surgery. KPS score was not correlated with SINS (p = 0.4). RPA by each spinal lesion identified an optimal cutoff value of SINS > 10, which was associated with an increased risk of surgical intervention. Patients who underwent surgical intervention had a higher incidence of complications on multivariable analysis (OR 2.96, CI 1.01–8.71, p = 0.048).

CONCLUSIONS

Patients with a mean SINS of 11 or greater may be at increased risk of mechanical instability requiring surgery after initial nonoperative management. RPA showed that patients with a KPS score of 60 or lower and a SINS of greater than 10 had increased surgery rates.

Restricted access

Simon G. Ammanuel, Caleb S. Edwards, Andrew K. Chan, Praveen V. Mummaneni, Joseph Kidane, Enrique Vargas, Sarah D’Souza, Amy D. Nichols, Sujatha Sankaran, Adib A. Abla, Manish K. Aghi, Edward F. Chang, Shawn L. Hervey-Jumper, Sandeep Kunwar, Paul S. Larson, Michael T. Lawton, Philip A. Starr, Philip V. Theodosopoulos, Mitchel S. Berger, and Michael W. McDermott

OBJECTIVE

Surgical site infection (SSI) is a complication linked to increased costs and length of hospital stay. Prevention of SSI is important to reduce its burden on individual patients and the healthcare system. The authors aimed to assess the efficacy of preoperative chlorhexidine gluconate (CHG) showers on SSI rates following cranial surgery.

METHODS

In November 2013, a preoperative CHG shower protocol was implemented at the authors’ institution. A total of 3126 surgical procedures were analyzed, encompassing a time frame from April 2012 to April 2016. Cohorts before and after implementation of the CHG shower protocol were evaluated for differences in SSI rates.

RESULTS

The overall SSI rate was 0.6%. No significant difference (p = 0.11) was observed between the SSI rate of the 892 patients in the preimplementation cohort (0.2%) and that of the 2234 patients in the postimplementation cohort (0.8%). Following multivariable analysis, implementation of preoperative CHG showers was not associated with decreased SSI (adjusted OR 2.96, 95% CI 0.67–13.1; p = 0.15).

CONCLUSIONS

This is the largest study by sample size to examine the association between CHG showers and SSI following craniotomy. CHG showers did not significantly alter the risk of SSI after a cranial procedure.

Restricted access

Praveen V. Mummaneni, Ibrahim Hussain, Christopher I. Shaffrey, Robert K. Eastlack, Gregory M. Mundis Jr., Juan S. Uribe, Richard G. Fessler, Paul Park, Leslie Robinson, Joshua Rivera, Dean Chou, Adam S. Kanter, David O. Okonkwo, Pierce D. Nunley, Michael Y. Wang, Frank La Marca, Khoi D. Than, Kai-Ming Fu, and the International Spine Study Group

OBJECTIVE

Minimally invasive surgery (MIS) for spinal deformity uses interbody techniques for correction, indirect decompression, and arthrodesis. Selection criteria for choosing a particular interbody approach are lacking. The authors created the minimally invasive interbody selection algorithm (MIISA) to provide a framework for rational decision-making in MIS for deformity.

METHODS

A retrospective data set of circumferential MIS (cMIS) for adult spinal deformity (ASD) collected over a 5-year period was analyzed by level in the lumbar spine to identify surgeon preferences and evaluate segmental lordosis outcomes. These data were used to inform a Delphi session of minimally invasive deformity surgeons from which the algorithm was created. The algorithm leads to 1 of 4 interbody approaches: anterior lumbar interbody fusion (ALIF), anterior column release (ACR), lateral lumbar interbody fusion (LLIF), and transforaminal lumbar interbody fusion (TLIF). Preoperative and 2-year postoperative radiographic parameters and clinical outcomes were compared.

RESULTS

Eleven surgeons completed 100 cMISs for ASD with 338 interbody devices, with a minimum 2-year follow-up. The type of interbody approach used at each level from L1 to S1 was recorded. The MIISA was then created with substantial agreement. The surgeons generally preferred LLIF for L1–2 (91.7%), L2–3 (85.2%), and L3–4 (80.7%). ACR was most commonly performed at L3–4 (8.4%) and L2–3 (6.2%). At L4–5, LLIF (69.5%), TLIF (15.9%), and ALIF (9.8%) were most commonly utilized. TLIF and ALIF were the most selected approaches at L5–S1 (61.4% and 38.6%, respectively). Segmental lordosis at each level varied based on the approach, with greater increases reported using ALIF, especially at L4–5 (9.2°) and L5–S1 (5.3°). A substantial increase in lordosis was achieved with ACR at L2–3 (10.9°) and L3–4 (10.4°). Lateral interbody arthrodesis without the use of an ACR did not generally result in significant lordosis restoration. There were statistically significant improvements in lumbar lordosis (LL), pelvic incidence–LL mismatch, coronal Cobb angle, and Oswestry Disability Index at the 2-year follow-up.

CONCLUSIONS

The use of the MIISA provides consistent guidance for surgeons who plan to perform MIS for deformity. For L1–4, the surgeons preferred lateral approaches over TLIF and reserved ACR for patients who needed the greatest increase in segmental lordosis. For L4–5, the surgeons’ order of preference was LLIF, TLIF, and ALIF, but TLIF failed to demonstrate any significant lordosis restoration. At L5–S1, the surgical team typically preferred an ALIF when segmental lordosis was desired and a TLIF if preoperative segmental lordosis was adequate.

Restricted access

Meng Huang, Avery Buchholz, Anshit Goyal, Erica Bisson, Zoher Ghogawala, Eric Potts, John Knightly, Domagoj Coric, Anthony Asher, Kevin Foley, Praveen V. Mummaneni, Paul Park, Mark Shaffrey, Kai-Ming Fu, Jonathan Slotkin, Steven Glassman, Mohamad Bydon, and Michael Wang

OBJECTIVE

Surgical treatment for degenerative spondylolisthesis has been proven to be clinically effective and cost-effective. However, the thresholds that surgeons use for incorporating fusion in addition to decompressive laminectomy in these cases vary. This study investigates associated surgeon- and site-specific factors by using the Quality Outcomes Database (QOD).

METHODS

The QOD was queried for all cases that had undergone surgery for grade 1 spondylolisthesis from database inception to February 2019. In addition to patient-specific covariates, surgeon-specific covariates included age, sex, race, years in practice (0–10, 11–20, 21–30, > 30 years), and fellowship training. Site-specific variables included hospital location (rural, suburban, urban), teaching versus nonteaching status, and hospital type (government, nonfederal; private, nonprofit; private, investor owned). Multivariable regression and predictor importance analyses were performed to identify predictors of the treatment performed (decompression alone vs decompression and fusion). The model was clustered by site to account for site-specific heterogeneity in treatment selection.

RESULTS

A total of 12,322 cases were included, of which 1988 (16.1%) had undergone decompression alone. On multivariable regression analysis clustered by site and adjusted for patient-level clinical covariates, no surgeon-specific factors were significantly associated with the odds of selecting decompression alone as the surgery performed. However, sites located in suburban areas (OR 2.32, 95% CI 1.09–4.84, p = 0.03) were more likely to perform decompression alone (reference = urban). Sites located in rural areas had higher odds of performing decompression alone than hospitals located in urban areas, although the result was not statistically significant (OR 1.33, 95% CI 0.59–2.61, p = 0.49). Nonteaching status was independently associated with lower odds of performing decompression alone (OR 0.40, 95% CI 0.19–0.97, p = 0.04). Predictor importance analysis revealed that the most important determinants of treatment selection were dominant symptom (Wald χ² = 34.7, accounting for 13.6% of total χ²) and concurrent diagnosis of disc herniation (Wald χ² = 31.7, accounting for 12.4% of total χ²). Hospital teaching status was also relatively important (Wald χ² = 4.2, accounting for 1.6% of total χ²) but less so than other patient-level predictors.

CONCLUSIONS

Nonteaching centers were more likely to perform decompressive laminectomy with supplemental fusion for spondylolisthesis. Suburban hospitals were more likely to perform decompression only. Surgeon characteristics were not found to influence treatment selection after adjustment for clinical covariates. Further large database registry experience from surgeons at high-volume academic centers at which surgically and medically complex patients are treated may provide additional insight into factors associated with treatment preference for degenerative spondylolisthesis.

Restricted access

Dominic Amara, Praveen V. Mummaneni, Shane Burch, Vedat Deviren, Christopher P. Ames, Bobby Tay, Sigurd H. Berven, and Dean Chou

OBJECTIVE

Radiculopathy from the fractional curve, usually from L3 to S1, can create severe disability. However, treatment methods for the fractional curve vary. The authors evaluated the effect of adding more levels of interbody fusion during treatment of the fractional curve.

METHODS

A single-institution retrospective review of adult patients treated for scoliosis between 2006 and 2016 was performed. Inclusion criteria were as follows: fractional curves from L3 to S1 > 10°, ipsilateral radicular symptoms concordant on the fractional curve concavity side, patients who underwent at least 1 interbody fusion at the level of the fractional curve, and a minimum 1-year follow-up. Primary outcomes included changes in fractional curve correction, lumbar lordosis change, pelvic incidence − lumbar lordosis mismatch change, scoliosis major curve correction, and rates of revision surgery and postoperative complications. Secondary analysis compared the same outcomes among patients undergoing posterior, anterior, and lateral approaches for their interbody fusion.

RESULTS

A total of 78 patients were included. There were no significant differences in age, sex, BMI, prior surgery, fractional curve degree, pelvic tilt, pelvic incidence, pelvic incidence − lumbar lordosis mismatch, sagittal vertical axis, coronal balance, scoliotic curve magnitude, proportion of patients undergoing an osteotomy, or average number of levels fused among the groups. The mean follow-up was 35.8 months (range 12–150 months). Patients undergoing more levels of interbody fusion had more fractional curve correction (7.4° vs 12.3° vs 12.1° for 1, 2, and 3 levels; p = 0.009); greater increase in lumbar lordosis (−1.8° vs 6.2° vs 13.7°, p = 0.003); and more scoliosis major curve correction (13.0° vs 13.7° vs 24.4°, p = 0.01). There were no statistically significant differences among the groups with regard to postoperative complications (overall rate 47.4%, p = 0.85) or need for revision surgery (overall rate 30.7%, p = 0.25). In the secondary analysis, patients undergoing anterior lumbar interbody fusion (ALIF) had a greater increase in lumbar lordosis (9.1° vs −0.87° for ALIF vs transforaminal lumbar interbody fusion [TLIF], p = 0.028), but also higher revision surgery rates unrelated to adjacent-segment pathology (25% vs 4.3%, p = 0.046). Higher ALIF revision surgery rates were driven by rod fracture in the majority (55%) of cases.

CONCLUSIONS

More levels of interbody fusion resulted in increased lordosis, scoliosis curve correction, and fractional curve correction. However, additional levels of interbody fusion up to 3 levels did not result in more postoperative complications or morbidity. ALIF resulted in a greater lumbar lordosis increase than TLIF, but ALIF had higher revision surgery rates.

Restricted access

Chih-Chang Chang, Dean Chou, Brenton Pennicooke, Joshua Rivera, Lee A. Tan, Sigurd Berven, and Praveen V. Mummaneni

OBJECTIVE

Potential advantages of using expandable versus static cages during transforaminal lumbar interbody fusion (TLIF) are not fully established. The authors aimed to compare the long-term radiographic outcomes of expandable versus static TLIF cages.

METHODS

A retrospective review of 1- and 2-level TLIFs over a 10-year period with expandable and static cages was performed at the University of California, San Francisco. Patients with posterior column osteotomy (PCO) were subdivided. Fusion assessment, cage subsidence, anterior and posterior disc height, foraminal dimensions, pelvic incidence (PI), segmental lordosis (SL), lumbar lordosis (LL), pelvic incidence–lumbar lordosis mismatch (PI-LL), pelvic tilt (PT), sacral slope (SS), and sagittal vertical axis (SVA) were assessed.

RESULTS

A consecutive series of 178 patients (with a total of 210 levels) who underwent TLIF using either static (148 levels) or expandable cages (62 levels) was reviewed. The mean patient age was 60.3 ± 11.5 years and 62.8 ± 14.1 years for the static and expandable cage groups, respectively. The mean follow-up was 42.9 ± 29.4 months for the static cage group and 27.6 ± 14.1 months for the expandable cage group. Within the 1-level TLIF group, the SL and PI-LL improved with statistical significance regardless of whether PCO was performed; however, the static group with PCOs also had statistically significant improvement in LL and SVA. The expandable cage with PCO subgroup had significant improvement in SL only. All of the foraminal parameters improved with statistical significance, regardless of the type of cages used; however, the expandable cage group had greater improvement in disc height restoration. The incidence of cage subsidence was higher in the expandable group (19.7% vs 5.4%, p = 0.0017). Within the expandable group, the unilateral facetectomy-only subgroup had a 5.6 times higher subsidence rate than the PCO subgroup (26.8% vs 4.8%, p = 0.04). Four expandable cages collapsed over time.

CONCLUSIONS

Expandable TLIF cages may initially restore disc height better than static cages, but they also have higher rates of subsidence. Unilateral facetectomy alone may result in more subsidence with expandable cages than bilateral PCO does, potentially because of insufficient facet release. Although expandable cages may have more power to induce lordosis and restore disc height than static cages, subsidence and endplate violation may negate any significant gains.

Free access

Andrew K. Chan, Michele Santacatterina, Brenton Pennicooke, Shane Shahrestani, Alexander M. Ballatori, Katie O. Orrico, John F. Burke, Geoffrey T. Manley, Phiroz E. Tarapore, Michael C. Huang, Sanjay S. Dhall, Dean Chou, Praveen V. Mummaneni, and Anthony M. DiGiorgio

OBJECTIVE

Spine surgery is especially susceptible to malpractice claims. Critics of the US medical liability system argue that it drives up costs, whereas proponents argue it deters negligence. Here, the authors study the relationship between malpractice claim density and outcomes.

METHODS

The following methods were used: 1) the National Practitioner Data Bank was used to determine the number of malpractice claims per 100 physicians, by state, between 2005 and 2010; 2) the Nationwide Inpatient Sample was queried for spinal fusion patients; and 3) the Area Resource File was queried to determine the density of physicians, by state. States were categorized into 4 quartiles regarding the frequency of malpractice claims per 100 physicians. To evaluate the association between malpractice claims and death, discharge disposition, length of stay (LOS), and total costs, an inverse-probability-weighted regression-adjustment estimator was used. The authors controlled for patient and hospital characteristics. Covariates were used to train machine learning models to predict death, discharge disposition not to home, LOS, and total costs.
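The inverse-probability-weighting component of such an estimator can be sketched as follows. This is a plain IPW mean-difference on synthetic data with hypothetical variable names, not the doubly robust IPW regression-adjustment estimator the authors used:

```python
# Minimal IPW sketch: estimate the effect of a binary "exposure" (e.g., being
# in a high-claim-density state) on a continuous outcome (e.g., LOS), where
# exposure depends on a confounder. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
confounder = rng.normal(0, 1, n)                  # e.g., case complexity
p_treat = 1 / (1 + np.exp(-0.8 * confounder))     # exposure depends on it
treated = rng.random(n) < p_treat
outcome = 3.0 + 0.3 * treated + 1.0 * confounder + rng.normal(0, 1, n)

# Propensity scores, then weighted means of the outcome in each arm
ps = LogisticRegression().fit(
    confounder.reshape(-1, 1), treated).predict_proba(confounder.reshape(-1, 1))[:, 1]
w1 = treated / ps
w0 = (1 - treated) / (1 - ps)
ate = (w1 * outcome).sum() / w1.sum() - (w0 * outcome).sum() / w0.sum()
print(round(ate, 2))   # should recover roughly the true effect of 0.3
```

The doubly robust variant additionally fits an outcome regression within each arm, so the estimate remains consistent if either the propensity model or the outcome model is correctly specified.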

RESULTS

Overall, 549,775 discharges following spinal fusions were identified, with 495,640 yielding state-level information about medical malpractice claim frequency per 100 physicians. Of these, 124,425 (25.1%), 132,613 (26.8%), 130,929 (26.4%), and 107,673 (21.7%) were from the lowest, second-lowest, second-highest, and highest quartile states, respectively, for malpractice claims per 100 physicians. Compared to the states with the fewest claims (lowest quartile), surgeries in states with the most claims (highest quartile) showed a statistically significantly higher odds of a nonhome discharge (OR 1.169, 95% CI 1.139–1.200), longer LOS (mean difference 0.304, 95% CI 0.256–0.352), and higher total charges (mean difference [log scale] 0.288, 95% CI 0.281–0.295) with no significant associations for mortality. For the machine learning models—which included medical malpractice claim density as a covariate—the areas under the curve for death and discharge disposition were 0.94 and 0.87, and the R2 values for LOS and total charge were 0.55 and 0.60, respectively.

CONCLUSIONS

Spinal fusion procedures from states with a higher frequency of malpractice claims were associated with an increased odds of nonhome discharge, longer LOS, and higher total charges. This suggests that medicolegal climate may potentially alter practice patterns for a given spine surgeon and may have important implications for medical liability reform. Machine learning models that included medical malpractice claim density as a feature were satisfactory in prediction and may be helpful for patients, surgeons, hospitals, and payers.

Restricted access

Ping-Guo Duan, Praveen V. Mummaneni, Jeremy M. V. Guinn, Joshua Rivera, Sigurd H. Berven, and Dean Chou

OBJECTIVE

The aim of this study was to investigate whether fat infiltration of the lumbar multifidus (LM) muscle affects revision surgery rates for adjacent-segment degeneration (ASD) after L4–5 transforaminal lumbar interbody fusion (TLIF) for degenerative spondylolisthesis.

METHODS

A total of 178 patients undergoing single-level L4–5 TLIF for spondylolisthesis (2006 to 2016) were retrospectively analyzed. Inclusion criteria were a minimum 2-year follow-up, preoperative MR images and radiographs, and single-level L4–5 TLIF for degenerative spondylolisthesis. Twenty-three patients underwent revision surgery for ASD during the follow-up. Another 23 patients without ASD were matched with the patients with ASD. Demographic data, Roussouly curvature type, and spinopelvic parameter data were collected. The fat infiltration of the LM muscle (L3, L4, and L5) was evaluated on preoperative MRI using the Goutallier classification system.

RESULTS

A total of 46 patients were evaluated. There were no differences in age, sex, BMI, or spinopelvic parameters between patients with and those without ASD (p > 0.05). Fat infiltration of the LM was significantly greater in the patients with ASD than in those without ASD (p = 0.029). The difference was most pronounced at L3, where fat infiltration was significantly greater in patients with ASD than in those without (p = 0.017). At L4 and L5, there was a trend toward greater fat infiltration in the patients with ASD than in those without, but the difference was not statistically significant (p = 0.354 for L4 and p = 0.077 for L5).

CONCLUSIONS

Fat infiltration of the LM may be associated with ASD after L4–5 TLIF for spondylolisthesis. Fat infiltration at L3 may also be associated with ASD at L3–4 after L4–5 TLIF.

Restricted access

Ping-Guo Duan, Praveen V. Mummaneni, Minghao Wang, Andrew K. Chan, Bo Li, Rory Mayer, Sigurd H. Berven, and Dean Chou

OBJECTIVE

In this study, the authors’ aim was to investigate whether obesity affects surgery rates for adjacent-segment degeneration (ASD) after transforaminal lumbar interbody fusion (TLIF) for spondylolisthesis.

METHODS

Patients who underwent single-level TLIF for spondylolisthesis at the University of California, San Francisco, from 2006 to 2016 were retrospectively analyzed. Inclusion criteria were a minimum 2-year follow-up, single-level TLIF, and degenerative lumbar spondylolisthesis. Exclusion criteria were trauma, tumor, infection, multilevel fusions, non-TLIF fusions, or less than a 2-year follow-up. Patient demographic data were collected, and an analysis of spinopelvic parameters was performed. The patients were divided into two groups: mismatched, or pelvic incidence (PI) minus lumbar lordosis (LL) ≥ 10°; and balanced, or PI-LL < 10°. Within the two groups, the patients were further classified by BMI (< 30 and ≥ 30 kg/m2). Patients were then evaluated for surgery for ASD, matched by BMI and PI-LL parameters.

RESULTS

A total of 190 patients met inclusion criteria (72 males and 118 females, mean age 59.57 ± 12.39 years). The average follow-up was 40.21 ± 20.42 months (range 24–135 months). In total, 24 patients (12.63% of 190) underwent surgery for ASD. Within the entire cohort, 82 patients were in the mismatched group, and 108 patients were in the balanced group. Within the mismatched group, adjacent-segment surgery occurred at the following rates: BMI < 30 kg/m2, 2.1% (1/48); and BMI ≥ 30 kg/m2, 17.6% (6/34). This difference between the BMI ≥ 30 and BMI < 30 subgroups was significant (p = 0.018). A receiver operating characteristic curve for BMI as a predictor of ASD was established, with an AUC of 0.69 (95% CI 0.49–0.90); the optimal BMI cutoff value determined by the Youden index was 29.95 (sensitivity 0.857, specificity 0.627). In the balanced PI-LL group (108/190 patients), however, there was no difference in surgery rates for ASD among patients with different BMIs (p > 0.05).
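The Youden index calculation referenced above (J = sensitivity + specificity − 1, maximized over candidate cutoffs on the ROC curve) can be sketched as follows, using synthetic BMI data rather than the study cohort:

```python
# Illustrative Youden-index cutoff selection from an ROC curve. The BMI
# distributions and the recovered threshold are synthetic, not the study's.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
bmi_no_asd = rng.normal(27, 4, 150)   # hypothetical BMI, no revision for ASD
bmi_asd = rng.normal(32, 4, 30)       # hypothetical BMI, revision for ASD
bmi = np.concatenate([bmi_no_asd, bmi_asd])
asd = np.concatenate([np.zeros(150), np.ones(30)])

fpr, tpr, thresholds = roc_curve(asd, bmi)
youden = tpr - fpr                    # Youden's J at each candidate cutoff
best = thresholds[np.argmax(youden)]  # cutoff maximizing sensitivity + specificity - 1
print(round(roc_auc_score(asd, bmi), 2), round(best, 1))
```

Because J weights sensitivity and specificity equally, the chosen cutoff can differ from one selected to optimize, say, positive predictive value in a low-prevalence cohort.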

CONCLUSIONS

In patients who have a PI-LL mismatch, obesity may be associated with an increased risk of surgery for ASD after TLIF, but in obese patients without PI-LL mismatch, this association was not observed.