Mostafa H. El Dafrawy, Owoicho Adogwa, Adam M. Wegner, Nicholas A. Pallotta, Michael P. Kelly, Khaled M. Kebaish, Keith H. Bridwell and Munish C. Gupta

OBJECTIVE

In this study, the authors’ goal was to determine the intra- and interobserver reliability of a new classification system that allows the description of all possible constructs used across three-column osteotomies (3COs) in terms of rod configuration and density.

METHODS

Thirty-five patients with multirod constructs (MRCs) across a 3CO were classified by two spinal surgery fellows according to the new system and then reclassified 2 weeks later. Each construct was labeled with the number of rods across the osteotomy site followed by a letter corresponding to the type of rod configuration:

  • "M" (main): a single rod spanning the osteotomy.
  • "L" (linked): 2 rods directly connected to each other at the osteotomy site.
  • "S" (satellite): a short rod independent of the main rod, with anchors above and below the 3CO.
  • "A" (accessory): an additional rod across the 3CO attached to main rods but not attached to any anchors across the osteotomy site.
  • "I" (intercalary): a rod connecting 2 separate constructs across the 3CO, with the intercalary rod itself not attached to any anchors across the osteotomy site.

The intra- and interobserver reliability of this classification system was then determined.
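The nomenclature above can be sketched as a small lookup plus a label builder. This is a hypothetical illustration: the abstract specifies only "number of rods followed by a letter," so the exact label formatting (e.g., "4MS") and the `classify` function name are assumptions, not part of the published system.

```python
# Hypothetical encoding of the MRC rod-configuration nomenclature.
# The letter definitions come from the abstract; the label format is assumed.

CONFIGURATIONS = {
    "M": "main rod spanning the osteotomy",
    "L": "linked rods directly connected at the osteotomy site",
    "S": "satellite rod with anchors above and below the 3CO",
    "A": "accessory rod attached to main rods but not to anchors across the site",
    "I": "intercalary rod joining two constructs, itself unanchored across the site",
}

def classify(rod_count: int, configs: list[str]) -> str:
    """Build a construct label: rod count followed by configuration letters."""
    for c in configs:
        if c not in CONFIGURATIONS:
            raise ValueError(f"unknown configuration letter: {c}")
    return f"{rod_count}{''.join(configs)}"

print(classify(4, ["M", "S"]))  # -> 4MS
```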

RESULTS

A sample size estimation for validation, assuming two readers and 35 subjects, yields a two-sided 95% confidence interval with a width of 0.19 for a kappa value of 0.8 (SD 0.3). The Fleiss kappa coefficient (κ) was used to quantify interobserver and intraobserver agreement. The interobserver kappa coefficient was 0.3, and the intraobserver kappa coefficient was 0.63 (good reliability). This scenario represents a high degree of agreement despite a low kappa coefficient. Correct observations by both observers were 34 of 35 and 33 of 35 at the two time points. Misclassification was related to difficulty in distinguishing connectors from anchors.
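The Fleiss kappa statistic named above can be computed from a subjects-by-categories count matrix. The sketch below is a minimal illustration of the formula, assuming every subject is rated by the same number of raters; the data values are made up for demonstration and are not taken from the study.

```python
# Minimal Fleiss kappa sketch. ratings[i][j] = number of raters who
# assigned subject i to category j; all subjects share the same rater count.

def fleiss_kappa(ratings: list[list[int]]) -> float:
    n = len(ratings)          # number of subjects
    k = sum(ratings[0])       # raters per subject
    total = n * k
    n_categories = len(ratings[0])
    # Marginal proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / total for j in range(n_categories)]
    # Per-subject observed agreement
    P = [(sum(c * c for c in row) - k) / (k * (k - 1)) for row in ratings]
    P_bar = sum(P) / n        # mean observed agreement
    P_e = sum(pj * pj for pj in p)  # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy example: two raters, three subjects, two categories
data = [[2, 0], [0, 2], [1, 1]]
print(round(fleiss_kappa(data), 3))  # -> 0.333
```

In the toy data, the raters agree on two of three subjects, yet chance agreement of 0.5 pulls κ down to 0.333, which mirrors the abstract's point that a high raw agreement can coexist with a modest kappa coefficient.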

CONCLUSIONS

MRCs across 3COs have variable rod configurations. Currently, no classification system or agreement on nomenclature exists to define the configuration of rods across 3COs. The authors present a new, comprehensive MRC classification system with good inter- and intraobserver reliability and a high degree of agreement that allows for a standardized description of MRCs across 3COs.