A computer vision approach to identifying the manufacturer and model of anterior cervical spinal hardware

OBJECTIVE

Recent advances in computer vision have revolutionized many aspects of society but have yet to find significant penetrance in neurosurgery. One proposed use for this technology is to aid in the identification of implanted spinal hardware. In revision operations, knowing the manufacturer and model of previously implanted fusion systems upfront can facilitate a faster and safer procedure, but this information is frequently unavailable or incomplete. The authors present one approach for the automated, high-accuracy classification of anterior cervical hardware fusion systems using computer vision.

METHODS

Patient records were searched to identify patients who underwent anterior-posterior (AP) cervical radiography following anterior cervical discectomy and fusion (ACDF) at the authors’ institution over a 10-year period (2008–2018). These images were cropped and windowed to include only the cervical plating system and labeled with the appropriate manufacturer and system according to the operative record. A computer vision classifier was then constructed using the bag-of-visual-words technique with KAZE feature detection. Accuracy and validity were tested using an 80%/20% training/testing pseudorandom split over 100 iterations.
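To make the method concrete, the vocabulary-building step of a bag-of-visual-words pipeline can be sketched as below. This is a minimal illustration, not the authors' implementation: it uses a plain k-means loop in NumPy and synthetic 64-dimensional descriptors standing in for KAZE output (in practice, descriptors would come from a KAZE detector such as OpenCV's `cv2.KAZE_create()`), and a tiny `k=8` vocabulary in place of the study's 500 words.

```python
import numpy as np

def build_vocabulary(descriptors, k=500, iters=20, seed=0):
    """Cluster feature descriptors into k 'visual words' with plain k-means.
    descriptors: (n, d) float array pooled from all training images."""
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen descriptors.
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

# Synthetic stand-in for KAZE descriptors pooled across the training images
# (KAZE descriptors are 64-dimensional by default).
rng = np.random.default_rng(1)
descs = rng.normal(size=(600, 64))
vocab = build_vocabulary(descs, k=8)  # k=500 in the study; 8 keeps the demo fast
print(vocab.shape)  # (8, 64)
```

Each row of `vocab` is one "visual word"; the number of training descriptors assigned to each word becomes the per-image histogram used for classification.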

RESULTS

A total of 321 images were isolated containing 9 different ACDF systems from 5 different companies. The correct system was identified as the top choice in 91.5% ± 3.8% of the cases and as one of the top 2 or top 3 choices in 97.1% ± 2.0% and 98.4% ± 1.3% of the cases, respectively. Performance persisted despite the inclusion of variable sizes of hardware (i.e., 1-level, 2-level, and 3-level plates). Stratification by the size of hardware did not improve performance.
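Top-k accuracy figures of the kind reported above are computed by checking whether the true label appears among the classifier's k highest-scoring classes. A minimal sketch (toy scores and labels, not the study's data):

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes.
    scores: (n_samples, n_classes) classifier scores; labels: (n_samples,) ints."""
    # Sort class indices by descending score and keep the first k columns.
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 4 test images, 3 hardware classes.
scores = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.5, 0.4],
                   [0.3, 0.4, 0.3],
                   [0.2, 0.3, 0.5]])
labels = np.array([0, 2, 1, 2])
print(top_k_accuracy(scores, labels, 1))  # 0.75
print(top_k_accuracy(scores, labels, 2))  # 1.0
```

Averaging this quantity over repeated pseudorandom train/test splits (100 iterations in the study) yields the mean ± standard deviation values reported.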

CONCLUSIONS

A computer vision algorithm was trained to classify at least 9 different types of anterior cervical fusion systems using relatively sparse data sets and was demonstrated to perform with high accuracy. This represents one of many potential clinical applications of machine learning and computer vision in neurosurgical practice.

ABBREVIATIONS ACDF = anterior cervical discectomy and fusion; AP = anterior-posterior; BoVWs = bag-of-visual-words.

Downloadable materials

  • Supplemental Fig. 1 (PDF 455 KB)

Article Information

Correspondence Kevin T. Huang: Harvard Medical School, Brigham and Women’s Hospital, Boston, MA. khuang3@partners.org.

INCLUDE WHEN CITING Published online September 6, 2019; DOI: 10.3171/2019.6.SPINE19463.

Disclosures Dr. Chi reports being a consultant for K2M. He has received clinical or research support from Spineology for a study unrelated to the present one.

© AANS, except where prohibited by US copyright law.

Figures

    A: Images were cropped manually to focus on only the ACDF fixation system. Key features of the image are then algorithmically detected and described (KAZE feature extraction algorithm in the current study). When these features and their descriptors are extracted from all images in a training set, they can then be clustered (via k-means clustering; k = 500) to form a set of “visual words” or a “visual vocabulary” (rightmost image: square, circle, and triangle describe a set of 3 visual words that result from the clustering of similar features). B: After a visual vocabulary has been defined, each image in the training set can be described by a histogram of the number of instances of each visual word. In this example, 2 different types of hardware (represented by the moon and star, respectively) are shown, with clustering within a given hardware type based on the number of visual words present. This represents the basis for training our classifier (note: the example here depicts 3 visual words, but in our study, there was a 500-word vocabulary leading to a 500-dimensional classification space). C: Once the classifier is trained, test images (cloud symbol) are similarly described by a histogram of their visual words. When mapped onto the visual-word space, these test images can then be classified according to their similarity to known hardware-type clusters. Figure is available in color online only.
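Steps B and C of this figure (histogram encoding and classification of a test image) can be sketched as follows. This is an illustrative toy only: a 3-word vocabulary in a 2-D descriptor space stands in for the study's 500-word vocabulary over KAZE descriptors, and a nearest-histogram rule stands in for the trained classifier.

```python
import numpy as np

def encode(descriptors, vocab):
    """Visual-word histogram for one image: each descriptor votes for its
    nearest vocabulary word (step B of the figure)."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()  # normalize so descriptor count doesn't matter

# Toy vocabulary of 3 visual words (square, circle, triangle in the figure).
vocab = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])

# Two training "images" of different hardware types (moon and star in the
# figure), plus a test image (the cloud symbol).
train = {"moon": encode(np.array([[0.1, 0.2], [0.3, -0.1], [4.9, 5.2]]), vocab),
         "star": encode(np.array([[9.8, 0.1], [10.2, -0.3], [5.1, 4.8]]), vocab)}
test_hist = encode(np.array([[9.9, 0.2], [10.1, 0.0], [4.8, 5.1]]), vocab)

# Step C: classify the test image by its proximity to known hardware-type
# clusters in visual-word space (nearest neighbour keeps the sketch minimal).
label = min(train, key=lambda name: np.linalg.norm(train[name] - test_hist))
print(label)  # star
```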

    Examples of each of the 9 identified hardware systems: the NuVasive Archon (A), Medtronic ATLANTIS VISION (B), DePuy Synthes CSLP (C), Orthofix Medical Inc. CONSTRUX (D), DePuy Synthes carbon fiber BENGAL cage (Interbody Only) (E), Zimmer Biomet MaxAn (F), DePuy Synthes SKYLINE (G), DePuy Synthes UNIPLATE (H), and DePuy Synthes ZERO-P (I).

    Confusion matrix demonstrating the results of a cross-validation analysis and 100 bootstrap iterations. The rows represent the ground-truth labels of the test images and the columns denote the top decoded result of the classifier. Numbers represent percentages of the total possible classifications into a given ground-truth label (i.e., percentages in each row sum to 100%). Figure is available in color online only.
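A row-normalized confusion matrix of this form (rows are ground truth, each row summing to 100%) can be computed as below. The labels and counts here are hypothetical, purely to show the normalization.

```python
import numpy as np

def row_normalized_confusion(true, pred, n_classes):
    """Confusion matrix with ground truth on rows and predictions on columns,
    each row expressed as percentages that sum to 100."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return 100.0 * cm / cm.sum(axis=1, keepdims=True)

# Toy example with 2 hardware classes: class 0 is misclassified once.
true = [0, 0, 0, 0, 1, 1]
pred = [0, 0, 0, 1, 1, 1]
cm = row_normalized_confusion(true, pred, 2)
print(cm)  # row 0 -> 75% / 25%, row 1 -> 0% / 100%
```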

    Examples of within-class variation of plates included in the classification system. These include examples of alternative screw placement and angulation (A), varying size of plate (B), varying opacity of overlying bony structures (C), varying rotational angle of the image (D), and variations in both radiograph sharpness and presence of partially radiopaque interbody spacers (E).
