Automated operative workflow analysis of endoscopic pituitary surgery using machine learning: development and preclinical evaluation (IDEAL stage 0)

  • 1 Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London;
  • 2 Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London;
  • 3 Digital Surgery Ltd., Medtronic, London, United Kingdom;
  • 4 Department of Neurosurgery, Trauma Center, Gamma Knife Center, Cannizzaro Hospital, Catania, Italy; and
  • 5 Nuffield Department of Surgical Sciences, University of Oxford, United Kingdom

OBJECTIVE

Surgical workflow analysis involves systematically breaking down operations into key phases and steps. Automatic analysis of this workflow has potential uses for surgical training, preoperative planning, and outcome prediction. Recent advances in machine learning (ML) and computer vision have allowed accurate automated workflow analysis of operative videos. In this Idea, Development, Exploration, Assessment, Long-term study (IDEAL) stage 0 study, the authors sought to use Touch Surgery for the development and validation of an ML-powered analysis of phases and steps in the endoscopic transsphenoidal approach (eTSA) for pituitary adenoma resection, a first for neurosurgery.

METHODS

The surgical phases and steps of 50 anonymized eTSA operative videos were labeled by expert surgeons. Forty videos were used to train a combined convolutional and recurrent neural network model by Touch Surgery. Ten videos were used for model evaluation (accuracy, F1 score), comparing the phase and step recognition of surgeons to the automatic detection of the ML model.
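
As a concrete illustration of the combined convolutional and recurrent architecture described above, the sketch below pairs a per-frame CNN feature extractor with an LSTM that adds temporal context, and attaches two classification heads (one for phases, one for steps). The exact Touch Surgery architecture is not specified in this abstract, so the ResNet-18 backbone, hidden size, and the 3-phase/7-step output dimensions used here are illustrative assumptions written in PyTorch.

```python
# Minimal sketch of a CNN + RNN surgical workflow recognizer (assumed
# architecture; the published model's exact design is not given here).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class PhaseStepRecognizer(nn.Module):
    """Per-frame CNN features, temporal modelling with an LSTM, and two
    heads: 3 surgical phases and 7 surgical steps (counts from the abstract)."""

    def __init__(self, n_phases: int = 3, n_steps: int = 7, hidden: int = 256):
        super().__init__()
        backbone = resnet18()            # no pretrained weights, for self-containment
        backbone.fc = nn.Identity()      # expose the 512-d frame features
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.phase_head = nn.Linear(hidden, n_phases)
        self.step_head = nn.Linear(hidden, n_steps)

    def forward(self, clips: torch.Tensor):
        # clips: (batch, time, channels, height, width) video frames
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        temporal, _ = self.rnn(feats)    # per-frame features with temporal context
        return self.phase_head(temporal), self.step_head(temporal)


if __name__ == "__main__":
    model = PhaseStepRecognizer()
    dummy = torch.randn(1, 8, 3, 224, 224)        # one clip of 8 frames
    phase_logits, step_logits = model(dummy)
    print(phase_logits.shape, step_logits.shape)  # (1, 8, 3) and (1, 8, 7)
```

The CNN summarizes what each frame looks like, while the recurrent layer carries context across frames; that temporal context is what allows such a model to cope with the variable step durations and orderings reported below.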

RESULTS

The longest phase was the sellar phase (median 28 minutes), followed by the nasal phase (median 22 minutes) and the closure phase (median 14 minutes). The longest steps were step 5 (tumor identification and excision, median 17 minutes); step 3 (posterior septectomy and removal of sphenoid septations, median 14 minutes); and step 4 (anterior sellar wall removal, median 10 minutes). There were substantial variations within the recorded procedures in terms of video appearances, step duration, and step order, with only 50% of videos containing all 7 steps performed sequentially in numerical order. Despite this, the model was able to output accurate recognition of surgical phases (91% accuracy, 90% F1 score) and steps (76% accuracy, 75% F1 score).
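
To make the reported figures concrete, the snippet below shows how frame-level accuracy and F1 score can be computed from predicted versus surgeon-labelled phase sequences using scikit-learn. The label sequence is made up for illustration, and the abstract does not state which F1 averaging was used, so the macro averaging here is an assumption.

```python
# Sketch of the accuracy / F1 evaluation against surgeon-labelled ground
# truth; the labels are illustrative, not data from the study.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical per-frame phase labels for one held-out video
# (0 = nasal, 1 = sellar, 2 = closure).
ground_truth = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
predicted    = [0, 0, 1, 1, 1, 1, 1, 2, 2, 2]

accuracy = accuracy_score(ground_truth, predicted)
# Macro-averaged F1 weights each phase equally regardless of its duration
# (an assumption; the averaging scheme is not reported in the abstract).
f1 = f1_score(ground_truth, predicted, average="macro")

print(f"accuracy = {accuracy:.2f}, F1 = {f1:.2f}")
```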

CONCLUSIONS

In this IDEAL stage 0 study, ML techniques have been developed to automatically analyze operative videos of eTSA pituitary surgery. This technology has previously been shown to be acceptable to neurosurgical teams and patients. ML-based surgical workflow analysis has numerous potential uses, such as education (e.g., automatic indexing of contemporary operative videos for teaching), improved operative efficiency (e.g., orchestrating the entire surgical team to a common workflow), and improved patient outcomes (e.g., comparison of surgical techniques or early detection of adverse events). Future directions include the real-time integration of Touch Surgery into the live operative environment as an IDEAL stage 1 (first-in-human) study, and further development of the underpinning ML models using larger data sets.

ABBREVIATIONS

AI = artificial intelligence; CNN = convolutional neural network; DNN = deep neural network; eTSA = endoscopic transsphenoidal approach; IDEAL = Idea, Development, Exploration, Assessment, Long-term study; ML = machine learning; RNN = recurrent neural network.
