Code-free machine learning for object detection in surgical video: a benchmarking, feasibility, and cost study

Vyom Unadkat, BS,1,2 Dhiraj J. Pangal, BS,2 Guillaume Kugener, MEng,2 Arman Roshannai,2 Justin Chan, BS,2 Yichao Zhu, MS,2 Nicholas Markarian, BS,2 Gabriel Zada, MD, MS,2 and Daniel A. Donoho, MD3

1Department of Computer Science, USC Viterbi School of Engineering, Los Angeles, California; 2Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California; and 3Division of Neurosurgery, Center for Neurosciences, Children’s National Hospital, Washington, DC

OBJECTIVE

While the utilization of machine learning (ML) for data analysis typically requires significant technical expertise, novel platforms can deploy ML methods without requiring the user to have any coding experience (termed AutoML). The potential for these methods to be applied to neurosurgical video and surgical data science is unknown.

METHODS

AutoML, a code-free ML (CFML) system, was used to identify surgical instruments contained within each frame of endoscopic, endonasal intraoperative video obtained from a previously validated internal carotid injury training exercise performed on a high-fidelity cadaver model. Instrument-detection performances using CFML were compared with two state-of-the-art ML models built using the Python coding language on the same intraoperative video data set.

RESULTS

The CFML system successfully ingested surgical video without the use of any code. A total of 31,443 images were used to develop this model; 27,223 images were uploaded for training, 2292 images for validation, and 1928 images for testing. The mean average precision on the test set across all instruments was 0.708. The CFML model outperformed two standard object detection networks, RetinaNet and YOLOv3, which had mean average precisions of 0.669 and 0.527, respectively, in analyzing the same data set. Significant advantages of the CFML system included ease of use, relatively low cost, displays of true/false positives and negatives in a user-friendly interface, and the ability to deploy models for further analysis with ease. Significant drawbacks of the CFML model included the inability to view the structure of the trained model, the inability to update the model with new examples once trained, and the lack of support for robust downstream analysis of model performance and error modes.

CONCLUSIONS

This first report describes the baseline performance of CFML in an object detection task using a publicly available surgical video data set as a test bed. CFML exceeded the performance of standard, code-based object detection networks. This finding is encouraging for surgeon-scientists seeking to perform object detection tasks to answer clinical questions, perform quality improvement, and develop novel research ideas. The limited interpretability and customization of CFML models remain ongoing challenges. With the further development of code-free platforms, CFML will become increasingly important across biomedical research. Using CFML, surgeons without significant coding experience can perform exploratory ML analyses rapidly and efficiently.

ABBREVIATIONS

CFML = code-free ML; CSV = comma-separated values; mAP = mean average precision; ML = machine learning; SOCAL = Simulated Outcomes following Carotid Artery Laceration; UI = user interface.

Contemporary neurosurgical practice generates prodigious quantities of visual and numerical data, including intraoperative recordings, radiographic images, patient demographic information, laboratory results, and written notes.1–4 Machine learning (ML) is well suited to analyze these data; ML is a subset of computer science that includes a variety of learning and network architectures to generate classifications and predictions based on complex patterns within data. ML is achieving ubiquity within the world of data science, and its influence is increasingly apparent within neurosurgery.5,6

Implementing ML techniques traditionally requires moderate coding experience to build, train, and test a model. The strengths of hand-coded ML models are obvious: they permit direct examination of the model, customization of model parameters, and development of novel ML techniques. However, for applications of existing ML models without customization, coding requires a significant time investment from a novice user. Repetitive tasks that humans can perform effectively but that require time and effort are well suited to computer automation.7,8 Private ML companies have developed tools that automate the deployment of existing ML methods, known as AutoML. Some of these options permit the use of ML techniques without the need to write a single line of original code, known as code-free ML (CFML).9–11

The first few CFML implementations published using medical data sets reported the classification of objects within medical images: detection of invasive ductal carcinoma, classification of ophthalmological disease, and identification of spinal implants.10,12,13 Surgical video analysis is an exciting, untested use case for CFML. Despite the increasing quantity of intraoperative visual data generated by neurosurgeons, humans rarely review video, since manual classification is time-consuming and tedious. Automating the detection of features within intraoperative video would be of value to neurosurgeons and would allow for improved analysis of intraoperative recordings, a largely untapped resource on surgeon performance.14–17

We evaluate the feasibility of using CFML, without extensive computational expertise, to generate an ML model that automatically identifies surgical instruments contained within endoscopic endonasal intraoperative video. We describe the benefits and drawbacks of using a CFML platform and potential use cases, and we provide a step-by-step guide for surgeons using a CFML web application.

Methods

Data Set

The Simulated Outcomes following Carotid Artery Laceration (SOCAL) data set was used to train, validate, and test the CFML model.18 SOCAL contains operative video from a nationwide training exercise in which neurosurgeons and otolaryngologists controlled an injury to the internal carotid artery during an endoscopic endonasal approach.19 The trials have been previously validated as having high levels of realism as well as face and construct validity.19–22 Surgical instruments were annotated manually with bounding boxes and cross-validated in accordance with previously published methods. SOCAL has been validated as a test bed for surgical data science methodologies.15,23

CFML System

We used AutoML by Google Inc. to analyze the SOCAL database and identify the presence and location of surgical instruments in-frame. AutoML offers an array of services, ranging from natural language processing to vision, as highlighted in Fig. 1A.12,24 For analysis of the SOCAL data set, AutoML Vision Classification was employed (input: unlabeled images; output: object identification and classification). All SOCAL videos were split into frames and uploaded to Google Cloud Storage. A comma-separated values (CSV) file containing the cloud URL of each image, the name of the label(s) in the image, and the spatial coordinates of the bounding box annotation was uploaded as training data. The CSV file also tags each image as belonging to the training, validation, or test set.
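
For illustration, the following is a minimal sketch of building an import file in the AutoML Vision object detection CSV format (set tag, Cloud Storage URL, label, and normalized bounding box corners); the bucket path, filenames, labels, and coordinates shown are hypothetical placeholders, not actual SOCAL entries.

    # Minimal sketch: write an AutoML Vision object detection import CSV.
    # Row layout: set,gs_uri,label,x_min,y_min,,,x_max,y_max,,
    # Coordinates are normalized to [0, 1]; the empty fields use the
    # two-corner shorthand for an axis-aligned bounding box.
    import csv

    rows = [
        ["TRAIN", "gs://socal-frames/trial01_000123.jpg", "suction", 0.41, 0.22, "", "", 0.63, 0.58, "", ""],
        ["VALIDATION", "gs://socal-frames/trial02_000045.jpg", "grasper", 0.10, 0.35, "", "", 0.29, 0.71, "", ""],
        ["TEST", "gs://socal-frames/trial03_000317.jpg", "cottonoid", 0.55, 0.40, "", "", 0.80, 0.66, "", ""],
    ]

    with open("socal_import.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)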

FIG. 1.

Stitch of different screens from the AutoML system. A and B: Screenshots showing the various artificial intelligence products offered by AutoML, including within Vision (B). C and D: Screenshots showing the process of uploading data and training the model using a point-and-click UI.

The multilabel classification model was used to analyze the SOCAL data and annotate the images with bounding boxes capturing detected instruments in view. As shown in Fig. 1B–D, the CFML dashboard is a point-and-click system with an intuitive user experience similar to any website, allowing users to upload data and to train and test models simply by clicking prompts. After the CSV file is uploaded to the CFML dashboard, the images are fetched from cloud storage and synced to the dashboard, after which the model can be trained (Fig. 1). Complete instructions for uploading data, training, and testing a model can be accessed here: https://cloud.google.com/vision/automl/docs/quickstart. Mean average precision (mAP), a commonly used metric for ML object detection systems, was used to gauge the performance of the models. The CFML system provides the overall mAP for all labels combined as well as for each individual tool.
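
To clarify the metric the dashboard reports, the following is a simplified sketch of average precision for a single label, assuming each detection has already been matched to ground truth at an intersection-over-union threshold of 0.5; mAP is the mean of these per-label values. The toy inputs are illustrative only.

    import numpy as np

    def average_precision(scores, is_tp, n_ground_truth):
        """AP for one label: area under the precision-recall curve."""
        order = np.argsort(scores)[::-1]                # rank by confidence
        flags = np.asarray(is_tp, dtype=float)[order]
        tp = np.cumsum(flags)                           # cumulative true positives
        fp = np.cumsum(1.0 - flags)                     # cumulative false positives
        recall = tp / n_ground_truth
        precision = tp / (tp + fp)
        # VOC-style AP: integrate the monotonic precision-recall envelope.
        mrec = np.concatenate(([0.0], recall, [1.0]))
        mpre = np.concatenate(([0.0], precision, [0.0]))
        for i in range(len(mpre) - 2, -1, -1):
            mpre[i] = max(mpre[i], mpre[i + 1])
        step = np.where(mrec[1:] != mrec[:-1])[0]
        return float(np.sum((mrec[step + 1] - mrec[step]) * mpre[step + 1]))

    # Toy example: 4 detections of one tool against 3 ground-truth boxes.
    print(average_precision([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 1], 3))  # ~0.83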

Conventional ML System

Two conventional object detection systems, RetinaNet and YOLOv3 (You Only Look Once, version 3), both implemented in the Python programming language, were used for comparison with the CFML system. These off-the-shelf models are the industry standard for baseline object detection.25,26 Details on the implementation and architecture of these systems and their application to the SOCAL data set have been described in previous publications, and those implementations were replicated here.27,28
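
To illustrate the kind of code these baselines entail, the following is a minimal inference sketch using the pretrained RetinaNet shipped with the torchvision library; this is a representative off-the-shelf setup rather than the exact training pipeline of the cited publications, and the frame filename is a placeholder. In practice, the detection head would be fine-tuned on the annotated SOCAL frames.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Off-the-shelf RetinaNet pretrained on COCO; fine-tuning on the five
    # SOCAL instrument classes would replace the classification head.
    model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
    model.eval()

    frame = to_tensor(Image.open("frame_000123.jpg").convert("RGB"))
    with torch.no_grad():
        detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

    keep = detections["scores"] > 0.5   # drop low-confidence detections
    print(detections["boxes"][keep], detections["labels"][keep])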

Results

There were a total of 31,443 frames in the SOCAL data set, which were divided into 27,223 images for training, 2292 images for validation, and 1928 images for testing. The mAP on the test set was 0.708. The AutoML model outperformed two state-of-the-art object detection networks, RetinaNet and YOLOv3, which had mAPs of 0.669 and 0.527, respectively, in analyzing the same data set. Detection accuracy for the muscle and cottonoid improved significantly with CFML, while accuracy for the suction decreased slightly (Table 1). Outputs from the CFML system are shown in Fig. 2.

TABLE 1.

Results comparing AutoML object detection (mAP) with standard object detection models built using the Python programming language

Label | AutoML mAP | RetinaNet mAP | YOLOv3 mAP
All labels | 0.708 | 0.669 | 0.527
Cottonoid | 0.554 | 0.521 | 0.325
Grasper | 0.763 | 0.769 | 0.684
Muscle | 0.278 | 0.251 | 0.097
String | 0.636 | 0.497 | 0.332
Suction | 0.892 | 0.911 | 0.821
FIG. 2.

Stitch of different model outputs from AutoML Object Detection. A and C: Model-wide and instrument-specific outputs, including total images, precision, recall, and confidence curves. B and D: Examples of false-positive (B, model made incorrect predictions) and true-positive (D, model made correct predictions) predictions by the model.

The dashboard additionally provides insight into the classifications and allows users to browse example true positives, false negatives, and false positives. Each section highlights all images falling into each of those categories, along with the actual and model-predicted bounding boxes (Fig. 2).

FIG. 3.

Pros and cons of CFML, and potential use cases for clinician-scientists.

Discussion

There have been significant efforts to apply ML processes to analyze neurosurgical data.29–31 Nonetheless, the implementation of ML models is limited by the technical specialization required for ML computation, which is generally outside the scope of neurosurgeons.5,32,33 AutoML solutions are vital components of the effort to make ML more accessible, but they have not been reported in surgical object classification or using operative video.34,35 Google AutoML is a cloud-based service for building and executing ML models without any explicit coding, producing customizable models that cater to the requirements of the user.10,12,24,36 The target users of Google AutoML are professionals from nontechnical backgrounds; the main objective is to enable such professionals to leverage the power of ML without explicitly writing scripts and developing models, helping them gain additional insights while retaining their focus on their domain expertise. Here, we outline the pros and cons of CFML and potential use cases, and we identify key cost components for neurosurgeon-scientists looking to leverage ML technology.

Pros and Cons of CFML

CFML is an efficient, easy-to-use, low-cost method of ML analysis. However, significant limitations currently preclude its widespread adoption within the surgical data science community. We highlight the advantages and disadvantages of CFML (specific to Google AutoML) below and in Fig. 3.

Advantages of CFML

The primary advantage of the CFML system used is its ease of use. Deploying models on Google’s AutoML platform requires no technical knowledge and is entirely user interface (UI) based. The process is well documented, making ML an easy “follow-the-steps” process. The UI-based platform provides a main dashboard, a centralized screen from which the user can sequentially navigate all the steps of an ML pipeline, from uploading data to training the model and testing on new data. The dashboard provides a detailed analysis of results, including the overall mAP as well as the mAPs for individual labels. It also provides an exhaustive report of the true positives, false negatives, and false positives (Fig. 2). Another major advantage is that once the model has been trained, it can be easily integrated into an array of applications, such as web and mobile applications, and can easily ingest new images and data. An additional advantage is the low cost of deployment: the entire project, including importing the images and training the model, was executed in less than 10 hours, for an overall initial implementation cost of less than $20.
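
As a sketch of that integration path, a deployed model can be queried from Python with Google’s google-cloud-automl client library; the project ID, model ID, and image filename below are placeholders, and the score threshold is an illustrative choice.

    from google.cloud import automl  # pip install google-cloud-automl

    project_id, model_id = "my-project", "IOD1234567890"  # placeholders
    model_name = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

    client = automl.PredictionServiceClient()
    with open("frame_000123.jpg", "rb") as f:
        payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

    response = client.predict(
        name=model_name,
        payload=payload,
        params={"score_threshold": "0.5"},  # return only confident detections
    )
    for result in response.payload:
        box = result.image_object_detection.bounding_box
        print(result.display_name,
              result.image_object_detection.score,
              [(v.x, v.y) for v in box.normalized_vertices])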

Disadvantages of CFML

While CFML systems are easy to use and require minimal computational expertise, a number of limitations preclude widespread adoption by neurosurgeon-scientists. For one, while training and evaluating a model is inexpensive, long-term use of the dashboard can result in significant costs, as billing occurs for every hour the model is hosted on the cloud, even when no model is being trained or tested.37 Another drawback is the inability to inspect the model and its hyperparameters; the underlying model architecture is hidden, preventing users from understanding how the model makes its classifications or from adjusting parameters to customize performance. Perhaps most prohibitively, no downstream analysis can be performed with Google AutoML; tool detections and annotations remain on the dashboard and cannot be exported for further classification and use.

Use Cases of CFML

Despite its limitations, CFML represents an exciting approach to surgical data science and allows those with nontechnical domain expertise to explore ML methods. The CFML system outperformed conventional ML object detection systems, likely because it evaluates a number of different object detection models and selects the one with the best performance. By contrast, successfully deploying a single off-the-shelf model requires the domain expertise to identify an appropriate model architecture to evaluate, and requires that the model be well documented or readily available via a code repository.

CFML therefore has a number of use cases. For one, it is useful for those who already have large quantities of data on hand, whether through a database previously used for outcomes tracking, collections of radiology images, or outputs from robotic systems. In the past, utilizing these data for baseline statistical or computational analysis would have required significant computational expertise. CFML systems may be an efficient way to identify signal in data (e.g., is there any predictive value in a data set?) or to perform initial exploratory analysis. CFML may then be used as a proof of concept to motivate hiring or investing resources in data set cleanup, computational expertise, and other data science–driven practices. Without such initial exploratory analyses, those investments carry significantly greater cost and risk.

Following initial discovery, however, there may be only limited utility in continuing with CFML platforms, given the relative inability to perform downstream analysis and the continuous passive cost. We outline the differences in capabilities and requirements between CFML and conventional ML techniques in Table 2.

TABLE 2.

Pros and cons of conventional ML techniques compared to CFML techniques

Factor | Conventional ML | CFML
Technical expertise requirements | High | Low
Infrastructure setup required | High | Low
Model customization (e.g., tuning hyperparameters) | Yes | No
Ability for downstream analysis | Yes | No
Black box output | No | Yes
Price | Billed only for hours spent training & testing the model | Billed for all active hours (training, testing, & idle)

Cost

Particularly for surgeons whose focus may not be ML analysis of surgical data, the cost of CFML systems is likely a major factor to consider. On the Google AutoML platform, image object detection model deployment for AutoML Vision is billed at $1.82 per node-hour for every hour the project is active (https://cloud.google.com/vision/automl/pricing). Our model was successfully trained and deployed for less than $20, allowing initial exploration to be done at minimal cost. However, projects that require the model to remain active for many hours may not be well suited to CFML systems. Users should investigate potential pricing options thoroughly before deploying these models to avoid incurring additional charges.
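
To make the arithmetic concrete, the following back-of-the-envelope sketch applies the listed rate; the hour counts are illustrative assumptions rather than measured usage.

    RATE_PER_NODE_HOUR = 1.82  # listed AutoML Vision deployment rate (USD)

    def deployment_cost(active_hours, nodes=1):
        """Estimated cost: billed for every hour the model stays deployed."""
        return active_hours * nodes * RATE_PER_NODE_HOUR

    print(deployment_cost(10))       # ~$18.20, consistent with our <$20 project
    print(deployment_cost(24 * 30))  # ~$1310.40 if left deployed for a month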

Future Directions of CFML Systems

While current applications of AutoML in neurosurgery are limited, the concept of “democratized” ML techniques is growing in popularity. Multiple AutoML-type systems exist and are growing in functionality; for example, OpenAI Codex is an artificial intelligence system that translates natural language (i.e., English) into computer code.38,39 These types of applications will only grow in function, and it is imperative for the neurosurgical community to identify ways of leveraging these technologies to facilitate research endeavors. As artificial intelligence techniques develop, there will also be an added emphasis on interpretability; newer approaches to ML, including the idea of spatial and temporal attention, attempt to improve on the largely black box methodology of earlier deep learning techniques, which will make ML models even more useful for clinician-scientists engaging in clinical decision support research.40 Finally, these CFML systems set the stage for more sophisticated (surgical) data science suites akin to Microsoft Office (Microsoft Corp.): all-in-one ML-based applications in which surgical video, outcomes, performance metrics, and other data sources can be analyzed in concert, some of which are currently in development.41

An additional use case for CFML-based surgical instrument tracking is in robotic systems and neuronavigation. Current robotic and neuronavigation systems allow for surgical road mapping using instrument registration and fiducial coordinate systems. The next generation of systems will likely utilize computer vision techniques as additional inputs, allowing the robot or navigational unit to “interact” with the surgical field and provide dynamic, accurate localization.42–44 In this context, AutoML or other CFML systems can quickly and efficiently perform instrument recognition or tracking tasks on surgical video and may be of use in developing the next generation of robotic and navigational technology.

Limitations

Our work has a number of limitations. The data we used were already annotated and purposed for data science applications; the startup cost and resources needed to develop a data set from scratch would be significantly higher, even with CFML systems. Moreover, our feasibility study investigated one aspect of CFML (namely, identification of surgical tools) and did not explore other avenues, such as natural language processing, video processing, or ML with tabular data, where the advantages and disadvantages of CFML as presented here may not apply.

Conclusions

The growing availability of CFML systems offers an exciting opportunity to neurosurgeons eager to apply cutting-edge ML methods to the analysis of their own data sets. We demonstrate that CFML can detect surgical instruments contained within neurosurgical video with higher accuracy than prior ML methods. CFML systems can feasibly serve as a low-cost, low-barrier-to-entry mechanism for ML for neurosurgical teams with limited computational expertise.

Disclosures

The authors report no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper.

Author Contributions

Conception and design: Donoho, Unadkat, Pangal, Kugener, Roshannai. Acquisition of data: Unadkat, Pangal, Kugener, Roshannai. Analysis and interpretation of data: Donoho, Unadkat, Pangal, Kugener. Drafting the article: Donoho, Unadkat, Pangal, Chan, Zhu. Critically revising the article: Donoho, Unadkat, Pangal, Kugener, Chan, Zhu, Markarian. Reviewed submitted version of manuscript: all authors. Approved the final version of the manuscript on behalf of all authors: Donoho. Statistical analysis: Unadkat, Pangal. Administrative/technical/material support: Donoho, Zada. Study supervision: Donoho, Zada.

References

1. Knopf JD, Kumar R, Barats M, et al. Neurosurgical operative videos: an analysis of an increasingly popular educational resource. World Neurosurg. 2020;144:e428-e437.
2. Konakondla S, Fong R, Schirmer CM. Simulation training in neurosurgery: advances in education and practice. Adv Med Educ Pract. 2017;8:465-473.
3. Jian A, Jang K, Manuguerra M, Liu S, Magnussen J, Di Ieva A. Machine learning for the prediction of molecular markers in glioma on magnetic resonance imaging: a systematic review and meta-analysis. Neurosurgery. 2021;89(1):31-44.
4. Chan J, Pangal DJ, Cardinal T, et al. A systematic review of virtual reality for the assessment of technical skills in neurosurgery. Neurosurg Focus. 2021;51(2):E15.
5. Buchlak QD, Esmaili N, Leveque JC, et al. Machine learning applications to clinical decision support in neurosurgery: an artificial intelligence augmented systematic review. Neurosurg Rev. 2020;43(5):1235-1253.
6. Dagi TF, Barker FG, Glass J. Machine learning and artificial intelligence in neurosurgery: status, prospects, and challenges. Neurosurgery. 2021;89(2):133-142.
7. Fabacher T, Godet J, Klein D, Velten M, Jegu J. Machine learning application for incident prostate adenocarcinomas automatic registration in a French regional cancer registry. Int J Med Inform. 2020;139:104139.
8. Khouani A, El Habib Daho M, Mahmoudi SA, Chikh MA, Benzineb B. Automated recognition of white blood cells using deep learning. Biomed Eng Lett. 2020;10(3):359-367.
9. Faes L, Wagner SK, Fu DJ, et al. Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study. Lancet Digit Health. 2019;1(5):e232-e242.
10. Zeng Y, Zhang J. A machine learning model for detecting invasive ductal carcinoma with Google Cloud AutoML Vision. Comput Biol Med. 2020;122:103861.
11. Korot E, Guan Z, Ferraz D, et al. Code-free deep learning for multi-modality medical image classification. Nat Mach Intell. 2021;3(4):288-298.
12. Yang HS, Kim KR, Kim S, Park JY. Deep learning application in spinal implant identification. Spine (Phila Pa 1976). 2021;46(5):E318-E324.
13. Kim IK, Lee K, Park JH, Baek J, Lee WK. Classification of pachychoroid disease on ultrawide-field indocyanine green angiography using auto-machine learning platform. Br J Ophthalmol. 2021;105(6):856-861.
14. Hung AJ, Liu Y, Anandkumar A. Deep learning to automate technical skills assessment in robotic surgery. JAMA Surg. 2021;156(11):1059-1060.
15. Pangal DJ, Kugener G, Shahrestani S, Attenello F, Zada G, Donoho DA. Technical note: a guide to annotation of neurosurgical intraoperative video for machine learning analysis and computer vision. World Neurosurg. 2021;150:26-30.
16. Ward TM, Mascagni P, Ban Y, et al. Computer vision in surgery. Surgery. 2021;169(5):1253-1256.
17. Ward TM, Mascagni P, Madani A, Padoy N, Perretta S, Hashimoto DA. Surgical data science and artificial intelligence for surgical education. J Surg Oncol. 2021;124(2):221-230.
18. Kugener G, Pangal DJ, Zada G. Simulated outcomes following carotid artery laceration. Published online August 9, 2021. Accessed February 16, 2022. https://figshare.com/articles/dataset/Simulated_Outcomes_following_Carotid_Artery_Laceration/15132468
19. Donoho DA, Pangal DJ, Kugener G, et al. Improved surgeon performance following cadaveric simulation of internal carotid artery injury during endoscopic endonasal surgery: training outcomes of a nationwide prospective educational intervention. J Neurosurg. 2021;135(5):1347-1355.
20. Donoho DA, Johnson CE, Hur KT, et al. Costs and training results of an objectively validated cadaveric perfusion-based internal carotid artery injury simulation during endoscopic skull base surgery. Int Forum Allergy Rhinol. 2019;9(7):787-794.
21. Shen J, Hur K, Zhang Z, et al. Objective validation of perfusion-based human cadaveric simulation training model for management of internal carotid artery injury in endoscopic endonasal sinus and skull base surgery. Oper Neurosurg (Hagerstown). 2018;15(2):231-238.
22. Zada G, Bakhsheshian J, Pham M, et al. Development of a perfusion-based cadaveric simulation model integrated into neurosurgical training: feasibility based on reconstitution of vascular and cerebrospinal fluid systems. Oper Neurosurg (Hagerstown). 2018;14(1):72-80.
23. Pangal DJ, Kugener G, Cardinal T, et al. Use of surgical video-based automated performance metrics to predict blood loss and success of simulated vascular injury control in neurosurgery: a pilot study. J Neurosurg. Published online December 31, 2021. doi:10.3171/2021.10.JNS211064
24. Cloud AutoML: making AI accessible to every business. Google Cloud. Published January 17, 2018. Accessed February 16, 2022. https://blog.google/products/google-cloud/cloud-automl-making-ai-accessible-every-business/
25. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv. Preprint posted online December 10, 2015. http://arxiv.org/abs/1512.03385
26. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2016:779-788.
27. Kugener G, Pangal D, Cardinal T, Collet C, Zhu Y. Utility of the simulated outcomes following carotid artery laceration (SOCAL) video dataset for machine learning applications. JAMA Netw Open. In press.
28. Kugener G, Zhu Y, Pangal DJ, et al. Deep neural networks can accurately detect blood loss and hemorrhage control task success from video. Neurosurgery. In press.
29. Senders JT, Arnaout O, Karhade AV, et al. Natural and artificial intelligence in neurosurgery: a systematic review. Neurosurgery. 2018;83(2):181-192.
30. Senders JT, Staples PC, Karhade AV, et al. Machine learning and neurosurgical outcome prediction: a systematic review. World Neurosurg. 2018;109:476-486.e1.
31. Tonutti M, Gras G, Yang GZ. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery. Artif Intell Med. 2017;80:39-47.
32. Velagapudi L, D’Souza T, Matias CM, Sharan AD. Bridging machine learning and clinical practice in neurosurgery: hurdles and solutions. Letter. World Neurosurg. 2020;134:678-679.
33. Thrall JH, Li X, Li Q, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol. 2018;15(3 Pt B):504-508.
34. Antaki F, Kahwati G, Sebag J, et al. Predictive modeling of proliferative vitreoretinopathy using automated machine learning by ophthalmologists without coding experience. Sci Rep. 2020;10(1):19528.
35. Waring J, Lindvall C, Umeton R. Automated machine learning: review of the state-of-the-art and opportunities for healthcare. Artif Intell Med. 2020;104:101822.
36. Livingstone D, Chau J. Otoscopic diagnosis using computer vision: an automated machine learning approach. Laryngoscope. 2020;130(6):1408-1413.
37. AutoML Vision pricing. Google Cloud. Accessed February 16, 2022. https://cloud.google.com/vision/automl/pricing
38. Your AI pair programmer. GitHub Copilot. Accessed February 16, 2022. https://copilot.github.com/
39. OpenAI Codex. Published August 10, 2021. Accessed February 16, 2022. https://openai.com/blog/openai-codex/
40. Sankaran B, Mi H, Al-Onaizan Y, Ittycheriah A. Temporal attention model for neural machine translation. arXiv. Preprint posted online August 9, 2016. https://arxiv.org/abs/1608.02927v1
41. Touch Surgery—Prepare for Surgery. Accessed February 16, 2022. https://www.touchsurgery.com/
42. Chae YS, Lee SH, Lee HK, Kim MY. Optical coordinate tracking system using afocal optics for image-guided surgery. Int J CARS. 2015;10(2):231-241.
43. Lai M, Skyrman S, Shan C, et al. Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking. PLoS One. 2020;15(1):e0227312.
44. Liu Y, Li Y, Zhuang Z, Song T. Improvement of robot accuracy with an optical tracking system. Sensors (Basel). 2020;20(21):E6341.
