In many low- and middle-income countries (LMICs), medical care infrastructure still lags behind that of industrialized nations, and in many regions only basic medical care can be provided.1 Medical procedures and surgical interventions requiring specialized knowledge and training are often limited, even in large metropolitan areas.2 Spine surgery is particularly affected, as the example of East Africa shows.2,3 At the same time, spine surgery is a discipline that should be widely accessible, given that spinal injuries can occur at any time and must be treated promptly to achieve the best possible outcome for patients.3,4 Various concepts have been developed to meet this challenge.5 One of these is intraoperative support by surgeons who are not on site where the surgery is performed.6
The introduction of Google Glass in 2013 as one of the first generally available smart glasses models offered new possibilities for remote support. Compared with other devices such as smartphones, smart glasses allow hands-free audiovisual communication, and attempts to implement the technology for telemedical support began shortly after its introduction.7–9 The results were promising but did not meet the requirements for standardized telemedical support in LMICs. In particular, under challenging lighting conditions in the operating room (OR) and at the surgical site, Google Glass did not provide the image quality needed to identify surgical landmarks and anatomical structures reliably enough for remote surgical support.10,11 Since then, many companies have worked on smart glasses applications for surgeons. Recently, a new generation of smart glasses was launched by a US-based company, Vuzix (Vuzix Corp.), which attempts to address the intraoperative image quality issues of Google Glass. We aimed to evaluate whether this new technology is suitable for telemedical support in spine surgery in LMICs by conducting a feasibility study in Tanzania as a case example. To our knowledge, this is the first evaluation of this type of smart glasses for telemedical assistance during spine surgery in an LMIC.
Methods
To evaluate practical application, we conducted a feasibility study at the Muhimbili Orthopedic Institute in Dar es Salaam, Tanzania, in collaboration with NewYork-Presbyterian Hospital/Weill Cornell Medicine in New York City, USA. Smart glasses from the company Vuzix (model M400) were used to connect surgeons at the two study centers (Fig. 1). According to the manufacturer, this model tolerates a drop from 2 m and is protected against water and dust, an additional consideration for use in LMICs.12,13 The smart glasses are commercially available for approximately $1800,12 and were provided to the research team for trial purposes by a third party. No member of the research team is involved with Vuzix Corp. or the third party. The Weill Cornell institutional review board approved the study.
A: Detailed view of the smart glasses used in our field test, without mounting frame and battery pack. B: Smart glasses mounted to the frame and connected to the battery pack. This setup was used in our field test. C: A surgeon wearing the smart glasses in our study setup and preparing them for surgery in Dar es Salaam. OLED = organic LED. © Vuzix, published with permission.
The smart glasses integrate a camera with a theoretical resolution of up to 12.8 megapixels, 4K video recording, and optical image stabilization. The integrated display is an organic LED (OLED) display with a resolution of 640 × 360 pixels. The smart glasses have an 8-core 2.52-GHz processor and 64 GB of internal flash memory. Wireless connectivity is possible via Wi-Fi or Bluetooth, and the installed operating system is Android 9.0. During the study, the smart glasses were connected to the local network via Wi-Fi. For all technical data, we refer the reader to the manufacturer’s specifications.12,13
To provide virtual assistance, we used the Helplightning software (version 14.4). The software connects to the smart glasses, accesses the camera in real time, and enables communication via the built-in speakers and microphone. Furthermore, viewers can draw markings on the video stream in real time, which are visible to the glasses’ user; this is the main advantage of this software over common communication applications. When the smart glasses are used, these markings are projected directly into the surgeon’s field of view, allowing a mentor surgeon to highlight anatomical structures or give instructions precisely adapted to the mentee surgeon’s field of view.
To evaluate the smart glasses and the software, telemedical assistance was simulated during surgical procedures. Because of the time difference between the Eastern Time Zone and East Africa, only procedures that began between 5 AM and midnight Eastern Time were included, corresponding to 1 PM to 8 AM East Africa Time. The evaluation followed a standardized protocol, and a checklist, completed on both sides during the simulated assistance, was used to ensure protocol compliance. The protocol included a preoperative and an intraoperative part. Preoperatively, the functionality of the smart glasses and the internet connection were checked using a standardized internet speed test tool from the website www.speedtest.net.14 Only cases with a stable internet connection of at least 1 Mbps upload and download speed, the minimum required by the software, were included in our evaluation. For reference, the mean internet speed in the US is 136.53 Mbps for downloads and 19.79 Mbps for uploads.15
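For illustration, the pass/fail logic of this bandwidth check against the 1-Mbps software minimum could be scripted as in the following minimal sketch, which uses the open-source speedtest-cli Python package rather than the speedtest.net website used in the study; the threshold constant and function name are our own and not part of the study protocol.

```python
import speedtest  # pip install speedtest-cli

MIN_MBPS = 1.0  # minimum upload and download speed required by the software

def connection_sufficient() -> bool:
    """Measure bandwidth and check it against the 1-Mbps software minimum."""
    st = speedtest.Speedtest()
    st.get_best_server()
    down_mbps = st.download() / 1e6  # speedtest-cli reports bits per second
    up_mbps = st.upload() / 1e6
    print(f"Download: {down_mbps:.2f} Mbps, upload: {up_mbps:.2f} Mbps")
    return down_mbps >= MIN_MBPS and up_mbps >= MIN_MBPS
```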
After verification of a sufficient internet connection, an objective evaluation of the image quality was performed preoperatively using recordings of test images. An eye test chart and a screen test pattern were chosen for the evaluation. For the eye test chart, a modified version of the Snellen test was chosen, a common test of visual acuity (Figs. 2A and 3A).16 To adapt the eye test chart for our purposes, we slightly modified the measuring method: instead of standing a defined distance from the chart, the mentor surgeon viewed the chart on a PC screen. The visual evaluation score was determined by the last row in which the mentor surgeon could identify every pattern. If all rows were visible, the resolution was defined as 100%; if none of the lines was visible, it was defined as 0%. The rows in between were scored on a percentage scale, with each row adding 12.5% visibility (8 rows × 12.5% = 100%). The scoring was performed twice, once with a black background and white pattern, and a second time with a white background and black pattern.
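Expressed as a worked example, this row-based scoring reduces to simple arithmetic. The following minimal sketch (our own illustration; the function name is hypothetical) converts the last fully identified row into the visibility percentage defined above.

```python
ROWS = 8  # number of rows on the modified Snellen chart

def visibility_score(last_row_identified: int) -> float:
    """Convert the last fully identified row (0-8) into a percentage:
    0 rows visible -> 0%, all 8 rows visible -> 100%, i.e., 12.5% per row."""
    if not 0 <= last_row_identified <= ROWS:
        raise ValueError("row index must be between 0 and 8")
    return last_row_identified * (100.0 / ROWS)

# Example: identifying 6 of 8 rows yields 75%, as in the case examples of Figs. 2 and 3.
assert visibility_score(6) == 75.0
```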
Comparison between the actual look of the black background eye test chart (A) and the reproduction of the chart through the smart glasses for the mentor surgeon (B). If all rows are visible, the resolution is defined as 100%; if none of the lines is visible, it is defined as 0%. The rows in between were scored on a percentage scale, with each row adding 12.5% visibility (8 rows × 12.5% = 100%). The case example shows a 75% image quality score.
Comparison between the actual look of the white background eye test chart (A) and the reproduction of the chart through the smart glasses for the mentor surgeon (B). If all rows are visible, the resolution is defined as 100%; if none of the lines is visible, it is defined as 0%. The rows in between were scored on a percentage scale, with each row adding 12.5% visibility (8 rows × 12.5% = 100%). The case example shows a 75% image quality score.
For the evaluation of the smart glasses’ ability to display contrast, brightness, and hue, a modified screen test pattern was used that was originally designed to calibrate analog TV screens. This test pattern allowed a proper assessment of these 3 parameters (Fig. 4). Hue was evaluated on the color scale in the upper third of the pattern, contrast using the lines in the middle of the pattern, and brightness distinction with the pattern in the lower part of the image. The image quality of hue, contrast, and brightness was rated on a modified Likert scale from 1 to 5, with a score of 1 for insufficient and 5 for excellent image quality. To ensure identical lighting conditions, all of these evaluations were performed on a 13.3-inch high-resolution PC screen with a resolution of 2560 × 1600 pixels. All video calls with the smart glasses were recorded, the scoring was performed after the call by 5 reviewers, and the image quality was rated by consensus. Disagreements were resolved by discussion.
This modified screen test chart was used to score the hue, contrast, and brightness reproduction using a Likert scale with scores from 1 to 5. A score of 1 was defined as insufficient reproduction and 5 as excellent reproduction. Hue was evaluated using the upper field, contrast in the middle field, and brightness in the lower field. Comparison between the actual look of the screen test chart (A) and the reproduction of hue, contrast, and brightness through the smart glasses for the mentor surgeon (B).
After the preoperative evaluation of the image quality, the smart glasses were evaluated in an intraoperative setting. Intraoperatively, the usability of the smart glasses was assessed in 3 different environments. First, the smart glasses were used to identify a surgical instrument on the well-lit instrument table. In this evaluation step, the surgeon receiving assistance in Africa (mentee) looked at the instrument table and the surgeon providing assistance from the US (mentor) used the software to mark an instrument that the mentee should correctly identify. The number of attempts required by the mentee to correctly identify the instrument was evaluated.
Second, the usability of the glasses during surgery and the image quality in the complex surgical field were evaluated. All cases were open deformity corrections, with incisions and exposure fields at least 20 cm long in the craniocaudal direction, providing an optimal visualization area for the smart glasses. For this purpose, the mentee looked at the surgical field with the smart glasses and the mentor in the US used the software to mark an anatomical structure, which the mentee was supposed to recognize. The perceived image quality and the number of attempts required to identify the correct structure were evaluated.
The last intraoperative environment in which the smart glasses were evaluated was the assessment of intraoperative fluoroscopic images. In this evaluation step, the mentee looked at a radiographic image on the C-arm screen and the mentor used the software to mark an anatomical structure that the mentee should recognize. As an additional criterion for visual resolution, the mentor was asked to read the text information shown on the C-arm monitor. The subjectively perceived resolution as well as the attempts needed by the mentee to correctly recognize the marked anatomical structure were evaluated.
Additionally, connection losses and the subjective usability of the device on both sides were evaluated. After each video stream, a short debriefing was conducted between mentee and mentor to evaluate the workflow. A control group of surgeons at the US study center performed the same tasks with an identical model of the smart glasses and the same setup, to determine whether the local internet infrastructure influenced outcomes.
Results
The glasses were used in the Tanzanian study center in 4 cases; however, in 1 of these cases it was not possible to connect the smart glasses to the mentor surgeon in New York City despite a sufficient internet connection test. The internet connection speed in Tanzania averaged 45.12 ± 11.60 Mbps for downloads and 58.89 ± 22.28 Mbps for uploads. The software manufacturer recommends a connection speed of at least 1 Mbps for downloads, which was met and exceeded in each case.17 The latency between the networks was measured using the packet internet groper (PING) method and was constantly 4 ± 0 msec in all measurements, indicating a sufficiently working connection between the networks.18
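As an illustration of this kind of latency measurement (not the exact tooling used in the study), the standard system ping utility can be invoked and its output parsed as in the following sketch; the host name and packet count are placeholders.

```python
import re
import subprocess

def mean_ping_ms(host: str, count: int = 10) -> float:
    """Invoke the system ping utility and return the mean round-trip time in msec."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],  # -c works on Linux/macOS; Windows uses -n
        capture_output=True, text=True, check=True,
    ).stdout
    # Parse the per-packet round-trip times, e.g., "time=4.02 ms"
    times = [float(t) for t in re.findall(r"time=([\d.]+)", out)]
    if not times:
        raise RuntimeError("no round-trip times found in ping output")
    return sum(times) / len(times)
```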
The image quality of the videos transmitted through the smart glasses was acceptable. However, the camera could not reach the resolution of 4K (3840 × 2160 pixels, comparable to 8.3 megapixels) in any of the reviewed cases when transmitting data over the internet.
When evaluating the image quality using the modified Snellen test chart, an average visibility percentage of 62.5% ± 10.21% was reached for the black background (Fig. 2). The image quality improved slightly on the chart with the white background, with an average visibility percentage of 70.83% ± 11.79% (Fig. 3). The quality of hue, contrast, and brightness was rated by 5 reviewers by consensus on a scale from 1 (insufficient quality) to 5 (excellent quality). The reproduction of hue through the glasses was rated 2.7 ± 0.9, the reproduction of contrast 3.3 ± 0.5, and the reproduction of brightness 2.7 ± 0.9 (Fig. 4).
The intraoperative evaluation assessed the image quality of the glasses under the background and lighting conditions of the instrument table, the surgical site, and the radiographic screen. At the instrument table, the average image quality rating for recognizing the instruments was 3.7 ± 0.5, and recognition of marked surgical instruments was possible on the first attempt in every case (Fig. 5A).
Screen view of the intraoperative image quality. A: Evaluation at the instrument table. The image shows the view of the mentor surgeon through the Helplightning software. The yellow arrow was drawn by the mentor surgeon to point out an instrument (a handgrip in this case). The toolbar on the right side allows the mentor surgeon to navigate through the application. B: Image quality of the surgical site. The clipped white effect occurred in every case when the surgical lights were used and did not allow the recognition of any relevant anatomical structure. C: Screen view of the surgical site after the surgical lights were switched off. The clipped white effect disappeared and the image quality increased slightly; however, it was still not possible to recognize any relevant anatomical structure under these light conditions.
Whereas illumination during the preoperative evaluation consisted of uniform light distribution from overhead room lights, intraoperative illumination came from surgical lamps focused on the surgical field. This high illumination intensity on a rather small field caused the smart glasses to produce local overexposure and a clipped white effect, rendering the overexposed area as a completely white field without visible structure or color differences (Fig. 5B). This phenomenon occurred in every case when the OR lamps were used, and it was technically not possible to adapt the light exposure to the field of interest with the smart glasses and software combination that we used. As a result, the mentor surgeon was unable to recognize landmarks in the surgical field and could not give any support based on the real-time anatomy.
Because of the described clipped white effect, assessment of the surgical field was not possible when the operating light was used. After the operating light was switched off and background room light was used, the visibility of the surgical field improved; however, under these limited light conditions it was still not possible to identify relevant anatomical structures with sufficient reliability (Fig. 5C). In addition, switching off the surgical lights worsened visualization for the operating surgeon, which is why we did not consider this a viable alternative. Furthermore, the complexity of the 3D surgical field and the similar color tones within it caused significant difficulties and made a valid assessment impossible. The average rating of the image quality of the surgical field was 1 ± 0, meaning that none of the reviewers was able to recognize any relevant structure in the surgical field on the recorded images.
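Although no such analysis was part of our protocol, the extent of the clipped white effect could in principle be quantified on recorded frames. The following sketch (our own illustration; the saturation threshold and flag level are assumptions) estimates the fraction of near-saturated pixels with OpenCV and NumPy.

```python
import cv2  # pip install opencv-python
import numpy as np

def clipped_fraction(frame_path: str, threshold: int = 250) -> float:
    """Return the fraction of pixels at or above the near-saturation threshold."""
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(frame_path)
    return float(np.mean(gray >= threshold))

# Example: flag a recorded frame as overexposed if >5% of its pixels are clipped.
# is_overexposed = clipped_fraction("frame_0042.png") > 0.05
```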
The last intraoperative evaluation assessed the glasses for fluoroscopic images on the monitor. The glasses could not reproduce the different gray scales at a quality level that allowed a sufficient assessment of the fluoroscopic images. The average score of the radiographic imaging assessment was 1 ± 0, meaning that none of the reviewers could identify the relevant structures with the necessary certainty (Fig. 6). Because of the limited image quality of the surgical site and the radiographic screen, the mentor surgeon could not provide specific guidance on the surgical strategy; however, general support and assistance with intraoperative decision-making were still possible.
Screen view of the intraoperative quality of fluoroscopic images. The image quality did not allow the mentor surgeon to recognize any relevant anatomical structure with the necessary reliability to give medical advice.
During the simulated calls, the glasses lost connection on average 7 ± 2 times per case. The results are summarized in Table 1. In the control group, 5 spine surgeries in the US study center were evaluated. The internet connection in the US was more stable and noticeably faster than that in Tanzania, with a download speed of 104.0 ± 4.6 Mbps and an upload speed of 93.4 ± 6.6 Mbps. However, the image quality improved only slightly: performance on the modified Snellen chart was rated on average 77.5% ± 5.6% with the white background and 70% ± 6.9% with the black background. Hue reproduction was rated on the modified Likert scale as 3.0 ± 1.0, contrast reproduction as 2.8 ± 1.3, and brightness reproduction as 2.8 ± 0.8. Intraoperatively, image quality was rated on average 4 ± 0.6 when observing the instrument table, and each marked instrument could be correctly identified. When viewing the surgical site, the same difficulties with overexposure and the clipped white effect occurred as in Tanzania, so marking of anatomically relevant points was not possible and image quality was rated an average of 1 ± 0. When viewing radiographic images, it was likewise not possible to identify and mark relevant anatomical structures with the necessary certainty, which is why the image quality was also rated an average of 1 ± 0.
Summary of internet connection metrics and image quality in the pre- and intraoperative testing setup
| Testing Setup Features | Value |
| --- | --- |
| Internet connection metrics | |
| Minimum software requirement (Mbps) | >1.00 |
| Download speed (Mbps) | 45.12 ± 11.60 |
| Upload speed (Mbps) | 58.89 ± 22.28 |
| Network latency, PING (msec) | 4 ± 0 |
| Preop image quality | |
| Modified Snellen chart, white background (%) | 70.83 ± 11.79 |
| Modified Snellen chart, black background (%) | 62.5 ± 10.21 |
| Hue reproduction (Likert 1–5) | 2.7 ± 0.9 |
| Contrast reproduction (Likert 1–5) | 3.3 ± 0.5 |
| Brightness reproduction (Likert 1–5) | 2.7 ± 0.9 |
| Intraop image quality | |
| Instrument table (Likert 1–5) | 3.7 ± 0.5 |
| Surgical site (Likert 1–5) | 1 ± 0 |
| Fluoroscope screen (Likert 1–5) | 1 ± 0 |
| Connection losses per case | 7 ± 2 |
PING = packet internet groper.
The modified Likert scale used 1 for insufficient and 5 for excellent imaging quality. All values except for the minimum software requirement are expressed as the mean ± SD.
Discussion
The use of smart glasses for telemedical support in LMICs is a viable approach to supporting the local infrastructure. In practice, however, weaknesses were identified that limit the use of smart glasses in this context. A major challenge was the time difference between the US and African study centers: during regular OR hours in Tanzania, it is nighttime on the East Coast, which made it difficult to include patients. This problem has already been described in the literature and should be considered.7 Another major difficulty was establishing and maintaining a stable and sufficiently fast internet connection in the African study center. Even though the software's internet speed requirements are relatively low, the connection must not drop below the required value. In Tanzania there were repeated disconnections; however, it is not entirely clear whether the instability was caused by the local telecommunications infrastructure or by the smart glasses and software combination. The unstable connection meant that the mentee had to repeatedly log back in to the network and the communication software. This was possible only because all cases were test runs in which the mentee surgeon intentionally took no relevant role during the surgery; during an actual operation, it is practically impossible to find the time to repeatedly log in to the network and software with the smart glasses. Another consequence of the limited internet connection is that the smart glasses automatically reduce the image quality to maintain continuous video transmission. This phenomenon was also described during a field test with Google Glass, in which it significantly limited the imaging quality.7
In our study, the intraoperative imaging resolution was limited by the restricted 3D visualization of the surgical field. The main problem, however, was the clipped white effect, which made assessment of the surgical site impossible in all cases. This is a known issue when smart glasses are used for medical assistance and was described when an older generation of smart glasses was used for intraoperative support in earlier studies.7,8,10,11 Although electronic devices have developed rapidly in the past several years, this problem persisted in the latest generation of smart glasses in our evaluation and limits their medical use. Optical filters attached in front of the camera lens could slightly reduce the problem; however, these are often not certified and carry the risk of becoming detached and falling into the surgical field, which is why we refrained from using them.
On the mentor surgeon's side, it was noticeable that when structures were marked using the Helplightning software, there was a relevant time delay before the marking became visible to the mentee and on the mentor's monitor. This complicated the process and required the mentee to hold their head completely still for several seconds to avoid misunderstandings regarding the marking. A solution to this challenge could be to capture still photographs of the surgical site with the smart glasses and to display the annotated image in the mentee surgeon's smart glasses for highlighting structures.
The smart glasses use a glasses frame to hold all of the hardware (Fig. 1B and C). As a result, the entire weight of the smart glasses and the additional battery rests on the surgeon's nose and ears, which led to discomfort after a short while. A headband that could solve this issue is available from the manufacturer but was not evaluated. Because the smart glasses are individually adjustable to the surgeon's face, in our experience they did not cause relevant distraction or visual inconvenience during the surgeries.
Smart glasses have been tested for surgical use in a variety of surgical specialties,7,10,11,19 and users have described limitations similar to those observed in our evaluation. McCullough et al. faced difficulties with Google Glass similar to ours in terms of focal overexposure, the clipped white effect in the surgical field, unstable internet connection, and time lag.7 This indicates that, unfortunately, not all the problems of older versions of smart glasses have been optimally addressed in the latest versions.
Our study also has some methodological limitations. The number of cases is comparatively small but corresponds to the usual values in the literature for the evaluation of smart glasses.8 Moreover, given that none of the cases achieved sufficient imaging quality, further testing was not indicated, because significant improvement could not be expected without modification of the software, hardware, or internet connection. Another limitation is that the cause of the insufficient image quality cannot be determined: the smart glasses' hardware, the installed software, and the unstable internet connection could each be responsible for the quality limitations. To clarify the cause more precisely, the test approach should be repeated with different devices and different software applications for remote support.
Conclusions
The application of smart glasses for telemedicine offers a promising tool for surgical education and remote training, especially in LMICs. However, our study highlights limitations of this technology, including restricted optical resolution, intraoperative lighting challenges, and internet connection instability. With continued collaboration between clinicians and industry, future iterations of smart glasses technology will need to address these issues to enable robust clinical utilization.
Acknowledgments
Ohana One International Surgical Aid & Education provided the devices that were evaluated in the study.
Disclosures
Dr. Härtl is a consultant for Ulrich, Brainlab, and DePuy-Synthes, and receives royalties from Zimmer. Dr. Sommer received speaking fees from Baxter.
Author Contributions
Conception and design: Härtl, Sommer. Acquisition of data: Sommer, Waterkeyn, Ahmad, Balsano, Medary, Shabani. Analysis and interpretation of data: Härtl, Sommer, Waterkeyn, Hussain, Goldberg, Kirnaz, Navarro-Ramirez, Medary, Gadjradj. Drafting the article: Sommer, Hussain. Critically revising the article: Härtl, Sommer, Waterkeyn, Hussain, Goldberg, Kirnaz, Navarro-Ramirez, Ahmad, Balsano, Gadjradj. Reviewed submitted version of manuscript: all authors. Statistical analysis: Gadjradj. Administrative/technical/material support: Sommer, Hussain, Medary, Shabani. Study supervision: Härtl, Sommer, Gadjradj.
References
1. Faruk N, Surajudeen-Bakinde NT, Abdulkarim A, et al. Rural healthcare delivery in sub-Saharan Africa: an ICT-driven approach. Int J Healthc Inf Syst Inform. 2020;15(3):1–21.
2. Lessing NL, Lazaro A, Zuckerman SL, et al. Nonoperative treatment of traumatic spinal injuries in Tanzania: who is not undergoing surgery and why? Spinal Cord. 2020;58(11):1197–1205.
3. Magogo J, Lazaro A, Mango M, et al. Operative treatment of traumatic spinal injuries in Tanzania: surgical management, neurologic outcomes, and time to surgery. Global Spine J. 2021;11(1):89–98.
4. Lessing NL, Zuckerman SL, Lazaro A, et al. Cost-effectiveness of operating on traumatic spinal injuries in low-middle income countries: a preliminary report from a major East African referral center. Abstract 511. Neurosurgery. 2020;67(suppl 1):173.
5. Schmidt FA, Kirnaz S, Wipplinger C, Kuzan-Fischer CM, Härtl R, Hoffman C. Review of the highlights from the First Annual Global Neurosurgery 2019: a practical symposium. World Neurosurg. 2020;137:46–54.
6. Makhni MC, Riew GJ, Sumathipala MG. Telemedicine in orthopaedic surgery: challenges and opportunities. J Bone Joint Surg Am. 2020;102(13):1109–1115.
7. McCullough MC, Kulber L, Sammons P, Santos P, Kulber DA. Google Glass for remote surgical tele-proctoring in low- and middle-income countries: a feasibility study from Mozambique. Plast Reconstr Surg Glob Open. 2018;6(12):e1999.
8. Wei NJ, Dougherty B, Myers A, Badawy SM. Using Google Glass in surgical settings: systematic review. JMIR Mhealth Uhealth. 2018;6(3):e54.
9. Glauser W. Doctors among early adopters of Google Glass. CMAJ. 2013;185(16):1385.
10. Sinkin JC, Rahman OF, Nahabedian MY. Google Glass in the operating room: the plastic surgeon's perspective. Plast Reconstr Surg. 2016;138(1):298–302.
11. Hashimoto DA, Phitayakorn R, Fernandez-del Castillo C, Meireles O. A blinded assessment of video quality in wearable technology for telementoring in open surgery: the Google Glass experience. Surg Endosc. 2016;30(1):372–378.
13. Source IEx.com. Degrees of Protection. Accessed April 8, 2022. https://www.sourceiex.com/Catalogs/IP Degress Testing Details.pdf
15. Ookla. Speedtest Global Index. Accessed April 8, 2022. https://www.speedtest.net/global-index/united-states#mobile
16. Peters HB. Vision screening with a Snellen chart. Am J Optom Arch Am Acad Optom. 1961;38(9):487–505.
17. Helplightning Inc. Remote Assistance Customer Support. Accessed April 8, 2022. https://helplightning.com/support/
18. Percacci R, Vespignani A. Scale-free behavior of the Internet global performance. Eur Phys J B. 2003;32(4):411–414.
19. Nakhla J, Kobets A, De la Garza Ramos R, et al. Use of Google Glass to enhance surgical education of neurosurgery residents: "proof-of-concept" study. World Neurosurg. 2017;98:711–714.