Neurosurgery resident training using blended learning concepts: course development and participant evaluation

1 Department of Neurosurgery, Kantonsspital St. Gallen; and
2 Ostschweizer Schulungs- und Trainingszentrum, Kantonsspital St. Gallen, Switzerland

OBJECTIVE

Restrictions on working time and healthcare expenditures, as well as increasing subspecialization with caseload requirements per surgeon and increased quality-of-care expectations, provide limited opportunities for surgical residents to be trained in the operating room. Yet, surgical training requires goal-oriented and focused practice. As a result, training simulators are increasingly utilized. The authors designed a two-step blended course consisting of a personalized adaptive electronic learning (e-learning) module followed by simulator training. This paper reports on course development and the evaluation by the first participants.

METHODS

Adaptive e-learning was curated by learning engineers based on theoretical information provided by clinicians (subject matter experts). A lumbar spine model for image-guided spinal injections was used for the simulator training. Residents were assigned to the e-learning module first; after its completion, they participated in the simulator training. Performance data were recorded for each participant’s e-learning module, which was necessary to personalize the learning experience to each individual’s knowledge and needs. Simulator training was organized in small groups with a 1-to-4 instructor-to-participant ratio. Structured assessments were undertaken, adapted from the Student Evaluation of Educational Quality.

RESULTS

The adaptive e-learning module was curated, reviewed, and approved within 10 weeks. Eight participants have taken the course to date. The overall rating of the course is very good (4.8/5). Adaptive e-learning is well received compared with other e-learning types (8/10), but scores lower regarding usefulness, efficiency, and fun compared with the simulator training, despite improved conscious competency (32.6% ± 15.1%) and decreased subconscious incompetency (22.8% ± 10.2%). The subjective skill level improved by 20%. Asked about the estimated impact of the course, participants indicated that they had either learned something new that they plan to use in their practice (71.4%) or felt reassured in their practice (28.6%).

CONCLUSIONS

The development of a blended training course combining adaptive e-learning and simulator training in a rapid manner is feasible and leads to improved skills. Simulator training is rated more valuable by surgical trainees than theoretical e-learning; the impact of this type of training on patient care needs to be further investigated.

ABBREVIATIONS

e-learning = electronic learning; OR = operating room; SEEQ = Student Evaluation of Educational Quality; ZPD = zone of proximal development.

Resident training in neurosurgery faces multiple challenges. Working time restrictions, increasing administrative duties, and rising healthcare costs requiring efficient operating room (OR) utilization result in limited time for surgical hands-on training in the OR. At the same time, quality-of-care requirements and patient expectations regarding outcome are increasing, while neurosurgery continues to undergo subspecialization and continual technical advances. Nonetheless, how we learn and how we become experts at specific tasks has not changed; i.e., repetitive practice and exposure are required.1,2 In fact, the level of expertise is directly related to time spent practicing and is associated with measurable outcomes.3 While how often one practices matters, the quality and type of practice matter even more.4–6 To achieve a high standard of patient care, residents should practice basic tasks, preferably as deliberate practice, outside the OR. This would allow better use of their limited OR time to focus on complex tasks requiring integrative abilities, which are difficult to train with standardized models. Over the past few decades, increasing efforts have been made to develop electronic learning (e-learning) programs utilizing online resources and realistic simulator models for hands-on training, with the intention of improving both theoretical knowledge (e-learning) and skills (simulator training) outside the OR7–10 without the need for lectures at specific times or difficult-to-conduct cadaver courses. Skills acquired during rigorous simulation-based training can be transferred to the clinical setting11 and thus improve patient care and safety.12 To enhance the effect of practical simulator training, residents should be provided with theoretical knowledge beforehand. With online resources for self-training increasingly available and with different levels of prior knowledge among residents, residents can profit from individualized theoretical training.10 Improved adherence and learning outcomes can be achieved by using adaptive learning strategies to convey background information on specific tasks based on an individual’s prior knowledge.

In this study, we present our experience with the recent (2021) development of a blended learning course on spinal injections, which is part of neurosurgical spinal care at our institution, combining adaptive e-learning and simulator training modules. We describe the course development and report on some preliminary feedback from the participants.

Methods

Course Development

In 2021, two subject matter specialists (A.K.H. and A.F., both clinicians) defined the main learning goals of the course and prepared the medical background information over the course of 12 weeks, guided by a teaching psychologist (C.O.). The main learning goals were to know the types of injection, including their indications, contraindications, and complications and how to manage them; to know the probable outcomes and patient positioning; to know the anatomical and radiographic landmarks; to understand radiation protection (e.g., safe zones around the C-arm) and correct positioning of the C-arm; and to know all steps of the procedure and the medication used. These goals were then transformed into an adaptive e-learning format by the company Area9 Lyceum.

The learning engineering team curated the material, created short exercises ("probes," such as image labeling, gap text, and multiple-choice questions) to assess existing knowledge, and assembled learning resources to increase knowledge of the subject, as well as distractors to minimize the chance of correct answers being given by accident. The medical experts (A.K.H., A.F., and M.N.S.) subsequently reviewed the content, which was followed by an adjustment period and, finally, the release of the program (Fig. 1).

FIG. 1.

Workflow: development of adaptive e-learning.

Adaptive e-Learning

The Area9 Lyceum online platform can be accessed through multiple devices, such as a personal computer, smartphone, or tablet computer, and allows for individualized training that imitates a one-on-one mentorship by adapting to the trainee’s prior knowledge. Through performance analysis, the training is automatically adjusted to enhance the learning experience: the focus shifts to content areas of need (knowledge gaps), while little time is spent on areas in which knowledge is already sufficient, following an individualized order of probes (Fig. 2).
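
To make the adaptation mechanism concrete, the following is a minimal sketch of a probe scheduler that prioritizes the largest knowledge gaps. It is purely illustrative: the data model, mastery estimate, and selection threshold are assumptions for illustration, not Area9 Lyceum's proprietary algorithm.

```python
# Illustrative sketch only: a minimal adaptive probe scheduler. The data model,
# mastery estimate, and selection rule are assumptions and do not represent
# Area9 Lyceum's proprietary algorithm.
from dataclasses import dataclass

@dataclass
class Probe:
    objective: str   # learning objective tested by this probe
    attempts: int = 0
    correct: int = 0

    @property
    def mastery(self) -> float:
        # Crude mastery estimate: fraction of correct answers; 0 before any attempt.
        return self.correct / self.attempts if self.attempts else 0.0

def record_answer(probe: Probe, was_correct: bool) -> None:
    """Update a probe's statistics after the trainee answers it."""
    probe.attempts += 1
    probe.correct += int(was_correct)

def next_probe(probes: list[Probe], threshold: float = 0.8) -> Probe | None:
    """Pick the probe with the largest knowledge gap (lowest mastery estimate).

    Returns None once every probe meets the threshold, i.e., the learner has
    closed all modeled knowledge gaps.
    """
    open_probes = [p for p in probes if p.mastery < threshold]
    return min(open_probes, key=lambda p: p.mastery) if open_probes else None
```

Because each trainee's answer history differs, repeated calls to such a scheduler trace out different sequences of probes, which is the kind of divergence visualized in Fig. 2.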

FIG. 2.

Individual learning paths of participants automatically recorded and generated through the learning platform. Left: Four individual learning paths of 4 different participants for the same learning objective. Each diagram depicts 1 trainee. Right: Four individual learning paths of 1 participant for 4 different learning objectives. Each diagram depicts one learning objective of the same participant. The gray circle indicates the starting point. Each colored dot represents one probe to be completed by the trainee. Larger dots indicate more time spent on the probe. Arrows show connections between different probes indicating that trainees did not follow a straightforward identical sequence of probes, but rather an individualized order based on their performance to enhance the learning experience.

Simulation Training

A lumbar spine model for image-guided spinal injections (3B Scientific GmbH) was purchased for the practical simulator training module. The 3-hour simulator training took place in the typical clinical injection setting (C-arm machine, injection equipment, and sterile material) and infrastructure, with all resources available at our hospital. Instructors confirmed that the simulator provided realistic haptic feedback. All relevant anatomical details were visible on radiographs taken during the training procedures (Fig. 3). Throughout the training, each participant had the opportunity to perform each of the following injections at least once: transforaminal epidural/periradicular injection, intraarticular facet joint injection, medial branch block, and caudal epidural injection.

FIG. 3.

Simulator training module. A: Participant placing a needle under fluoroscopic guidance. B: Oblique radiograph of the simulator model (lumbar spine) localizing the injection target. C: Anteroposterior radiograph of the simulator model (sacrum) with the needle in the S1 foramen. D: Instructor explaining the radiographic images.

Course Implementation

The course is intended to be offered to residents twice per year. During the first offering in November 2021 and January 2022, orthopedic (n = 1) and neurosurgical (n = 7) trainees at different levels of training were assigned to participate in the course. A personalized link to access the e-learning module was provided for self-training, prior to the simulator training. The practical hands-on training took place in small groups with a 1-to-4 instructor-to-participant ratio (Fig. 4).

FIG. 4.

Simulator training. Upper: Setup of the simulator training site. Lower: One-to-four instructor-to-participant ratio.

Course Assessments

Prior to taking the training course, participants were asked to rate their skills on a numeric scale (1 = no skills, 2–3 = below average, 4–6 = average/mediocre, 7–9 = above average, 10 = excellent skills) and provide basic information on previous training, such as time spent on training and type of training.

The performance of each participant during the e-learning was assessed automatically by collecting data on learner progress, time spent on tasks, and performance, both for the group of learners and for each individual. To assess competency level, each participant was asked to estimate whether the previous task had been completed correctly, yielding four categories: conscious competency (a correct answer was expected and given); subconscious competency (a correct answer was given but not expected); conscious incompetency (a wrong answer was expected and given); and subconscious incompetency (a wrong answer was given but not expected).
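
This classification depends only on two booleans per probe, the self-estimate and the actual result, and can be restated compactly as code. The following is an illustrative sketch with assumed function and argument names, not the platform's implementation:

```python
# Sketch of the four-way competency classification described above; names are
# illustrative, not the learning platform's actual code.
def competency_category(expected_correct: bool, was_correct: bool) -> str:
    """Classify one completed probe from self-estimate vs. actual result."""
    if was_correct:
        # Correct answer given: did the trainee expect it?
        return "conscious competency" if expected_correct else "subconscious competency"
    # Wrong answer given: did the trainee anticipate being wrong?
    return "subconscious incompetency" if expected_correct else "conscious incompetency"

# The clinically dangerous case: the trainee expected to be right but was wrong.
assert competency_category(expected_correct=True, was_correct=False) == "subconscious incompetency"
```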

After completion of the two course modules, each participant was asked to complete an evaluation form based on the multidimensional Student Evaluation of Educational Quality (SEEQ) questionnaire, which rates the overall learning experience, instructor performance, and specific domains of the teaching/learning experience: learning/value, enthusiasm, organization/clarity, group interaction, individual rapport, breadth of coverage, and assignments/e-learning.

The original SEEQ as designed by Marsh13 assesses 9 components of teaching in higher education using 31 questions with 6 possible answers each: a 5-point Likert scale (5 = strongly agree/very good, 4 = agree/good, 3 = neutral/average, 2 = disagree/poor, 1 = strongly disagree/very poor) plus a "not applicable" option (6). In this study, participants were also asked to directly rate aspects of the e-learning and the practical/simulator training (presentation, usefulness, fun, and efficiency) on a 10-point numeric scale (1 = poor, 10 = excellent), to rate the impact of the training on their daily practice, and to state whether they would recommend the course. To assess subjective improvement of their skills after the training, the question regarding their skills was repeated (numeric scale; 1 = no skills, 2–3 = below average, 4–6 = average/mediocre, 7–9 = above average, 10 = excellent skills). Improvement was calculated per individual trainee, with an improvement of 1 point corresponding to 11.1%.
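
The 11.1% figure follows from the scale itself: ratings run from 1 to 10, so the full span is 9 points, and a 1-point gain is one-ninth of that span. As a worked restatement (assuming improvement is normalized to the scale span, which is consistent with the reported numbers):

\[
\text{improvement} = \frac{\text{score}_{\text{after}} - \text{score}_{\text{before}}}{10 - 1} \times 100\%, \qquad \frac{1}{9} \times 100\% \approx 11.1\%.
\]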

Statistical Analysis

The analysis of the course evaluation and the e-learning data was performed using the SPSS statistical program (version 27, IBM Corp.). Data were analyzed descriptively. The Kolmogorov-Smirnov test was applied to assess data distribution. All results are reported as means ± standard deviations for better readability. A paired-sample t-test was applied to compare participant feedback between the adaptive e-learning and the simulator training. Significance was assumed at p values < 0.05.
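
The following sketch mirrors this workflow using SciPy in place of SPSS (a substitution made purely for illustration; the function name and structure are assumptions):

```python
# Sketch of the analysis described above, using SciPy rather than SPSS (the
# authors used SPSS v27; this substitution is for illustration only).
import numpy as np
from scipy import stats

def compare_course_parts(elearning: np.ndarray, simulator: np.ndarray, alpha: float = 0.05) -> None:
    """Descriptive summary, distribution check, and paired-sample t-test."""
    for name, x in (("e-learning", elearning), ("simulator", simulator)):
        # Kolmogorov-Smirnov test against a normal distribution fitted to the sample
        ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
        print(f"{name}: {x.mean():.1f} ± {x.std(ddof=1):.2f} (KS p = {ks.pvalue:.3f})")
    # Paired test: the same participants rated both parts of the course
    t = stats.ttest_rel(elearning, simulator)
    verdict = "significant" if t.pvalue < alpha else "not significant"
    print(f"paired t-test: t = {t.statistic:.2f}, p = {t.pvalue:.4f} ({verdict} at alpha = {alpha})")
```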

Results

Course Development

After establishing the course objectives and preparing the medical background information (12 weeks), the e-learning course was designed and reviewed in a 4-step process over 10 weeks. The overall working time totaled 332 hours. The subject matter experts (clinicians) reviewed the designed material 3 times at different stages of the course development and approved it to move forward into the testing period (Fig. 1). Curation and review steps partially overlapped and subject matter experts received automated information on modules to be reviewed, ensuring speedy processing. The complete process took place online through the platform and was tracked automatically. Ninety-four probes were designed for 94 learning objectives, with 87 accompanying learning resources and numerous distractors.

Participants

To date, 8 participants with mediocre skill levels (4.5 ± 1.7 of 10) have been assigned to take the course, of whom 5 (62.5%) finished the entire e-learning prior to the hands-on training. One person completed 25%, one 61%, and a third stopped after 81% of the course content. Course evaluation was completed by 7 participants (87.5%). Attendance for the 3-hour hands-on course was 100%. Table 1 provides information on medical background and prior experience with the topic.

TABLE 1.

Participant characteristics

Variable                                 Value (%)
Gender
 Male                                    6 (75)
 Female                                  2 (25)
Subspecialty
 Neurosurgery                            7 (87.5)
 Orthopedic surgery                      1 (12.5)
Prior clinical experience
 1st year of residency                   1 (14.3)
 2nd year of residency                   3 (42.9)
 4th year of residency                   2 (28.6)
 Past residency (>6 yrs)                 1 (14.3)
 Prior experience with the task (yes)    7 (100)
Prior training on the task               2 (28.6)*
 Theoretical & practical                 1 (14.3)
 Bedside only                            1 (14.3)
Time spent on prior training†
 <2 hrs                                  1 (14.3)

* Five missing (71.4%). † Six missing (85.7%).

e-Learning

Participants spent an average of 101 ± 41.6 minutes (range 23–143 minutes) on the e-learning. The mean time to complete the entire e-learning was 123.2 ± 17.31 minutes; 75.9% of the tasks were completed with correct self-perception of competency (conscious competency 68% ± 10.4%, conscious incompetency 7.8% ± 6.9%). Self-perception of performance was wrong in 23.5% of probes (subconscious competency 3.1% ± 1.6%, subconscious incompetency 20.4% ± 9.7%). Through the learning experience, competency improved significantly (Table 2; improvement of conscious competency 32.6% ± 15.1%, reduction of subconscious incompetency 22.8% ± 10.2%, reduction of conscious incompetency 10.9% ± 10.2%). The difference between the conscious and subconscious improvements was not significant (p > 0.05). Figure 2 illustrates individual learners’ pathways.

TABLE 2.

Improvement of competency

Competency/Incompetency      Mean % Before Training ± SD    Mean % After Training ± SD    p Value
Conscious competency         61.6 ± 15.4                    95 ± 1.4                      0.040
Subconscious competency      2.3 ± 1.9                      3.5 ± 2.6                     0.310
Conscious incompetency       11.4 ± 10.3                    1.1 ± 1.8                     0.029
Subconscious incompetency    23.5 ± 9.7                     1.3 ± 1.7                     <0.0001

Based on the overall performance of the participants, 9 (9.6%) of the 94 learning probes will need to be revised because they were too difficult to solve (high rate of incorrect answers, long time needed for completion). Feedback comments indicated that trainees found it difficult to answer open-ended questions because sentences had to be phrased in a specific way without the use of common abbreviations (n = 3, 42.9%) or because questions were unclear (n = 1, 14.3%). One participant (14.3%) felt that the e-learning was too long, and another (14.3%) would have preferred a self-reading session without examination.

Course Evaluation

The two-step course received a high overall rating (4.8 ± 0.25 of 5), with the e-learning receiving the lowest ratings (SEEQ assignment 2.9 ± 0.74 of 5). Compared with other e-learning courses that participants had experienced in the past, it scored above average (8 ± 2.45 of 10). The simulator training was rated as more useful, efficient, and fun than the adaptive e-learning (p < 0.05), whereas the presentation of content was rated as comparable (p = 0.056). Table 3 contains the detailed items of the SEEQ assessment.

TABLE 3.

Course evaluation

Item                         Score
SEEQ items (range 0–5)
 Learning                    4.5 ± 0.39
 Enthusiasm                  5.0 ± 0.0
 Organization                4.5 ± 0.53
 Group interaction           4.8 ± 0.56
 Individual rapport          5.0 ± 0.0
 Breadth                     4.4 ± 0.81
 Assignment                  2.9 ± 0.74
 Overall course rating       4.8 ± 0.25

Additional evaluations (range 1–10)    e-Learning    Simulator     p Value
 Presentation                          6.7 ± 3.35    9.7 ± 0.49    0.056
 Usefulness                            4.9 ± 2.04    9.7 ± 0.49    0.001
 Fun                                   3.9 ± 2.41    10.0 ± 0.0    0.001
 Efficiency                            3.9 ± 1.77    10.0 ± 0.0    <0.0001
 Performance compared to other
  e-learning programs                  8.0 ± 2.45

Values presented as mean ± SD, except for p values.

Participants rated their skills as mediocre prior to the training, with subjective improvement after the training of 21.4% ± 9.8% (mean improvement in scores 1.9 ± 0.88; mean score prior to training 4.5 ± 1.7, mean score after training 6.5 ± 1.1).

All participants would recommend the course to their colleagues. When asked specifically about the impact of the course, the majority (n = 5, 71.4%) learned something new, which they plan to use in their daily practice. Two participants (28.6%) stated that they felt reassured in their current practice by the course content. All participants said they would have preferred to have taken the course earlier in their training.

Discussion

Neurosurgery requires long periods of high concentration, fine finger motor skills, movement regulation, quick adaptation to environmental changes, integration of prior experiences, control over emotions, and goal-directed behavior, all of which can only be mastered with repetitive, continuous training. Electroencephalographic and MRI studies show different utilization of brain areas between novices and experts in sports, which yields physiological explanations for the greater efficiency of experts while performing a task and their lower levels of exhaustion afterward.14,15 While the reorganization of brain activity from whole-brain activation in novices to the focal activation of areas needed for specialized processing in experts15 remains unchanged, what has changed is the training environment and the quality requirements encountered by neurosurgery residents. Working time restrictions, subspecialization, and increasing healthcare costs necessitate optimal utilization of OR time, which limits opportunities to attain surgical skills. While practical training remains irreplaceable, the location of the training has shifted from the OR to simulators in laboratories.1,2,9 In neurosurgery, a great variety of simulation-based training programs have been established to overcome the shortage of practical training time in the OR per resident and to meet the greater expectations of residents’ surgical skills when treating patients.1,7–9 Skills acquired during simulator training are transferrable to the operative setting,11 leading to improved surgical outcomes.3,12

Learning is a multifactorial, individualized process and a prerequisite for coping with the demands of complex working environments. Teaching should account for the individual needs of trainees and provide a coordinated approach applying diverse media to transfer knowledge. Personal factors such as learning history, skills, abilities, and prior knowledge are closely intertwined with learning factors, and both affect learning success. Arranging learning in such a way that it suits the individual and matches his or her performance requirements and needs may not be a new idea, but it remains an enormous pedagogical, technical, and organizational challenge. Learning results improve when personal factors are optimally combined with effective teaching and learning strategies. Vygotsky, one of the most influential psychologists to date, developed the idea of the "zone of proximal development" (ZPD) in the early 1930s; the idea of adaptive, and thus personalized, learning finds its origin in his sociocultural learning theory.16 Modern adaptive learning systems combine learning and training science with computer science to create the best possible learning experience. These systems enable trainees to individually study theoretical content based on their respective preferences, needs, and prior knowledge.17 Combining adaptive e-learning with practical hands-on training allows better use of resources (i.e., blended learning). During simulator training, more time can be spent on skills training because no time is needed for introductory lectures, and instructors can assume comparable theoretical knowledge among all trainees when starting the course. Additionally, compared with conventional lectures, compliance with theoretical learning can be enhanced by customized sessions that support individual learning styles and are accessible anywhere and anytime.17

At our institution, we developed a blended learning course for spinal injections combining an adaptive e-learning module and a practical hands-on simulator training module. The e-learning module was designed by learning engineers based on medical background information provided by clinicians (subject matter specialists). Clinicians were able to focus on what they wanted their trainees to learn, without needing to worry about how to convey the knowledge in a meaningful manner from a learning-psychology point of view. Yet, preparing the background information for nonmedical personnel required a thorough understanding of the specific task with all its steps, its possible pitfalls, and how to manage them. As a result, instructors needed to use their reflective skills, which may have enhanced their teaching during the hands-on training. Collaboration through an online platform allowed for timely development of the course over a period of 22 weeks, including 12 weeks of content preparation.

Compared with other e-learning programs completed by participants previously, this exercise was rated 8/10. Yet, surgeons seem to prefer practical training over theoretical exercises, particularly because manual skills cannot be trained through theoretical courses. Aspects of the practical training including the enthusiasm of the instructor, group interaction, and individual rapport were rated as good or very good, whereas the assignment (e-learning) was assessed as average (Table 3). Additionally, usefulness, efficiency, and fun of the e-learning were rated significantly lower compared with the simulator training (Table 3). Despite the low rating from participants regarding usefulness and efficiency, the adaptive e-learning significantly improved their conscious competency and decreased the clinically dangerous subconscious incompetency.

According to Vygotsky, learners cannot acquire all knowledge independently. Outside of their comfort zone, learners need specific support from a teacher. The ZPD has been defined as "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance, or in collaboration with more capable peers."16 Individual growth only takes place within the ZPD; in the two areas outside of it, the challenge is either too high or too low. Learning is most efficient and effective when the support matches the learner’s personality and specific needs. Ultimately, the ZPD is the key concept on which all forms of adaptive learning are based. Because the learning outcome was achieved with a significant improvement of competencies prior to the practical course, it can be assumed that participants largely stepped out of their comfort zone, and the favorable ratings for the practical hands-on course may be partially due to the solid theoretical knowledge that enhanced the practical training. It is true, however, that some probes may have been too difficult, leading to frustration, while others may have been too easy. For the e-learning content to be most valuable, we also need to evaluate the right time to take the course: all participants had prior experience with the subject matter and would have preferred completing the course earlier in their careers.

The adaptation in adaptive e-learning results from systematic data analysis, learner interaction with the learning system, and intelligent technologies operating in real time to provide an optimal learning experience. The learner receives the appropriate next learning content, which supports him or her precisely where it is needed (Fig. 2). An adaptive learning system thus replicates a teacher who supports the learner in a needs-based, focused, and goal-oriented manner within the ZPD. In this sense, adaptive learning is a form of deliberate practice as defined by Ericsson and Harwell,6 which has high predictive power in terms of expertise. In our e-learning course, 9 learning probes were identified as being too difficult, because the average rate of correct answers was too low and participants spent more time than expected on the tasks. Thus, the automated data assessment can not only enhance the individual training but also further improve the learning experience by guiding appropriate revision of the content. In addition, feedback comments from multiple participants indicated that greater variability in accepted answers to open-ended questions and the use of common abbreviations would be preferred, while the length of the e-learning was criticized by only one trainee.
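
A revision filter of the kind just described might look like the following sketch; the input structure and thresholds are assumptions, since only the two criteria (a high rate of incorrect answers and a long completion time) are reported here:

```python
# Hypothetical revision filter in the spirit described above. Thresholds and the
# input structure are assumptions; only the two criteria themselves (high rate
# of incorrect answers, long completion time) come from the text.
def probes_to_revise(probe_stats: dict[str, tuple[float, float]],
                     min_correct_rate: float = 0.5,
                     max_mean_seconds: float = 120.0) -> list[str]:
    """Return IDs of probes flagged as too difficult.

    probe_stats maps probe ID -> (correct-answer rate, mean completion seconds).
    """
    return [pid for pid, (rate, secs) in probe_stats.items()
            if rate < min_correct_rate or secs > max_mean_seconds]
```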

Overall, the two-part course was rated as very good (4.8/5) by its participants. Skills subjectively improved by approximately 20%, with a prior significant improvement of theoretical knowledge. When asked about the clinical implications of the training, participants responded that they had either learned something new that they plan to use in their daily routine (71.4%) or felt reassured in their clinical practice (28.6%). All participants would recommend the course to a colleague. Despite separate evaluation of both parts with significantly different results, the course should be viewed as a whole when assessing its effect.

Course evaluation using questionnaires and assessment of participant satisfaction are level 1 criteria of Kirkpatrick’s evaluation model of training programs (subjective evaluation). Analysis of objective, automated measures of the e-learning accounts for level 2. Assessments of this two-part course are favorable and indicate that it can improve residents’ skills. However, the effect of the training on objective clinical outcomes is currently unknown. Despite the objective improvement in knowledge, the subjective improvement in skills, and the reported intention of participants to apply what they have learned during the training in their clinical routine, it remains to be evaluated whether these skills will be transferred actively into clinical practice, where they can have an impact on patient outcome (level 3).18

With only a small number of participants so far, results need to be interpreted with caution. With greater numbers, the differences in ratings for the adaptive e-learning and the simulation training may vanish. The difference in rating may also disappear if the course is given at a more appropriate time during clinical training, such as before bedside training starts.

Conclusions

The development of a valuable blended training course consisting of adaptive e-learning and simulator training can be achieved relatively quickly and has a positive effect on skills. High-quality e-learning can be designed with a team of clinical experts and learning engineers. Simulator training is rated as highly valuable by surgical trainees. The impact on patient care needs to be investigated further.

Acknowledgments

We thank Catherine Golini Richards for proofreading and editing the manuscript.

Disclosures

The authors report no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper.

Author Contributions

Conception and design: Hickmann, Stienen, Ostendorp. Acquisition of data: Hickmann, Ferrari, Ostendorp. Analysis and interpretation of data: Hickmann, Ostendorp. Drafting the article: Hickmann. Critically revising the article: all authors. Reviewed submitted version of manuscript: all authors. Approved the final version of the manuscript on behalf of all authors: Hickmann. Statistical analysis: Hickmann. Administrative/technical/material support: Hickmann, Ferrari, Stienen, Ostendorp. Study supervision: Hickmann, Bozinov, Stienen, Ostendorp.

References

1. Cobb MI, Taekman JM, Zomorodi AR, Gonzalez LF, Turner DA. Simulation in neurosurgery—a brief review and commentary. World Neurosurg. 2016;89:583–586.

2. Pereira EA, Aziz TZ. Simulation in spinal surgery and the transition from novice to expert. World Neurosurg. 2015;84(6):1511–1512.

3. Birkmeyer JD, Finks JF, O’Reilly A, et al. Surgical skill and complication rates after bariatric surgery. N Engl J Med. 2013;369(15):1434–1442.

4. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.

5. Ericsson KA. Acquisition and maintenance of medical expertise: a perspective from the expert-performance approach with deliberate practice. Acad Med. 2015;90(11):1471–1486.

6. Ericsson KA, Harwell KW. Deliberate practice and proposed limits on the effects of practice on the acquisition of expert performance: why the original definition matters and recommendations for future research. Front Psychol. 2019;10:2396.

7. Patel EA, Aydin A, Cearns M, Dasgupta P, Ahmed K. A systematic review of simulation-based training in neurosurgery, part 1: cranial neurosurgery. World Neurosurg. 2020;133:e850–e873.

8. Patel EA, Aydin A, Cearns M, Dasgupta P, Ahmed K. A systematic review of simulation-based training in neurosurgery, part 2: spinal and pediatric surgery, neurointerventional radiology, and nontechnical skills. World Neurosurg. 2020;133:e874–e892.

9. Davids J, Manivannan S, Darzi A, Giannarou S, Ashrafian H, Marcus HJ. Simulation for skills training in neurosurgery: a systematic review, meta-analysis, and analysis of progressive scholarly acceptance. Neurosurg Rev. 2021;44(4):1853–1867.

10. Stienen MN, Schaller K, Cock H, Lisnic V, Regli L, Thomson S. eLearning resources to supplement postgraduate neurosurgery training. Acta Neurochir (Wien). 2017;159(2):325–337.

11. Dawe SR, Pena GN, Windsor JA, et al. Systematic review of skills transfer after surgical simulation-based training. Br J Surg. 2014;101(9):1063–1076.

12. McGaghie WC, Draycott TJ, Dunn WF, Lopez CM, Stefanidis D. Evaluating the impact of simulation on translational patient outcomes. Simul Healthc. 2011;6(suppl):S42–S47.

13. Marsh HW. SEEQ: a reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. Br J Educ Psychol. 1982;52(1):77–95.

14. Hatfield BD, Haufler AJ, Hung TM, Spalding TW. Electroencephalographic studies of skilled psychomotor performance. J Clin Neurophysiol. 2004;21(3):144–156.

15. Kim W, Chang Y, Kim J, et al. An fMRI study of differences in brain activity among elite, expert, and novice archers at the moment of optimal aiming. Cogn Behav Neurol. 2014;27(4):173–182.

16. Vygotsky LS, Cole M. Mind in Society: Development of Higher Psychological Processes. Harvard University Press; 1978.

17. El-Sabagh HA. Adaptive e-learning environment based on learning styles and its impact on development students’ engagement. Int J Educ Technol High Educ. 2021;18(1):53.

18. Bates R. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Eval Program Plann. 2004;27(3):341–347.