Application of an aviation model of incident reporting and investigation to the neurosurgical scenario: method and preliminary data

Object

Incident reporting systems are universally recognized as important tools for quality improvement in all complex adaptive systems, including the operating room. Nevertheless, introducing a safety culture among neurosurgeons is a slow process, and few studies are available in the literature regarding the implementation of an incident reporting system within a neurosurgical department. The authors describe the institution of an aviation model of incident reporting and investigation in neurosurgery, focusing on the method they have used and presenting some preliminary results.

Methods

In 2010, the Inpatient Safety On-Board project was developed through cooperation between a team of human factor and safety specialists with aviation backgrounds (DgSky team) and the general manager of the Fondazione Istituto Neurologico Carlo Besta. In 2011, after specific training in safety culture, the authors implemented an aviation-derived prototype of incident reporting within the Department of Neurosurgery. They then developed an experimental protocol to track, analyze, and categorize any near misses that happened in the operating room. This project officially started in January 2012, when a dedicated team of assessors was established. All members of the neurosurgical department were asked to report near misses on a voluntary, confidential, and protected form (Patient Incident Reporting System form, Besta Safety Management Programme). Reports were entered into an online database and analyzed by a dedicated team of assessors with the help of a facilitator, and an aviation-derived root cause analysis was performed.

Results

Since January 2012, 14 near misses have been analyzed and classified. The near-miss contributing factors were mainly related to human factors (9 of 14 cases), technology (1 of 14 cases), organizational factors (3 of 14 cases), and procedural factors (1 of 14 cases).

Conclusions

Implementing an incident reporting system is quite demanding; the process should involve all of the people who work within the environment under study. Persistence and strong commitment are required to enact the culture change essential in shifting from a paradigm of infallible operators to the philosophy of errare humanum est. For this paradigm shift to be successful, contributions from aviation and human factor experts are critical.

Abbreviations used in this paper: EU = European Union; ISOB = Inpatient Safety On-Board; PIRS = Patient Incident Reporting System.

Medical mistakes have traditionally been poorly accepted; when they are acknowledged, it is generally for punitive purposes. The same pattern held in the airline industry until the Dryden accident of 1989, the disaster that paved the way for a thorough review of both the educational and evaluative processes of aircraft pilots. This review led to a dramatic improvement in aviation safety, as experts understood for the first time the role that human factors play in determining the final outcome of errors.

In recent years, health care organizations have increasingly focused their attention on safety issues. Several national agencies have committed themselves to improving safety standards within their national health care systems (Italian Ministry of Health documentation: Sentinel Events Monitoring Protocol [July 2009]; Safety Manual in the Operation Room [July 2009]; Methods and Analysis for Managing Health Care Risks—Root Cause Analysis [September 2009]; Risk Management in Medicine—Errors Problem [March 2004]), and the patient safety approach shows strong similarities to risk management protocols in the aeronautic safety management system.16

Incident precursors may not be related solely to the technical or nontechnical skills of the operators; therefore, their analysis should include the complex framework in which all organizational or institutional components are directly or indirectly evaluated.23 In the incident investigation process, “latent failures” are normally referred to as “organizational layers”; it is therefore very important to determine the relative contribution of frontline operators' honest mistakes (active failures) as compared with organizational errors.

The experience gained in the aviation safety system has demonstrated that an accident, the equivalent of a medical “sentinel event” that causes serious patient harm or death, is usually preceded by many weak signals, for example, near misses or errors that might have led to an adverse event but were fortuitously intercepted or did not cause any harmful consequences. These sequences of events are not always completely clear and straightforward; more importantly, they are not always promptly detected and analyzed so that they can be prevented in the future.14 Incident reporting systems can thus be important tools for quality improvement in complex adaptive systems, including the operating room.21 We describe the application of an aviation model of incident reporting and investigation to the neurosurgical scenario, providing an outline of the method we used as well as some preliminary results.

Methods

The Start-Up

In 2010, the ISOB project was developed through cooperation between a team of human factor and safety specialists with aviation backgrounds (DgSky team) and the general manager of the Fondazione Istituto Neurologico Carlo Besta. In 2011, after specific training in safety culture, we implemented an aviation-derived prototype of incident reporting within the Department of Neurosurgery. We then developed an experimental protocol to track, analyze, and categorize any near misses that happened in the operating room. This project officially started in January 2012, when a dedicated team of assessors was established.

Education and Recruitment of Reporting Staff

All of the people working within the operating room of our neurosurgical department were taught key issues in safety culture and human factors through dedicated crew resource management classes led by facilitators with aviation backgrounds. The crew resource management classes emphasized the importance of communication, teamwork, situation awareness, stress management, and decision making in relation to the working environment. The impact of incident reporting systems in aviation was also discussed to highlight the role of human factors in all complex adaptive systems, including the operating room. Our aim in this project was to create an incident reporting database to monitor performance and support trend analysis.

To ensure reporting compliance and protect data and staff members, the incident reporting system was launched under a confidential, voluntary model, following “just culture” principles in accordance with our institution's safety policy guidelines. (“Just culture” is defined as “a culture in which frontline operators or others are not punished for actions, omissions, or decisions taken by them that are commensurate with their experience and training, but in which gross negligence, willful violations, and destructive acts are not tolerated.” International Civil Aviation Organisation, 2007.) The PIRS (Besta Safety Management Programme) was designed and repeatedly shown and explained to all staff members as an ad hoc, voluntary, confidential, protected, anonymous reporting form (Fig. 1). This form was made available online on our institute's Intranet. To avoid any lexical confusion or misunderstanding, we reinforced the definition and classification of near misses (the main object of our project) with all staff members and instructed and encouraged all employees—in particular, senior staff members—to report near misses and incidents.

Fig. 1.

The PIRS form, a specifically designed double-sided incident reporting form for neurosurgery near-miss events.

Data Collection

Completed PIRS forms were accessible only to the coordinator of the team of assessors (D.C.), who was the sole person to process and transfer the forms to an Intranet-based reporting system. Assessors, bound by a confidentiality policy that protected both staff members and the content of the PIRS forms, evaluated the data.

The PIRS Form

We designed the data collection form specifically for reporting incidents within a neurosurgical operating room (Fig. 1). The resulting PIRS form is a very simple double-sided sheet that combines previously defined international best practices in health care incident reporting with expert recommendations from our aviation partners. Questions (some in multiple-choice form) about specific risk factors were included: When and Where (location and phase of surgery), What (description of the context and the sequence of factors), How (how the incident was detected, and what remedies were put in place), Impact (self-perceived severity and self-estimated potential consequences/effects along with their estimated duration), Potential for recurrence (reasons for possible recurrence of the incident), and Suggestions (suggestions to improve safety measures and to prevent similar incidents from happening again). Special attention was paid to causal and contributory factors related to any human, technological, procedural, or organizational factors. The PIRS form was also designed to help the team of assessors by taking into account the reporters' feelings and suggestions.
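For illustration, the sketch below models the information the PIRS form captures as a small data structure. This is our own rendering, not the actual form schema; the field names and the FactorArea labels are assumptions drawn from the description above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class FactorArea(Enum):
    """The 4 causal/contributory areas tracked on the form."""
    HUMAN = "human"
    TECHNOLOGICAL = "technological"
    ORGANIZATIONAL = "organizational"
    PROCEDURAL = "procedural"

@dataclass
class PIRSReport:
    """One voluntary, confidential near-miss report (illustrative field names)."""
    when_and_where: str            # location and phase of surgery
    what: str                      # context and sequence of factors
    how: str                       # how the incident was detected, remedies applied
    impact: str                    # self-perceived severity and estimated effects
    potential_for_recurrence: str  # why the incident might happen again
    suggestions: str               # reporter's proposals to improve safety
    contributing_areas: List[FactorArea] = field(default_factory=list)
    reporter_risk_score: int = 0   # reporter's own risk perception (see Table 1)
```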

Understanding Errors, Data Analysis, and Assessment and Countermeasures

A dedicated team of assessors, led by the team coordinator and assisted by a facilitator, both with aviation backgrounds, met monthly to analyze any near-miss reports. Facilitation was tailored to reproduce a structured aviation-based root cause analysis brainstorming session. We decided to perform a retrospective analysis of every single event, starting from the detected incident effects and then going backward, looking for any causal and contributing factors. This reverse process continued until the most remote, deep factors underlying the incident were postulated. Data clustering followed, and corrective action recommendations and proposals were made. The assessment team then used these interpretations to create a cause-effect flowchart.
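As a rough sketch, this reverse process can be thought of as building a chain from the observed effect back toward ever more remote factors. The representation below is our own simplification of the facilitated brainstorming, with a hypothetical chain loosely based on case 2 of Table 1.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CauseEffectChain:
    """Reverse root cause analysis: start from the observed effect and append
    each answer to 'why did this happen?', from proximate to remote."""
    effect: str
    causes: List[str] = field(default_factory=list)

    def ask_why(self, factor: str) -> None:
        self.causes.append(factor)

    def root(self) -> str:
        """The most remote factor postulated so far."""
        return self.causes[-1] if self.causes else self.effect

chain = CauseEffectChain(effect="prolonged hypotension during induction of anesthesia")
chain.ask_why("remifentanil pump programmed with the propofol dosage")
chain.ask_why("no cross-check of pump settings between team members")
chain.ask_why("time pressure and a shortcut-prone setup routine")
print(chain.root())  # -> "time pressure and a shortcut-prone setup routine"
```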

Reported near misses were classified and rated by event severity and duration, and the severity score was double-checked by the team of assessors. An additional score subdivided events into 4 areas (human, technological, organizational, and procedural factors), which further defined the risk level. Those areas were further refined on subsequent versions of the PIRS form to help reporters and assessors better identify any weakness in the system. For instance, for any reported event, we investigated how such incidents were related to human, organizational, or behavioral problems (for example, lack of communication, teamwork, decision making, or leadership; or influence of time pressure, workload, misinterpretation, high/low expectation, multitasking activities, and so forth); technological issues (for example, fit-to-purpose technologies, training, performance standardization on human-machine interface issues, warning/alarm management, spatial confusion, equipment setup and maintenance, and so forth); and the consistency of procedures and protocols (for example, checklist effectiveness, drug dosing, task allocations, and so forth). Errors and trigger events were rated and grouped, from the most frequent or severe errors down to the least severe. The adopted countermeasures were those suggested on the reporting forms, solutions found in the literature, or consensus strategies that emerged from the sessions.

Near-Miss Risk Stratification

Near-miss risk stratification resulted from the PIRS form analysis process described above, carried out during the periodic meetings of the team of assessors. Risk was stratified based on score evaluation and through 2-phase meetings led by the team coordinator (Fig. 2).

Fig. 2.

Example of a near-miss final report after evaluation by the team of assessors. A: The incident reporting dashboard summarizes the analysis performed by the team of assessors for a case of prolonged arterial hypotension due to a mistake in the drug pump setting. As far as risk stratification is concerned, this event fell into the middle risk zone, with a team score of 8 (event effect + human factor + technology + organizational factor + procedure = 2 + 2 + 1 + 1 + 2). B: Preventive actions and monitoring activities are also shown.

Phase 1

At the end of the incident analysis, the team of assessors was asked to score the consequences of the event in terms of their severity (none [0], low [1], medium [2], significant [3], high [4]) and effect duration (none [0], quick recovery [1], recovery only at the end of the operation [2], recovery within postoperative Day 1 [3], late or no recovery [4]). The total score (severity + duration) ranged from 0 to 8.

Phase 2

The team of assessors was also asked to assess and score the impact of the 4 systemic components (human factors, technology, organizational factors, and procedural factors). The maximum score that could be attributed to the systemic components was 8, and the team of assessors was allowed to score each single factor, assessing its relative importance in the context of that specific event. The sum of the severity/duration score and the scores for the impact of each systemic component was taken as an indication of the event-related risk and was arbitrarily stratified into 5 risk areas. An example of a near-miss final report is featured in Fig. 2.
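The arithmetic of the 2 phases can be condensed into a short sketch. Note that the cutoffs in risk_band below are assumptions on our part: the paper stratifies the combined 0–16 score into 5 areas without publishing thresholds, so these values were chosen only to reproduce the low/medium/significant labels that appear in Table 1 and Fig. 2.

```python
def phase1_score(severity: int, duration: int) -> int:
    """Phase 1: consequence severity (none 0 to high 4) plus effect duration
    (none 0 to late/no recovery 4); the total ranges from 0 to 8."""
    assert 0 <= severity <= 4 and 0 <= duration <= 4
    return severity + duration

def phase2_score(human: int, technology: int, organization: int, procedure: int) -> int:
    """Phase 2: relative impact of the 4 systemic components, capped at 8 in total."""
    total = human + technology + organization + procedure
    assert 0 <= total <= 8
    return total

def risk_band(total: int) -> str:
    """Hypothetical 5-level stratification of the combined 0-16 score."""
    if total <= 3:
        return "minimal"
    if total <= 6:
        return "low"
    if total <= 10:
        return "medium"
    if total <= 13:
        return "significant"
    return "high"

# Worked example matching Fig. 2: event effect 2, then human 2, technology 1,
# organization 1, procedure 2 -> combined score 8, middle (medium) risk zone.
total = 2 + phase2_score(human=2, technology=1, organization=1, procedure=2)
print(total, risk_band(total))  # 8 medium
```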

This process allowed us to analyze the real relevance of the incident while taking into account all of the components that were possibly involved. Five risk subcategories were introduced to generate a system that might correlate the level of risk with the need for countermeasures. In addition, wide and flexible categories allowed future fine-tuning of trend analysis within the same subcategories—for example, tracking how far the classified risk drifts toward the major-risk area—and therefore identifying an alert zone.

We decided to insert a specific field into the PIRS form in which reporters could describe their own risk perception, allowing us to draw parallels with the results from the assessment team.

Results

The analysis results for the 14 cases we collected between January and May 2012 are presented in Table 1. Four areas of interest concerning both the system and its components can be identified: human factors, technology involved, organizational factors, and procedures. These areas were analyzed and visualized as a flowchart to help both the reporters and the team of assessors identify any weaknesses or issues within any reported event. The following contributing factors were identified across the 14 near misses we analyzed: 1) human factors: 9 of 14 cases—in particular, omission, lack of communication, late detection, lack of teamwork, problem setting, problem solving, situation awareness, multitasking, and time pressure; 2) technology: 1 of 14 cases—in particular, problems with equipment allocation, instrument setup, interface similarities, and data interpretation; 3) organizational factors: 3 of 14 cases—in particular, task allocation/overlap, working conditions, and planning; and 4) procedural factors: 1 of 14 cases—in particular, checklist effectiveness.

TABLE 1:

Results of near-miss events collected between January and May 2012*

| Case No. | Description | Contributing Factor | Type of Factor | Reporter's Risk Score | Team of Assessors' Risk Score | Team of Assessors' Risk Classification | Consequences | Action Plan |
|---|---|---|---|---|---|---|---|---|
| 1 | checklist omission to speed up procedures | team working, time pressure, shortcut | human | 3 | 5 | low | severe hypertension during whole procedure requiring treatment | stronger impetus in strengthening safety culture: checklist |
| 2 | remifentanil pump set w/ propofol dosage | lack of communication & co-ordination, time pressure, shortcut | human | 4 | 7 | medium | prolonged hypotension during induction of anesthesia | development of procedure to control drugs into infusion pumps |
| 3 | Mayfield headrest worn out & malfunctioned | lack of communication & co-ordination, shortcut | human | 8 | 7 | medium | none | development of strict maintenance procedure |
| 4 | misidentification of surgical site level | workload, time pressure, difficult interpretation of fluoroscopy images | human | 5 | 11 | significant | wrong lumbar disc removal w/ epidural hemorrhage & unchanged pain at emergence; repeat operation | acquisition of devices for intraop localization & navigation |
| 5 | remifentanil pump set w/ propofol dosage | workload, time pressure, team working, shortcut | human | 3 | 8 | medium | incomprehensible resistance to anesthesia drugs | development of procedure to control drugs into infusion pumps |
| 6 | wrong-site surgery | lack of communication, workload, time pressure, information collection | human | 4 | 11 | significant | impossible histological diagnosis | scheduling of muscle biopsy on operatory list, including indication of surgical site |
| 7 | unavailability of catheters for ICP monitoring during surgery | organization, information collection, team working | organizational | 8 | 8 | medium | lack of ICP data to guide postop treatment | differentiation of lots & suppliers |
| 8 | case cancelled because of latex allergy | team working, lack of communication, workload, time pressure, information collection | organizational | 3 | 8 | medium | operation postponed | preop surgical briefing |
| 9 | surgery cancelled after induction of anesthesia because preop MRI not controlled | team working, lack of communication, workload, time pressure, information collection, shortcut | human | 4 | 11 | significant | ineffective induction of anesthesia (intubation, CVC cannulation, bladder catheter) | preop surgical briefing & checklist made only by first operator |
| 10 | wrong midazolam dose administration | time pressure, workload, organizational factor, procedures | human | 7 | 12 | significant | lack of patient cooperation during checklist | safer storage of midazolam (5 mg/ml) along w/ other dangerous drugs (KCl) |
| 11 | head movement during brain surgery due to kinking of connector btwn infusion pumps & intravenous cannula | technology, workload, lack of attention | technology | 4 | 11 | significant | intraop movements & intraop awareness | suggest use of anti-kinking connector or wrist support & taping |
| 12 | wrong patient identification on pathological sample | role definition, technologies, lack of attention, procedural factors | procedure | 8 | 10 | medium | none | development of automatic filtering in case of incongruous request |
| 13 | surgery w/o written patient consent | workload, time pressure, procedural factors, lack of attention, role definition | human | 0 | 6 | low | none | stronger impetus in strengthening safety culture: checklist |
| 14 | surgery in eloquent area w/o fMRI & DTI availability in navigation system | organizational factors, technologies, procedures | organizational | 5 | 10 | medium | seizure during prolonged cortical stimulation | surgical briefing the day before surgery |

* CVC = central venous catheter; DTI = diffusion tensor imaging; fMRI = functional MRI; ICP = intracranial pressure.

As far as risk stratification was concerned, 2 near-miss events were classified by the team of assessors in the low-risk area, 7 in the middle-risk area, and 5 in the significant-risk area (Table 1).

Discussion

Incident reporting systems have significantly contributed to aviation safety and have also been found to positively influence medical care–related morbidity and mortality. Reporting systems have been implemented in several industrialized nations, such as the US,12,25,28 Germany,8,15,18,20 and Switzerland, which also introduced a national critical-incident reporting system.4,11,13 Safety culture first developed in anesthesiology5 and intensive care medicine,2,10,15,19,24,25,28 then in other subspecialties such as hospital pharmacy,11,22 internal medicine,3,18 psychiatry,27 obstetrics and gynecology,13 pediatrics,1,12 and ambulatory care.4 Interestingly, surgical subspecialties remain somewhat underrepresented in this regard.8,20 A MEDLINE search with the terms “incident reporting AND neurosurgery” retrieved only 1 study on this topic.17

There are difficulties in defining, identifying, and analyzing errors and responsibilities within neurosurgery; inside the medical community, we usually focus rigidly on the responsibility of the “end-of-the-line” operator rather than on the complex network of systemic factors that surround that operator and may greatly obstruct safety. The operating room is a complex adaptive system in which a mix of professionals (with multiple interconnected skills and abilities) must cooperate while performing demanding technical tasks using complex technologies and techniques.6 The Reason Swiss-Cheese model (Fig. 3) and its evolution into a health care error proliferation model effectively illustrate how the complexity of such systems, when combined with human factors, can synergistically promote errors.10,22

Fig. 3.

Swiss-Cheese model of accident causation. Despite the presence of multiple layers of defenses, barriers, and safeguards, an error can still occur if the “holes” are all aligned.

Although this approach is widely accepted in aviation and some medical specialties, in surgery the question we usually ask is “Who is guilty?” rather than “Where has the system failed?” This attitude makes it difficult to foster a safety culture aimed at accident or incident prevention. Continuing to focus our attention on the end-of-the-line operator rather than trying to correct both systemic and human factors contributes to a self-perpetuating culture that has the paradoxical and unwanted effect of inviting neurosurgeons to repeat their mistakes. This was shown to be true within the aviation context, in which a “factor systems model” instead proved highly effective in understanding and correcting the factors that contributed to errors.14

At our institution, neurosurgeons and top managers embarked on the ISOB project, starting with error definition and a description of error magnitude, for example, possible error typologies (honest mistakes vs violations or gross negligence) and the border between lapses, mistakes, and derogation. Before starting to implement a reporting system, and with an institutional effort, we reinforced some concepts from the International Civil Aviation Organisation Annex 13—in particular, that “the sole objective of the investigation of an accident or incident shall be the prevention of accidents and incidents” and that “it is not the purpose of this activity to apportion blame or liability.” Liability in reporting is a very sensitive matter. If this problem is not handled well, the entire voluntary incident reporting policy is jeopardized, decreasing awareness as well as motivation.

To a certain extent, near-miss or close-call events fall within the so-called potential effects area, without any consequence to patients. They should therefore be considered outside of the sentinel events area, in which liability and possible legal prosecution become more relevant. Under international aviation legislation (EU Regulation No. 996/2010, which addresses the investigation and prevention of accidents and incidents in civil aviation), there is a double-track reporting system for near-miss or close-call events (whose reporting is anonymous, voluntary, and confidential) and sentinel events (whose reporting is mandatory). Moreover, in both reporting systems, negligence and voluntary violations of standardized protocols are not tolerated; however, this is a delicate balance, because stressing liability in the near-miss voluntary reporting process might significantly reduce reported events, thereby hampering the disclosure of sequences of events that might prevent more dangerous ones. Spontaneous reporting of any “unintentional error” should be encouraged.

Cooperating with aviation and flight safety experts was extremely helpful in implementing our incident reporting system. In this regard, the EU issued a specific directive in 2003 (Directive 2003/42/EC, on occurrence reporting in civil aviation) in which it encouraged all member states to avoid any frontline prosecution if the people involved in an incident were willing to reveal any useful information that might prevent similar episodes. This directive represented a milestone for civil aviation, increasing motivation, awareness, and consensus in building a robust and mature incident reporting system.

Despite the investments made to foster a safety culture within our institution, the rate of reported incidents in our preliminary experience was relatively low (14 events in 5 months, or roughly 3 per month). In the literature, this rate ranges between 7 and 12 reports per month.2,15,19,20 In a study by Kantelhardt et al.,17 reported incidents averaged 18 per month. We wonder whether this discrepancy could be attributed to the fact that our form had to be downloaded from the Intranet, whereas the form in the Kantelhardt et al. study could also be filled out by hand in the operating room. Our data showed that in 64% of the cases (9 of 14), human factors played a significant role, confirming the trend from the literature, in which human factors were involved in 51%–79% of reported incidents.2,7,9,26 In the series from Kantelhardt et al., human factors were involved in an even higher proportion of cases (86%), confirming that most incidents can probably be prevented with a proactive attitude from the whole institution.

Organizational factors were also involved in a significant number of cases (21.4%, or 3 of 14). This result further reinforces the potentially important role of corrective actions and countermeasures developed through error analysis. Some comments should be made regarding the Heinrich pyramid model and the parallels between safety in aviation and safety in the neurosurgical operating room. Not all complex organizations are similar from a risk management standpoint. For instance, the stability of the system at the beginning of an aircraft flight differs from that at the beginning of a neurosurgical operation: when a patient enters the operating room, he or she already presents potentially critical conditions because of the disease itself. Both the Reason and the Heinrich models assume that the initial working condition in a complex organization is quite stable and that criticalities emerge only at a later stage. It follows that the risk of accidents in a surgical operating room must be handled at the level of incident precursors, without waiting for the “weak signals” to increase in frequency before anticipating the forthcoming real accident.

Neurosurgery might be an exception, in which variability is much higher from the very beginning of an operation. This difference might prompt us to adopt a less conservative approach alongside a traditional incident reporting system, one in which risks are investigated not only for their potential harm to patients. In such a context we also need to define our investigation framework in case we want to quickly investigate a real accident or sentinel event (that is, an aviation-style morbidity and mortality analysis). First of all, it is important to set up an investigation team composed of the most experienced surgeons and representatives from all staffing constituent groups to make error analysis more accurate and faithful. It would also be important for the team to be independent, directly reporting its conclusions and suggestions for improvement to the risk manager. All team members should act within a well-defined confidentiality policy that should be double-checked and supported by a lawyer or legal expert. We are aware of the potential weakness of the whole aviation-style incident reporting apparatus that we implemented. The system surely requires validation with a higher number of analyzed cases, possibly in a multicenter or multiregion study.

Conclusions

Implementing an incident reporting system within a neurosurgery department is a complex task that should involve the entire institution, from top management to all employees. Introducing a paradigm shift away from blaming frontline operators and toward an errare-humanum-est way of thinking takes time and requires a strong commitment at every institutional level. The experience and contributions of aviation safety management can be highly instrumental in fostering such a transition. Analyzing how errors happen within a specific context, and what range of factors were involved, helps in developing specific counterstrategies against these honest mistakes (intentional ones excepted). This analysis also reinforces the barriers that will prevent most mistakes from happening again within the same system.

Disclosure

The authors report no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper. Capt. Alfonso Piro is a commercial civil aviation pilot instructor and human factor consultant member of the ISOB DgSky Team. Capt. Maurizio Scholtze is an aviation safety management consultant member of the ISOB DgSky Team.

Author contributions to the study and manuscript preparation include the following. Conception and design: Ferroli. Acquisition of data: Castiglione, Broggi. Analysis and interpretation of data: Caldiroli, Perin. Drafting the article: Ferroli, Acerbi, Scholtze, Piro. Critically revising the article: Ferroli, Schiariti. Reviewed submitted version of manuscript: all authors. Study supervision: DiMeco.

Acknowledgment

The authors thank Mrs. Antonietta Dessi for invaluable help in data collection.

Article Information

Address correspondence to: Paolo Ferroli, M.D., Department of Neurosurgery, Fondazione Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milano, Italy. email: pferroli@istituto-besta.it.

Please include this information when citing this paper: DOI: 10.3171/2012.9.FOCUS12252.

© AANS, except where prohibited by US copyright law.

References

  • 1. Ahluwalia J, Marriott L: Critical incident reporting systems. Semin Fetal Neonatal Med 10:31–37, 2005
  • 2. Bartolomé Ruibal A, Díaz-Canabate JI, Santa-Ursula Tolosa JA, Marzal Baró JM, González Arévalo A, García Valle del Manzano S: [Application of a critical incident reporting and analysis system in an anesthesiology department.] Rev Esp Anestesiol Reanim 53:471–478, 2006 (Span)
  • 3. Bowman L, Carlstedt BC, Black CD: Incidence of adverse drug reactions in adult medical inpatients. Can J Hosp Pharm 47:209–216, 1994
  • 4. Brun A: [Preliminary results of an anonymous internet-based reporting system for critical incidents in ambulatory primary care.] Ther Umsch 62:175–178, 2005 (Ger)
  • 5. Choy YC: Critical incident monitoring in anaesthesia. Med J Malaysia 61:577–585, 2006
  • 6. Cilliers P: Complexity and Postmodernism: Understanding Complex Systems. New York: Routledge, 1998
  • 7. Currie M, Mackay P, Morgan C, Runciman WB, Russell WJ, Sellen A: The Australian Incident Monitoring Study. The “wrong drug” problem in anaesthesia: an analysis of 2000 incident reports. Anaesth Intensive Care 21:596–601, 1993
  • 8. Domínguez Fernández E, Kolios G, Schlosser K, Wissner W, Rothmund M: [Introduction of a critical incident reporting system in a surgical university clinic. What can be achieved in a short term?] Dtsch Med Wochenschr 133:1229–1234, 2008 (Ger)
  • 9. Fox MA, Webb RK, Singleton R, Ludbrook G, Runciman WB: The Australian Incident Monitoring Study. Problems with regional anaesthesia: an analysis of 2000 incident reports. Anaesth Intensive Care 21:646–649, 1993
  • 10. Freestone L, Bolsin SN, Colson M, Patrick A, Creati B: Voluntary incident reporting by anaesthetic trainees in an Australian hospital. Int J Qual Health Care 18:452–457, 2006
  • 11. Frey B, Buettiker V, Hug MI, Waldvogel K, Gessler P, Ghelfi D: Does critical incident reporting contribute to medication error prevention? Eur J Pediatr 161:594–599, 2002
  • 12. Grant MJ, Donaldson AE, Larsen GY: The safety culture in a children's hospital. J Nurs Care Qual 21:223–229, 2006
  • 13. Haller U, Welti S, Haenggi D, Fink D: [From the concept of guilt to the value-free notification of errors in medicine. Risks, errors and patient safety.] Gynakol Geburtshilfliche Rundsch 45:147–160, 2005 (Ger)
  • 14. Heinrich HW: Industrial Accident Prevention: A Scientific Approach. New York: McGraw-Hill Book Company, 1931
  • 15. Hübler M, Möllemann A, Eberlein-Gonska M, Regner M, Koch T: [Anonymous critical incident reporting system in anaesthesiology. Results after 18 months.] Anaesthesist 55:133–141, 2006 (Ger)
  • 16. International Civil Aviation Organization: Safety Management Manual (SMM): Doc 9859 AN/474, ed 2. ICAO, 2009 (http://www.icao.int/safety/ism/Guidance%20Materials/DOC_9859_FULL_EN.pdf) [Accessed September 27, 2012]
  • 17. Kantelhardt P, Müller M, Giese A, Rohde V, Kantelhardt SR: Implementation of a critical incident reporting system in a neurosurgical department. Cent Eur Neurosurg 72:15–21, 2011
  • 18. Köbberling J: [The critical incident reporting system (CIRS) as a measure to improve quality in medicine.] Med Klin (Munich) 100:143–148, 2005 (Ger)
  • 19. Madzimbamuto FD, Chiware R: A critical incident reporting system in anaesthesia. Cent Afr J Med 47:243–247, 2001
  • 20. Missbach-Kroll A, Nussbaumer P, Kuenz M, Sommer C, Furrer M: [First experience with a critical incident reporting system in surgery.] Chirurg 76:868–875, 2005 (Ger)
  • 21. Molloy GJ, O'Boyle CA: The SHEL model: a useful tool for analyzing and teaching the contribution of Human Factors to medical error. Acad Med 80:152–155, 2005
  • 22. Reason JT: Human error: models and management. BMJ 320:768–770, 2000
  • 23. Reason JT: Managing the Risks of Organizational Accidents. Aldershot, Hampshire, UK: Ashgate, 1997
  • 24. Reason JT, Carthey J, de Leval MR: Diagnosing “vulnerable system syndrome”: an essential prerequisite to effective risk management. Qual Health Care 10 (Suppl 2):ii21–ii25, 2001
  • 25. Vogus TJ, Sutcliffe KM: The impact of safety organizing, trusted leadership, and care pathways on reported medication errors in hospital nursing units. Med Care 45:997–1002, 2007
  • 26. Webb RK, Currie M, Morgan CA, Williamson JA, Mackay P, Russell WJ: The Australian Incident Monitoring Study: an analysis of 2000 incident reports. Anaesth Intensive Care 21:520–528, 1993
  • 27. Wright M, Parker G: Incident monitoring in psychiatry. J Qual Clin Pract 18:249–261, 1998
  • 28. Wu AW, Pronovost P, Morlock L: ICU incident reporting systems. J Crit Care 17:86–94, 2002
