Building and implementing an institutional registry for a data-driven national neurosurgical practice: experience from a multisite medical center

  • 1 Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota;
  • 2 Department of Neurologic Surgery, Mayo Clinic, Jacksonville, Florida; and
  • 3 Department of Neurologic Surgery, Mayo Clinic, Phoenix, Arizona

In an era when healthcare “value” is a much-emphasized concept, measuring and reporting the quality and costs of neurosurgical care remains a challenge for large multisite health systems. Ensuring cohesion in outcomes across multiple sites is important to the development of a holistic competitive marketing strategy that seeks to promote “brand” performance characterized by a superior quality of patient care. This requires mechanisms for data collection and the development of a single, uniform, site-wide outcomes measurement system. Operationalizing a true multidisciplinary effort in this space requires the intersection of a vast array of information technology and administrative resources with the neurosurgeons who provide subject-matter expertise relevant to patient care. To measure neurosurgical quality and safety and to improve payor contract negotiations, a practice analytics dashboard was created to allow summary visualization of operational indicators (such as case volumes, quality outcomes, and relative value units) and financial indicators (such as total hospital costs and charges), providing a comprehensive overview of the “value” of surgical care. The current version of the dashboard summarizes these metrics by site, surgeon, and procedure for nearly 30,000 neurosurgical procedures that have been logged into the Mayo Clinic Enterprise Neurosurgery Registry since the transition to the Epic electronic health record (EHR) system. In this article, the authors review their experience in launching this EHR-linked, data-driven neurosurgical practice initiative across a large, national, multisite academic medical center.

ABBREVIATIONS

CAT = computerized adaptive testing; EHR = electronic health record; IT = information technology; PROM = patient-reported outcome measure; RVU = relative value unit.

The current medical reform environment has signified a new era for healthcare delivery, physician accountability, and care optimization. The enactment of the Patient Protection and Affordable Care Act (PPACA) in 2010 established the foundation for the national standardization of healthcare delivery, ultimately leading to wide-ranging legislative oversight and determination of healthcare value.1 Therefore, real-world outcomes data are now necessary to establish the value of healthcare. Value in healthcare is often defined as the quality of healthcare divided by cost.

In this article, we sought to explore how an electronic health record (EHR)-linked patient registry may help standardize outcomes for a multisite neurosurgical practice and how it may be linked to internal cost data to provide a comprehensive overview of this “value” equation. Large healthcare systems need to move in the direction of value-based medicine, which requires healthcare administrators and providers to demonstrate the quality and efficiency of their neurosurgical practices. Herein, we describe the Mayo Clinic experience of combining neurosurgical practice data across Minnesota, Florida, Arizona, and Wisconsin to define value, contain costs, and report outcomes, which may be further used to leverage quality contracting with private payors.

Challenges With Multisite Health Systems

Efficient and effective reporting of surgical quality is complicated for large, decentralized, geographically dispersed, and integrated healthcare delivery systems. Different hospitals within the same system might vary in their costs, productivity, and performance on patient satisfaction and quality metrics.

As a result, healthcare managers and providers have worked to develop tools that demonstrate the quality and efficiency of their surgical practices and ensure greater cohesion between groups. Leaders of integrated health systems need a methodology and system that align organizational strategies with performance measurement and management.2 For certain domains, one site might specialize to a greater extent than another, so it is reasonable to expect different levels of performance. Individual provider productivity, work schedules, and compensation might also vary between sites because of unique local factors; however, promoting hospital brand performance to private payors might require standardized quality reporting and reasonably similar risk-adjusted patient outcomes at each site. This is especially important for metrics that are directly tied to reimbursement for services provided.

The Era of Big Data in Healthcare

The term “big data” was originally coined by NASA scientists in 1997 while describing the difficulty of visualizing data sets too large to fit in a computer’s main memory, which limits analysis of the data set as a whole.3 Although definitions of big data abound, it is presently regarded as a data set that is too large to easily manipulate and manage, encompassing the activities of collecting, storing, analyzing, and repurposing large volumes of data. At its most fundamental, the concept of big data is relative to the resources available at a given point in time. Given the ongoing national debate centered on healthcare reform, as well as the rapid growth in the volume and cost of surgical procedures for a multitude of conditions, prospective clinical registries have emerged as promising sources of diverse data sets.

Registries represent sustainable solutions for characterizing real-world surgical care while identifying large-scale improvement opportunities. Secondary analyses of large data sets provide a mechanism for researchers to address high-impact questions that would otherwise be prohibitively expensive and labor intensive to study. Furthermore, databases and registries that are based on real-world clinical practices can provide insights into the actual quality and cost of the patient care delivered, which are the two important components of the value equation. Key performance indicators of healthcare quality include outcomes such as readmissions and mortality, which have been routinely used to establish reimbursement differences between hospitals by the Centers for Medicare & Medicaid Services. Moreover, these metrics are now required to be publicly reported by hospitals to increase transparency, assist patients with assessing the quality of their care, and allow hospitals to engage in quality improvement efforts. For surgical care, other metrics, such as returns to the operating room, have been put forth as further indicators of surgical quality.4 In addition, patient-reported outcome measures (PROMs) are increasingly recognized as important quality metrics that might allow payors to evaluate novel treatments, determine treatment effectiveness, and evaluate provider performance.5 As a critical piece of the value equation, public reporting of healthcare costs has become increasingly prevalent to allow consumers to make prudent and cost-effective healthcare decisions. Hence, registry collection efforts strive to create a multifaceted outcomes evaluation system that is applicable to a wide spectrum of disorders and that incorporates patient-reported outcomes and cost data to enhance the overall value of healthcare.

The Mayo Clinic Strategy

The genesis of the Mayo Clinic is the surgical practice started by brothers William and Charles Mayo at St. Mary’s Hospital in Rochester, Minnesota, in 1889, the earliest example of a group surgical practice in the United States. The Mayo brothers soon recruited Dr. Henry Plummer, who established systems and principles of patient care that would define the Mayo Clinic as it is known today. One of his most important contributions was the concept of a single unified patient medical record, which was originally no more than a paper dossier. However, having reliable and accessible medical records was a radical shift, as it allowed physicians to study their cases and recognize important clinical phenomena that would otherwise have gone unnoticed. This concept was one of the first applications of clinical data mining and marked a natural transition toward the current era of electronic medical records and big data.

Today, the Mayo Clinic is one of the world’s largest multispecialty institutions of medical and surgical practices with “destination” medical sites in Rochester, Minnesota; Phoenix, Arizona; and Jacksonville, Florida, as well as a primary care health system spread across parts of Minnesota, Wisconsin, and Iowa. Each of these sites has distinct medical and surgical practices that bear the same flagship mark of the Mayo Clinic and are built on a common culture of quality, safety, and reliability in patient care. We will, henceforth, discuss the strategy of the Department of Neurologic Surgery within this institution to meet the challenge of creating an integrated performance measurement and management system aimed at benchmarking and achieving surgical quality goals at each site.

Data Collection

In May 2018, the Mayo Clinic commenced a partnership with Epic Systems Corp. in a $1.2 billion deal to replace its existing in-house electronic health record (EHR) system. Given the Epic EHR registry and dashboard capabilities, which were considered a major improvement over the previous system, the development of multiple clinical registries by various departments was part of this effort. The Department of Neurologic Surgery highlighted its need for an EHR-linked patient registry to capture quality outcomes such as postoperative readmissions, complications, returns to the operating room, mortality, length of hospital stay, and discharge disposition (Table 1) for every patient undergoing a neurosurgical procedure at any of the 6 Mayo Clinic sites with active neurosurgery services (Rochester, Minnesota; Jacksonville, Florida; Phoenix, Arizona; and 3 Mayo Clinic Health System sites in La Crosse and Eau Claire, Wisconsin, and Mankato, Minnesota). A system-wide national data collection mechanism was instituted to input relevant clinical variables into a patient registry. Extraction of most data variables was automated by linking the patient registry directly to the EHR, with the exception of diagnoses at readmission, which were manually entered by a group of nurse data abstractors. These measures were chosen following a literature review and internal discussions about their relevance to patient care and payors. In addition to these standardized quality outcomes, validated PROMs were included. By linking directly to the EHR, a key component of the registry development process was its envisioned capability to almost fully automate data collection and minimize the need for manual patient chart review. The overarching goals of this effort were to support internal quality improvement and standardization of patient outcomes across a multisite surgical department, to generate data for marketing purposes and negotiation of payor contracts, and to foster clinical outcomes research projects.

TABLE 1.

Key metrics used to evaluate surgical performance in the Mayo Clinic Enterprise Neurosurgery Registry

Quality outcome
 Case vol
 30-day readmission
 30-day reop (only related reops [e.g., wound revisions & hardware removal] are captured)
 30-day mortality
 30-day complications*
 Discharge disposition
 Length of stay
Cost-related metrics
 RVU
 Average total cost of surgical admission

All data are organized into a dashboard (see Fig. 3) that allows visualization by site, surgeon, and type of procedure.

* Complications captured include CSF leaks, postoperative hemorrhage, acute renal failure, blood transfusion, deep venous thrombosis, pulmonary embolism, myocardial infarction, pneumonia, urinary tract infection, sepsis, seizure, stroke, wound hematoma, and wound infections.
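
The outcome fields in Table 1 map naturally onto a simple per-case record structure. The following is a minimal sketch in Python of how such a registry record might be represented for downstream analysis; the field names and types are illustrative assumptions and do not reproduce the actual Epic registry build.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RegistryCase:
    """One neurosurgical case as it might be represented in an EHR-linked registry.

    Field names and types are illustrative only; the production registry is
    defined within the Epic EHR and is not reproduced here.
    """
    patient_id: str                                # internal patient key
    site: str                                      # e.g., "Rochester", "Jacksonville", "Phoenix", "MCHS"
    surgeon_id: str
    procedure_category: str                        # e.g., "Supratentorial tumor"
    surgery_date: date
    length_of_stay_days: Optional[int] = None
    discharge_disposition: Optional[str] = None    # e.g., "home", "skilled nursing facility"
    readmission_30d: bool = False
    readmission_diagnosis: Optional[str] = None    # manually abstracted, per the text
    reoperation_30d: bool = False
    mortality_30d: bool = False
    complications_30d: list = field(default_factory=list)   # e.g., ["urinary tract infection"]
```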

To address the challenges of operationalizing such a patient registry to meet the above-mentioned goals, while simultaneously ensuring data relevance and fidelity, the initiative required the assembly of a multidisciplinary team of individuals that could provide 1) clinical expertise, 2) organizational and administrative support, and 3) EHR support. A list of variables broadly covering demographics as well as surgical codes and outcomes to be recorded for each patient was proposed and thoroughly discussed by team members to determine their appropriateness and the feasibility of collection. A project coordinator and systems engineer assisted with tracking team progress and provided administrative support, serving as an important link between the clinical and EHR teams. The collected data were reviewed periodically by an attending neurosurgeon and a postdoctoral research fellow with a medical background to identify any gaps in clinical relevance or fidelity that might be indicators of deficits in the collection mechanism.

Capturing PROMs

The computerized adaptive testing (CAT) versions of the following 8 domains of the Patient-Reported Outcomes Measurement Information System (PROMIS) were selected to be captured: physical function, pain interference, depression, anxiety, fatigue, upper-extremity function, social roles, and sleep disturbance. CAT provides an advantage over traditional methods of survey capture because advances in item response theory allow selection of only the most informative items from a question bank, decreasing survey burden. These individual CAT domain questionnaires were built and integrated with Epic to allow scoring data to appear in the EHR, available immediately for the provider’s perusal during the patient’s outpatient visit. This EHR integration allows the provider to measure and track patient quality of life over time in relation to surgery (Fig. 1).

FIG. 1.

PROMIS-CAT collection integrated with the electronic medical record. An example of a patient’s chart demonstrating the domain scores over time.
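
To make the item-selection idea concrete, the sketch below implements the core step of a computerized adaptive test under a two-parameter logistic item response theory model: pick the unadministered item that carries the most Fisher information at the respondent’s current trait estimate. The item bank, parameters, and function names are illustrative assumptions; the production PROMIS-CAT engine is considerably more sophisticated.

```python
import math

def endorse_probability(theta, a, b):
    """Probability of endorsing an item under a 2-parameter logistic IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information an item contributes at the current trait estimate."""
    p = endorse_probability(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_estimate, item_bank, asked):
    """Pick the unasked item with maximum information at the current estimate.

    item_bank entries look like {"id": "q1", "a": 1.4, "b": -0.3}; these
    discrimination/difficulty values are illustrative, not real PROMIS items.
    """
    candidates = [item for item in item_bank if item["id"] not in asked]
    return max(candidates, key=lambda item: item_information(theta_estimate, item["a"], item["b"]))

# Example: with a provisional estimate near 0.5, the most informative item is selected.
bank = [{"id": "q1", "a": 1.2, "b": 0.0},
        {"id": "q2", "a": 1.8, "b": 0.6},
        {"id": "q3", "a": 0.9, "b": -1.0}]
print(next_item(0.5, bank, asked=set())["id"])   # -> "q2"
```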

Collection of PROMs required special attention because, although data could easily be extracted into the registry once a patient survey had been captured in the EHR, delivering surveys to patients while ensuring a maximum response rate was a challenge. This required separate teams at every site to assist with patient enrollment into an online health portal, which would be used to deliver surveys, and to make phone calls encouraging patients to complete their surveys on time. In each case, patients were encouraged to complete their surveys via their online health portal prior to arriving at the clinic. If patients were unable to do so, they were encouraged to complete them during the check-in process, before the face-to-face encounter, via specially provided Epic EHR tablet computers that also allowed patients to complete previsit screening questionnaires in the same session. Figure 2 provides the operational workflow for EHR-integrated survey capture for outpatient visits. Such a baseline survey is captured for all patients who present to the neurosurgery clinic. If a patient subsequently undergoes a neurosurgical procedure, the system automatically triggers a series of surveys anchored to the date of surgery; the patient is asked to complete postoperative follow-up surveys at 3 months and 1 year after surgery, with annual surveys thereafter.

FIG. 2.

Operational workflow for completion of PROMIS-CAT during the initial outpatient visit.
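
The follow-up cadence described above reduces to a simple scheduling rule anchored to the date of surgery. A minimal sketch is shown below, assuming the third-party python-dateutil package; the function name and follow-up horizon are illustrative and do not reflect the actual Epic trigger configuration.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def survey_due_dates(surgery_date, years_of_followup=5):
    """Nominal PROM due dates: 3 months, 1 year, then annually after surgery.

    The real system triggers surveys inside the EHR; this only illustrates
    the anchoring logic described in the text.
    """
    schedule = [("3 months", surgery_date + relativedelta(months=3)),
                ("1 year", surgery_date + relativedelta(years=1))]
    for year in range(2, years_of_followup + 1):
        schedule.append((f"{year} years", surgery_date + relativedelta(years=year)))
    return schedule

# Example: follow-up dates for a hypothetical operation performed June 15, 2020.
for label, due in survey_due_dates(date(2020, 6, 15), years_of_followup=3):
    print(label, due)
```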

Data Operationalization

From 2016 to 2018, we collected data from 17,000 patients. Following implementation of the new EHR system, from May 2018 to October 2020 we logged a total of 20,000 surgical cases and their outcomes into the neurosurgery registry. After the data collection and storage mechanisms were optimized, discussions began on a methodological paradigm for using this ongoing, high-volume data collection to benchmark the state of neurosurgical outcomes across the enterprise and identify key areas for quality improvement. There was a need for a summary performance measurement system that would combine surgical outcomes data from the registry, link them at the patient level to cost and relative value unit (RVU) data, and provide a comprehensive overview of the “value” of neurosurgical care across the Mayo Clinic enterprise. Apart from supporting internal quality improvement efforts, the goals for developing such a system were to measure clinical quality, to expand market share, and to enhance organizational agility in responding to market forces by creating a fast mechanism for visualizing and tracking key performance figures. A driving factor behind this requirement was the frequent request for outcomes information during payor contract negotiations; such a system would allow reliable and quick generation of the requested metrics.
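
A minimal sketch of the patient-level linkage described above is shown below, assuming the registry extract and the institutional cost/RVU platform share a common encounter identifier; the column names and example values are illustrative assumptions, not the actual data model.

```python
import pandas as pd

# Hypothetical extracts; the real sources are the Epic registry and a separate
# institutional financial platform, and these column names are assumptions.
registry = pd.DataFrame({
    "encounter_id": [101, 102, 103],
    "site": ["Rochester", "Phoenix", "Jacksonville"],
    "procedure_category": ["Supratentorial tumor", "Posterior lumbar fusion", "Endovascular"],
    "readmission_30d": [False, True, False],
})
financials = pd.DataFrame({
    "encounter_id": [101, 102, 103],
    "total_cost": [24500.0, 31800.0, 19750.0],
    "rvu": [38.2, 44.6, 21.9],
})

# Link outcomes to cost and RVU data at the encounter (surgical admission) level.
value_view = registry.merge(financials, on="encounter_id", how="left")
print(value_view)
```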

Therefore, a conceptual framework for performance indicator measurement known as a “dashboard report,” as proposed by Kaplan and Norton (1992), was adopted.2,6 It was envisioned that the neurosurgery practice analytics dashboard would allow descriptive summary visualization of both operational indicators (such as case volumes, outcomes, and RVUs) and financial indicators (such as total hospital costs and charges) for all operative neurosurgical admissions. To accomplish this goal, a multidisciplinary task force was created representing clinical, administrative, financial, and information technology (IT) expertise, similar to the initial data collection effort. The following were identified as important requirements for the dashboard: 1) it would accept data from multiple sources (the neurosurgery registry, and cost and RVU information from a separate organizational platform) on an automated basis and would update daily; 2) it would allow visualization of summary metrics by site, surgeon, and procedure type; and 3) it would allow data export when critical review of the underlying data was required. All surgical codes were thoroughly reviewed by a neurosurgeon and a postdoctoral research fellow to classify operative procedures into clinically relevant categories (Table 2).

TABLE 2.

Procedure classification

Cranial
 Open vascular
 Endovascular
 Decompression*
 Cranioplasties
 CSF diversion (shunt)
 CSF diversion (no shunt)
 Functional & epilepsy
 Pain†
 Supratentorial tumor
 Infratentorial tumor
 Pituitary procedures
 Stereotactic radiosurgery
 Craniosynostosis
 Chiari decompression
Spine
 Anterior cervical fusion
 Posterior cervical fusion
 Cervical disc arthroplasty
 Posterior cervical decompression
 Posterior lumbar fusion
 Anterior lumbar fusion
 Thoracic decompression
 Thoracic/thoracolumbar fusion
 Spinal tumor
 Spinal cord stimulator implantation
 Evacuation of epidural hematoma or abscess
Peripheral nerve
 Neurolysis or decompression or transposition (e.g., entrapments & carpal tunnel releases)
 Nerve biopsy or tumor excision (peripheral nerve tumors)
 Primary nerve repair, nerve grafting, or nerve transfer (peripheral nerve injury)
Revision
 Revision related to wound-related complication (e.g., infection & dehiscence)
 Hardware or instrumentation removal/failure
 Evacuation of postop hematoma

All cases that did not fit into the above categories were marked miscellaneous. Examples include skull bone tumors and halo tractions.

* Includes all burr holes, craniotomies, and/or decompressive craniectomies for traumatic or nontraumatic intracranial hematomas.

† Includes all rhizotomies and microvascular decompressions for trigeminal neuralgia.

The metrics included for visualization were case volumes, clinical outcomes (30-day postoperative readmissions, complications, returns to the operating room, length of hospital stay, and discharge disposition following surgery), PROMs, RVUs, and total hospital costs and charges (Fig. 3). The dashboard was continually reviewed to assess its operational capabilities, clinical appropriateness, and relevance to shaping market strategy.
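
In essence, these summary views are group-by aggregations over the linked case-level data. The sketch below shows one such roll-up with pandas; the column names are assumptions carried over from the earlier sketch, not the dashboard’s actual data model.

```python
import pandas as pd

def summarize(value_view: pd.DataFrame) -> pd.DataFrame:
    """Roll linked case-level records up into dashboard-style summary metrics.

    Expects columns such as site, surgeon_id, procedure_category, encounter_id,
    readmission_30d, length_of_stay_days, and total_cost; these names are
    illustrative only.
    """
    return (
        value_view
        .groupby(["site", "surgeon_id", "procedure_category"])
        .agg(case_volume=("encounter_id", "count"),
             readmission_rate=("readmission_30d", "mean"),
             mean_los_days=("length_of_stay_days", "mean"),
             mean_total_cost=("total_cost", "mean"))
        .reset_index()
    )

# A nightly job could rebuild this summary and export it for review, e.g.:
# summarize(value_view).to_csv("neurosurgery_dashboard_extract.csv", index=False)
```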

FIG. 3.

Practice analytics dashboard depicting case volumes (A) and quality outcomes (B) by site, surgeon, and procedure. LOS = length of stay; MCHS = Mayo Clinic Health System; readmit = readmission; ROR = reoperation; svc = service.

In addition to descriptive analytics, a predictive analytics component was also created: a point-of-care neurosurgical risk calculator that allows the neurosurgeon and patient to know the risk of major complications for a particular patient profile. This enables shared decision-making and facilitates outpatient preoperative discussions with patients as part of a holistic competitive market strategy. An advantage of this calculator is that it is based on real-time evidence from the Mayo Clinic practice and, therefore, encourages more realistic expectations for patients, as opposed to other predictive models that are typically trained on registry data pooled from several different institutions, often from different geographic regions and types of practice settings.

Current Version

The current version of the neurosurgery dashboard summarizes outcomes of nearly 30,000 procedures that have been recorded in the Epic neurosurgery registry since its inception. The platform allows the user to visualize the chosen performance metrics by date, site of the procedure (Rochester, Minnesota; Jacksonville, Florida; Phoenix, Arizona; and the Health System sites), procedure category, and individual surgeon. The dashboard is now recognized as an important feedback mechanism for department chairs at each site and has facilitated multisite discussions about ongoing operational performance through monthly outcomes and RVU reports. Comparison of these metrics by site allows the leadership to identify areas for quality improvement, thereby standardizing surgical performance. The average cost associated with admissions for different surgical categories can also be viewed, which allows cost comparisons for a given type of surgery across sites. The initiative has also improved payor contract negotiations by allowing quick review of practice performance, since such negotiations increasingly involve requests for outcomes data. These data requests can now be completed easily, demonstrating and documenting clinical quality in a reliable fashion while allowing standardized comparisons across multiple sites. The data are also being used to support research projects led by interested clinician-investigators through requests for applications; IRB approval is generally necessary for these projects. A real-time operative risk calculator based on multivariable regression allows the user to compute the risk of operative complications following the two most commonly performed procedures across the enterprise: brain tumor resection and spinal fusion. Using real-time evidence from the practice, patients can now be informed about their projected risk of major complications following surgery at each individual site (Fig. 4). We expect this tool to be expanded to other procedures, and more tools to be developed for other outcomes.

FIG. 4.

Practice analytics dashboard demonstrating a predictive model for risk of major complications following brain tumor resection.
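
As an illustration of the kind of multivariable model that can sit behind such a point-of-care calculator, the sketch below fits a logistic regression on synthetic data and returns a predicted probability of a major complication; the predictors, coefficients, and library choice are illustrative assumptions and do not reproduce the actual Mayo Clinic model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for registry cases: age, ASA class, and an emergency-surgery flag.
# The real predictors, coefficients, and outcome definition differ.
n = 500
X = np.column_stack([
    rng.normal(60, 15, n),      # age in years
    rng.integers(1, 5, n),      # ASA physical status class (1-4)
    rng.integers(0, 2, n),      # emergency surgery (0/1)
])
true_logit = -6.0 + 0.03 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))   # simulated major complication

model = LogisticRegression(max_iter=1000).fit(X, y)

# Point-of-care use: predicted probability of a major complication for a
# hypothetical 72-year-old, ASA class 3, nonemergency case.
profile = np.array([[72, 3, 0]])
print(f"Predicted risk of major complication: {model.predict_proba(profile)[0, 1]:.1%}")
```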

Costs of Implementing an Institutional Quality Data Platform

Approximately $90,457 was spent to implement the practice analytics dashboard: $33,206 for IT support and the dashboard build, $40,251 for project support staff who provided clinical and content expertise, and the remainder for a data analyst to build the predictive model. In addition, $236,000 is spent yearly to cover the full-time equivalent (FTE) costs of a postdoctoral fellow, a registry coordinator, and three data abstractors, who together support the data collection platform/registry within the Epic EHR. This puts the total first-year cost of this data-driven surgical practice initiative at $326,457.
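
For clarity, the implied breakdown can be reconstructed arithmetically; the short sketch below simply reproduces the figures quoted above.

```python
# Reconstructing the cost breakdown quoted above (US dollars).
it_support_and_dashboard_build = 33_206
project_support_staff = 40_251
one_time_implementation_total = 90_457
data_analyst = one_time_implementation_total - it_support_and_dashboard_build - project_support_staff
annual_fte_support = 236_000   # postdoctoral fellow, registry coordinator, 3 data abstractors

print(data_analyst)                                        # 17000 (predictive model build)
print(one_time_implementation_total + annual_fte_support)  # 326457 (first-year total)
```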

Lessons Learned

Some important lessons were learned during implementation of this initiative, which may be relevant not only to specialty surgical units such as neurosurgery but to all healthcare specialties that seek to measure and improve the quality of patient care and obtain better payor contracts. Obtaining stakeholder buy-in is an important challenge in securing adequate funding for a data-driven quality improvement initiative. Therefore, periodic proof-of-concept runs with all stakeholders are important during initial dashboard development to demonstrate the long-term value of such an effort. Regular check-ins with attending staff are important to ensure data fidelity and relevance, not only for patient care but also for payor contracting.

It is important to have an in-depth understanding of the quality metrics that are relevant during payor negotiations and how they are measured. Typical metrics evaluated include readmissions, reoperations, length of hospital stay, discharge disposition, and mortality. For instance, measurement of readmissions and reoperations needs to be detailed enough to allow evaluation of the cause of readmission and to ensure that a hospital admission unrelated to the index procedure does not cause overestimation and subsequent penalization by payors. In our current dashboard, only neurosurgical reoperations within a 30-day period are captured, with an option of performing chart review to determine the reason for reoperation (complication-related, staged, or unrelated). Undue financial penalization may also be avoided by proper risk adjustment for the types of patients treated, which is especially important for a multisite enterprise with differences in case mix at each site. Measurement of these clinical and surgical quality indicators also needs to be detailed enough to allow comparison between different surgeons and types of procedures. The process of both automated and human data extraction for all variables should be transparent, with a proper mechanism to ensure regular data validation.

An important issue that future users of such a system should be aware of is an initial period of provider adjustment, which may include indignant responses to quality indicators from their own practice. In those cases, it is invaluable to have an internal mechanism for evaluating such provider concerns, data fidelity, and risk adjustment. This can be facilitated by directly exporting source data from the performance dashboard for auditing, a requirement that should be raised before building such a system. In our experience, the utility of such automated systems typically extends only to the initial identification of cases with reoperations, readmissions, mortalities, or complications; further chart review of these individual cases is usually required, especially for mortalities that may have occurred as a consequence of the natural history of the disease or of completely unrelated causes. In such cases, it is important to avoid overestimation. Using artificial intelligence applications to identify only the relevant mortalities, or other outcomes that represent a direct consequence of the surgical care provided, would be a welcome investigation for the future and might make tedious chart review unnecessary.
Another important lesson was the need to carefully evaluate the methods used for procedure classification, especially when classification is performed automatically by the EHR using surgical codes or other means. Coding accuracy might differ between institutions; therefore, it is helpful to validate whether procedures are being accurately classified, using chart review during the initial period of implementation.
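
A minimal sketch of the kind of code-to-category mapping and chart-review spot check this implies is shown below; the billing codes, category assignments, and function names are hypothetical examples rather than the registry’s actual classification logic.

```python
# Hypothetical mapping from billed surgical codes to dashboard categories.
# The codes and assignments below are illustrative only and would need to be
# validated against chart review, as discussed above.
CODE_TO_CATEGORY = {
    "61510": "Supratentorial tumor",       # illustrative
    "22551": "Anterior cervical fusion",   # illustrative
    "62223": "CSF diversion (shunt)",      # illustrative
}

def classify(procedure_codes):
    """Return the first matching category for a case's billed codes, else 'Miscellaneous'."""
    for code in procedure_codes:
        if code in CODE_TO_CATEGORY:
            return CODE_TO_CATEGORY[code]
    return "Miscellaneous"

# Spot check automated labels against a manually reviewed sample and flag disagreements.
reviewed_sample = [(["61510"], "Supratentorial tumor"), (["00000"], "Chiari decompression")]
for codes, chart_review_label in reviewed_sample:
    automated = classify(codes)
    if automated != chart_review_label:
        print(f"Mismatch for {codes}: automated={automated!r}, chart review={chart_review_label!r}")
```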

Conclusions

Health systems with the ability to demonstrate clinical quality and cost-effectiveness will have an advantage in today’s competitive market environment. Registries and dashboards may represent viable tools for communicating quality of care to purchasers of healthcare services and for internal assessment of key performance indicators.

Acknowledgments

Funding was provided in part by the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Healthcare Delivery. We would like to acknowledge the assistance from the Robert D. and Patricia E. Kern Center for the Science of Healthcare Delivery at Mayo Clinic, the Mayo Clinic Enterprise Analytics group, and Epic Systems Corp., which was crucial to the design and implementation of this institutional quality improvement project.

Disclosures

The authors report no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper.

Author Contributions

Conception and design: Bydon, Goyal, Canoy Illies. Acquisition of data: Bydon, Goyal, Canoy Illies. Analysis and interpretation of data: Bydon, Goyal. Drafting the article: Bydon, Goyal. Critically revising the article: all authors. Reviewed submitted version of manuscript: Bydon, Goyal, Canoy Illies, Paul, Ghaith, Bendok, Quiñones-Hinojosa, Spinner, Meyer. Approved the final version of the manuscript on behalf of all authors: Bydon. Administrative/technical/material support: Goyal, Biedermann, Canoy Illies, Paul. Study supervision: Bydon, Goyal, Biedermann, Canoy Illies, Bendok, Quiñones-Hinojosa, Spinner, Meyer.

References

1. Menger RP, Guthikonda B, Storey CM, Nanda A, McGirt M, Asher A. Neurosurgery value and quality in the context of the Affordable Care Act: a policy perspective. Neurosurg Focus. 2015;39(6):E5.
2. Curtright JW, Stolp-Smith SC, Edell ES. Strategic performance management: development of a performance measurement system at the Mayo Clinic. J Healthc Manag. 2000;45(1):58-68.
3. Cox M, Ellsworth D. Application-controlled demand paging for out-of-core visualization. In: Proceedings of the 8th Conference on Visualization ’97. IEEE Computer Society Press; 1997:235-ff.
4. Kerezoudis P, Glasgow AE, Alvi MA, Spinner RJ, Meyer FB, Bydon M, Habermann EB. Returns to operating room after neurosurgical procedures in a tertiary care academic medical center: implications for health care policy and quality improvement. Neurosurgery. 2019;84(6):E392-E401.
5. Squitieri L, Bozic KJ, Pusic AL. The role of patient-reported outcome measures in value-based payment reform. Value Health. 2017;20(6):834-836.
6. Kaplan RS, Norton DP. The balanced scorecard—measures that drive performance. Harv Bus Rev. 1992;70(1):71-79.
