Comparative effectiveness in neurosurgery: what it means, how it is measured, and why it matters

Comparative effectiveness research has recently been the subject of intense discussion. With congressional support, there has been increasing funding and publication of studies using comparative effectiveness and related methodology. The neurosurgical field has been relatively slow to accept and embrace this approach. The author outlines the procedures and rationale of comparative effectiveness, illustrates how it applies to neurosurgical topics, and explains its importance.

Abbreviations used in this paper: CAS = carotid artery stenting; CEA = carotid endarterectomy; CER = comparative effectiveness research; GOS = Glasgow Outcome Scale; ICER = incremental cost-effectiveness ratio; MI = myocardial infarction; QALY = quality-adjusted life year; QOL = quality of life; RCT = randomized controlled trial; TBI = traumatic brain injury.

Medical science is heavily dependent on comparisons of treatments, diagnostic tests, devices, and management strategies. There is ample evidence, however, that medical practice often strays from choices that reflect the best scientific evidence, and this variability affects the cost of medical care (http://www.dartmouthatlas.org/). The standard scientific comparison involves the clinical trial, in which 2 or more groups are compared with respect to safety, efficacy, and/or costs. A hallmark of prospective trials is that the investigators choose one or more primary measures of efficacy prior to data collection; the success of a trial depends on how well the experimental group performs compared with controls. Attempts are made to reduce bias and ensure internal validity (that is, that any differences measured are solely due to the experimental or control therapies). To ensure this, most trials have exclusion criteria that limit baseline patient characteristics (for example, sex, age, and compliance) and disease characteristics (for example, duration, severity, and comorbidities), and often they dictate many details of treatment. The FDA requires proof of efficacy before it approves new drugs or medical devices.

Frequently, small clinical trials lack the statistical power to distinguish between 2 approaches to patient care. Multiple randomized controlled trials on the same topic can often be combined to increase statistical power and aid in decision making. Meta-analysis is a statistical technique that pools the ratio of efficacies (experimental vs control groups) across multiple studies.22
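
As a rough illustration of that pooling step, the sketch below combines study-level odds ratios using fixed-effect inverse-variance weights. The function name and the example data are hypothetical, and real meta-analyses typically also assess heterogeneity and may prefer random-effects models.

```python
import math

def pooled_odds_ratio(studies):
    """Fixed-effect (inverse-variance) pooling of study-level odds ratios.

    Each study is a tuple (events_tx, n_tx, events_ctl, n_ctl).
    Returns the pooled odds ratio and its approximate 95% confidence interval.
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for events_tx, n_tx, events_ctl, n_ctl in studies:
        a, b = events_tx, n_tx - events_tx       # treatment arm: events, non-events
        c, d = events_ctl, n_ctl - events_ctl    # control arm: events, non-events
        log_or = math.log((a * d) / (b * c))     # study log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d      # approximate variance of the log odds ratio
        weight = 1 / var                         # inverse-variance weight
        weighted_sum += weight * log_or
        total_weight += weight
    pooled_log_or = weighted_sum / total_weight
    se = math.sqrt(1 / total_weight)
    ci = (math.exp(pooled_log_or - 1.96 * se), math.exp(pooled_log_or + 1.96 * se))
    return math.exp(pooled_log_or), ci

# Hypothetical example: three small trials, each (events_tx, n_tx, events_ctl, n_ctl)
print(pooled_odds_ratio([(8, 100, 15, 100), (5, 80, 9, 82), (12, 150, 20, 145)]))
```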

Effectiveness Versus Efficacy

The strict requirements for internal validity in a clinical trial may limit the generalizability (or external validity) of a trial's results and conclusions. Everyday care of sick patients presents challenges different from those of a clinical trial and may affect diagnostic and therapeutic choices. Geographic setting (urban vs rural), care delivery systems, and factors such as patient or doctor preferences and patient-doctor relationships can influence response and compliance. In short, effectiveness is how a particular health care approach fares in real-world situations involving the care of typical patients.

Greater generalizability usually requires gathering evidence from more diverse sources, both published and unpublished. The process usually begins with a systematic review, in which data are collected for analysis. Because systematic reviews are also performed for comparative efficacy research, Gartlehner and colleagues outlined several criteria that distinguish effectiveness from efficacy trials.12 Efficacy trials are usually done only in tertiary care settings; effectiveness settings should reflect the initial care facilities available to a diverse population with the condition of interest. Effectiveness trials should reflect the full spectrum of disease encountered, including comorbidities, variable compliance rates, and use of other medications. Surrogate outcomes, such as symptom scores, laboratory data, or time to disease recurrence, are frequently used in efficacy trials; the primary outcome in an effectiveness trial should capture the net effect on a health outcome. Study duration in efficacy trials is often limited; the duration of an effectiveness trial should be long enough to reflect the clinical setting. Assessment of adverse events in effectiveness studies should focus on those found to be relevant in prior efficacy and safety trials and should use objective scales to measure their impact on health. Sample size should be adequate to detect at least a minimally important difference on a health-related QOL scale. Efficacy trials usually exclude protocol violators; effectiveness trials should always be analyzed on an intent-to-treat basis (for example, noncompliance is an important factor in determining effectiveness).

Measuring Effectiveness

Effectiveness studies use the same end points as do efficacy studies. These can be categorized in several ways. First, we must decide what we are measuring: mortality, frequency of complications, physical and/or psychosocial functioning (or impairment), overall QOL, satisfaction with health state, or cost of the disease and its care. The end point should reflect an outcome that is especially relevant in distinguishing between 2 management strategies. Some years ago, a multicenter RCT comparing carotid endarterectomy with medical care for carotid atherosclerosis was interpreted by many as failing to show a significant effect of surgery.10 Death was chosen as the primary outcome of the study. Although it is true that there was no significant difference between the 2 groups, most deaths were due to MI, a condition that carotid artery surgery cannot be expected to affect.

The outcome metric may involve disease-specific or global measures. A good example is measuring effectiveness in TBI, where the measurement may take a form such as the Glasgow Outcome Scale (GOS). The GOS is an ordinal, disease-specific functional scale (the scores are ordered from worst to best outcome). Its weakness is that it is not parametric; we cannot compute the mean of a series of scores, and we cannot use simple statistics to compare one group with another. One way to overcome these limitations is to collapse the scale into a binary measure (for example, alive vs dead, or favorable vs adverse outcome) and compare the frequencies in the 2 groups, as sketched below. However, dichotomization risks loss of information and distortion of the true values measured.1 Advanced statistical techniques, such as the sliding dichotomy17 and ordinal analysis,21 have been suggested as alternatives.
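
To make that trade-off concrete, the sketch below collapses hypothetical GOS scores into a favorable/unfavorable dichotomy and compares the two groups by the difference in favorable-outcome rates. The cut point and the data are illustrative only; as noted above, dichotomizing discards information that ordinal methods retain.

```python
def favorable(gos_score, cut=4):
    """Collapse the 5-point GOS into a binary outcome.

    By one common convention, GOS 4 (moderate disability) and 5 (good recovery)
    count as favorable; GOS 1-3 count as unfavorable.
    """
    return gos_score >= cut

def favorable_rate(gos_scores):
    """Proportion of patients in a group with a favorable outcome."""
    return sum(favorable(s) for s in gos_scores) / len(gos_scores)

# Hypothetical GOS scores for two treatment groups
group_a = [5, 5, 4, 3, 5, 4, 2, 5, 1, 4]
group_b = [4, 3, 5, 3, 2, 4, 5, 1, 3, 4]
print(favorable_rate(group_a) - favorable_rate(group_b))  # difference in favorable-outcome rates
```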

Using a global measure of health-related QOL has the advantage of incorporating additional aspects of the patient's health profile into the calculation, thus giving a more inclusive picture of disease impact, allowing comparisons across different disease states, and facilitating cost-effectiveness and other studies. The goal is to view outcomes from the perspective of the patient.24 These measures often involve a standardized questionnaire, the answers to which are used to calculate QOL in individual health domains or weighted to provide a single summary score. An example is the popular 36-item Short Form Health Survey, which has been proposed as a QOL scale for patients who have suffered TBI.18 Quality of life can also be gauged by utility, a quantitative measure of how strongly a patient or potential patient prefers a given health outcome in the face of uncertainty. Utility can be measured directly22 or indirectly by using a questionnaire such as the EuroQOL. Utility scores are parametric; thus, they can be averaged and compared using standard statistical approaches. Conversion of ordinal scores, such as the GOS2 and the 36-item Short Form Health Survey,4 into utility values has been reported and may facilitate analysis.

As important as QOL is quantity of life, that is, how long a given health state lasts. Measures include expected longevity or years of life and QALYs. The latter represents both quality and quantity and is calculated by multiplying each year of expected life by the expected QOL that year, then summing the products. This gives the overall utility of one's health state.22 The converse of QALYs is disability-adjusted life years. This measure expresses the number of years lost due to ill health, disability, or early death and represents the overall disease burden on an individual or a society.
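
A minimal sketch of that calculation, with purely hypothetical values: each expected year of life is weighted by its expected utility and the products are summed. Formal analyses often also discount future years, which is omitted here.

```python
def qalys(yearly_utilities):
    """QALYs = sum over each expected year of life of (1 year x utility in that year)."""
    return sum(yearly_utilities)

# Hypothetical: 3 years in full health followed by 2 years at utility 0.6
print(round(qalys([1.0, 1.0, 1.0, 0.6, 0.6]), 2))  # 4.2 QALYs
```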

Critics have complained that QALYs reflect only health-related QOL and not all societal preferences, have the potential to discriminate against populations with illnesses that society believes are unimportant, and rely on preference weights that may not reflect every relevant population.19 Furthermore, the use of QALYs does not distinguish between a modest benefit to a large number and a dramatic benefit to a very few. However, sensitivity analyses can be used to test the validity and robustness of the assumptions, and many authorities agree that a preferred alternative has not yet been validated.19

Comparative Effectiveness Analysis

As with efficacy studies, data can be obtained from new trials or from systematic reviews. Comparative effectiveness studies can also use data collected for other purposes, such as databases and observational trials. Standard meta-analyses compare treatment effect or risk in 2 groups. They require each study used to measure the same outcome variable in both groups and can only look at one outcome variable at a time. This limitation is illustrated by a recent meta-analysis of RCTs comparing carotid endarterectomy (CEA) with angioplasty plus stent insertion (CAS) for carotid artery stenosis.8 The authors' results were inconclusive. They showed that, over the short term, CEA was associated with a higher rate of MI and cranial nerve injury, whereas CAS was associated with more strokes and a higher mortality rate. What they could not determine was which approach had the better overall results. Similar ambiguity was seen with late outcomes.

Effectiveness studies allow greater analytical flexibility to address comparisons like this, in which outcomes have several dimensions (uneventful outcome, stroke, MI, death, or cranial nerve injury). Our stroke example can be expressed as an expected value decision tree,22 in which each branch represents a possible outcome of the decision to treat by CEA or CAS. If we know the probability and the utility of each branch, we can calculate the average utility of a patient undergoing CEA or CAS (Fig. 1), thereby conclusively comparing the 2 treatments. Of course, the model as illustrated is simplified, as it does not consider late outcomes. We can also supplement effectiveness data by using meta-analytical techniques to pool observational studies,7–9 thus improving the power of our analysis.
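
A minimal sketch of the fold-back calculation for one arm of such a tree is shown below; the probabilities and utilities are purely hypothetical and are not taken from any published trial.

```python
def fold_back(branches):
    """Expected utility of one treatment arm = sum of (branch probability x branch utility)."""
    total_probability = sum(p for p, _ in branches)
    assert abs(total_probability - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * u for p, u in branches)

# Hypothetical (probability, utility) pairs for the possible outcomes of one treatment arm
cea_branches = [
    (0.90, 1.00),   # uneventful
    (0.04, 0.85),   # cranial nerve injury
    (0.02, 0.75),   # myocardial infarction
    (0.03, 0.60),   # stroke
    (0.01, 0.00),   # death
]
print(fold_back(cea_branches))  # expected utility for the average patient in this arm (about 0.97)
```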

Fig. 1.

Decision tree to calculate hypothetical comparative effectiveness and costs of the average patient undergoing Treatment A or Treatment B.

Cost data can also be incorporated in comparisons. Many so-called “cost-effectiveness” studies use hospital charges as a surrogate for costs. This is misleading, as it represents costs only for uninsured patients, and probably very few of them. First, we must decide the perspective to take; in other words, cost to whom? Most commonly, cost-effectiveness studies use a societal perspective.13 This is certainly the appropriate perspective for policy makers to take when evaluating new technology. Increasingly, hospitals are quantifying costs for procedures and inpatient and outpatient services, making it possible for investigators to use a hospital perspective. For certain uses, the perspective of the insurer, the equipment supplier, or the patient is appropriate. Considering the recent trend toward more uninsured patients and higher deductibles and copays,5 a patient perspective may be preferable for some costs. For some analyses we also consider “indirect” costs, such as lost wages associated with illness.

Sometimes it is simply enough to compare the costs of 2 procedures, particularly if there is no difference in effectiveness. In business, when there is a need to decide whether a change in activity is worthwhile, a cost-benefit analysis is done. The expected costs of the new procedure are subtracted from the expected benefits; the decision depends on whether the change saves or loses money. In medicine we are uncomfortable valuing human life and health in monetary terms. Instead, most medical studies use a form of cost-effectiveness analysis, in which we quantify the cost for a given improvement in effectiveness. Any of the various outcomes can be used as a measure of effectiveness. However, cost-utility analysis, in which utilities of all possible outcomes of therapy are calculated, is used most commonly.13 The absolute cost-effectiveness ratio (cost divided by effectiveness) is meaningless and should never be used in scientific publications. When comparing 2 procedures or management strategies, we must always calculate incremental cost-effectiveness to decide which approach is more cost-effective. If we are comparing treatment A with treatment B, the ICER is given by the following formula: ICER = (Cost_A – Cost_B)/(Effectiveness_A – Effectiveness_B).

By convention, we do not report negative ICER values. If, for example, treatment A is less expensive and more effective than treatment B, A is said to “dominate” B. We only have to decide whether one approach is more cost-effective than another when it is both more effective and more expensive. There is considerable literature on society's willingness to pay for a given amount of effectiveness. Traditionally, the threshold for cost-effectiveness was considered to be between $50,000 and $60,000 for every QALY gained. Recent studies suggest the present threshold is actually much higher, perhaps in the range of $150,000–$200,000/QALY.3,14 If we are using a patient's perspective for costs, we may want to use a “willingness to pay” analysis.20 This involves asking patients or potential patients how much they would be willing to pay for a given health outcome. It estimates both costs and effectiveness in financial terms. Although this approach simplifies calculations, many are offended by assigning monetary value to health states.19
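
Under those conventions, a minimal sketch of the incremental comparison might look like the following. The function name is hypothetical, and the dollar threshold is an assumption taken from the range quoted above; the input values mirror the hypothetical worked example in Table 2 below.

```python
def compare_cost_effectiveness(cost_a, eff_a, cost_b, eff_b, threshold=150_000):
    """Compare treatment A with treatment B on cost ($) and effectiveness (QALYs).

    `threshold` is an assumed societal willingness to pay per QALY gained,
    set here near the lower end of the range quoted in the text.
    """
    d_cost = cost_a - cost_b
    d_eff = eff_a - eff_b
    if d_cost <= 0 and d_eff >= 0:
        return "A dominates B (no ICER reported)"   # cheaper and at least as effective
    if d_cost >= 0 and d_eff <= 0:
        return "B dominates A (no ICER reported)"
    icer = d_cost / d_eff                            # incremental cost-effectiveness ratio
    verdict = "cost-effective" if icer <= threshold else "not cost-effective"
    return f"ICER = ${icer:,.0f}/QALY ({verdict} at ${threshold:,}/QALY)"

# Hypothetical values from the worked example below: A costs more but is more effective
print(compare_cost_effectiveness(14_500, 0.95, 13_750, 0.93))
```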

A hypothetical example may help clarify some of these concepts. In a comparison of 2 hypothetical treatments for carotid artery stenosis, summarized in Fig. 1, I have given purely hypothetical values for the probability, utility, and cost associated with each possible outcome. These values are shown in Table 1. We assume that the short-term outcome extends to 1 year; therefore, an uneventful outcome has a utility of 1 and a value of 1 QALY. Multiplying the probability of each outcome of Treatment A by its utility and adding the products (a process known as “folding back” the decision tree) gives the expected outcome for the average patient undergoing Treatment A. The same process calculates the expected costs, in which the model adds the proportional costs of complications to those of the initial procedure. These results are reported in Table 2. As is evident, Treatment A is more expensive than Treatment B but also gives superior results. Applying the formula for the incremental cost-effectiveness of Treatment A relative to Treatment B: ICER = (Cost_A – Cost_B)/(Effectiveness_A – Effectiveness_B) = $37,500/QALY.

TABLE 1:

Probabilities and utilities for hypothetical comparison

Outcome                 Treatment A                Treatment B                Utility
                        Probability   Cost ($)     Probability   Cost ($)
uneventful              0.860         13,000       0.900         10,500      1.00
cranial nerve injury    0.055         +2,000       0             +2,000      0.85
MI                      0.025         +13,500      0.015         +13,500     0.75
stroke                  0.045         +45,000      0.065         +45,000     0.60
death                   0.015         +5,000       0.020         +5,000      0.00
TABLE 2:

Hypothetical comparative effectiveness and costs

Expected Outcome    Treatment A    Treatment B
utility             0.95           0.93
cost                $14,500        $13,750

This is well below the commonly cited cost-effectiveness thresholds, making Treatment A the more cost-effective treatment for carotid stenosis, at least in this hypothetical example.

Comparative Effectiveness Research in Neurosurgery

Comparative effectiveness research as a formal methodology was stimulated by the Agency for Healthcare Research and Quality (AHRQ), a branch of the Department of Health and Human Services. There was concern about increasing medical costs, slow progress in research, and a perception that practitioners were not fully informed of the latest data on treatment. Accordingly, the AHRQ held a symposium in early 2006 to assess research methodology and dissemination of findings. The goals were to maximize the benefits of care while simultaneously minimizing harms and controlling costs. The following year the Congressional Budget Office published a white paper6 proposing the funding of research to review evidence that would support these goals, namely CER. In the American Recovery and Reinvestment Act of 2009, Congress authorized more than $400 million in challenge grants for medical research. The National Institutes of Health, which administered the grants, listed 213 specific challenge topics for which applications were being solicited. A recent search of MEDLINE showed exponential growth of publications on the subject since 2007.

Neurosurgery is an almost ideal field for CER. There are many unanswered questions. The volume of patients with most neurosurgical diseases is relatively small, making research projects both lengthy and costly. There are relatively few RCTs; they are difficult to fund and are often unsuccessful. Patients often refuse randomization, and crossovers between treatment groups are common. Recently, an expert panel in TBI suggested that future research concentrate on comparative effectiveness studies.15 A recent comprehensive review of CER in neurosurgery summarizes its history, scope, and implications.16 Neurosurgery as a field has been slow to respond, in part because of a paucity of available funding. Of the 213 topics advanced by the National Institutes of Health for challenge grants, not a single one was primarily neurosurgical.23 The rigors of a structured literature search and the esoteric mathematical modeling and statistics needed for more comprehensive comparative effectiveness studies may also dampen interest. However, it is worth noting that efforts are being made to use research of this sort to control health care spending.11 If neurosurgeons do not participate in CER, or at least understand it, we will not be in a position to challenge reports from outside our specialty that may endanger our freedom to choose the best treatment options for our patients.

Disclosure

The author reports no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper.

References

1. Altman DG, Royston P: The cost of dichotomising continuous variables. BMJ 332:1080, 2006
2. Aoki N, Kitahara T, Fukui T, Beck JR, Soma K, Yamamoto W: Management of unruptured intracranial aneurysm in Japan: a Markovian decision analysis with utility measurements based on the Glasgow Outcome Scale. Med Decis Making 18:357–364, 1998
3. Braithwaite RS, Meltzer DO, King JT Jr, Leslie D, Roberts MS: What does the value of modern medicine say about the $50,000 per quality-adjusted life-year decision rule? Med Care 46:349–356, 2008
4. Brazier J, Roberts J, Deverill M: The estimation of a preference-based measure of health from the SF-36. J Health Econ 21:271–292, 2002
5. Claxton G, Levitt L: The Economy and Medical Care. Henry J. Kaiser Family Foundation, 2011. (http://healthreform.kff.org/notes-on-health-insurance-and-reform/2011/november/theeconomy-and-medical-care.aspx) [Accessed April 23, 2012]
6. Congressional Budget Office: Research on the Comparative Effectiveness of Medical Treatments: Issues and Options for an Expanded Federal Role. Washington, DC: Congressional Budget Office, 2007. (http://www.cbo.gov/sites/default/files/cbofiles/ftpdocs/88xx/doc8891/12-18-comparativeeffectiveness.pdf) [Accessed April 23, 2012]
7. Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R: Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (Millwood) 29:1818–1825, 2010
8. Economopoulos KP, Sergentanis TN, Tsivgoulis G, Mariolis AD, Stefanadis C: Carotid artery stenting versus carotid endarterectomy: a comprehensive meta-analysis of short-term and long-term outcomes. Stroke 42:687–692, 2011
9. Einarson TR: Pharmacoeconomic applications of meta-analysis for single groups using antifungal onychomycosis lacquers as an example. Clin Ther 19:559–569, 1997
10. Fields WS, Maslenikov V, Meyer JS, Hass WK, Remington RD, Macdonald M: Joint study of extracranial arterial occlusion. V. Progress report of prognosis following surgery or nonsurgical treatment for transient cerebral ischemic attacks and cervical carotid artery lesions. JAMA 211:1993–2003, 1970
11. Garber AM, Sox HC: The role of costs in comparative effectiveness research. Health Aff (Millwood) 29:1805–1811, 2010
12. Gartlehner G, Hansen RA, Nissman D, Lohr KN, Carey TS: Criteria for Distinguishing Effectiveness From Efficacy Trials in Systematic Reviews. Rockville, MD: Agency for Healthcare Research and Quality, 2006. (http://www.ahrq.gov/downloads/pub/evidence/pdf/efftrials/efftrials.pdf) [Accessed April 23, 2012]
13. Gold M, Siegel J, Russell L: Cost-Effectiveness in Health and Medicine. New York: Oxford University Press, 1996
14. Le QA, Hay JW: Cost-effectiveness analysis of lapatinib in HER-2-positive advanced breast cancer. Cancer 115:489–498, 2009
15. Maas AI, Menon DK, Lingsma HF, Pineda JA, Sandel ME, Manley GT: Re-orientation of clinical research in traumatic brain injury: report of an international workshop on comparative effectiveness research. J Neurotrauma 29:32–46, 2012
16. Marko NF, Weil RJ: An introduction to comparative effectiveness research. Neurosurgery 70:425–434, 2012
17. Murray GD, Barer D, Choi S, Fernandes H, Gregson B, Lees KR: Design and analysis of phase III trials with ordered outcome scales: the concept of the sliding dichotomy. J Neurotrauma 22:511–517, 2005
18. Neugebauer E, Bouillon B, Bullinger M, Wood-Dauphinée S: Quality of life after multiple trauma—summary and recommendations of the consensus conference. Restor Neurol Neurosci 20:161–167, 2002
19. Neumann PJ, Greenberg D: Is the United States ready for QALYs? Health Aff (Millwood) 28:1366–1371, 2009
20. Olsen JA, Smith RD: Theory versus practice: a review of ‘willingness-to-pay’ in health and health care. Health Econ 10:39–52, 2001
21. Roozenbeek B, Lingsma HF, Perel P, Edwards P, Roberts I, Murray GD: The added value of ordinal analysis in clinical trials: an example in traumatic brain injury. Crit Care 15:R127, 2011
22. Sox HC Jr, Blatt MA, Higgins MC, Marton KI: Medical Decision Making. Philadelphia: American College of Physicians, 2007
23. US Department of Health and Human Services: American Recovery and Reinvestment Act of 2009. Challenge Grant Applications: Omnibus of Broad Challenge Areas and Specific Topics. Washington, DC: US Department of Health and Human Services, National Institutes of Health, 2009. (http://grants.nih.gov/grants/funding/challenge_award/omnibus.pdf) [Accessed April 23, 2012]
24. Wu AW, Snyder C, Clancy CM, Steinwachs DM: Adding the patient perspective to comparative effectiveness research. Health Aff (Millwood) 29:1863–1871, 2010

Article Information

Contributor Notes

Address correspondence to: Sherman C. Stein, M.D., Department of Neurosurgery, Hospital of the University of Pennsylvania, 310 Spruce Street, Philadelphia, Pennsylvania 19106. Email: Sherman.stein@uphs.upenn.edu.

Please include this information when citing this paper: DOI: 10.3171/2012.2.FOCUS1232.