
Incorporating multiple interventions in meta-analysis: an evaluation of the mixed treatment comparison with the adjusted indirect comparison

Abstract

Background

Comparing the effectiveness of interventions is now a requirement for regulatory approval in several countries and also aids clinical and public health decision-making. However, in the absence of head-to-head randomized clinical trials (RCTs), determining the relative effectiveness of interventions is challenging. Several methodological options are now available. We aimed to determine the comparative validity of the adjusted indirect comparison of RCTs relative to the mixed treatment comparison approach.

Methods

Using systematic searching, we identified all meta-analyses evaluating more than 3 interventions for the same disease state with binary outcomes. We abstracted data from each clinical trial, including population size and outcomes. We conducted fixed effects meta-analyses of each intervention versus the mutual comparator and then applied the adjusted indirect comparison. We also conducted a mixed treatment meta-analysis of all trials and compared the point estimates and 95% confidence/credible intervals (CIs/CrIs) across approaches to identify important differences.

Results

We included data from 7 reviews that met our inclusion criteria, allowing a total of 51 comparisons. According to the a priori consistency rule, we found 2 examples in which comparisons were statistically significant with the mixed treatment comparison but not with the adjusted indirect comparison, and 1 example of the reverse. We found 6 examples in which the direction of effect differed according to the indirect comparison method chosen, and 9 examples in which the confidence intervals were importantly different between approaches.

Conclusion

In most analyses, the adjusted indirect comparison yields estimates of relative effectiveness similar to those of the mixed treatment comparison. In less complex indirect comparisons, where all studies share a mutual comparator, both approaches yield similar results. As comparisons become more complex, the mixed treatment comparison may be favoured.


Background

Acknowledging their enormous value for health intervention decision-making, clinicians, drug manufacturers, regulatory agencies and the public now require meta-analyses to identify the most effective intervention among a range of alternatives.[1] As meta-analysis grows in popularity, investigators have endeavoured to further enhance its usefulness by proposing extensions that accommodate a number of challenges. One important challenge is choosing among a number of potentially competing interventions, not all of which have been compared directly in properly conducted randomized trials; such comparisons are herein referred to as indirect comparisons.

Until recently, meta-analyses addressed indirect comparisons using flawed methods that examined only intervention groups and ignored control event rates.[2] In recent years, methodological advances,[3] most notably the adjusted indirect comparison, first reported in 1997,[4] and the mixed treatment comparison, first reported in 2003,[5] have provided more sophisticated methods for quantitatively addressing indirect comparisons.

The adjusted indirect comparison, first reported by Bucher et al.,[4] enables one to construct an indirect estimate of the relative effect of two interventions A and B, by using information from randomized trials comparing each of these interventions against a common comparator C (e.g., placebo or standard treatment). In this approach, direct estimates of the relative effects of A versus C and B versus C, together with appropriate measures of uncertainty, are obtained using standard pairwise meta-analysis. These estimates are then appropriately combined to produce an indirect estimate of the relative effect of A versus B. A suitable measure of uncertainty for the indirect estimate is also produced.
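For concreteness, the calculation works on the log odds ratio scale: the indirect log OR of A versus B is the difference of the direct log ORs of A versus C and B versus C, and their variances add. The following Python sketch illustrates this; the function name and the numerical inputs are ours, introduced for illustration only.

```python
import math

def bucher_indirect(or_ac, ci_ac, or_bc, ci_bc, z=1.96):
    """Adjusted indirect comparison of A vs B via a common comparator C.

    or_ac, or_bc : pooled odds ratios for A vs C and B vs C
    ci_ac, ci_bc : (lower, upper) 95% confidence limits of each OR
    Returns the indirect OR for A vs B with its 95% CI.
    """
    # Work on the log odds ratio scale; recover standard errors from CI width.
    log_ac, log_bc = math.log(or_ac), math.log(or_bc)
    se_ac = (math.log(ci_ac[1]) - math.log(ci_ac[0])) / (2 * z)
    se_bc = (math.log(ci_bc[1]) - math.log(ci_bc[0])) / (2 * z)

    # Indirect estimate: difference of log ORs; variances add.
    log_ab = log_ac - log_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)

    return (math.exp(log_ab),
            (math.exp(log_ab - z * se_ab), math.exp(log_ab + z * se_ab)))

# Hypothetical summary estimates, for illustration only
# (not data from the reviews analysed in this paper).
print(bucher_indirect(0.80, (0.65, 0.98), 0.90, (0.70, 1.16)))
```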

The mixed treatment comparison approach, which builds on methods developed by several investigators,[6, 7] most recently Lu and Ades,[8] is a generalization of standard pairwise meta-analysis of A versus B trials to data structures that include, for example, A versus B, B versus C, and A versus C trials. This approach, which can only be applied to connected networks of randomised trials, has two important roles: (1) strengthening inference concerning the relative efficacy of two treatments by including both direct and indirect comparisons of these treatments, and (2) facilitating simultaneous inference regarding all treatments, in order to compare, or even rank, them.[8]
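Because the approach can only be applied to a connected network, a useful preliminary step is to verify that every treatment can be reached from every other through the available trials. The short Python sketch below performs such a check; the treatment labels are purely illustrative.

```python
from collections import defaultdict

def is_connected(comparisons):
    """Check whether a set of pairwise trial comparisons forms a connected
    network, which is required for a mixed treatment comparison."""
    graph = defaultdict(set)
    for a, b in comparisons:
        graph[a].add(b)
        graph[b].add(a)
    nodes = list(graph)
    if not nodes:
        return True
    # Depth-first search from an arbitrary treatment.
    seen, stack = set(), [nodes[0]]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node] - seen)
    return seen == set(nodes)

# Hypothetical networks: a single connected loop versus two disjoint comparisons.
print(is_connected([("A", "B"), ("B", "C"), ("A", "C")]))   # True
print(is_connected([("A", "B"), ("C", "D")]))               # False (disconnected)
```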

The adjusted indirect comparison and the mixed treatment comparison approach can be implemented through a range of methods, including frequentist, Bayesian and various subspecies of each.[9]

The basic assumptions underlying the adjusted indirect comparison and mixed treatment comparison approaches are similar to, but more complex than, those underlying standard meta-analysis. Like standard meta-analysis, both approaches rely on the homogeneity assumption, which states that trials are sufficiently homogeneous to be quantitatively combined. In addition, both approaches require a similarity assumption - namely, that trials are similar with respect to moderators of relative treatment effect. The mixed treatment comparison approach also requires a consistency assumption, which is needed to quantitatively combine direct and indirect evidence.[10]

Both the adjusted indirect comparison and the mixed treatment comparison approach to evaluating the relative impact of multiple alternative treatments have strengths and weaknesses.[11] The mixed treatment comparison uses both direct and indirect evidence. The adjusted indirect method is comparatively simple and interpretable by users, but an intervention can only be compared with another intervention when the two share a mutual comparator (e.g., placebo).[4] The mixed treatment comparison may be less intuitive, but it permits comparisons even when interventions do not share a comparator, as it builds a conceptual network[12, 13] and borrows strength from trials that cannot be used in the adjusted indirect comparison approach.[14]

Meta-analysts, agencies, and readers are now attempting to gain further insight into the relative merits of the two approaches.[15] New US government initiatives to determine the comparative effectiveness of interventions require the use of indirect evidence but do not provide guidance on which approach to use. Others, such as the UK's National Institute for Clinical Excellence (NICE), provide advice on the particular use of mixed treatment comparisons and adjusted indirect comparisons.[15] To further elucidate the relative performance of the adjusted indirect comparison and mixed treatment comparison methods, we applied both approaches to systematic reviews that evaluated the effectiveness of multiple competing treatments for diverse health conditions. Our objective was to determine whether the adjusted indirect comparison approach generates results comparable to those produced by the mixed treatment comparison approach, and whether there are circumstances in which one method is preferable.

Methods

Eligibility Criteria

We included systematic reviews of randomized clinical trials involving at least 4 different treatments (i.e., health interventions used for treatment or prevention of the same medical condition), as networks of three health interventions have already received considerable study.[2, 3, 16] If a treatment was evaluated at several doses, we considered all doses to be equivalent. We also considered no-treatment and placebo to be equivalent. Whenever present, we excluded cluster randomized trials, crossover trials, and trials reporting only continuous outcomes from these systematic reviews.

Search Strategy

We (EM, OE) independently searched PubMed, in duplicate, from inception to January 2008 using the following search terms: "network AND meta-analysis," "mixed treatment AND meta-analysis," "indirect comparison," and "indirect AND meta-analysis." Our search was limited to English-language articles. We supplemented our search with findings from a review of the network geometry of studies[13] and from our own meta-analyses of multiple treatments (Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished).[17]

Data Abstraction

We (EM, OE) abstracted independently, in duplicate, information addressing the systematic review aims, number of trials per comparison, number of individuals with each specific outcome and number of individuals randomised to each intervention.

Statistical analyses

We first plotted the geometric networks of comparisons to graphically display what indirect comparisons our analyses aimed to assess.

We conducted the mixed treatment comparisons using fixed effects models similar to those introduced by Lu and Ades.[8] Although several definitions exist, we interpret the fixed effects approach as assuming that there is a single true value underlying all the study results; that is, the studies would yield similar effects regardless of the particular population enrolled, the intervention chosen, and the strategy for measuring the outcome of interest. A fixed effects model aims to estimate this common-truth effect and the uncertainty around the estimate.[18] We fitted separate models for each outcome category (i.e., mortality, response) using approximately non-informative priors. We used these models as a basis for deriving the odds ratio [OR] for each treatment comparison, with 95% Credible Intervals (CrIs) - the Bayesian equivalent of a classical confidence interval.
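As a sketch only (the exact parameterization of the models fitted here is not reproduced), a fixed effects mixed treatment comparison model of the general Lu and Ades type, with a binomial likelihood and notation introduced for this illustration, can be written as

$$
r_{ik} \sim \mathrm{Binomial}(p_{ik}, n_{ik}), \qquad \operatorname{logit}(p_{ik}) = \mu_i + d_{t_{ik}} - d_{b_i},
$$

where $r_{ik}$ and $n_{ik}$ are the number of events and the number randomised in arm $k$ of trial $i$, $\mu_i$ is the log odds of the outcome on the baseline treatment $b_i$ of trial $i$, and $d_t$ is the effect of treatment $t$ relative to a reference treatment (with $d$ fixed at 0 for the reference). Approximately non-informative priors, for example $\mu_i \sim N(0, 10^4)$ and $d_t \sim N(0, 10^4)$, complete the specification, and the odds ratio for treatment $A$ versus treatment $B$ is obtained as $\exp(d_A - d_B)$.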

We estimated the posterior densities for all unknown model parameters using Markov chain Monte Carlo (MCMC) simulation, as implemented in the software package WinBUGS Version 1.4. Specifically, we simulated two MCMC chains starting from different initial values of selected unknown parameters. Each chain contained 20,000 burn-in iterations followed by 20,000 update iterations. We assessed convergence by visualizing the histories of the chains against the iteration number; overlapping histories that appeared to mix with each other provided an indication of convergence. We based our inferences on the converged posterior distributions of the relevant parameters. In particular, we estimated the OR for a given treatment comparison by exponentiating the mean of the posterior distribution of the log OR, and constructed the corresponding 95% CrI by exponentiating the 2.5th and 97.5th percentiles of the posterior distribution of the log OR. Other parameters were estimated as the means of their corresponding posterior distributions.
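As an illustration of this post-processing step, the following Python sketch shows how the OR point estimate and 95% CrI are obtained, using simulated values in place of the WinBUGS posterior samples of a log OR.

```python
import numpy as np

# Hypothetical posterior samples of a log odds ratio, standing in for the
# MCMC output that WinBUGS would produce after the burn-in iterations.
rng = np.random.default_rng(1)
log_or_samples = rng.normal(loc=-0.2, scale=0.15, size=20_000)

# OR point estimate: exponentiate the posterior mean of the log OR.
or_estimate = np.exp(log_or_samples.mean())

# 95% credible interval: exponentiate the 2.5th and 97.5th percentiles.
cri_low, cri_high = np.exp(np.percentile(log_or_samples, [2.5, 97.5]))

print(f"OR = {or_estimate:.2f} (95% CrI {cri_low:.2f} to {cri_high:.2f})")
```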

We measured the goodness of fit of our models to the data by calculating the residual deviance. Residual deviance was defined as the difference between the deviance for the fitted model and the deviance for the saturated model, where the deviance uses the likelihood function to measure the fit of the model to the data. Under the null hypothesis that the model provides an adequate fit to the data, the residual deviance is expected to have a mean equal to the number of unconstrained data points.
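For binomial arm-level data, each data point's deviance contribution takes the standard form sketched below. The data and fitted probabilities are hypothetical, and in a Bayesian analysis this quantity would typically be summarised over posterior draws rather than evaluated at a single fit.

```python
import numpy as np

def binomial_residual_deviance(r, n, p_hat):
    """Residual deviance for binomial arm-level data.

    r, n  : observed events and number randomised per trial arm
    p_hat : fitted event probabilities from the model
    The fit is judged adequate when the total is close to the number of arms.
    """
    r, n, p_hat = map(np.asarray, (r, n, p_hat))
    r_hat = n * p_hat  # fitted number of events per arm
    with np.errstate(divide="ignore", invalid="ignore"):
        term1 = np.where(r > 0, r * np.log(r / r_hat), 0.0)
        term2 = np.where(n - r > 0, (n - r) * np.log((n - r) / (n - r_hat)), 0.0)
    return float(2 * np.sum(term1 + term2))

# Hypothetical arm-level data and fitted probabilities, for illustration only.
print(binomial_residual_deviance(r=[12, 8, 20], n=[100, 95, 110], p_hat=[0.11, 0.09, 0.17]))
```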

For our relative effect sizes used in the adjusted indirect comparison analyses, we used the same data as for the mixed treatment comparison analyses. We conducted multiple meta-analyses of head-to-head comparisons to obtain ORs and 95% Confidence Intervals [95% CIs]. As with the mixed treatment analyses, we applied the fixed effects method. Once we obtained the summary estimates of pooled head-to-head evaluations with CIs, we applied the adjusted indirect comparison approach.[4]

For each systematic review, we determined whether there were important inconsistencies between the adjusted indirect comparison and mixed treatment comparison approaches by comparing the 95% CI produced by the former approach against the 95% CrI produced by the latter approach for the OR of each feasible treatment comparison. We diagnosed inconsistency by assessing departures from an a priori determined consistency rule stating that the lower and upper endpoints of the two types of intervals should not differ by more than 0.25 and 0.75, respectively, and that the estimated ORs should not differ by more than 0.5. EM and IG performed all statistical analyses.
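To make the rule concrete, the following Python sketch encodes the thresholds stated above as a simple check; the function name and the example values are ours, introduced for illustration only.

```python
def importantly_different(or_mtc, cri, or_aic, ci,
                          delta_or=0.5, delta_lower=0.25, delta_upper=0.75):
    """Flag an important difference between the mixed treatment comparison
    result (or_mtc, cri) and the adjusted indirect comparison result
    (or_aic, ci) using the a priori consistency rule described above."""
    return (abs(or_mtc - or_aic) > delta_or
            or abs(cri[0] - ci[0]) > delta_lower
            or abs(cri[1] - ci[1]) > delta_upper)

# Hypothetical pair of results, for illustration only.
print(importantly_different(0.85, (0.60, 1.20), 0.95, (0.55, 1.60)))
```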

Results

We identified 44 potentially relevant systematic reviews of the effectiveness of multiple treatments for different health conditions, including two of our own reviews that were ongoing during the search period (Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished).[17] We excluded 13 reviews that incorporated fewer than 4 treatments, 9 reviews that excluded eligible data for comparisons, 3 reviews that did not create a network of comparisons, and 12 reviews that did not provide data on individual outcomes in each study. In total, we included seven systematic reviews in our analyses (Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished)[4, 17, 19–22] with three different types of network structures: (I) star-networks, having a common comparator and containing no loops (Figures 1, 2, 3); (II) single-loop networks, containing only one loop (Figures 4 and 5); and (III) multi-loop networks, containing two or more loops (Figures 6 and 7). The seven reviews were published between 1997 and the present.

Figure 1

Star-network of evidence formed by the seven stent treatments on target lesion revascularization event rates, together with information on the number of trials, number of patients and number of events per (direct) treatment comparison. Each treatment is a node in the network. The links between nodes are trials or pairs of trial arms. The numbers along the link lines indicate the number of trials or pairs of trial arms for that link in the network.

Figure 2

Star-network of evidence formed by the treatments Placebo, Ketoprofen, Ibuprofen, Felbinac, Piroxicam, Indomethacin and Other NSAID, together with information on the number of trials, number of patients and number of events per (direct) comparison.

Figure 3

Star-network of evidence formed by the four statin treatments and the placebo treatment in primary prevention of cardiovascular mortality, together with information on the number of trials, number of patients and number of events per (direct) comparison.

Figure 4

Single-loop network of evidence formed by the four antibiotic and antiseptic treatments, together with information on the number of trials, number of patients and number of events per (direct) treatment comparison.

Figure 5

Single-loop network of evidence formed by five antifungal treatments, together with information on the number of trials, number of patients and number of events per (direct) treatment comparison.

Figure 6

Multi-loop network of evidence formed by the four treatments for prevention of Pneumocystis carinii pneumonia, together with information on the number of trials, number of patients and number of events per (direct) treatment comparison.

Figure 7

Multi-loop network of evidence formed by the eight antifungal treatments, together with information on the number of trials, number of patients and number of events per (direct) treatment comparison.

Number of comparisons

The seven systematic reviews retained in our analyses included between 4 and 8 treatments. Four reviews did not have a no-treatment control intervention (Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished).[4, 20, 22] The number of trials included in the seven systematic reviews ranged from 10 to 29. Two reviews had insufficient mutual comparator arms to allow the adjusted indirect comparison of every intervention (Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished).[21] No trials with three or more arms were found in any of the seven systematic reviews.

Analyses 1-3 (Figures 1, 2, 3) represent star-shaped networks in which every intervention shares a mutual comparator. Analyses 4 and 5 (Figures 4 and 5) are networks with a single loop, in which multiple interventions have been compared but do not necessarily share a mutual comparator. Analyses 6 and 7 (Figures 6 and 7) are multi-loop networks in which additional treatments exist that do not share mutual comparators.

Analysis 1. Drug-eluting stents compared to bare-metal stents on target lesion revascularization event rates[22]

We evaluated the impact of drug-eluting stents compared to bare-metal stents on the outcome of target lesion revascularization event rates on the basis of 18 2-arm randomised trials comparing 7 different treatments. Figure 1 displays the network of evidence available from these trials. Table 1 shows the results of the pairwise treatment comparisons when using direct, head-to-head data (in bold), the mixed treatment approach and the adjusted indirect comparison approach. In a single instance, the mixed treatment comparison approach found a significant difference between the effects of two treatments when the adjusted indirect comparison approach did not. According to the a priori consistency rule, the estimated ORs and associated uncertainty intervals were importantly different between the two approaches for only four pairwise treatment comparisons.

Table 1 Drug-eluting stents compared to bare-metal stents on revascularization status[22].

Analysis 2. NSAIDS for acute pain[19]

We evaluated the effects of 7 different interventions for acute pain from 29 trials that included 58 trial arms, for a possible 21 comparisons. See Figure 2 and Table 2. We found no important distinctions between the adjusted indirect comparison and mixed treatment comparison approaches.

Table 2 NSAIDS for acute pain[19].

Analysis 3. Statins for the primary prevention of cardiovascular mortality[17]

We evaluated the role of 4 statin interventions compared to placebo/standard care for the prevention of cardiovascular mortality in primary prevention of cardiovascular disease populations. See Figure 3 and Table 3. There were 18 trials included, from 38 arms, allowing for a possible 10 comparisons. We found no major discrepancies between the two comparative approaches.

Table 3 Statins for the prevention of cardiovascular mortality[17].

Analysis 4. Topical treatment for treatment of ear discharge at 1 and 2 weeks [21]

We evaluated the role of topical antibiotics for the prevention of ear discharge for patients with eardrum perforations using 18 2-arm randomised trials comparing 4 different treatments. Figure 4 displays the network of evidence available from these trials. The results of the 2 pair-wise treatment comparisons performed via the adjusted indirect comparison approach and 6 pair-wise treatment comparisons performed via the mixed treatment comparison approach are shown in Table 4. In one circumstance, the mixed treatment comparison approach found a statistically significant difference between the effects of two treatments, when the adjusted indirect comparison approach did not.

Table 4 Topical treatment for treatment of ear discharge at 1 and 2 weeks[21].

Analysis 5. Antifungal agents for preventing mortality in solid organ transplant recipients[20]

We evaluated the role of antifungal agents for preventing mortality in solid organ transplant recipients on the basis of 10 2-arm randomised trials comparing 5 different treatments. The network of evidence for these trials is shown in Figure 5. The results for the 5 possible pair-wise treatment comparisons using the adjusted indirect comparison approach and 10 comparisons using the mixed treatment comparison are shown in Table 5. In a single case, the mixed treatment comparison approach found a different direction of effect than the adjusted indirect comparison approach. The estimated ORs and associated uncertainty intervals produced by the two approaches were importantly different for three pair-wise treatment comparisons.

Table 5 Antifungal agents for preventing mortality in solid organ transplant recipients[20].

Analysis 6. Prophylactic treatments against pneumocystis carinii pneumonia and toxoplasma encephalitis in HIV-infected patients[4]

We evaluated 4 different interventions from 22 trials with 44 trial arms, allowing a possible 6 comparisons. See Figure 6 and Table 6. In this example, the adjusted indirect comparison was only required for one comparison but differed importantly from the mixed treatment method.

Table 6 Prophylactic treatments against pneumocystis carinii pneumonia and toxoplasma encephalitis in HIV-infected patients[4].

Analysis 7. Antifungal agents for the prevention of mortality among patients with invasive candidemia

(Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished.)

We evaluated the effectiveness of 8 different treatments from 19 trials, comprising 38 arms, for a possible 28 comparisons. See Figure 7 and Table 7. For 9 comparisons we were unable to conduct the adjusted indirect evaluation, as no suitable mutual comparator existed. The direction of effect differed between the two approaches for 4 comparisons. In one circumstance, the adjusted indirect approach found a significant treatment effect while the mixed treatment method did not.

Table 7 Antifungal agents for the prevention of mortality among patients with invasive candidemia (Perri D, O'Regan C, Cooper C, Nachega JB, Wu P, Tleyjeh I, Philips P, Mills EJ: Antifungal treatment for systemic candida infections: A mixed treatment comparison meta-analysis. Unpublished)

Discussion

Our paper presents important evidence on the relative performance of the adjusted indirect comparison and mixed treatment comparison approaches to evaluating multiple health interventions in the absence of sufficient direct evidence.

For the 3 star-networks considered in this paper, we found that both approaches led to similar results, as both could use all the available information in the data. In general, some slight difference may exist between the results produced by the two approaches for this type of network, since the adjusted indirect comparison approach uses an (approximate) normal likelihood while the mixed treatment comparison approach uses an (exact) binomial likelihood. If one chooses to ignore such slight differences, the adjusted indirect comparison approach is easier to use for star-networks than the mixed treatment comparison approach.

For the 2 single-loop networks included in this paper, we found that the adjusted indirect comparison and mixed treatment comparison approaches yielded comparable estimates of relative treatment effectiveness. However, the two approaches can be expected to yield different results for single-loop networks in general, simply because the mixed treatment comparison approach uses all available information in the data whereas the adjusted indirect comparison approach does not.

Finally, we found that both the adjusted indirect comparison and the mixed treatment comparison approach produced comparable estimates of relative treatment effectiveness for the two multi-loop networks considered in this paper. As pointed out by one of the referees during peer review, the adjusted indirect comparison approach may in general be difficult, if not impossible, to apply to this type of network. As an illustration, suppose we are interested in the indirect estimate of the OR for the pairwise comparison of treatments C and D in Figure 7, where there is no direct comparison between these two treatments. Through the network of evidence, there are three ways to perform the adjusted indirect comparison of treatments C and D: (1) using comparisons C versus E and D versus E; (2) using comparisons C versus B and D versus B; and (3) using comparisons C versus B, B versus F, and F versus D. Clearly, these routes will lead to different results. One possible way to deal with this problem is to apply the adjusted indirect comparison approach to each of the three routes and then combine the resulting estimates into a pooled estimate. Crucially, however, these three routes to the estimate of the OR for the pairwise comparison of treatments C and D are not statistically independent in this case, so the resulting estimates cannot be pooled by a simple weighted average. The mixed treatment comparison approach, in contrast, combines this information simultaneously and produces a coherent set of estimates for all the treatment contrasts, based on all the data.

The adjusted indirect comparison approach may be preferred for star-networks, as it is typically easier to implement than the mixed treatment comparison approach and provides similar results. For single-loop networks, one could use either approach, though the results produced by the two approaches might generally be different, reflecting the fact that the mixed treatment comparison approach relies on all of the information available in the data but the adjusted indirect comparison approach does not. For multi-loop networks, it might be difficult, if not impossible, to implement the adjusted indirect comparison approach in some situations, rendering the mixed treatment comparison as the preferred choice for this type of network.

There are strengths and limitations that should be considered when interpreting this manuscript. Strengths include our extensive searching for systematic reviews and the inclusion of unpublished systematic reviews. It is possible that we missed systematic reviews that may have met our inclusion criteria; however, our searches were extensive, were supplemented with others' systematic reviews,[2, 13] and were conducted in duplicate to minimize bias. We applied the fixed effects method for both the adjusted indirect comparison and mixed treatment comparison approaches. Our goodness-of-fit checks indicated that the fixed effects mixed treatment comparison approach was sensible for nearly all of the seven systematic reviews. Further sensitivity analyses performed for this approach confirmed the robustness of the overall conclusions to the exclusion of discrepant trials. It is possible that we would have found slight differences if we had employed the random effects method; more often than not, however, these methods yield comparable estimates of relative treatment effects.[18] Some have argued that the fixed effects method should now be preferred over the random effects method, as it places greater weight on larger studies, which may be less prone to bias.[23] Finally, we were unable to compare the adjusted indirect comparison approach with head-to-head evaluations as, in this set of systematic reviews, there were insufficient trials with more than one comparator.

Salanti and others have discussed the merits and challenges of the mixed treatment approach.[12, 23, 24] The mixed treatment comparison is a resource-intensive approach to conducting analyses, as it requires knowledge of Bayesian principles and a working ability with WinBUGS, software that is somewhat user-unfriendly for those unfamiliar with it. However, the mixed treatment approach also provides interesting additional information that may be useful to some readers, including the probability of each possible ranking order of the effectiveness of the interventions. For the sake of clarity, we have not presented the probabilities associated with each analysis; such probabilities may be difficult to interpret, particularly when there are no clear differences amongst the interventions. A further advantage is that this approach provides indirect comparisons without requiring a mutual comparator, a possible strength over the adjusted indirect approach. However, we cannot know whether such an estimate is reliable or similar to an adjusted indirect estimate until further trials become available. Some have also argued that the mixed treatment comparison is a 'black box', as it may be difficult or impossible to determine where an analysis has gone wrong.[25] Future validations of the analyses performed in this manuscript may yield insights into the transparency of this method. Finally, no reporting guidelines exist for the mixed treatment approach; a step forward may be the development of minimum reporting criteria.[11, 12]

For less complex analyses, such as star-shaped networks, the adjusted indirect comparison may be easier for meta-analysts to apply in their general practice. One of us (GG) was involved in the development of this approach.[4] Compared with the mixed treatment comparison, the adjusted indirect comparison is limited in more complex evaluations, as it requires a mutual comparator when performing indirect comparisons. However, as discussed above, the validity of indirect comparisons performed without mutual comparators via the mixed treatment comparison approach may reasonably be questioned. The adjusted indirect comparison approach requires knowledge of standard meta-analysis techniques and working knowledge of programmable software such as R, S-Plus, Stata or SAS, so it is arguably also resource intensive. A recently released, freely downloadable program may make this approach more accessible to non-statisticians.[25, 26]

There is also concern that both the adjusted indirect comparison and mixed treatment comparison approaches will have less power than the direct approach and may sometimes lead to indeterminate results, in the form of wide uncertainty intervals for relative intervention effects. Inferences based on such findings may therefore be limited. In addition, it is not yet clear how to interpret results that differ substantially between the two approaches. Finally, although the two approaches may yield only marginally different treatment effect estimates, even small differences may affect subsequent analyses based on study findings, such as cost-effectiveness models. There is a clear need to evaluate whether the choice of method importantly affects cost-effectiveness projections.[11]

Conclusion

In conclusion, both the mixed treatment comparison approach and the adjusted indirect comparison approach provide compelling inferences about the relative effectiveness of interventions. In less complex indirect comparisons, where a mutual comparator exists, the adjusted indirect comparison may be favoured for its simplicity. In more complex networks, the mixed treatment comparison appears to permit comparisons that other methods cannot.

Abbreviations

AES: actinomycin-D-eluting stent
BMS: bare-metal stent
EES: everolimus-eluting stent
MES: mycophenolate-eluting stent
PES: paclitaxel-eluting stent
SES: sirolimus-eluting stent
RCT: randomized clinical trial
OR: odds ratio
CI: confidence interval
CrI: credible interval
D: dapsone
D/P: dapsone/pyrimethamine
AP: aerosolized pentamidine
TMP-SMX: trimethoprim-sulfamethoxazole

References

1. Guyatt G, Schunemann H, Cook D, Jaeschke R, Pauker S, Bucher H: Grades of recommendation for antithrombotic agents. Chest. 2001, 119: 3S-7S. 10.1378/chest.119.1_suppl.3S.

2. Song F, Altman DG, Glenny AM, Deeks JJ: Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses. BMJ. 2003, 326: 472. 10.1136/bmj.326.7387.472.

3. Glenny AM, Altman DG, Song F, Sakarovitch C, Deeks JJ, D'Amico R, Bradburn M, Eastwood AJ: Indirect comparisons of competing interventions. Health Technol Assess. 2005, 9: 1-134.

4. Bucher HC, Guyatt GH, Griffith LE, Walter SD: The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol. 1997, 50: 683-691. 10.1016/S0895-4356(97)00049-8.

5. Ades AE: A chain of evidence with mixed comparisons: models for multi-parameter synthesis and consistency of evidence. Stat Med. 2003, 22: 2995-3016. 10.1002/sim.1566.

6. Hasselblad V: Meta-analysis of multitreatment studies. Med Decis Making. 1998, 18: 37-43. 10.1177/0272989X9801800110.

7. Lumley T: Network meta-analysis for indirect treatment comparisons. Stat Med. 2002, 21: 2313-2324. 10.1002/sim.1201.

8. Lu G, Ades AE: Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004, 23: 3105-3124. 10.1002/sim.1875.

9. Salanti G, Marinho V, Higgins JP: A case study of multiple-treatments meta-analysis demonstrates that covariates should be considered. J Clin Epidemiol. 2009, 62: 857-864. 10.1016/j.jclinepi.2008.10.001.

10. Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG: Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ. 2009, 338: b1147. 10.1136/bmj.b1147.

11. Sutton A, Ades AE, Cooper N, Abrams K: Use of indirect and mixed treatment comparisons for technology assessment. PharmacoEconomics. 2008, 26: 753-767. 10.2165/00019053-200826090-00006.

12. Salanti G, Higgins JP, Ades AE, Ioannidis JP: Evaluation of networks of randomized trials. Stat Methods Med Res. 2008, 17: 279-301. 10.1177/0962280207080643.

13. Salanti G, Kavvoura FK, Ioannidis JP: Exploring the geometry of treatment networks. Ann Intern Med. 2008, 148: 544-553.

14. Higgins JP, Whitehead A: Borrowing strength from external trials in a meta-analysis. Stat Med. 1996, 15: 2733-2749. 10.1002/(SICI)1097-0258(19961230)15:24<2733::AID-SIM562>3.0.CO;2-0.

15. NICE: Updated guide to the methods of technology appraisal - June 2008. 2008, [http://www.nice.org.uk/media/B52/A7/TAMethodsGuideUpdatedJune2008.pdf]

16. Song F, Glenny AM, Altman DG: Indirect comparison in evaluating relative efficacy illustrated by antimicrobial prophylaxis in colorectal surgery. Control Clin Trials. 2000, 21: 488-497. 10.1016/S0197-2456(00)00055-6.

17. Mills EJ, Rachlis B, Wu P, Devereaux PJ, Arora P, Perri D: Primary prevention of cardiovascular mortality and events with statin treatments: a network meta-analysis involving more than 65,000 patients. J Am Coll Cardiol. 2008, 52: 1769-1781. 10.1016/j.jacc.2008.08.039.

18. Guyatt GH, Rennie D: Users' Guides to the Medical Literature. 2007, JAMA Press, Chicago, 556.

19. Mason L, Moore RA, Edwards JE, Derry S, McQuay HJ: Topical NSAIDs for acute pain: a meta-analysis. BMC Fam Pract. 2004, 5: 10. 10.1186/1471-2296-5-10.

20. Playford EG, Webster AC, Sorell TC, Craig JC: Antifungal agents for preventing fungal infections in solid organ transplant recipients. Cochrane Database Syst Rev. 2004, CD004291-3.

21. Macfadyen CA, Acuin JM, Gamble C: Topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. Cochrane Database Syst Rev. 2005, CD004618-4.

22. Biondi-Zoccai GG, Agostini P, Abbate A, Testa L, Burzotta F, Lotrionte M: Adjusted indirect comparison of intracoronary drug-eluting stents: evidence from a meta-analysis of randomized bare-metal-stent-controlled trials. Int J Cardiol. 2005, 100: 119-123. 10.1016/j.ijcard.2004.11.001.

23. Pocock SJ: Safety of drug-eluting stents: demystifying network meta-analysis. Lancet. 2007, 370: 2099-2100. 10.1016/S0140-6736(07)61898-4.

24. Caldwell DM, Ades AE, Higgins JP: Simultaneous comparison of multiple treatments: combining direct and indirect evidence. BMJ. 2005, 331: 897-900. 10.1136/bmj.331.7521.897.

25. Wells GA, Sultan SA, Chen L, Khan M, Coyle D: Indirect evidence: indirect treatment comparisons in meta-analysis. 2009, Ottawa: Canadian Agency for Drugs and Technologies in Health.

26. Wells GA, Sultan SA, Chen L, Khan M, Coyle D: Indirect treatment comparison [computer program]. Version 1.0. 2009, Ottawa: Canadian Agency for Drugs and Technologies in Health.


Author information


Correspondence to Edward J Mills.


Competing interests

None declared. COR has previously been employed by Pfizer Ltd. and is currently employed by Merck, Sharpe & Dohme (MSD) Ltd. MSD had no role in the development, execution or publication of the paper. EM has consulted for Pfizer Ltd. and received unrestricted research grants from Pfizer Ltd. GG has received unrestricted research grants from several for-profit companies. IG runs a statistical consulting firm.

Authors' contributions

COR, EM, IG and GG were responsible for the study concept. COR, EM and OE were responsible for the study searches. COR, EM, EO, IG and GG were responsible for study extraction and analysis. COR, EM, IG and GG were responsible for study writing. COR, IG, EO, GG and EM approved the submitted manuscript.


Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

O'Regan, C., Ghement, I., Eyawo, O. et al. Incorporating multiple interventions in meta-analysis: an evaluation of the mixed treatment comparison with the adjusted indirect comparison. Trials 10, 86 (2009). https://0-doi-org.brum.beds.ac.uk/10.1186/1745-6215-10-86

