Designs for clinical trials with time-to-event outcomes based on stopping guidelines for lack of benefit
Patrick Royston^{1}, Friederike MS Barthel^{1}, Mahesh KB Parmar^{1}, Babak Choodari-Oskooei^{1} and Valerie Isham^{2}
https://doi.org/10.1186/1745-6215-12-81
© Royston et al; licensee BioMed Central Ltd. 2011
Received: 2 August 2010
Accepted: 18 March 2011
Published: 18 March 2011
Abstract
Background
The pace of novel medical treatments and approaches to therapy has accelerated in recent years. Unfortunately, many potential therapeutic advances do not fulfil their promise when subjected to randomized controlled trials. It is therefore highly desirable to speed up the process of evaluating new treatment options, particularly in phase II and phase III trials. To help realize such an aim, in 2003, Royston and colleagues proposed a class of multi-arm, two-stage trial designs intended to eliminate poorly performing contenders at a first stage (point in time). Only treatments showing a predefined degree of advantage against a control treatment were allowed through to a second stage. Arms that survived the first-stage comparison on an intermediate outcome measure entered a second stage of patient accrual, culminating in comparisons against control on the definitive outcome measure. The intermediate outcome is typically on the causal pathway to the definitive outcome (i.e. the features that cause an intermediate event also tend to cause a definitive event), an example in cancer being progression-free and overall survival. Although the 2003 paper alluded to multi-arm trials, most of the essential design features concerned only two-arm trials. Here, we extend the two-arm designs to allow an arbitrary number of stages, thereby increasing flexibility by building in several 'looks' at the accumulating data. Such trials can terminate at any of the intermediate stages or the final stage.
Methods
We describe the trial design and the mathematics required to obtain the timing of the 'looks' and the overall significance level and power of the design. We support our results by extensive simulation studies. As an example, we discuss the design of the STAMPEDE trial in prostate cancer.
Results
The mathematical results on significance level and power are confirmed by the computer simulations. Our approach compares favourably with methodology based on beta spending functions and on monitoring only a primary outcome measure for lack of benefit of the new treatment.
Conclusions
The new designs are practical and are supported by theory. They hold considerable promise for speeding up the evaluation of new treatments in phase II and III trials.
1 Introduction
The ongoing developments in molecular sciences have increased our understanding of many serious diseases, including cancer, HIV and heart disease, resulting in many potential new therapies. However, the US Food and Drug Administration has identified a slowdown, rather than an expected acceleration, in innovative medical therapies actually reaching patients [1]. There are probably two primary reasons for this. First, most new treatments show no clear advantage, or at best have a modest effect, when compared with the current standard of care. Second, the large number of such potential therapies requires a corresponding number of large and often lengthy clinical trials. The FDA called for a 'product-development toolkit' to speed up the evaluation of potential treatments, including novel clinical trial designs. As many therapies are shown not to be effective, one component of the toolkit is methods in which a trial is stopped 'early' for lack of benefit or futility.
Several methodologies have been proposed in the past to deal with stopping for futility or lack of benefit, including conditional power and spending functions. With the futility approach, assumptions are made about the distribution of trial data yet to be seen, given the data so far. At certain points during the trial, the conditional power is computed, the aim being to quantify the chance of a statistically significant final result given the data available so far. The procedure is also known as stochastic curtailment. As a sensitivity analysis, the calculations may be carried out under different assumptions about the data that could be seen if the trial were continued [2]. For example, treatment effects of different magnitudes might be investigated under the alternative hypothesis of a nonnull treatment effect.
Alpha-spending functions were initially proposed by Armitage et al. [3] and extensions to the shape of these functions were suggested by several authors including Lan & DeMets [4] and O'Brien & Fleming [5]. In essence, the approach suggests a functional form for 'spending' the type 1 error rate at several interim analyses such that the overall type 1 error is preserved, usually at 5%. The aim is to assess whether there is evidence that the experimental treatment is superior to control at one of the interim analyses. Pampallona et al. [6] extended the idea to beta or type 2 error spending functions, potentially allowing the trial to be stopped early for lack of benefit of the experimental treatment.
In the context of stopping for lack of benefit, Royston et al. [7] proposed a design for studies with a time-to-event outcome that employs an intermediate outcome in the first stage of a two-stage trial with multiple research arms. The main aims are quickly and reliably to reject new therapies unlikely to provide a predefined advantage over control and to identify those more likely to be better than control in terms of a definitive outcome measure. An experimental treatment is eliminated at the first stage if it does not show a predefined degree of advantage (e.g. a sufficiently small hazard ratio) over the control treatment. In the first stage, an experimental arm is compared with the control arm on an intermediate outcome measure, typically using a relaxed significance level and high power. The relaxed significance level allows the first stage to end relatively early in the trial timeline, and high power guards against incorrectly discarding an effective treatment. Arms which survive the comparison enter a further stage of patient accrual, culminating at the end of the second stage in a comparison against control based on the definitive outcome.
A multi-arm, two-stage design was used in GOG182/ICON5 [8], the first such trial ever run. Early termination indeed occurred for all the experimental arms. The trial, which compared four treatments for advanced ovarian cancer against control, was conducted by the Gynecologic Oncology Group in the USA and the MRC Clinical Trials Unit, London, and investigators in Italy and Australia. The trial was planned to run in two stages, but after the first-stage analysis, the Independent Data Monitoring Committee saw no justification to continue accrual to any of the treatment arms based on the intermediate outcome of progression-free survival. Early stopping allowed resources to be concentrated on other trials, hypothetically saving about 20 years of trial time compared with running four two-arm trials one after the other with overall survival as the primary outcome measure.
Here, we show how a parallel-group, two-arm, two-stage design may be extended to three or more stages, thus providing stopping guidelines at every stage. Designs with more than two arms involve several pairwise comparisons with control rather than just one; apart from the multiplicity issue, the multi-arm designs are identical to the two-arm designs. In the present paper, section 2 describes the designs and the methodology underlying our approach, including choice of outcome measure and sample size calculation. Section 3 briefly compares our approach with designs based on beta-spending functions. In section 4, we present simulation studies to assess the operating characteristics of the designs in particular situations. In section 5, we describe a real example, the ongoing MRC STAMPEDE [9] randomized trial in prostate cancer, which has six arms and is planned to run in five stages. The needs of STAMPEDE prompted extension of the original methodology to more than two stages. Further design issues are discussed in section 6.
2 Methods
2.1 Choosing an intermediate outcome measure
Appropriate choices of an intermediate outcome measure (I) and definitive outcome measure (D) are key to the design of our multistage trials. Without ambiguity, we use the letters I and D to mean either an outcome measure (i.e. time to a relevant event) or an outcome (an event itself), for example I = (time to) disease progression, D = (time to) death. The 'treatment effect' on I is not required to be a surrogate for the treatment effect on D. The basic assumptions for I in our design are that it occurs no later than D, more frequently than D and is on the causal pathway to D. If the null hypothesis is true for I, it must also hold for D.
Crucially, it is not necessary that a true alternative hypothesis for I translate into a true alternative hypothesis for D. However, the converse must hold: a true alternative hypothesis for D must imply a true alternative hypothesis for I. Experience tells us that it is common for the magnitude of the treatment effect on I to exceed that on D.
As an example, consider the case mentioned above, common in cancer, in which I = time to progression or death and D = time to death. It is quite conceivable for a treatment to slow down or temporarily halt tumour growth, but not ultimately to delay death. It would of course be a problem if the reverse occurred and went unrecognised, since the power to detect the treatment effect on I in the early stages of one of our trials would be compromised, leading to a larger probability of stopping the trial for apparent lack of benefit. In the latter case, a rational choice of I might be D itself. The case I = D is also relevant to other practical situations, for example the absence of an obvious choice for I, and is a special case of the methodology presented here. In practice, we typically make the conservative assumption that the size of the treatment effect is the same on the I and D outcomes.
The treatment effects, i.e. (log) hazard ratios, on I and D do not need to be highly correlated, although in practice they often are. We refer here to the correlation between treatment effects on I and D within the trial, not across cognate trials. When I and D are time-to-event outcome measures, the correlation of the (log) hazard ratios is time-dependent. Specifically, the correlation depends on the accumulated numbers of events at different times, as discussed in section 2.7.
Examples of intermediate and primary outcome measures are progression-free (or disease-free) survival and overall survival for many cancer trials, and CD4 count and disease-specific survival for HIV trials.
2.2 Design and sample size
Our multi-arm, multi-stage (MAMS) designs involve the pairwise comparison of each of several experimental arms with control. In essence, we view MAMS designs as a combination of two-arm, multi-stage (TAMS) trials; that is, we are primarily interested in comparing each of the experimental arms with the control arm. Apart from the obvious issue of multiple treatment comparisons, methodological aspects are similar in MAMS and TAMS trials. In this paper, therefore, we restrict attention to TAMS trials with just one experimental arm, E, and a control arm, C.
Assume that the definitive outcome measure, D, in a randomized controlled trial is a time- and disease-related event. In many trials, D would be death. As just discussed, in our multi-stage trial design we also require a time-related intermediate outcome, I, which is assumed to precede D.
A TAMS design has s > 1 stages. The first s - 1 stages include a comparison between E and C on the intermediate outcome, I, and the s th stage a comparison between E and C on the definitive outcome, D. Let Δ _{ i } be the true hazard ratio for comparing E with C on I at the i th stage (i < s), and let Δ _{ s } be the true hazard ratio for comparing E with C on D at the s th stage. We assume proportional hazards holds for all treatment comparisons.
The primary null and alternative hypotheses, H _{0} (stage s) and H _{1} (stage s), concern Δ _{ s }, with the hypotheses at stage i (i < s) playing a subsidiary role. Nevertheless, it is necessary to supply design values for all the hypotheses. In practice, the null hazard ratios Δ ^{0} _{ i } are almost always taken as 1 and the alternative hazard ratios Δ ^{1} _{ i } as some fixed value < 1 for all i = 1, ..., s; in cancer trials, Δ ^{1} _{ i } = 0.75 is often a reasonable choice. Note, however, that taking Δ ^{1} _{ i } = Δ ^{1} _{ s } for all i < s is a conservative choice; the design allows for Δ ^{1} _{ i } < Δ ^{1} _{ s }. For example, in cancer, if I is progression-free survival and D is death it may be realistic and efficient to take, say, Δ ^{1} _{ s } = 0.75 and Δ ^{1} _{ i } = 0.7 for i < s. In what follows, when the interpretation is clear we omit the (stage i) qualifier and refer simply to H _{0} and H _{1}.
If E is better than C then Δ _{ i } < 1 for all i. Let Δ̂ _{ i } be the estimated hazard ratio comparing E with C on outcome I for all patients recruited up to and including stage i, and Δ̂ _{ s } be the estimated hazard ratio comparing E with C on D for all patients at stage s (i.e. at the time of the analysis of the definitive outcome).
The allocation ratio, i.e. the number of patients allocated to E for every patient allocated to C, is assumed to be A, with A = 1 representing equal allocation, A < 1 relatively fewer patients allocated to E and A > 1 relatively more patients allocated to E.
The trial design with a maximum of s stages screens E for 'lack of benefit' at each stage, as follows:
 1.
For stage i, specify a significance level α _{ i } and power ω _{ i } together with hazard ratios Δ ^{0} _{ i } and Δ ^{1} _{ i }, as described above.
 2.
Using the above four values, we can calculate e _{ i } , the cumulative number of events to be observed in the control arm during stages 1 through i. Consequently, given the accrual rate, r _{ i } , and the hazard rate, λ _{ I } , for the I-outcome in the control arm, we can calculate n _{ i } , the number of patients to be entered in the control arm during stage i, and An _{ i } , the corresponding number of patients in the experimental arm. We can also calculate the (calendar) time, t _{ i } , of the end of stage i.
 3.
Given the above values, we can also calculate a critical value, δ _{ i } , for rejecting H _{0}: Δ _{ i } = Δ ^{0} _{ i }. We discuss the determination of δ _{ i } in detail in section 2.3.
 4.
At stage i, we stop the trial for lack of benefit of E over C if the estimated hazard ratio, Δ̂ _{ i }, exceeds the critical value, δ _{ i } . Otherwise we continue to the next stage of recruitment.
Stage s:
The same principles apply to stage s as to stages 1 to s - 1, with the obvious difference that e _{ s } , the required number of control arm events (cumulative over all stages), and λ_{ D }, the hazard rate, apply to D rather than I.
If the experimental arm survives all of the s - 1 tests at step 4 above, the trial proceeds to the final stage, otherwise recruitment is terminated early.
To limit the total number of patients in the trial, an option is to stop recruitment at a predefined time, t*, during the final stage. Stopping recruitment early increases the length of the final stage. See Appendix A for further details.
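The stage-by-stage screening rule in steps 1 to 4 can be sketched in code. This is an illustrative sketch only: `run_tams` and `estimate_hr` are hypothetical names, and the critical values and hazard-ratio estimates in the example are invented for demonstration, not taken from any real design.

```python
def run_tams(critical_values, estimate_hr):
    """Return (stopped_early, stage_reached) for a TAMS trial.

    critical_values: [delta_1, ..., delta_s]; stages 1..s-1 test the
    intermediate outcome I, stage s tests the definitive outcome D.
    estimate_hr(i): estimated hazard ratio comparing E with C at stage i
    (in practice, from a Cox or logrank analysis of the accumulated data).
    """
    s = len(critical_values)
    for i, delta_i in enumerate(critical_values, start=1):
        hr_hat = estimate_hr(i)
        if hr_hat > delta_i:       # step 4: lack of benefit, stop recruitment
            return True, i
    return False, s                # E survived all s stages

# Example: a trial that passes stage 1 but stops at stage 2 (invented numbers)
stopped, stage = run_tams([1.00, 0.92, 0.89, 0.75],
                          lambda i: [0.95, 0.97, 0.85, 0.74][i - 1])
```

A treatment whose estimated hazard ratio stays below every critical value proceeds through all s stages to the definitive comparison.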
To implement such a design in practice, we require values for δ _{ i } , e _{ i } , n _{ i } for stages i = 1, ..., s. To plan the trial timelines, we also need t _{1}, ..., t _{s}, the endpoints of each stage. We now consider how these values are determined.
2.3 Determining the critical values δ _{1}, ..., δ _{ s }
To obtain the critical values, δ _{ i } , it is necessary to provide values of the significance level, α _{ i } , and power, ω _{ i } , for every stage. We discuss the choice of these quantities in section 2.6.
where e′ _{ i } is the number of events in the experimental arm under H _{1} by the end of stage i when there are e _{ i } events in the control arm and the allocation ratio is A. (Note that A is implicitly taken into account in e′ _{ i }.) An algorithm to calculate e _{ i } , e′ _{ i } and the corresponding t _{ i } is described next.
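As a rough illustration of how α _{ i }, ω _{ i } and the design hazard ratios jointly determine the event count and critical value for a stage, the following sketch uses the usual normal approximation to the log hazard ratio, with variance approximated as 1/e + 1/(Ae). Taking the experimental-arm event count as Ae is a simplifying assumption of this sketch, not the paper's exact equations, so it reproduces the tabulated values in section 2.6 only approximately.

```python
from math import exp, log
from statistics import NormalDist

def stage_design(alpha, omega, hr0=1.0, hr1=0.75, A=1.0):
    """Approximate control-arm events e and critical hazard ratio delta for
    one stage, given one-sided significance level alpha and power omega.

    Simplification (assumption of this sketch): experimental-arm events are
    taken as A * e, so var(log HR) ~ 1/e + 1/(A*e).
    """
    z_a = NormalDist().inv_cdf(1 - alpha)      # critical z for level alpha
    z_w = NormalDist().inv_cdf(omega)          # z for power omega
    theta = log(hr0 / hr1)                     # targeted log hazard-ratio gap
    e = (1 + 1 / A) * ((z_a + z_w) / theta) ** 2
    sigma = ((1 + 1 / A) / e) ** 0.5           # SD of the log HR estimate
    delta = hr0 * exp(-z_a * sigma)            # stop if estimated HR > delta
    return round(e), delta

e1, d1 = stage_design(0.5, 0.95)      # first-stage settings suggested later
e4, d4 = stage_design(0.025, 0.9)     # final-stage settings
```

With α = 0.5 the critical value is δ = 1: the arm continues whenever the estimated hazard ratio favours E at all.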
2.4 Algorithm to determine number of events and duration of stages
 1.
Use eqn. (4) to calculate an initial estimate of e _{ i } , the number of events required in the control arm.
 2.
 3.
Calculate t _{ i } , the time at which stage i ends.
 4.
Calculate under H _{1} the numbers of events expected in the control arm (e _{ i } ) and the experimental arm by time t _{ i } .
 5.
 6.
Details of two subsidiary algorithms required to implement steps 3 and 4 are given in Appendix A.
Note that the above algorithm requires only the proportional hazards assumption in all calculations except that for the stage end-times, t _{ i } , where we assume that times to I and to D events are exponentially distributed. The exponential assumption is clearly restrictive, but if it is breached, the effect is only to reduce the accuracy of the t _{ i } . The key design quantities, the numbers of events required at each stage in the control and experimental arms, are unaffected.
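Under the stated exponential and uniform-accrual assumptions, the stage end-time t _{ i } can be obtained by inverting the expected-event curve. The sketch below assumes a single constant accrual rate (the design allows r _{ i } to vary by stage); the parameter values in the usage line are illustrative, loosely echoing the later examples (73 control-arm events, 250 patients/year, median time to an I-event of 1 year).

```python
from math import exp, log

def expected_events(t, rate, lam, t_accrual):
    """Expected control-arm events by calendar time t, with uniform accrual
    at `rate` patients per unit time over [0, t_accrual] and exponential
    event times with hazard `lam` (the assumptions used for stage timings)."""
    m = min(t, t_accrual)
    # Integrate 1 - exp(-lam * (t - u)) over entry times u in [0, m]
    return rate * m - (rate / lam) * (exp(-lam * (t - m)) - exp(-lam * t))

def stage_end_time(target_events, rate, lam, t_accrual, hi=100.0):
    """Smallest t with expected_events(t) >= target_events, by bisection."""
    lo = 0.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if expected_events(mid, rate, lam, t_accrual) < target_events:
            lo = mid
        else:
            hi = mid
    return hi

# e.g. 250 patients/yr, median time-to-I-event 1 yr (lam = ln 2), 5 yr accrual
t1 = stage_end_time(73, rate=250, lam=log(2), t_accrual=5)
```

The same inversion, applied with the D-outcome hazard and the cumulative event target e _{ s }, gives the end of the final stage.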
2.5 Determining the required numbers of patients
A key parameter of the TAMS design is the anticipated patient recruitment (or accrual) rate. Let r _{ i } be the number of patients entering the control arm per unit time during stage i. Accrual is assumed to occur at a uniform rate in a given stage. In practice, r _{ i } tends to increase with i as recruitment typically picks up gradually during a trial's life cycle. Let t _{0} = 0, and let d _{ i } = t _{ i } - t _{ i-1 } (i = 1, ..., s) be the duration of the i th stage. The number of patients recruited to the control arm during stage i is n _{ i } = r _{ i } d _{ i } , and to the experimental arm it is An _{ i } . Provided that E 'survives' all s - 1 intermediate stages, the total number of patients recruited to the trial is (1 + A) ∑ _{ i=1 } ^{ s } n _{ i }.
where d* = t* - t _{ s-1 } and t* is taken as t _{ s } if recruitment continues to the end of stage s.
2.6 Setting the significance level and power for each stage
Reaching the end of stage i (i < s) of a TAMS trial triggers an interim analysis of the accumulated trial data, the outcome of which is a decision to continue recruitment or to terminate the trial for lack of benefit. The choice of values for each α _{ i } and ω _{ i } at the design stage is guided by two considerations.
First, we believe it is essential to maintain a high overall power (ω) of the trial. The implication is that for testing the treatment effect on the intermediate outcome, the power ω _{ i } (i < s) should be high, e.g. at least 0.95. For testing the treatment effect on the definitive outcome, the power at the s th stage, ω _{ s } , should also be high, perhaps of the order of at least 0.9. The main cost of using a larger number of stages is a reduction in overall power.
Second, given the ω _{ i } , the values chosen for the α _{ i } largely govern the numbers of events required to be seen at each stage and the stage durations. Here we consider larger-than-traditional values of α _{ i } , because we want to make decisions on dropping arms reasonably early, i.e. when a relatively small number of events has accrued. Given the magnitude of the targeted treatment effect and our requirement for high power, we are free to change only the α _{ i } . It is necessary to use descending values of α _{ i }, otherwise some of the stages become redundant. For practical purposes, a design might be planned to have roughly equally spaced numbers of events occurring at roughly equally spaced times. For example, total (i.e. control + experimental arm) events at stage i might be of the order of 100i. A geometric descending sequence of α _{ i } values starting at α _{1} = 0.5 very broadly achieves these aims. As a reasonable starting point for trials with up to 6 stages, we suggest considering α _{ i } = 0.5 ^{ i } (i < s) and α _{ s } = 0.025. The latter mimics the conventional 0.05 two-sided significance level for tests on the D-outcome. More than 6 stages will rarely be needed as they are unlikely to be of practical value.
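The suggested starting point can be written down directly; the helper name below is illustrative, and the final-stage level 0.025 is the value recommended in the text.

```python
from statistics import NormalDist

def suggested_alphas(s):
    """Geometric descending one-sided levels alpha_i = 0.5**i for the s - 1
    intermediate stages, with alpha_s = 0.025 at the final stage."""
    return [0.5 ** i for i in range(1, s)] + [0.025]

alphas = suggested_alphas(4)                      # four-stage design
z = [NormalDist().inv_cdf(a) for a in alphas]     # one-sided critical z-values
```

For a four-stage design this gives the sequence 0.5, 0.25, 0.125, 0.025 used in the table below.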
Suggested significance level and power at each stage of a TAMS design with four stages and an allocation ratio of either 1 or 0.5.

Allocation    Stage   Significance level   Power    Events          Events   Time
ratio (A)     (i)     (1-sided, α _{ i })  (ω _{ i }) (control, e _{ i }) (total)  (t _{ i })
1             1       0.5                  0.95      73             133      1.7
              2       0.25                 0.95     139             256      2.6
              3       0.125                0.95     198             369      3.3
              4       0.025                0.9      264             486      5.0
0.5           1       0.5                  0.95     113             160      1.9
              2       0.25                 0.95     211             301      2.8
              3       0.125                0.95     301             432      3.6
              4       0.025                0.9      399             568      5.4
2.7 Determining the overall significance level and power
where Φ_{ s }(.;R) denotes the standard sdimensional multivariate normal distribution function with correlation matrix R.
Exact calculation of the correlation R _{ is } between the log hazard ratios on the I- and D-outcomes appears intractable. It depends on the interval between t _{ i } and t _{ s } and on how strongly related the treatment effects on the I and D outcomes are. If I is a composite event which includes D as a sub-event (for example, I = progression or death, D = death), the correlation could be quite high. In section 2.7.1 we suggest an approach to determining R _{ is } heuristically.
The minima occur when R _{ is } = 1 for all i (i.e. 100% correlation between the estimated log hazard ratios on I and D), and the maxima when R _{ is } = 0 for all i (no correlation).
Note that unlike for standard trials in which α and ω play a primary role, neither α nor ω is required to realize a TAMS design. However, they still provide important design information, as their calculated values may lead one to change the α _{ i } and/or the ω _{ i } .
2.7.1 Determining R _{ is }
In practice, values of R _{ is } are unlikely to lie close to either 0 or 1. One option, as described in Reference [7], is to estimate R _{ is } by bootstrapping relevant existing trial data after the appropriate numbers of I-events or D-events have been observed at the end of the stages of interest. The approach is impractical as a general solution, for example for implementation in software.
where c is a constant independent of the stage, i. We speculate that c is related to ρ, the correlation between the estimated log hazard ratios on the two outcomes at a fixed time-point in the evolution of the trial. Under the assumption of proportional hazards of the treatment effect on both outcomes, the expectation of the estimated correlation is independent of time, and ρ can be estimated by bootstrapping suitable trial data [7].
Note that if the I and Doutcomes are identical then c = 1 and eqn. (8) reduces to eqn. (7). If they are different, the correlation must be smaller and c < 1 is an attenuation factor.
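The overall significance level Φ _{ s }(.; R) can then be evaluated numerically. The sketch below assumes that eqn. (7) has the form R _{ ij } = √(e _{ i }/e _{ j }) for pairs of intermediate stages, consistent with eqn. (8) reducing to eqn. (7) when c = 1; the function name and the choice c = 0.7 (in the range suggested by the estimates in the table below) are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def overall_alpha(alphas, events, c=0.7):
    """Overall type 1 error of a TAMS design: Phi_s(z_1, ..., z_s; R), with
    R_ij = sqrt(e_i / e_j) between intermediate stages (assumed form of
    eqn. (7)) and R_is = c * sqrt(e_i / e_s) for the final stage (eqn. (8))."""
    s = len(alphas)
    R = np.eye(s)
    for i in range(s):
        for j in range(i + 1, s):
            r = np.sqrt(events[i] / events[j])
            R[i, j] = R[j, i] = r * (c if j == s - 1 else 1.0)
    z = norm.ppf(alphas)   # one-sided critical z-value for each stage
    return multivariate_normal.cdf(z, mean=np.zeros(s), cov=R)

# Four-stage design from the table in section 2.6 (control-arm events)
a = overall_alpha([0.5, 0.25, 0.125, 0.025], [73, 139, 198, 264])
```

As expected, the overall level lies below the final-stage level α _{ s } = 0.025, since H _{0} can only be rejected overall if every stage is passed.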
Estimation of the attenuation factor, c, required to compute the correlations, R _{ is }, between hazard ratios on the I-outcome and D-outcome.

Accrual   α _{1}, α _{2}, α _{3}   √(e _{1}/e _{3})  √(e _{2}/e _{3})  Under H _{1}                Under H _{0}
rate                                                                  R _{13}  c    R _{23}  c    R _{13}  c    R _{23}  c
250       0.5, 0.25, 0.025        0.526            0.728             0.361    0.69  0.493    0.68  0.367    0.70  0.504    0.69
250       0.2, 0.1, 0.025         0.776            0.907             0.529    0.68  0.594    0.66  0.529    0.68  0.598    0.66
500       0.5, 0.25, 0.025        0.527            0.728             0.369    0.70  0.476    0.64  0.383    0.73  0.487    0.67
500       0.2, 0.1, 0.025         0.778            0.909             0.505    0.65  0.575    0.63  0.512    0.66  0.577    0.63
The correlation between the estimated log hazard ratios on I and D at the end of stage 1 and at the end of stage 2 was approximately 0.6, i.e. about 10 percent smaller than c. As a rule of thumb, we suggest using eqn. (8) with c ≃ 1.1 times the estimated correlation when such an estimate is available. In the absence of such knowledge, we suggest performing a sensitivity analysis of α and ω to c over a sensible range; see Table 7 for an example.
2.8 Determining 'stagewise' significance level and power
The significance level or power at stage i is conditional on the experimental arm E having passed stage i - 1. Let α _{ i|i-1 } be the probability under H _{0} of rejecting H _{0} at stage i, given that E has passed stage i - 1. Similarly, let ω _{ i|i-1 } be the 'stagewise' power, that is the probability under H _{1} of rejecting H _{0} at significance level α _{ i } at stage i, given that E has passed stage i - 1. Passing stage i - 1 implies having passed the earlier stages i - 2, i - 3, ..., 1 as well. The motivation for calculating theoretical values of α _{ i|i-1 } and ω _{ i|i-1 } is to enable comparison with their empirical values in simulation studies.
where R ^{(i)} denotes the matrix comprising the first i rows and columns of R. R ^{(1)} is redundant; when i = 2, the denominators of (9) for α _{2|1} and ω _{2|1} are α _{1} and ω _{1} respectively.
For example, suppose that s = 2, α _{1} = 0.25, α _{2} = 0.025, ω _{1} = 0.95, ω _{2} = 0.90, R _{12} = 0.6; then α _{2|1} = 0.081, ω _{2|1} = 0.920.
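The significance-level part of this worked example can be checked numerically under the bivariate normal model: the overall probability of passing both stages under H _{0} is a bivariate normal orthant probability, and dividing by α _{1} gives the conditional level. The values below are taken from the example above.

```python
from scipy.stats import multivariate_normal, norm

# Two-stage worked example: alpha_1 = 0.25, alpha_2 = 0.025, R_12 = 0.6
alpha1, alpha2, rho = 0.25, 0.025, 0.6
z = norm.ppf([alpha1, alpha2])          # one-sided critical z-values

# P(pass stage 1 and reject H0 at stage 2 | H0): bivariate normal probability
overall = multivariate_normal.cdf(z, mean=[0, 0],
                                  cov=[[1, rho], [rho, 1]])
alpha_2_given_1 = overall / alpha1      # conditional 'stagewise' level
```

The computed conditional level agrees with the quoted α _{2|1} = 0.081 to within rounding.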
3 Comments on other approaches
3.1 Beta spending functions
Pampallona et al. [6] propose beta spending functions which allow for early stopping in favour of the null hypothesis, i.e. for lack of benefit. The beta spending functions and their corresponding critical values are derived together with alpha spending functions and hence allow stopping for benefit or futility in the same trial. An upper and a lower critical value for the hazard ratio are applied at each interim analysis. The approach is implemented in EAST5 (see http://www.cytel.com/software/east.aspx). The method may also be applied to designs which allow stopping only for lack of benefit, which is closest in spirit to our approach.
The main difference between our approach and beta spending functions lies in the specification of the critical hazard ratio, δ _{ i } , at the i th stage. If a treatment is as good as specified in the alternative hypothesis, we want a high probability that it will proceed to the next stage of accrual; hence the need for high power (e.g. 95%) in the intermediate stages. The only way to increase power with a given number of patients is to increase the significance level. A higher than usual significance level (α _{ i } ) is justifiable because an 'error' of continuing to the next stage when the treatment arm should fail the test on δ _{ i } is less severe than stopping recruitment to an effective treatment.
Critical values for beta spending functions are determined by the shape of the spending function as information accumulates. The beta spending functions of Pampallona et al. [6], allowing for early stopping only in favour of the null hypothesis, maintain reasonable overall power. However, a stringent significance level operates at the earlier stages, implying that the critical value for each stage is far away from a hazard ratio of 1 (the null hypothesis). Regardless of the shape of the chosen beta spending function, analyses of the intermediate outcome are conducted at a later point in time, that is, when more events have accrued, than with our approach for comparable designs.
The available range of spending functions with known properties does not allow the same power (or α) to be specified at two or more analyses [11]. Specifying the same power at each intermediate stage, an option in a TAMS design, is appealing because it allows the same low probability of inappropriately rejecting an effective treatment to be maintained at all stages.
3.2 Interim monitoring rules for lack of benefit
Recently, Freidlin et al. [12] proposed the following rule: stop for lack of benefit if at any point during the trial the approximate 95% confidence interval for the hazard ratio excludes the design hazard ratio under H _{1}. They modify the rule (i) to start monitoring at a minimum cumulative fraction of information (i.e. the ratio of the cumulative number of events so far observed to the designed number), and (ii) to prevent the implicit hazard-ratio cut-off, δ, being too far below 1. (They suggest applying a similar rule to monitor for harm, that is, for the treatment effect being in the 'wrong' direction.) They state that the cost of their scheme in terms of reduced power is small, of the order of 1%.
For example, consider a trial design with Δ^{1} = 0.75, one-sided α = 0.025 and power ω = 0.9 or 0.8. In their Tables 3 and 4, Freidlin et al. [12] report that on average their monitoring rule with 3 looks stops such trials for lack of benefit under H _{0} at 64% or 70% of information, respectively. The information values are claimed to be lower (i.e. better) than those from competing methods they consider. For comparison, we computed the average information fractions in simulations of TAMS designs. We studied stopping under H _{0} in four-stage (i.e. 3 looks) TAMS trials with α values of 0.5, 0.25, 0.1 and 0.025, and power 0.95 in the first 3 stages and 0.9 in the final stage. With an accrual rate of 250 pts/year, we found the mean information fractions on stopping to be 49% for designs with I = D and 21% with I ≠ D. In the latter case, the hazard for I-outcomes was twice that for D-outcomes, resulting in more than a halving of the information fraction at stopping compared with I = D.
As seen in the above example, a critical advantage of our design, not available with beta spending function methodology or with Freidlin's monitoring schemes, is the use of a suitable intermediate outcome measure to shorten the time needed to detect ineffective treatments. Even in the I = D case, our designs are still highly competitive and have many appealing aspects.
4 Simulation studies
4.1 Simulating realistic intermediate and definitive outcome measures
Simulations were conducted to assess the accuracy of the calculated power and significance level at each stage of a TAMS design and overall. We aimed to simulate time to disease progression (X) and time to death (Y) in an acceptably realistic way. The intermediate outcome measure of time to disease progression or death is then defined as Z = min (X, Y). Thus Z mimics the time to an I-event and Y the time to a D-event. Note that X, the time to progression, could in theory occur 'after death' (i.e. X > Y); in practice, cancer patients sometimes die before disease progression has been clinically detected, so that the outcome Z = min (X, Y) = Y in such cases is perfectly reasonable.
where Φ is the standard normal distribution function and λ _{1} and λ _{2} are the hazards of the (correlated) exponentially distributed variables X and Y, for which the median survival times are ln (2)/λ _{1} and ln (2)/λ _{2}, respectively. Although it is well known that min (X, Y) is an exponentially distributed random variable when X and Y are independent exponentials, the same result does not hold in general for correlated exponentials.
First, it was necessary to approximate the hazard, λ _{3}, of Z as a function of λ _{1}, λ _{2} and ρ _{ U,V }. The approximation was done empirically by using simulation and smoothing, taking the hazard of the distribution of Z as the reciprocal of its sample mean. In practice, since X is not always observable, one would specify the hazards (or median survival times) of Z and Y, not of X and Y; the final step, therefore, was to use numerical methods to obtain λ _{1} given λ _{2}, λ _{3} and ρ _{ U,V }.
Second, the distribution of Z turned out to be close to, but slightly different from, exponential. A correction was applied by modelling the distribution of W = Φ^{-1} [exp (-λ _{3} Z)] (i.e. a variate that would be distributed as N (0, 1) if Z were exponential with hazard λ _{3}) and finally back-transforming W to Z', its equivalent on the exponential scale. The distribution of W was approximated using a three-parameter exponential-normal model [13]. Except at very low values of Z, we found that Z' < Z, so the correction (which was small) tended to bring the I-event forward a little in time.
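A minimal sketch of the simulation scheme: correlated standard normals (U, V) are transformed to exponential X and Y via the probability-integral transform, and Z = min (X, Y) plays the role of the intermediate outcome. The small exponential-normal correction described above is omitted here; the hazards mirror the GOG182/ICON5-based settings used later (λ _{1} = 0.693, λ _{2} = 0.347), and ρ = 0.6 is an illustrative choice.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def correlated_exponentials(n, lam1, lam2, rho, rng=rng):
    """Draw (X, Y) with exponential margins (hazards lam1, lam2) linked by a
    Gaussian copula with normal-scale correlation rho; return X, Y and
    Z = min(X, Y), the simulated time to an I-event."""
    cov = [[1.0, rho], [rho, 1.0]]
    U, V = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    # Phi(U) is Uniform(0,1), so -log(Phi(U))/lam1 is exponential(lam1)
    X = -np.log(norm.cdf(U)) / lam1
    Y = -np.log(norm.cdf(V)) / lam2
    return X, Y, np.minimum(X, Y)

X, Y, Z = correlated_exponentials(200_000, lam1=0.693, lam2=0.347, rho=0.6)
lam3_hat = 1.0 / Z.mean()   # hazard of Z taken as reciprocal of sample mean
```

The empirical hazard of Z obtained this way is the quantity that the smoothing step above approximates as a function of λ _{1}, λ _{2} and ρ.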
4.2 Singlestage trials
Type 1 error and power for various single-stage trial designs with one-sided significance level α _{1} and power ω _{1}. Paired entries are the empirical type 1 error and empirical power.

            ω _{1} = 0.9        ω _{1} = 0.95       ω _{1} = 0.99
α _{1}      Type 1   Power     Type 1   Power     Type 1   Power
0.5         0.516    0.918     0.506    0.960     0.503    0.993
0.25        0.256    0.908     0.257    0.956     0.250    0.992
0.1         0.105    0.906     0.104    0.955     0.104    0.992
0.05        0.054    0.906     0.054    0.954     0.053    0.991
0.025       0.029    0.903     0.028    0.954     0.027    0.991
The causes of the inaccuracies in α _{1} and ω _{1} are explored in Appendix C. The principal reason for the discrepancy in the type 1 error rate is that the estimate of the variance of the log hazard ratio under H _{0} given in equation (3) is biased downwards by up to about 1 to 3 percent. Regarding the power, the estimate of the variance of the log hazard ratio under H _{1} given in equation (5) is biased upwards by up to about 4 percent. For practical purposes, however, we consider that the accuracy levels are acceptable, and we have not attempted to further correct the estimated variances.
4.3 Multistage trials
4.3.1 Design
We consider only designs for TAMS trials with 3 stages. We report the actual stage-wise and overall significance level and power, comparing them with the theoretical values derived from the multivariate normal distribution as given in eqns. (6) and (9). Actual significance levels were estimated from simulations run under H _{0} with hazard ratio 1 at every stage (i = 1, ..., s). Power was estimated from simulations run under H _{1} with hazard ratio 0.75 at every stage (i = 1, ..., s). Other design parameter values were based on those used in the GOG182/ICON5 two-stage trial, taking a median survival for the I-outcome, progression-free survival, of 1 yr (hazard λ _{1} = 0.693), and for the D-outcome, overall survival, of 2 yr (hazard λ _{2} = 0.347). Correlations among hazard ratios at the intermediate stages, R _{ ij }, were computed from eqn. (7) for i, j < s. Values of R _{ is } (i = 1, ..., s − 1) were estimated as the empirical correlations between the estimated log hazard ratios at stages i and s in an independent set of simulations of the relevant design scenarios. Three designs were used: α _{ i } = {0.5, 0.25, 0.025}, {0.2, 0.1, 0.025}, {0.1, 0.05, 0.025}, with ω _{ i } = {0.95, 0.95, 0.9} in each case.
Simulations were performed in Stata using 50,000 replications of each design. Pseudorandom times to event X, Y and Z' were generated as described in section 4.1.
4.3.2 Results
Simulation results (50,000 replicates) for three three-stage trial designs with accrual rates (r _{ i }) of (a) 250 and (b) 500 patients per year. The 'Est.' columns are estimated from simulation.

(a) r _{ i } = 250

| Design | Stage | α _{ i } | ω _{ i } | δ _{ i } | e _{ i } | t _{ i } | N _{ i } | α _{ i∣i−1} | Est. α _{ i∣i−1} | ω _{ i∣i−1} | Est. ω _{ i∣i−1} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.50 | 0.95 | 1.000 | 73 | 1.53 | 191 | 0.500 | 0.495 | 0.950 | 0.957 |
| 1 | 2 | 0.25 | 0.95 | 0.923 | 140 | 0.74 | 283 | 0.441 | 0.452 | 0.969 | 0.971 |
| 1 | 3 | 0.025 | 0.90 | 0.843 | 264 | 2.10 | 545 | 0.074 | 0.084 | 0.918 | 0.923 |
| 2 | 1 | 0.2 | 0.95 | 0.910 | 159 | 2.45 | 306 | 0.200 | 0.204 | 0.950 | 0.955 |
| 2 | 2 | 0.1 | 0.95 | 0.885 | 217 | 0.55 | 375 | 0.427 | 0.432 | 0.976 | 0.978 |
| 2 | 3 | 0.025 | 0.90 | 0.844 | 264 | 1.36 | 545 | 0.144 | 0.158 | 0.924 | 0.930 |
| 3 | 1 | 0.1 | 0.95 | 0.885 | 217 | 3.00 | 375 | 0.100 | 0.104 | 0.950 | 0.953 |
| 3 | 2 | 0.05 | 0.95 | 0.869 | 272 | 0.49 | 436 | 0.423 | 0.431 | 0.980 | 0.981 |
| 3 | 3 | 0.025 | 0.90 | 0.844 | 264 | 0.87 | 545 | 0.221 | 0.243 | 0.926 | 0.932 |

(b) r _{ i } = 500

| Design | Stage | α _{ i } | ω _{ i } | δ _{ i } | e _{ i } | t _{ i } | N _{ i } | α _{ i∣i−1} | Est. α _{ i∣i−1} | ω _{ i∣i−1} | Est. ω _{ i∣i−1} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.50 | 0.95 | 1.000 | 74 | 1.03 | 259 | 0.500 | 0.503 | 0.950 | 0.957 |
| 1 | 2 | 0.25 | 0.95 | 0.923 | 141 | 0.46 | 374 | 0.441 | 0.447 | 0.969 | 0.971 |
| 1 | 3 | 0.025 | 0.90 | 0.844 | 266 | 1.40 | 722 | 0.074 | 0.084 | 0.918 | 0.925 |
| 2 | 1 | 0.2 | 0.95 | 0.910 | 161 | 1.62 | 404 | 0.200 | 0.203 | 0.950 | 0.954 |
| 2 | 2 | 0.1 | 0.95 | 0.885 | 220 | 0.33 | 487 | 0.427 | 0.439 | 0.976 | 0.979 |
| 2 | 3 | 0.025 | 0.90 | 0.844 | 266 | 0.94 | 722 | 0.144 | 0.150 | 0.924 | 0.927 |
| 3 | 1 | 0.1 | 0.95 | 0.885 | 220 | 1.95 | 487 | 0.100 | 0.103 | 0.950 | 0.954 |
| 3 | 2 | 0.05 | 0.95 | 0.869 | 275 | 0.29 | 559 | 0.423 | 0.433 | 0.980 | 0.982 |
| 3 | 3 | 0.025 | 0.90 | 0.844 | 266 | 0.65 | 722 | 0.221 | 0.224 | 0.926 | 0.929 |
Only the estimates of α _{ i∣i−1} and ω _{ i∣i−1} are obtained from simulation. The remaining quantities are either primary design parameters (r _{ i }, α _{ i }, ω _{ i }) or secondary design parameters (δ _{ i }, e _{ i }, t _{ i }, N _{ i }); the latter are derived from the former according to the methods described in section 2. Note that by convention α _{1∣0} = α _{1} and ω _{1∣0} = ω _{1}, the corresponding estimates being, respectively, the empirical significance level and power at stage 1. Monte Carlo standard errors for underlying probabilities of {0.95, 0.90, 0.5, 0.25, 0.10, 0.05} with 50,000 replications are approximately {0.00097, 0.0013, 0.0022, 0.0019, 0.0013, 0.00097}. The results show good agreement between the nominal and simulated values of α _{ i∣i−1} and ω _{ i∣i−1}, but again with a small and unimportant tendency for the simulated values to exceed the nominal ones.
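The quoted Monte Carlo standard errors follow directly from the binomial formula √(p(1 − p)/n) with n = 50,000 replications; a quick check:

```python
import math

n = 50_000  # simulation replications
for p in (0.95, 0.90, 0.5, 0.25, 0.10, 0.05):
    se = math.sqrt(p * (1 - p) / n)   # binomial (Monte Carlo) standard error
    print(f"p = {p}: SE = {se:.5f}")  # e.g. p = 0.95 gives SE = 0.00097
```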
The same tendencies are seen as in the earlier tables. The calculated values of the overall significance level and power both slightly underestimate the actual values.
5 Example in prostate cancer: the STAMPEDE trial
STAMPEDE is a MAMS trial conducted at the MRC Clinical Trials Unit in men with prostate cancer. The aim is to assess 3 alternative classes of treatments in men starting androgen suppression. In a four-stage design, five experimental arms with compounds shown to be safe to administer are compared with a control arm regimen of androgen suppression alone. Stages 1 to 3 utilize an I-outcome of failure-free survival (FFS). The primary analysis is carried out at stage 4, with overall survival (OS) as the D-outcome.
As we have already stated, the main difference between a MAMS and a TAMS design is that the former has multiple experimental arms, each compared pairwise with control, whereas the latter has only one experimental arm. The design parameters for each pairwise comparison in a MAMS trial are therefore the same as those of the corresponding TAMS design.
STAMPEDE design parameters.
| Stage (i) | Outcome | α _{ i } | ω _{ i } | δ _{ i } | e _{ i } | t _{ i } |
|---|---|---|---|---|---|---|
| 1 | FFS | 0.5 | 0.95 | 1.00 | 113 | 3.0 |
| 2 | FFS | 0.25 | 0.95 | 0.92 | 213 | 4.4 |
| 3 | FFS | 0.1 | 0.95 | 0.89 | 331 | 5.8 |
| 4 | OS | 0.025 | 0.9 | 0.84 | 403 | 8.0 |
| Overall | | 0.017 | 0.84* | | | |
| Overall | | 0.012 | 0.83** | | | |
Sensitivity of the overall significance level (α) and power (ω) of pairwise comparisons with the control arm in the STAMPEDE design to the choice of the constant c.
| c | α | ω |
|---|---|---|
| 0.4 | 0.0067 | 0.822 |
| 0.5 | 0.0084 | 0.826 |
| 0.6 | 0.0104 | 0.830 |
| 0.7 | 0.0127 | 0.835 |
| 0.8 | 0.0153 | 0.841 |
As a general rule, the values in Table 7 suggest that it may be better to underestimate rather than overestimate c as this would lead to conservative estimates of the overall power.
As illustrated in Table 6, larger significance levels α _{ i } were chosen for stages 1-3 than would routinely be considered in a traditional trial design. The aim was to avoid rejecting a potentially promising treatment arm too early in the trial, while at the same time maintaining a reasonable chance of rejecting treatments with hazard ratio worse than (i.e. higher than) the critical value δ _{ i }.
6 Discussion
The methodology presented in this paper aims to address the pressing need for new additions to the 'product development toolkit' [1] for clinical trials to achieve reliable results more quickly. The approach compares a new treatment against a control treatment on an intermediate outcome measure at several stages, allowing early stopping for lack of benefit. The intermediate outcome measure does not need to be a surrogate for the primary outcome measure in the sense of Prentice [14]. It does need to be related in the sense that if a new treatment has little or no effect on the intermediate outcome measure then it will probably have little or no effect on the primary outcome measure. However, the relationship does not need to work in the other direction; it is not stipulated that because an effect has been observed on the intermediate outcome measure, an effect will also be seen on the primary outcome measure. A good example of an intermediate outcome is progressionfree survival in cancer, when overall survival is the definitive outcome. Such a design, in two stages only, was proposed by Royston et al. [7] in the setting of a multiarm trial. In the present paper, we have extended the design to more than two stages, developing and generalizing the mathematics as necessary.
In the sample size calculations presented here, times to event are assumed to be exponentially distributed. Such an assumption is not realistic in general. In the TAMS design, an incorrect assumption of exponential time-to-event affects the timelines of the stages, but under proportional hazards of the treatment effect, it has no effect on the numbers of events required at each stage. A possible option for extending the method to non-exponential survival is to assume piecewise exponential distributions. The implementation of this methodology for the case of parallel group trials was described by Barthel et al. [15]. Further work is required to incorporate it into the multistage framework.
Another option is to allow the user to supply the baseline (control arm) survival distribution seen in previous trial(s). By transforming the time-to-event into an estimate of the baseline cumulative hazard function, which has a unit exponential distribution, essentially the same sample size calculations can be made, regardless of the form of the actual distribution. 'Real' timelines for the stages of the trial can be obtained by back-transformation, using flexible parametric survival modelling [16] implemented in Stata routines [17, 18]. The only problem is that the patient accrual rate, assumed constant (per stage) on the original time scale, is not constant on the transformed time scale; it is a continuous function of the latter. The expression for the expected event rate e (t) given in eqn. (10) is therefore no longer valid, and further extension of the mathematics in Appendix A is needed. This is another topic for further research.
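This idea rests on the fact that if T has survival function S _{0} with cumulative hazard H _{0}(t) = −ln S _{0}(t), then H _{0}(T) has a unit exponential distribution, whatever the form of S _{0}. A sketch, with a hypothetical Weibull baseline standing in for a distribution estimated from previous trials:

```python
import numpy as np

rng = np.random.default_rng(2)
shape, scale = 1.5, 2.0              # hypothetical Weibull baseline parameters

# Non-exponential survival times from the baseline distribution
t = scale * rng.weibull(shape, size=200_000)

# Weibull cumulative hazard H0(t) = (t/scale)**shape: the transformed
# ('operational') time scale
h = (t / scale) ** shape

# On the transformed scale the times are unit exponential, so the
# exponential-based sample size machinery applies unchanged.
print(h.mean())                      # close to 1, the mean of Exp(1)

# A stage time h_star on the transformed scale back-transforms to
# calendar time via the inverse cumulative hazard
h_star = 0.8
t_star = scale * h_star ** (1 / shape)
```

As the text notes, the complication is accrual: a rate that is uniform in calendar time is no longer uniform on the transformed scale.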
We used simulation to assess the operating characteristics of TAMS trials based on a bivariate exponential distribution, obtained by transforming a standard bivariate normal distribution. The simulation results confirm the design calculations in terms of the significance level and power actually attained. They show that overall power is maintained at an acceptable level when adding further stages.
Multistage trials and the use of intermediate outcomes are not new ideas. Trials with several interim analyses and stopping rules have been suggested in the context of alpha and beta spending functions. Posch et al. [19] have reviewed the ideas. One of the main differences between other approaches and ours is the method of calculation of the critical value for the hazard ratio at each stage or interim analysis, as discussed in section 3. With the error spending-function approach, the critical value is driven by the shape chosen for the function. In our approach, it is based on being unable to reject H _{0} at modest significance levels.
Our approach differs from that of calculating conditional power for futility. In the latter type of interim analysis, the conditional probability of whether a particular clinical trial is likely to yield a significant result in the future is assessed, given the data available so far [2]. Z-score boundaries are plotted based on conditional power and on the information fraction at each point in time. These values must be exceeded for the trial to stop early for futility. In contrast, we base the critical value at each stage not on what may happen in the future, but rather on the data gathered so far.
We note that further theoretical development of TAMS designs is required. Questions to be addressed include the following. (1) How do we specify the stagewise significance levels (α _{ i } ) and power (ω _{ i } ) to achieve efficient designs (e.g. in terms of minimizing the expected number of patients)? We have made some tentative suggestions in section 2.6, but a more systematic approach is desirable. (2) Given the uncertainty of the correlation structure of the treatment effects on the different types of outcome measure (see section 2.7.1), what are the implications for the overall significance level and power?
In the meantime, multi-arm versions of TAMS trials have been implemented in the real world, and new ones are being planned. We believe that they offer a valuable way forward in the effort to identify and evaluate efficiently the many potentially exciting new treatments now becoming available. Further theoretical developments will follow as practical issues arise.
7 Conclusions
We describe a new class of multistage trial designs incorporating repeated tests for lack of additional efficacy of a new treatment compared with a control regimen. Importantly, the stages include testing for lack of benefit with respect to an intermediate outcome measure at a relaxed significance level. If carefully selected, such an intermediate outcome measure can provide more power and consequently a markedly increased lead time. We demonstrate the mathematical calculation of the operating characteristics of the designs, and verify the calculations through computer simulations. We believe these designs represent a significant step forward in the potential for speeding up the evaluation of new treatment regimens in phase III trials.
8 Appendix A. Further details of algorithms for sample size calculations
As noted in section 2.4, two subsidiary algorithms are needed in the sample size calculations for a TAMS trial. We adopt the following notation and assumptions:

- Calendar time is denoted by t. The start of the trial (i.e. beginning of recruitment) occurs at t = 0.
- No patient drops out or is lost to follow-up.
- Stages 1, ..., s start at t _{0}, ..., t _{ s−1} and end at t _{1}, ..., t _{ s } time units (e.g. years), respectively. We assume that t _{0} = 0 and t _{ i−1} < t _{ i } (i = 1, ..., s).
- The duration of stage i is d _{ i } = t _{ i } − t _{ i−1} time units.
- Recruitment occurs at a uniform rate in each stage, but the rate may vary between stages. The number of patients recruited to the control arm during stage i is r _{ i }.
- The number of events expected in the interval (0, t] is e (t).
- The survival function is S (t) and the distribution function is F (t) = 1 − S (t).
- The number of patients at risk of an event at time t is N (t), with N (0) = 0.
8.1 Determining the numbers of events from the stage times
for i = 1, ..., s. Equations (11) and (12) enable the calculation of the number of patients at risk and number of events at the end of any stage for a memoryless survival distribution under the assumption of a constant recruitment rate in each stage.
8.2 Calculating times from cumulative events
Step 3 of section 2.4 involves computing the stage endpoints, given the number of events occurring in each stage. This may be done using a straightforward Newton-Raphson iterative scheme.
Consider a function g (x). We wish to find a root x such that g (x) ≈ 0. The Newton-Raphson scheme requires a starting guess, x^{(0)}. The next guess is given by x^{(1)} = x^{(0)} − g (x^{(0)})/g' (x^{(0)}). The process continues until some i is found such that |x^{(i)} − x^{(i−1)}| is sufficiently small. In well-behaved problems, convergence is fast (quadratic) and the root obtained is unique.
A reasonable starting value for t _{ i } is t _{ i−1} + 0.5 × median survival time. Updates of t _{ i } are performed in routine fashion using the Newton-Raphson scheme. Adequate convergence usually occurs within about 8 iterations.
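The scheme can be sketched in code. The expected-event function used here, e(t) = r[t − (1 − e^{−λt})/λ], is the standard expression for uniform accrual at rate r from time 0 with exponential survival of hazard λ; it stands in for the fuller stage-wise expressions of eqns. (11) and (12), and the numbers (r = 250, a 1-year median, 73 target events) are illustrative only.

```python
import math

def expected_events(t, r, lam):
    """Expected events by calendar time t: uniform accrual at rate r from
    time 0, exponential survival with hazard lam."""
    return r * (t - (1 - math.exp(-lam * t)) / lam)

def d_expected_events(t, r, lam):
    """Derivative of expected_events with respect to t."""
    return r * (1 - math.exp(-lam * t))

def solve_stage_time(target, r, lam, t0, tol=1e-8, max_iter=50):
    """Newton-Raphson: find t such that expected_events(t) = target,
    starting from t0 (e.g. previous stage time + half the median)."""
    t = t0
    for _ in range(max_iter):
        g = expected_events(t, r, lam) - target
        t_new = t - g / d_expected_events(t, r, lam)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

lam = 0.693                                        # 1-year median survival
t1 = solve_stage_time(73, r=250, lam=lam,
                      t0=0.5 * math.log(2) / lam)  # suggested starting value
print(round(t1, 3))
```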
8.3 Stopping recruitment before the end of stage s
We turn to the situation where recruitment is stopped at some time t* < t _{ s }, and all recruited patients are followed up for events until t _{ s }. This may be a good option when recruitment is slow, at the cost of increasing the length of the trial. Let a ∈ {0, 1, ..., s − 1} index the stage immediately preceding the time t*, that is, t* occurs during stage a + 1, so that t* ∈ (t _{ a }, t _{ a+1}]. If a = 0, for example, recruitment ceases before the end of stage 1. We assume that the recruitment rate is r _{ a+1} between t _{ a } and t* and zero between t* and t _{ a+1}. Let d* = t* − t _{ a } be the duration of recruitment during stage a + 1. In practice, as explained in section 2.5, we restrict the application of these formulae to the case a + 1 = s.
We now consider the extension of the calculations to allow early stopping of recruitment for the cases in steps 4 and 3 of the sample size algorithm described in section 2.4.
8.3.1 Step 4: Determining the number of events from the stage times
In fact, e (t*) is the expected number of events at an arbitrary timepoint t* ∈ (0, t _{ s }). The total number of patients recruited to the trial is obtained by accumulating the stage-wise accrual up to t*.
8.3.2 Step 3: Calculating times from cumulative events
where N (t*) and e (t*) are as given in eqns. (13) and (14).
The iterative scheme may be applied as in section 8.2 to solve for t _{ a+1}.
9 Appendix B. Determining the correlation matrix (R _{ ij } )
9.1 Approximate results
We assume that the arrivals of patients into the trial follow independent homogeneous Poisson processes with rates r in the control arm and Ar in the experimental arm, where A is the allocation ratio. This is equivalent to patients entering the trial in a Poisson process of rate (1 + A)r and being assigned independently to E (the experimental arm) with probability p = A/(1 + A) or to C (the control arm) with probability 1  p = 1/(1 + A).
If, for each arm, the intervals between entry of the patient into the trial and the event of interest (analysis times) are independent and identically distributed, and if we ignore the effect of initial conditions (the start of the trial at t = 0) so that the process of events occurring in each arm is in equilibrium, these events occur in Poisson processes with rates r and Ar in the two arms. If, additionally, the two sequences of intervals are independent, then the two Poisson processes are also independent. Note that there is no requirement here that the analysis times (i.e. the intervals between patient entries and event times) have the same distribution for patients in both arms of the trial.
In the following discussion in this section, we consider the equilibrium case under the above assumptions. The transient case is deferred to section 9.2.
We begin observing events in each arm at t = 0. We await m _{1} events in the control arm at time T _{1} (stage 1), a further m _{2} events during the subsequent time period of length T _{2} (stage 2), and so on up to stage s. Thus we await e _{ i } = m _{1} + m _{2} + ... + m _{ i } control-arm events by time t _{ i } = T _{1} + T _{2} + ... + T _{ i } (stage i). Quantities m _{ i } (i = 1, ..., s) are fixed, whereas {T _{ i }, i = 1, ..., s} are mutually independent random variables, where T _{ i } has a gamma distribution, Γ (m _{ i }, r), with index m _{ i } and scale parameter r.
Let the number of events observed in the experimental arm at T _{1} be O _{1} and the incremental numbers of events observed in the experimental arm during the subsequent time periods of lengths T _{2}, ..., T _{ s } be O _{2}, ...,O _{ s } respectively. Given {T _{ i } , i = 1, ..., s}, the variables {O _{ i } } are mutually independent, where O _{ i } has a Poisson distribution with rate Ar and mean ArT _{ i } . Since the {T _{ i } } are mutually independent, the same is true of the {O _{ i } } unconditionally.
Hence, for i ≤ j, the correlation between the log hazard ratio estimates at stages i and j is

R _{ ij } = √[(m _{1} + ... + m _{ i })/(m _{1} + ... + m _{ j })] = √(e _{ i }/e _{ j }),     (15)

as correlations are invariant under linear transformations of the variables.
Equation (15) gives the correlation between the hazard ratios when it is assumed that the processes of events in the two arms are in equilibrium. In the next section, we show that the equilibrium result given in equation (15) holds exactly in the non-equilibrium case when the distributions of the intervals between trial entry and event are the same for the two arms of the trial. In this case, the result is easily derived under the more general assumption that the Poisson process of trial entries is non-stationary. In section 9.3, a comparison is made with exact correlations estimated by simulation for a typical example.
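The Poisson-gamma argument above implies that, in equilibrium, the correlation depends only on the cumulative event counts. Assuming equation (15) reduces to R _{ ij } = √(e _{ i }/e _{ j }) for i ≤ j (a form consistent with the lower-triangle values quoted in section 9.3), the whole matrix can be computed in a few lines:

```python
import math

# Cumulative control-arm events e_i at the four stages of the example
# in section 9.3
e = [73, 140, 217, 262]

def R(i, j):
    """Assumed equilibrium correlation between the log hazard ratio
    estimates at stages i and j (0-based): sqrt(e_min / e_max)."""
    lo, hi = sorted((i, j))
    return math.sqrt(e[lo] / e[hi])

for j in range(4):
    print(["%.3f" % R(i, j) for i in range(4)])
```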
9.2 Exact results
We now suppose that the trial begins at t = 0, with no entries into either arm before that time. For simplicity of notation, we will focus on s = 2; the extension to larger values of s is straightforward. We assume that entries into the trial form a Poisson process with rate (1 + A)r(t)(t > 0) and, as before, are independently allocated to the experimental and control arms with probabilities p = A/(1 + A) and 1  p respectively.
Similarly, O _{1} and O _{2} are independent Poisson variables and O _{1} + O _{2} has a Poisson distribution with mean Aθ _{ e } (T _{1} + T _{2}).
then the mean numbers of events in (0, T _{1}] and (0, T _{1} + T _{2}] are θ _{ c } (T _{1}) and θ _{ c } (T _{1} + T _{2}).
where var(O) = E(Aθ _{ e } (T))+var(Aθ _{ e } (T)), and O denotes the observed number of events in the experimental arm in an arbitrary time T.
and therefore that the random variable θ _{ c } (T) has a gamma distribution Γ(m, 1) with index m and scale parameter 1. Note that, by transforming the time scale from t to θ _{ c } (t) we are transforming to operational time (see Cox and Isham [20], section 4.2), in which events in the control arm occur in a Poisson process of unit rate. The method works here because the transformed time scales are, up to the constant A, assumed to be the same in the two arms of the trial.
9.3 Example
The example is loosely based on the design of the MRC STAMPEDE trial [9] in prostate cancer. We consider s = 4 stages and a single event-type (i.e. no intermediate event-type). We wish to compare {R _{ ij }} for i, j = 1, ..., s from simulation with the values derived from equation (15). At the i th stage, whose timing is determined by the predefined significance level α _{ i } and power ω _{ i }, the hazard ratio between the experimental and control arms is calculated and compared with a cut-off value, δ _{ i }, calculated as described in section 2.3. In practice, the number of events e _{ i } required in the control arm at the i th stage is computed and the analysis is performed when that number has been observed. The (one-sided) significance levels, α _{ i }, at the four stages were chosen to be 0.5, 0.25, 0.1, 0.025 and the power values, ω _{ i }, to be 0.95, 0.95, 0.95, 0.9. The allocation ratio was taken as A = 1. The accrual rate was assumed to be 1000 patients per year, with a median time to event (analysis time) of 4 years.
Parameters of the fourstage trial design used in the simulation study. See text for details
| Stage (i) | α _{ i } | ω _{ i } | δ _{ i } | e _{ i } |
|---|---|---|---|---|
| 1 | 0.5 | 0.95 | 1.000 | 73 |
| 2 | 0.25 | 0.95 | 0.923 | 140 |
| 3 | 0.1 | 0.95 | 0.884 | 217 |
| 4 | 0.025 | 0.9 | 0.843 | 262 |
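As a rough check on the δ _{ i } column, the critical hazard ratio at each stage can be approximated from α _{ i } and ω _{ i } alone: writing θ = ln 0.75 for the target log hazard ratio, the analysis needs a standard error σ _{ i } = −θ/(z _{1−α_i } + z _{ ω_i }), whence δ _{ i } ≈ exp(−z _{1−α_i } σ _{ i }). This is a sketch under a simplifying assumption (normally distributed log hazard ratio with a common variance under H _{0} and H _{1}, which is not exactly the method of section 2.3), so it reproduces the table only approximately:

```python
import math
from scipy.stats import norm

theta = math.log(0.75)              # target log hazard ratio under H1

def critical_hr(alpha, omega):
    """Approximate stage-wise critical hazard ratio delta for one-sided
    significance level alpha and power omega (normal approximation with
    common variance under H0 and H1 -- a simplification)."""
    z_a = norm.ppf(1 - alpha)       # one-sided critical z under H0
    z_w = norm.ppf(omega)           # z-value for the required power
    se = -theta / (z_a + z_w)       # standard error needed at this stage
    return math.exp(-z_a * se)

for a, w in [(0.5, 0.95), (0.25, 0.95), (0.1, 0.95), (0.025, 0.9)]:
    print(a, round(critical_hr(a, w), 3))  # compare with the tabulated delta_i
```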
Estimates of correlations R _{ ij }. Lower triangle (in italics), based on equation (15); upper triangle, estimates based on simulation under Δ = 1, 5000 replications
| R _{ ij } | i = 1 | i = 2 | i = 3 | i = 4 |
|---|---|---|---|---|
| j = 1 | 1 | 0.721 | 0.575 | 0.519 |
| j = 2 | 0.722 | 1 | 0.799 | 0.722 |
| j = 3 | 0.579 | 0.802 | 1 | 0.909 |
| j = 4 | 0.529 | 0.733 | 0.914 | 1 |
Estimates of correlations R _{ ij }.
| R _{ ij } | i = 1 | i = 2 | i = 3 | i = 4 |
|---|---|---|---|---|
| j = 1 | 1 | 0.715 | 0.569 | 0.512 |
| j = 2 | 0.722 | 1 | 0.793 | 0.717 |
| j = 3 | 0.579 | 0.802 | 1 | 0.904 |
| j = 4 | 0.529 | 0.733 | 0.914 | 1 |
Further simulations were performed with Δ = 0.50 and Δ = 0.35. The results (not shown) confirmed that equation (15) provides an excellent approximation.
10 Appendix C. How do the inaccuracies in power and significance level arise?
should have mean , variance 1, skewness 0 and kurtosis 3. If the estimate is biased, the means of A _{ i } and B _{ i } in simulation studies will differ from and under H _{0} and H _{1}, respectively. If there is bias in the estimates of and , the SDs of simulated values of A _{ i } and B _{ i } will differ from and under H _{0} and H _{1}, respectively. The direction of the bias of the SD will be the opposite to that in the estimators of and .
Means and SDs of random variable A _{1} for the simulations in Table 3, computed under H _{0}
Means and SDs of random variable B _{1} for the simulations in Table 3, computed under H _{1}
Declarations
Acknowledgements
PR, BCO and MKBP were supported by the UK Medical Research Council. FMB was supported by GlaxoSmithKline plc, and VI by University College London.
References
1. US Food and Drug Administration: Innovation or Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products. US Dept of Health and Human Services; 2004.
2. Proschan MA, Lan KKG, Wittes J: Statistical Monitoring of Clinical Trials - A Unified Approach. New York: Springer; 2006.
3. Armitage P, McPherson CK, Rowe BC: Repeated significance tests on accumulating data. Journal of the Royal Statistical Society, Series A. 1969, 132: 235-244. doi:10.2307/2343787.
4. Lan K, DeMets D: Discrete sequential boundaries for clinical trials. Biometrika. 1983, 70: 659-663. doi:10.2307/2336502.
5. O'Brien PC, Fleming TR: A multiple testing procedure for clinical trials. Biometrics. 1979, 35: 549-556.
6. Pampallona S, Tsiatis A, Kim KM: Interim monitoring of group sequential trials using spending functions for the type I and II error probabilities. Drug Information Journal. 2001, 35: 1113-1121.
7. Royston P, Parmar MKB, Qian W: Novel designs for multi-arm clinical trials with survival outcomes, with an application in ovarian cancer. Statistics in Medicine. 2003, 22: 2239-2256. doi:10.1002/sim.1430.
8. Bookman MA, Brady MF, McGuire WP, Harper PG, Alberts DS, Friedlander M, Colombo N, Fowler JM, Argenta PA, Geest KD, Mutch DG, Burger RA, Swart AM, Trimble EL, Accario-Winslow C, Roth LM: Evaluation of New Platinum-Based Treatment Regimens in Advanced-Stage Ovarian Cancer: A Phase III Trial of the Gynecologic Cancer InterGroup. Journal of Clinical Oncology. 2009, 27: 1419-1425. doi:10.1200/JCO.2008.19.1684.
9. James ND, Sydes MR, Clarke NW, Mason MD, Dearnaley DP, Anderson J, Popert RJ, Sanders K, Morgan RC, Stansfeld J, Dwyer J, Masters J, Parmar MKB: STAMPEDE: Systemic Therapy for Advancing or Metastatic Prostate Cancer - A Multi-Arm Multi-Stage Randomised Controlled Trial. Clinical Oncology. 2008, 20: 577-581. doi:10.1016/j.clon.2008.07.002.
10. Tsiatis AA: The asymptotic joint distribution of the efficient scores test for the proportional hazards model calculated over time. Biometrika. 1981, 68: 311-315. doi:10.1093/biomet/68.1.311.
11. Betensky R: Construction of a continuous stopping boundary from an alpha spending function. Biometrics. 1998, 54: 1061-1071. doi:10.2307/2533857.
12. Freidlin B, Korn EL, Gray R: A general inefficacy interim monitoring rule for randomized clinical trials. Clinical Trials. 2010, 7: 197-208. doi:10.1177/1740774510369019.
13. Royston P, Wright EM: A method for estimating age-specific reference intervals ("normal ranges") based on fractional polynomials and exponential transformation. Journal of the Royal Statistical Society, Series A. 1998, 161: 79-101.
14. Prentice RL: Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine. 1989, 8: 431-440. doi:10.1002/sim.4780080407.
15. Barthel FMS, Babiker A, Royston P, Parmar MKB: Evaluation of sample size and power for multi-arm survival trials allowing for non-uniform accrual, non-proportional hazards, loss to follow-up and crossover. Statistics in Medicine. 2006, 25: 2521-2542. doi:10.1002/sim.2517.
16. Royston P, Parmar MKB: Flexible Parametric Proportional-Hazards and Proportional-Odds Models for Censored Survival Data, with Application to Prognostic Modelling and Estimation of Treatment Effects. Statistics in Medicine. 2002, 21: 2175-2197. doi:10.1002/sim.1203.
17. Royston P: Flexible parametric alternatives to the Cox model, and more. Stata Journal. 2001, 1: 1-28.
18. Lambert PC, Royston P: Further development of flexible parametric models for survival analysis. Stata Journal. 2009, 9: 265-290.
19. Posch M, Bauer P, Brannath W: Issues in Designing Flexible Trials. Statistics in Medicine. 2003, 22: 953-969. doi:10.1002/sim.1455.
20. Cox DR, Isham V: Point Processes. London: Chapman and Hall; 1980.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.