Thomas J. Leininger and Dr. C. Shane Reese, Statistics
The Food and Drug Administration is responsible for testing and screening clinical drugs before market entry. The screening process involves several phases that determine a drug’s potency, recommended dosage, and potential side effects. Current statistical designs used to model the efficacy of a drug at varying dosage levels are robust yet inefficient.
The purpose of this project is twofold. The first objective is to apply a Bayesian paradigm and Markov chain Monte Carlo (MCMC) methods to dose-response modeling. The second is to use this Bayesian model to explore alternative approaches to clinical trial design, looking for ways to reduce the inefficiencies inherent in current designs.
The model allows an individual’s response to the drug to vary in a smooth but flexible fashion, admitting a continuous spectrum of responses. We employ MCMC methods to calculate the Bayesian posterior distribution of the response at each dosage level, based on the observed responses and a Gaussian prior distribution. Sampling from the posterior distribution allows probabilistic inferences about the drug’s effect at each dose level, including individual prediction intervals.
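The idea of combining a Gaussian prior with observed responses to obtain a posterior for the mean response at a dose can be illustrated with a deliberately simplified sketch. The sketch below treats each dose independently under a conjugate normal model with known observation variance, so the posterior is available in closed form; the full model instead smooths across doses and requires MCMC. All parameter values here are illustrative assumptions, not the study's.

```python
import numpy as np

def posterior_mean_response(y, prior_mean=0.0, prior_var=100.0, obs_var=1.0,
                            n_draws=5000, rng=None):
    """Draw posterior samples of the mean response at one dose.

    Assumes a conjugate normal model with known observation variance:
      mu ~ N(prior_mean, prior_var),  y_i | mu ~ N(mu, obs_var).
    The closed-form posterior makes MCMC unnecessary in this toy case;
    the smooth dose-response curve of the full model would need it.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    n = y.size
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + y.sum() / obs_var)
    return rng.normal(post_mean, np.sqrt(post_var), size=n_draws)

# 95% credible interval for the mean response at a single dose,
# using made-up observations
draws = posterior_mean_response([1.2, 0.8, 1.1, 0.9], rng=42)
lo, hi = np.percentile(draws, [2.5, 97.5])
```

With posterior draws in hand, any probability statement about the mean response (such as the credible intervals mentioned in the Results section) reduces to counting draws.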
Adaptive Design
The traditional approach to FDA clinical trials is to randomly assign a fixed number of patients to each dose. Our approach is to direct a more substantial portion of patients to more effective doses. We consider patients arriving in batches and continually estimate the dose-response relationship as outlined in the previous section. With this updated information about dose efficacy, we determine the number of patients assigned to each dose. The details of this assignment follow.
After an initial allocation of twelve patients to each dose for a baseline assessment of the response at each dose, we calculate a vector p containing the probabilities that a given dose receives a new patient, where the probability for the ith dose is proportional to the probability that the dose is more effective than the control dose multiplied by the standard deviation of the mean response at that dose. Next we randomly assign a new batch of ten patients to the treatments, with three patients receiving the control dose and the remaining seven assigned to the other treatments by sampling from p. In this manner more patients are focused on the more interesting doses, providing the patients with better treatments and the clinicians with more useful information.
Stopping Rules
Because our trial has an adaptive design, we can also implement stopping rules to terminate a trial once we have accumulated enough information to make an accurate judgment of the drug’s effectiveness. At the arrival of each batch, we consider four possible decisions: stopping the trial upon having reached “success,” “futility,” or the “cap” on the maximum number of affordable patients, or continuing the trial because it has “potential.”
Stopping the trial and declaring the trial a “success” occurs when any dose has at least a 95% probability of being more effective than the control. If this rule is not met, we check the “futility” condition: whether any dose is showing enough merit to continue the trial. Again we utilize the posterior distributions, stopping the trial when all treatments have less than a 25% probability of being more effective than the control. This eliminates unnecessary testing when the drug appears to be ineffective. The third stopping condition is to stop once the patient “cap” has been achieved. If none of these conditions are met, the trial still has “potential;” we therefore randomly assign ten more patients to treatments and continue the trial.
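The four-way decision can be expressed as a small function checked at each batch arrival. The sketch below reuses the posterior-draw layout assumed earlier (row 0 of `draws` is the control) and follows the thresholds stated above: 95% for success, 25% for futility, and the patient cap otherwise.

```python
import numpy as np

def trial_decision(draws, n_enrolled, cap=600,
                   success_prob=0.95, futility_prob=0.25):
    """Decide the trial's fate at the arrival of a batch.

    `draws` is an (n_doses, n_samples) array of posterior draws of the
    mean response, with row 0 the control.  Returns one of "success",
    "futility", "cap", or "potential", checked in that order.
    """
    draws = np.asarray(draws, dtype=float)
    p_beats = (draws[1:] > draws[0]).mean(axis=1)
    if (p_beats >= success_prob).any():
        return "success"    # some dose very likely beats the control
    if (p_beats < futility_prob).all():
        return "futility"   # no dose shows enough merit to continue
    if n_enrolled >= cap:
        return "cap"        # affordable-patient limit reached
    return "potential"      # assign ten more patients and continue

# Toy posteriors: one clearly effective dose, one clearly null dose,
# and one ambiguous dose that beats the control half the time
good = np.vstack([np.zeros(1000), np.ones(1000)])
flat = np.vstack([np.zeros(1000), np.zeros(1000)])
mid = np.vstack([np.zeros(1000), np.tile([1.0, -1.0], 500)])
```

Note the ordering matters: a trial that reaches the cap on the same batch that a dose clears the 95% threshold is still declared a success.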
These stopping rules further enhance the performance of our design by ending the experiment once we have conclusive evidence of the drug’s effectiveness or total lack thereof. The classical design cannot provide this feature, leading to unnecessary testing.
Results
We applied our design to three different scenarios in order to determine the flexibility and accuracy of our model in various dose-response relationships. The three relationships we chose (among many possible) were a slowly increasing relationship, a null-effect relationship, and a nonmonotone relationship. The slowly increasing dose-response relationship implies that as the dosage increases, the drug’s effect moderately increases along all doses. The null-effect relationship represents a drug that has no effect at any dose, making the response at any dose the same as the effect at the control dose (0 mg). The nonmonotone relationship is indicative of a drug that has a positive effect up to a certain point, after which the drug becomes toxic and has a negative effect.
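The three true dose-response shapes can be sketched as simple functions of dose. The parameter values below are purely illustrative stand-ins; the known response means actually used to generate the simulation data are not reproduced here.

```python
import numpy as np

# Hypothetical true mean-response curves over dose d (mg); the specific
# coefficients are illustrative, not the study's actual values.
def slowly_increasing(d):
    """Effect grows moderately across all doses."""
    return 0.01 * np.asarray(d, dtype=float)

def null_effect(d):
    """No effect at any dose: same response as the 0 mg control."""
    return np.zeros_like(np.asarray(d, dtype=float))

def nonmonotone(d):
    """Beneficial up to a peak, then toxic at higher doses."""
    d = np.asarray(d, dtype=float)
    return 0.04 * d - 0.0005 * d ** 2   # peaks at 40 mg, negative by 100 mg

doses = np.array([0, 25, 50, 75, 100])
```

Simulating trials against known curves such as these is what permits the accuracy evaluation described next: the estimated posterior means can be compared directly with the truth.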
The data for each relationship were generated from known response means at certain dose levels, permitting an evaluation of the accuracy of our model against the known truth. Figure 1 shows that for the slowly increasing case, the means calculated by our model (averaged over 100 trials) accurately fit the true response curve. A 95% credible interval is also given for the mean response to the drug at each dose, allowing for probability statements about the mean response.
The following table shows the average patients used per trial by our adaptive design. The numbers in the table show substantial savings compared to the classical approach, which would automatically employ 600 patients in each trial.
Consider a conservative clinical trial where the average spending per patient is $10,000. Given the results above, a drug company could reasonably expect to save $4.3 million using this design. Applying this framework across the aggregate of clinical drug trials would yield hundreds of millions of dollars in savings every year, as well as hastening the entry of effective drugs to the market.
Future research
Opportunities for future research include increasing the number of simulations, expanding experiments to include other shapes of dose-response curves, and analyzing the accuracy of early stopping decisions.