Friday, July 14, 2006

Size of trials by status (S or P) - Some 2004 FDA and Parexel data compared

by James Packard Love
This note is a follow-up to discussions stimulated by Michael Palmedo's note on 2004 FDA NME drug approvals. In particular, it follows discussions on ip-health by Joe DiMasi and myself on a fairly narrow question -- are clinical trials trials larger for Standard (S) FDA NME drug approvals than for Priority (S) approvals?

The following table reports the size of clinical trials for 5 priority and 8 standard FDA NME drug approvals. The products are the union of those reported by Michael Palmedo for 2004 FDA approvals, and data from Parexel. These are the data that Joe DiMasi referred to in his June 14 post to ip-health.

Drug       Rating   Size, FDA letter   Size, Parexel
Clolar     P                      66             138
Lyrica     P                   1,508           9,100
Prialt     P                   1,434           1,634
Sensipar   P                   1,146           2,000
Tarceva    P                   1,837           6,000
Apidra     S                   2,467           4,093
Cymbalta   S                   1,850           6,100
Enablex    S                   1,454           8,830
Fosrenol   S                   2,357           2,697
Ketek      S                   2,016           5,900
Lunesta    S                   2,100           2,909
Spiriva    S                   2,663           3,168
VESIcare   S                   3,027           3,327

Below are the means and medians for the FDA and Parexel data, reported separately for the priority (P) and standard (S) drugs. The "Difference" row is the standard figure minus the priority figure, and "% larger" expresses that difference as a percentage of the priority figure.


                      Mean-FDA   Median-FDA   Mean-Parexel   Median-Parexel
Standard Approvals       2,242        2,229          4,628            3,710
Priority Approvals       1,198        1,434          3,774            2,000
Difference               1,044          795            854            1,710
% larger                   87%          55%            23%              86%
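
For readers who want to check the arithmetic, here is a minimal sketch in Python (my own choice of tool, not how the original figures were produced) that recomputes the summary statistics from the per-drug table above. Because of rounding of the displayed values, a printed figure may differ from the table by a unit in the last place or a percentage point.

from statistics import mean, median

# (rating, size from FDA approval letter, size from Parexel), copied from the table above
trials = [
    ("P",   66,  138), ("P", 1508, 9100), ("P", 1434, 1634),
    ("P", 1146, 2000), ("P", 1837, 6000),
    ("S", 2467, 4093), ("S", 1850, 6100), ("S", 1454, 8830),
    ("S", 2357, 2697), ("S", 2016, 5900), ("S", 2100, 2909),
    ("S", 2663, 3168), ("S", 3027, 3327),
]

for source, col in (("FDA", 1), ("Parexel", 2)):
    s = [row[col] for row in trials if row[0] == "S"]   # standard approvals
    p = [row[col] for row in trials if row[0] == "P"]   # priority approvals
    for stat in (mean, median):
        std, pri = stat(s), stat(p)
        # "% larger" is the S-minus-P difference taken as a share of the P value
        print(f"{source} {stat.__name__}: S={std:,.0f}  P={pri:,.0f}  "
              f"diff={std - pri:,.0f}  ({(std - pri) / pri:.0%} larger)")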


Two quick points. First, Parexel reports more patients for every drug. Second, the number of data points is pretty small (5 P and 8 S drugs), so one has to be careful about drawing conclusions.

Joe notes that when you look at means from the Parexel data, the trials for Standard approvals (S) are only 23 percent larger than for the Priority products. By comparison, he notes, when looking at the FDA data, the mean size of the trials for Standard approvals was 87 percent higher, suggesting a possible bias when looking at FDA data.

However, I would add that when looking at the MEDIANS of the Parexel data (for the 13 products), the differences between the size of standard and priority drug trials are quite pronounced. For the Parexel data, the median trial size for the Standard drugs is 86 percent larger than the median for the priority drugs -- actually higher than the 55 percent difference (in medians) that Michael reported when looking at FDA data for the same drugs.

Ultimately, this is too small a sample to say that much. We'll take a look at a larger sample, and report on that. But before doing so, it is also interesting to look at the differences between the FDA data and the Parexel data. Parexel always reported more patients in the trials than did Palmedo, looking at the FDA approval letters. In some cases, much more. Pfizer's Lyrica, for example, was reported by Palmedo as 1,508 and by Parexel as 9,100. Enablex, reported by Palmedo as 1,454, is reported by Parexel as 8,830. In looking further at this issue, we will also look more closely at these differences. One person suggested the initial FDA approvals may not report parallel trials in the works for other indications (Lyrica is now approved for 3 indications, for example). Another comment is that some of the "trials" reported by Parexel may be of lesser scientific importance (possibly having value for marketing purposes), or may be unreported by the FDA for other reasons. People may speculate or offer some evidence on these points in the comments to this note.

This issue has generated some debate with Joe DiMasi, because we have questioned his repeated finding (in 1991 and 2001/2003) that priority products are more costly to develop than standard drugs, at least in the important area of clinical trials. Our reviews of the data, on a couple of different occasions, have suggested that priority drugs consistently have smaller clinical trials than do standard approvals (a finding borne out here again). If priority drugs have smaller trials and quicker approvals, they would seem to be less expensive, all other things being equal. Joe's comments have been informative and constructive, and we will revisit the issue, incorporating both a broader analysis of the Parexel data and a closer look at the differences between the FDA and Parexel data, as well as other evidence on this topic.

Finally, we remind people that neither the Parexel nor the earlier (1991, 2001/2003) DiMasi et al. data claim to present data for all drug approvals. Most importantly, DiMasi has said that "It should also be noted that our study was based on the R&D experiences of major traditional pharmaceutical firms," in contrast to "small biotech and niche pharmaceutical firms." This is not a criticism of the DiMasi studies, as any analysis is going to be limited in some way. It is rather a reminder that some of the estimates provided by DiMasi are based upon particular samples that may not be representative of other drug development efforts. Indeed, DiMasi's 2001/2003 paper, which is now so widely quoted, drew important conclusions about relative investments in priority and non-priority drugs from just 10 priority products and 14 standard products (DiMasi 2003, page 172). His estimates of out-of-pocket outlays were also more than twice as high as those of the previous PERI study involving 117 drug development projects (Project Management in Pharmaceutical Industry: A Survey of Perceived Success Factors, 1995-1996, PERI), raising some questions about the nature of the sample he studied. To deepen the understanding of these issues, people will have to look at more data, and do some modeling of their own.
