August 15, 2000
Conflict of interest can be defined as "A conflict between the private interests and official responsibilities of a person in a position of trust."1 In clinical drug trials, conflict of interest usually refers to the situation in which an investigator has a financial relationship (often research funding) with a company whose product the investigator is studying. There is nothing intrinsically wrong with conflicts of interest; they are virtually ubiquitous in clinical drug trials because so many trials are funded by the manufacturer of the product being studied. The problem is less conflict of interest itself than that conflict of interest may be a risk factor for scientific misconduct.
As we all know, smoking is a risk factor for coronary artery disease (CAD). Not everyone who smokes has CAD. Not everyone with CAD smokes. But people who smoke have a greater chance of having CAD. Similarly, not all conflict of interest situations create scientific misconduct. Not all cases of scientific misconduct are associated with conflict of interest. But conflict of interest situations may increase the chance of scientific misconduct.
Scientific misconduct is a weighty term. But if there are problems with the clinical drug process, these problems can impact the practice of tens of thousands of physicians and the lives of millions of patients. That's weighty, and perhaps a weighty term is appropriate to describe those problems.
Let's look at the spectrum of scientific conduct. Investigators who perform research that is free from bias and error are contributing to objective science. Investigators whose work has been marred by unintentional bias or error can be considered to practice imperfect science, which is common given how difficult it is to design and carry out clinical trials. But investigators who intentionally allow bias or error to infect their work are -- in my opinion -- practicing scientific misconduct. That includes such things as designing studies to ensure a desired result, making statements not justified by the evidence, publishing only part of the evidence, suppression of research findings, and -- worst -- outright fraud with fabrication of evidence.
The bulk of this presentation will attempt to show that conflict of interest in clinical drug trials is indeed a risk factor for scientific misconduct; that is, conflict of interest is associated with intentional bias in the conduct and publication of drug trials.
Before we proceed I want to say something very clearly. Many clinical trials are well-designed, well-executed, correctly analyzed, and appropriately written for publication. But we don't need to talk about these. We need to address the problems.
Company-funded research tends to favor that company's products
I want to start with some general evidence that demonstrates a bias in industry-funded drug trials. What is the bias? It is that company-funded research tends to favor that company's products.
In an analysis of 107 drug trials in 5 leading medical journals, Davidson found that 89% of company-funded trials favored the new drug compared with traditional therapies, whereas 61% of trials not funded by industry favored the new drug. He concluded there is a significant association between industry funding and outcome of the study.2
Stelfox et al. analyzed 70 articles concerned with the safety of calcium channel blockers. They reported that 96% of authors who were supportive of these drugs had financial ties to calcium channel blocker manufacturers, whereas only 37% of authors who were critical of calcium channel blockers had such financial ties. The results show an association between authors' opinions about the safety of calcium channel blockers and financial relationships with the drugs' manufacturers.3
A study of 44 articles regarding economic analyses of oncology drugs found that only 5% of those funded by drug companies had conclusions unfavorable to the companies' products, whereas 38% of those without industry funding made unfavorable conclusions.4
Cho and Bero reported that 98% of company-sponsored drug studies published in peer-reviewed journals or in symposium proceedings between 1980 and 1989 favored the company's drug.5 Can you imagine an election in which someone gets 98% of the vote? I guess if you pay for the votes, it's possible. It seems likely that some of these trials were designed to favor the sponsor, or the data were analyzed to favor the sponsor, or the trials were written to favor the sponsor, or the trials that didn't favor the sponsor were not published.
Rochon looked at randomized controlled trials of NSAIDs between 1987 and 1990. She found that virtually all were funded by the drugs' manufacturer; that the efficacy of the manufacturer-associated drug was comparable or superior to that of the comparison drug in 100% of the trials; and that the manufacturer-associated drug was safer than the comparison drug in 86% of the trials.6 Again, the funder of the trials wins the election hands down.
The take-home message of these studies is that company-funded trials have a high likelihood of favoring the company's products. The editor of the BMJ has written that such studies "begin to build a solid case that conflict of interest has an impact on the conclusions reached by papers in medical journals."7 Is this scientific misconduct? It is certainly something we should think about.
I want to move to more specific examples. Let's break down the drug trial process into its component parts: which trials to conduct, how the trials are designed, how the patients are enrolled, who analyzes the data, whether the results are published, who writes the publications, and what is the content of the publications.
Which trials are funded?
Which trials to fund? Hillman et al. looked at pharmacoeconomic studies and concluded that pharmaceutical companies "fund projects with a high likelihood of producing favorable results."8 Former FDA Commissioner Kessler wrote about phase 4 trials that "Some company-sponsored trials of approved drugs appear to serve little or no scientific purpose ... they are, in fact, thinly veiled attempts to entice doctors to prescribe a new drug being marketed by the company."9
How are trials designed?
Let's move on to the design of clinical trials. Levy has written: "Almost all clinical investigations of the comparative efficacy and safety of medicinal agents require financial sponsorship, mainly by the pharmaceutical industry. Many, perhaps the majority, of such investigations are actually designed and initiated by medical or clinical pharmacology departments of pharmaceutical companies."10
Let's look at a very important example of trial design. Johansen and Gotzsche performed a meta-analysis of the effect of fluconazole vs. amphotericin B on total mortality in patients with cancer complicated by neutropenia. Both these drugs can prevent and treat systemic fungal infections. 92% of the patients were enrolled in trials funded by Pfizer, which makes fluconazole. 79% of patients received amphotericin B orally, even though the drug is poorly absorbed and its oral form is not used for systemic infections.11 I've been in practice for 30 years and have never seen oral amphotericin B used for systemic infections; it's an intravenous drug. This design virtually guarantees that fluconazole -- the funding company's drug -- will produce better results. Unless there's something I don't know about these trials, this design qualifies as scientific misconduct.
In another example regarding trial design, Rochon found that in 54% of company-sponsored NSAID trials, the doses of the funding company's drug were higher than those of the comparison drug.6 Clearly, if the dose is higher, the drug will be more effective -- a biased design.
How are patients enrolled in trials?
How are patients enrolled in clinical trials? Private physicians may be paid between $1000 and $5000 for each patient enrolled. This incentive may cause physicians to stretch inclusion and exclusion criteria for the trial in order to enroll as many patients as possible, thereby compromising the validity of the trial. Physicians have been reported to enroll patients who do not even have the disease being studied. Physicians with no knowledge of the disease being studied are participating in trials, resulting in data not always being accurately collected.12,13
Hutchins and associates reported that 63% of people in the U.S. with cancer are over 65 but only 25% of patients in cancer therapy trials are over 65.14 Elderly people may have less response to chemotherapy and may tolerate chemotherapy poorly. If so, excluding the elderly from trials makes the drugs seem better than they really are in the population most likely to use them.
Who analyzes the data?
Let's move on to data analysis. I was surprised to learn that the data usually belong to the sponsoring company, and the company decides who sees how much of the data. Even investigators involved in a multicenter trial may not see all the data from the trial. Does that matter?
Lauritsen and colleagues, investigating a new drug compared with ranitidine for treatment of gastric ulcers, wrote in the Lancet that the new drug was inferior to ranitidine at one trial site, that healing rates between the two drugs were similar at most sites, and that the new drug was superior at only one site. The site with results favoring the manufacturer's drug submitted its results for publication separately, which made the sponsor's drug look good. Had all the results of the different sites been merged, the sponsor's drug would not have looked as good. Two years after the trial concluded, unfavorable data were still "being considered in the company's headquarters."15 The take-home message: to get the results you want, keep control over the data.
Industry may also design trials with multiple endpoints. By controlling data analysis, companies can publish those endpoints favorable to their product and bury data on less favorable endpoints.16
Are the results published?
It's time to move to the final component in the drug trial process: publishing -- or not publishing -- the results. In 1996, a pharmaceutical firm threatened Canadian investigator Nancy Olivieri with legal action if she published research findings unfavorable to the company's product, which Dr. Olivieri had been studying. In 1990, a company which was funding University of California researcher Betty Dong refused to allow Dr. Dong to publish her findings. We don't know how often such outright suppression takes place.17
Levy states that companies "frequently control the data and can decide whether or not to publish and what to publish."10 Sponsors generally have the right to prepublication review of investigators' writings, and may demand so many changes that the investigator loses interest. Dickersin found that 21% of trials studied went unpublished. In 28% of those unpublished trials, the reason for no publication was that the results did not favor the drug being tested.18
Chalmers wrote: "Substantial numbers of clinical trials are never reported in print ... Failure to publish an adequate account of a well-designed clinical trial is a form of scientific misconduct that can lead those caring for patients to make inappropriate treatment decisions."19
Does it matter that trials go unpublished? Indeed it does. Simes found that combination chemotherapy for advanced ovarian cancer was superior to the use of alkylating agents in published trials, but this advantage disappeared when all registered trials -- published and unpublished -- were reviewed.20 If you were a physician treating ovarian cancer in 1986, reading only published trials, you might give your patients highly toxic chemotherapy with little or no benefit.
Let's look at a fascinating, and almost ridiculous aspect of clinical drug trials: who writes the articles? We are entering the world of guests and ghosts. The Lancet commented on this phenomenon in 1993: "A typical sequence of events begins with a publisher [medical communications company] agreeing to prepare a review article for a drug company ... A staff writer prepares the review to the sponsor's satisfaction, whereupon the publishing house contacts a doctor with a special interest in the relevant topic to inquire whether he or she would like to be the guest author, subject to approval of the content, for an honorarium. The pinnacle of success, presumably, is to sign up a prominent academic ... The final version [of the article] may contain no clues about its origins."21
Rennie writes that "Ghost authorship occurs when those who wrote the article, or contributed in important ways to its production, are not named as coauthors. Known instances are becoming common, as is the practice of paying big names to appear on the byline in place of the ghosts, though they contributed nothing except their prestige."22
Flanagin et al. surveyed the corresponding authors of 809 articles published in six major journals in 1996. 19% of the articles had guest authors, 11% had ghost authors, and 29% had guest authors, ghost authors or both.23 Not all these guest authors were recruited by industry; but these figures show that guests and ghosts are not endangered species in the publishing world.
Two associate editors of JAMA commented in 1994 that "Ghosts have apparently been hired by pharmaceutical firms to write scholarly reviews about new drugs, and prominent physicians have then been hired to allow their names to be attached as 'authors' before the reviews were submitted to learned journals ... But editors like us have had the disheartening experience of telephoning the senior 'author,' only to be switched back and forth from the scientist, who had no idea what had been written, to the writer, who did not understand the science."24
Brennan, in his commentary "Buying Editorials," provides the following vignette: "I was recently surprised to receive a call from a representative of Edelman Medical Communications, a public-relations firm in New York, asking whether I would be interested in writing an editorial for a medical journal ... The caller said I would not really have to do much work on the project ... they would have a professional writer compose the editorial, which I could modify as I saw fit. I would earn $2,500 ... The entire project would be funded by a pharmaceutical manufacturer."25
What is the content of the publications?
Does it matter who writes the articles? Yes it does. Several authors recently refused to place their names on a study whose results were unfavorable to the sponsoring company because the sponsor was "attempting to wield undue influence on the nature of the final paper. This effort was so oppressive that we felt it inhibited academic freedom."26
In her study of randomized controlled trials of NSAIDs, Rochon et al. looked at manufacturer-sponsored trials that claimed fewer side effects for the manufacturer's drug. 45% of these claims were not supported by statistical trial data. "These data raise concerns about selective publication or biased interpretation of results in manufacturer-associated trials."6
In another study of 196 NSAID trials, Gotzsche found that doubtful or invalid statements were made in the conclusion or abstract in 76% of articles reviewed, with these statements overwhelmingly favoring the new drug rather than the control drug.27 These last two examples demonstrate that articles can be written with a spin not justified by the data.
Writers can use different statistical measures to make drugs appear more efficacious. Bobbio et al. gave physicians differently worded summaries of the Helsinki Heart Study, comparing cardiac endpoints and total mortality between gemfibrozil and placebo. When the results were presented as 34% relative risk reduction, 77% of physicians said they would prescribe gemfibrozil, but when the same results were expressed as 1.4% absolute risk reduction, only 24% were willing to prescribe.28
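The arithmetic behind this framing effect is worth making explicit. Relative risk reduction divides the risk difference by the baseline risk, so a small absolute difference can look dramatic. The sketch below uses illustrative event rates (assumptions, chosen only to be consistent with the ~34% relative and ~1.4% absolute reductions quoted above; they are not taken from the trial report):

```python
# Illustrative sketch: the same trial result expressed two ways.
# Event rates below are assumptions chosen to reproduce the ~34% relative
# and ~1.4% absolute risk reductions quoted from the Helsinki Heart Study
# discussion above; they are not quoted from the trial report itself.

control_rate = 0.041   # cardiac event rate in the placebo arm (assumed)
treated_rate = 0.027   # cardiac event rate in the treated arm (assumed)

arr = control_rate - treated_rate   # absolute risk reduction
rrr = arr / control_rate            # relative risk reduction
nnt = 1 / arr                       # number needed to treat to prevent one event

print(f"Absolute risk reduction: {arr:.1%}")   # small number: ~1.4 points
print(f"Relative risk reduction: {rrr:.0%}")   # impressive number: ~34%
print(f"Number needed to treat:  {nnt:.0f}")
```

Both figures describe the identical result; presenting only the relative reduction is what made three times as many physicians willing to prescribe.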
The Helsinki Heart Study made the surprising finding that the reduction in cardiac events associated with gemfibrozil was accompanied by an increase in overall mortality in the gemfibrozil arm of the trial. Whereas the original report of the study included the important information on overall mortality, another article describing the same study omitted the overall mortality data -- data unfavorable to gemfibrozil's manufacturer but essential for physicians to know.28
What should be done?
To summarize, in many aspects of drug trials, biases can be -- and have been -- intentionally introduced that favor the company funding the study. How often this takes place, we do not know. The evidence I have shown makes a reasonable case that scientific misconduct does take place in clinical drug trials, that conflict of interest is a risk factor for scientific misconduct, and that something must be done about it. As one clinician recently lamented, "The science has been lost in the rush for money. We've lost our way. We've terribly, terribly lost our way." 29
What to do? Pharmaceutical companies -- with a little help from their friends in academia and the NIH -- have created products of great benefit to the world. This activity must continue. But to reduce the risk of scientific misconduct, investigators and authors need greater independence from their funders. Trials should be designed by expert investigators outside the walls of the sponsoring company. Data should be collected by those investigators, not by the sponsoring company, and the data should be publicly available. Publications should be prepared by the investigators who conduct the trial, outside the walls of the sponsoring company. Pre-publication review of manuscripts by the industry sponsor should be eliminated; other mechanisms should be developed to protect reasonable patent and intellectual property rights. The use of industry-paid ghostwriters and guest authors should be prohibited. The public-private balance of clinical drug trial funding should be tilted toward more NIH support, so that clinical investigators are less dependent on industry funds to pursue their careers. Ideally, a pharmaceutical firm with a new drug ready for clinical trials would transfer to the NIH a budgeted sum of money for the trial, and the NIH would sponsor the trial without industry influence.
"The right to search for truth implies also a duty: one must not conceal any part of what one has recognized to be true." - Albert Einstein
1. Council on Scientific Affairs and Council on Ethical and Judicial Affairs. Conflicts of interest in medical center/industry research relationships. JAMA 1990;263:2790-2793.
2. Davidson RA. Source of funding and outcome of clinical trials. J Gen Intern Med 1986;1:155-158.
3. Stelfox HT et al. Conflict of interest in the debate over calcium-channel antagonists. N Engl J Med 1998;338:101-106.
4. Friedberg M et al. Evaluation of conflict of interest in economic analyses of new drugs used in oncology. JAMA 1999;282:1453-1457.
5. Cho MK, Bero LA. The quality of drug studies published in symposium proceedings. Ann Intern Med 1996;124:485-489.
6. Rochon PA et al. A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 1994;154:157-163.
7. Smith R. Beyond conflict of interest. BMJ 1998;317:291-292.
8. Hillman AL et al. Avoiding bias in the conduct and reporting of cost-effectiveness research sponsored by pharmaceutical companies. N Engl J Med 1991;324:1362-1365.
9. Kessler DA et al. Therapeutic class wars -- drug promotion in a competitive marketplace. N Engl J Med 1994;331:1350-1353.
10. Levy G. Publication bias: its implications for clinical pharmacology. Clin Pharmacol Therap 1992;52:115-119.
11. Johansen HK, Gotzsche PC. Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. JAMA 1999; 282:1752-1759.
12. Maguire P. Community-based trials under scrutiny. ACP-ASIM Observer, July/August 1999.
13. Larkin M. Clinical trials: what price progress? Lancet 1999;354:1534.
14. Hutchins LF et al. Underrepresentation of patients 65 years of age or older in cancer-treatment trials. N Engl J Med 1999;341:2061-2067.
15. Lauritsen K et al. Withholding unfavourable results in drug company sponsored clinical trials. Lancet 1987;1:1091.
16. Bero LA, Rennie D. Influences on the quality of published drug studies. Int'l J Technol Assessment in Health Care 1996;12:209-237.
17. Bodenheimer T. Uneasy alliance: clinical investigators and the pharmaceutical industry. N Engl J Med 2000;342:1539-1544.
18. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263:1385-1389.
19. Chalmers I. Underreporting research is scientific misconduct. JAMA 1990;263:1405-1408.
20. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986;4:1529-1541.
21. Ghost with a chance in publishing undergrowth. Lancet 1993;342:1498-1499.
22. Rennie D et al. When authorship fails. JAMA 1997;278:579-585.
23. Flanagin A et al. Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. JAMA 1998;280:222-224.
24. Rennie D, Flanagin A. Authorship! Authorship! Guests, ghosts, grafters, and the two-sided coin. JAMA 1994;271:469-471.
25. Brennan TA. Buying editorials. N Engl J Med 1994;331:673-675.
26. Applegate WB et al. The multicenter isradipine diuretic atherosclerosis study. JAMA 1997;277:297.
27. Gotzsche PC. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal antiinflammatory drugs in rheumatoid arthritis. Controlled Clin Trials 1989;10:31-56.
28. Bobbio M et al. Completeness of reporting trial results: effect on physicians' willingness to prescribe. Lancet 1994;343:1209-1211.
29. Eichenwald K, Kolata G. When physicians double as entrepreneurs. New York Times, November 30, 1999, p. 1.