
Norman Fost

Oral History of the Belmont Report and the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research

Interview with
Norman Fost, M.D.
Professor of Pediatrics
University of Wisconsin Medical School
Madison, WI

May 13, 2004

Belmont Oral History Project



Interviewer: Patricia C. El-Hinnawy, Office for Human Research Protections staff.

INTERVIEWER:  Dr. Fost, just to begin, I want to clarify that instead of saying "National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research," we'll just say, "The Commission."

DR. FOST: Very good.  It's not as bad as "The President's Commission for the Study of Ethical Problems in Medicine and Biomedical Research."  So we should be grateful for small things.

INTERVIEWER:  Your work has always blended ethics and pediatrics.  How were you selected to work on the Commission?

DR. FOST: Actually, I met Charles Lowe, who is a very distinguished researcher at NIH, who was involved in the creation of the Commission.  And shortly after I moved to the University of Wisconsin in 1973, Dr. Lowe called me and asked me if I was interested in being Executive Director of the Commission.  And it was a very critical moment in my life. Needless to say, it was something I very much would have wanted to do.  It was the biggest thing that had happened, I think, in bioethics up to that time. But I had just arrived here [Wisconsin] with a commitment to change the residency training program and to start a bioethics program here, and I just felt it was wrong to walk out on the University of Wisconsin before I had even arrived.

So I reluctantly said no, but said I would obviously be delighted to participate and be involved in any way I can.  So I had a chance to be at the center of it, and turned that down.  But later I got appointed to the fetal research study group, which was really the major reason for the creation of the Commission.

INTERVIEWER:  And why was that?

DR. FOST:  There were some explosive studies involving fetal research that attracted the attention of Senator Kennedy, and became the focus of Congressional hearings.  And the Commission was actually created--as best I remember it--mainly to address fetal research, because of the controversy there.  And then somebody said, well, if we're going to get a bunch of erudite people together to create rules for fetal research, why don't we extend it and look at the whole picture?

The other critical factor, of course, was the publicity surrounding the Tuskegee study.  And I was involved with that in an interesting way, also.  In 1972-'73, I went to Harvard to do a fellowship in law, medicine and ethics with the Kennedy program at Harvard.  And one of my fellow Fellows was a man named Jim Jones, an historian of medicine.  And I remember--like I remember where I was when Kennedy was shot--I remember where I was sitting--I could show you the room, probably, at Harvard--when Jim had just come back from Washington, having stumbled on these Tuskegee archives, and started telling me this remarkable story which later led to his book "Bad Blood."  And I'm actually acknowledged in the front of that book, because Jim and I had long conversations over that year about the Tuskegee study and all its implications.

And, as a result of that, and as a result of getting to know the Kennedy family--and the Shrivers, in particular--I was at a meeting with Senator Kennedy, with Sargent Shriver--who was the Vice Presidential nominee for the Democratic party that year--and others, at which plans were discussed for these hearings that the Senator was going to hold, and the creation of this commission.

So, I had really an up-close-and-personal view of that – one of the inciting events that led to the creation of the Commission and to the discussions about the legislation that created the Commission itself.

INTERVIEWER:  When the work of the Commission began, what was your role--specifically?

DR. FOST: The only real official role was as part of this fetal research study group.  So I was a member of that, and attended those hearings, and participated in the drafting of guidelines for fetal research.

INTERVIEWER:  What do you believe was your most important contribution to the work of the Commission?

DR. FOST: That's hard to say.  I'd like to say I had a small role in the development of the hearings that led to the creation of the Commission, and a very minor and peripheral role in whatever contributions I made to the thinking and the writing of the fetal research study group.  And then, because of my close association with Dr. Robert E. Cooke, who was also here--he was the most important mentor in my life--he and I talked a great deal about the other issues that the Commission was facing.  And I attended many of the Commission's hearings and stayed in close touch with that, and with Dr. Cooke, and tried to influence his thinking--or at least worked collaboratively with him--on the development of specific issues before the Commission.

INTERVIEWER:  What do you believe was the Commission's most important contribution to human subject protections?

DR. FOST: Well, certainly, the drafting of the Common Rule.  I mean, this was a commission unlike any other in the history of this country, as far as I know, in that it had this statutory authority to write rules, which would become law, essentially, unless changed by the Secretary of DHHS.  All other commissions on bioethics--and every other commission I know about--are advisory commissions.  They write reports and they get read, and maybe something happens and maybe it doesn't.

So this was a unique commission, in that it had rule-making authority, and the Common Rule has obviously become the set of standards--really, international set of standards--for ethically responsible research involving human subjects.

So, without a doubt, that was the most important product of the Commission.

INTERVIEWER:  You have said that you were in conversations with Senator Kennedy and Sargent Shriver.  And I've also heard that it was at Senator Kennedy's insistence that the rule-making component was put into this, so it wouldn't just be another advisory commission.  Would you say that you played a role in that convergence?

DR. FOST: No, not that I remember.

INTERVIEWER:  At the time of the Commission's work, how did you see the role of the Belmont Report?

DR. FOST: I'm going to say some negative things about the Belmont Report, so let me say a positive thing: I think it's an absolutely wonderful piece of work.  I have enormous respect for Tom Beauchamp and the other authors of it.  And I think it's obviously a seminal piece of writing in the history of thinking about ethics and regulation of human subjects research.

That said, I don't think its influence has been substantial.  First of all, I think very few people have read it.  I think everybody recites the famous Georgetown mantra, as it's called, and that's about all they know about the Belmont report.  So, if I had a nickel for everybody who's read the Belmont report, I think I'd have about $1.50.

Even among those people who have read it and who take it seriously, while I think it was important and helpful to elucidate these principles that are at the core of how we think about research involving human subjects, I don't think the elucidation of those principles has really had a major influence in how we think about human subjects research; that is, I think ethics on the ground is more complicated than that, and knowing what the central principles are doesn't get you very far, because in almost every controversial case you can think of, these principles conflict, and the issue is how to balance them and how to apply them.

So, simply reciting them or saying "these three principles are very important," I don't think sheds a lot of light on whether or not a specific research project should go forward.

So, for those who want to think critically and conceptually about research ethics, the Belmont Report is a seminal document.  But in terms of how IRBs actually work, how decisions are made, how we come to consensus, I don't think it's had that major a role.

INTERVIEWER:  Why do you think it's had such visibility over time?

DR. FOST: That's an interesting question: why has the Belmont Report had such visibility over time?

People like to think that there are core principles that guide thinking and decision-making in this area.  I think, more than that, people like to think there are algorithms.  I mean, it's the most frustrating question I and others probably get asked: "Could you just tell us, in 10 minutes--"-- you know--or an hour, maybe, "--what are the principles for making ethical decisions?"  And it's like asking a cardiologist, "Could you please just tell me what the principles are for managing someone with heart disease in an hour?"  You could.  But it's not going to get you very far in any individual case.

So the Belmont Report allows people to recite this mantra--"beneficence, autonomy and justice"--and think that they now have said something profound or meaningful.  And, as I said before, I don't think the report has had that much influence, I think it's that mantra, those three famous words, that everybody knows.


But they're recited because it's appealing to have some concise sort of summary of core principles.  But, as I say, I don't think the influence has been that great.

INTERVIEWER:  Going back to the larger work of the Commission, at that time did you feel there were any issues or ideas that should have been addressed but weren't?

DR. FOST: Well, as others have noted, there's one large body of research involving human subjects that's not explicitly covered by the Common Rule, and wasn't really addressed very much by the Commission; and that's research involving mentally impaired adults, and people with dementia, people with mental illness.  And there are different and complicated issues involving that kind of research.

So I think I, like many others, think there should be another supplement to the Common Rule at some point, after another commission looks at it carefully.  There's been an attempt to do that, and there have been, obviously, very well thought out proposals for doing that that have not achieved consensus.  But maybe another crack at that would be timely.

So it was one area that the Commission didn't address and that's still, I think, an important omission from the Common Rule.

INTERVIEWER:  And that's still being looked at today?

DR. FOST: Right.

INTERVIEWER:  If you could go back to the time when you were on the Commission, is there anything that you would do differently then?

DR. FOST: Not really.  I mean, I think it was a remarkable enterprise.  I think it had remarkably deep and sophisticated discussions; the reports that were written for it, the consultants' papers.  Other than the President's Commission for the Study of Ethical Problems in Medicine and Biomedical Research, it's hard to think of another national commission that thought so deeply and with such nuance about complex ethical, legal, philosophical and political issues.  I think that one really set a tone that's only been equaled once, and that was the President's Commission.

I think other bioethics commissions have not been really as impressive as the National Commission and the President's Commission.  And a lot of that had to do with the staff.  The Commissioners were excellent people and played a major role.  But as is often the case with commissions, it's the staff that controls the agenda in many ways, and does the writing, and in many ways determines the outcome.  And that staff, you know, was made up of future superstars in bioethics.  So I have very little of a critical nature to say about the process of the Commission--it took its time, it went very deeply, it had public exposure.  And I thought it was just admirable in pretty much all regards.

INTERVIEWER:  Was there anyone on the staff that you worked with more closely than others?

DR. FOST:  No, I got to know pretty much everybody on the staff, to a greater or lesser degree.  But, no, I don't think there was any one person who I was more involved with.

INTERVIEWER:  Since the time of the Commission's work, what changes have you seen in human subject protections?

DR. FOST: I think the single most profound change has been the shift from the concern about ethical issues--which is mainly what the National Commission was about--towards not just regulatory issues, but compliance with regulatory issues.  My view is that the system is way overweighted now with documentation and compliance.

Obviously, there must be compliance and there must be documentation, and there must be oversight.  But far too many resources and too much time are now spent on documenting compliance with regulations that have little to do with protection of human subjects, so that things IRBs could be doing that would have profound effects on improving protections are not being done, because resources are being consumed--and still expanding--to avoid getting in trouble with regulatory authorities over extremely micro issues.

That, to me, has now dominated the field.  It accounts for most of the growth in money that's been spent on IRBs and on regulation.  And, in my view, it's reached diminishing returns.

INTERVIEWER:  Over time, some feel that the focus has shifted away from protecting human subjects from risk, and toward permitting access to innovations that could be helpful.  Do you agree with that?

DR. FOST:   Yes, but not--that shift has occurred, but not enough; it hasn't shifted enough.  And I think that deserves more attention.  That is, 30 years ago, the concern was about inclusion--"guinea pigs" became the metaphor for human subjects.  There was something bad or unsavory about--or risky--about being a human subject.  Thirty years later, the concern is more about exclusion; that is, NIH requires you to have minorities, women and children as research subjects, and people in those groups want to be research subjects.

That said, I don't think the public yet appreciates how desirable it is to be part of a research study--that is, particularly if you're sick.  Your chances of being well taken care of, and of reducing your risk of harm, and achieving benefits, I think, for a lot of conceptual reasons and empirically, are much higher if you're in a well-designed, well-supervised research study than being in a doctor's office.  And there's data--there are studies to support that.


So that word doesn't get out, because the press coverage is--like the press coverage of most things--is negative; that is, when something bad happens, when one person dies, everybody knows about it and focuses on that for a long time.  But the millions of people who benefit don't get quite so much attention.  And even the structural safeguards of being a research subject--most people, I think, are not so aware of.


So, I think inclusion is the preferred goal today, and appropriately so.  I don't think that's widely enough appreciated by the public, and certainly not by the press--or the Congress, perhaps.


INTERVIEWER:  What changes do you think would help improve that situation?

DR. FOST: Education of the public and of the Congress.  I think it's an area where regulatory agencies could do a better job; that is, I think the leader of OHRP, and appropriate leaders at FDA, need to speak out more about that.  There needs to be less focus on the bad things that happen.  I mean, bad things do happen and we need to look at them carefully and try to reduce the risk that they can happen again.  But I think OHRP and FDA could do a better job in educating the public and political leaders in a more balanced way.

INTERVIEWER:  Do you think today's research environment is overly restrictive?


DR. FOST: Yes, I think it's more over-regulated than restricted.  Research is expanding.  I mean, the NIH budget has boomed in the last decade.  Pharmaceutical research is booming.  Our IRB, like most others, is expanding; I mean, the number of protocols that we review is up, up and up.  So it's hard to say that it's more difficult to do research than it used to be.

The one area in which that's not true is the effect of HIPAA on medical records research.  Colleagues of mine here and I have a paper in press now showing a 77 percent reduction in medical records research at the University of Wisconsin as a result of the HIPAA regulations; a dramatic reduction – people who were contemplating a study and decided not to do it.  You multiply that nationally, and you have tens of thousands of important epidemiologic studies that are not being done--at little or no risk to human subjects.

Not all of those are going to find the cure for cancer, but somewhere in there are important discoveries, and it's my view that the most important discoveries come from epidemiologic and data-base research, not from clinical research.

So that--HIPAA has had a profound effect.  That's not a Common Rule issue--although the Common Rule is still the substrate of the HIPAA regulations.

So I believe the regulations are deterring research in that area.  But biomedical research--intervention research, and even non-therapeutic research on human subjects--is thriving, despite the transaction costs.  People are figuring out ways to do it and seem not to be deterred.

INTERVIEWER:  Do you think subjects are at any greater risk today, because of this proliferation of studies or other factors--at greater risk than they were before?

DR. FOST:  There's far more technology today, and technology always has risks associated with it.  So--no question but that subjects and patients are now exposed to invasive and interventional techniques that weren't dreamed of 30 years ago.

On the other hand, many of these techniques are safer than their predecessors.  Just to take one example, 30 years ago when we wanted to get imaging of the brain we had to do arteriograms, or pneumoencephalograms--both of which had profoundly greater risks and discomforts than CT scans and PET scans, which are virtually riskless.  So there are many examples of technology that have reduced the risks of doing research.  But there are some that are also much more invasive and more risky.

So it's a mixed bag, but there are certainly more risky technologies today that people want to investigate than there were 30 years ago.

INTERVIEWER:  What are the changes you'd like to see today, in the broad field of human subject protections?

DR. FOST: Well, as I said before--and as Dr. Koski, the former director of OHRP, and now Dr. Schwetz, the new director have both said--there really needs to be a dramatic focus on what regulations matter, and which ones don't matter so much; where to crack down, and where to be more flexible.  That is, I think there needs to be, really, a substantial reduction in the micro-regulation.  That's one area that's important.

The second thing that is, I think, very troublesome today is the multiple levels of second-guessing about decisions by IRBs.  There are investigators now at some institutions who have to go through more than one IRB at their own institution, and then have to deal, in collaborative studies, with IRBs of other institutions--which will inevitably disagree.  A month, and sometimes a year, can be spent doing that.

Study sections are increasingly populated by ethicists who have different ideas about not just the study, but about the consent form, and the syntax of the consent form.  OHRP, obviously, sometimes gets involved in these things.  The NIH Institutes now have ethicists weighing in on these things.  And, of course, people will disagree about the details of a consent form, and about specifics of a study.  And it's simply not very productive.  There are diminishing returns when you have five, six, eight, 10 ethicists--and other scientists, and others--each weighing in, and trying to incorporate, or to meld all these views, and to get a study off the ground.

So I think the threshold for review of ethical issues in research, at the institute level, at the OHRP level, at study-section levels, has to be much broader.  I mean, there needs to be some judgment about whether this study, broadly thought of, is ethically appropriate.  But the sort of micro-regulation of language in the consent form--simply--it's not--I don't think it's productive to have that be looked at four, five, six, seven times, because it's endless, and is really a great deterrent to research, and it is discouraging some people from going into the field.

INTERVIEWER:  In your own field of pediatrics, how do the Subpart D regulations play out in your work?             

DR. FOST: Well, obviously, as an investigator, as well as an IRB chair, we respect and try to comply with Subpart D, and I think Subpart D, like the rest of the Common Rule, is a very useful template for thinking about research involving children.

That said, there are enormous ambiguities in the key concepts of Subpart D, and enormous variation in how IRBs interpret it.  And I think it is timely to take another look at Subpart D, and change some of the language there, or perhaps develop more specific guidelines that would attract consensus.  For example, the definition of "minimal risk" I think was intended to be something other than the way in which it has come to be interpreted.  I was involved in the discussions about the writing of that definition, and such phrases as "the risks of ordinary life" are now realized to be hopelessly ambiguous; that is "the ordinary life" of somebody in Kenya, or Kabul, Afghanistan, is obviously not what the Commissioners had in mind; surely it's not intended that you could do research in those places that had risks--10 percent risk of fatality, because that's the fatality rate of children in those areas.  And, similarly, phrases like "risks that are commensurate with a routine visit to the doctor"--I'm fairly certain what the Commission had in mind was a visit to a doctor for a health supervision visit--a so-called "well-baby visit."  But as studies have shown, IRBs around the country have interpreted that as meaning a visit to a specialist.  So we have nephrologists and gastroenterologists who say a routine visit to my office involves a kidney biopsy, or a small-bowel biopsy, and therefore this is minimal risk--and IRBs sometimes accepting those interpretations.  It's not what was intended and, I think, is not appropriate.  And there are other elements of Subpart D that I think need to be re-looked at and rewritten.

There's a very large issue with Subpart D which the National Commission did not do a great job with.  They tried, but never reached closure on it--and that's the fundamental question of why any non-therapeutic research--even of minimal risk--is ethically acceptable.  This was the great Paul Ramsey-Richard McCormick debate, and it's never really been satisfactorily resolved.

We don't allow research--non-therapeutic research--on adults without their consent, even of minimal risk.  So why do we have a different standard for children?  The claim is made "because it's good for children."  But it would be good for adults, too.  We could make much more progress in adult diseases.

So that fundamental question of what the real ethical justification for any research in children has never been really adequately addressed, in my view, and it would be an interesting project for another commission, or some national body, to take a look at.

I think more thought and reflection and rationale would be helpful as to why we think it's okay to do non-therapeutic research on children who cannot consent, or assent--I mean, infants, two-year-old children--because the strict utilitarian argument that it's good for children as a class could be applied to every other class of human subjects also, and it's not thought to be a sufficient reason for doing research without consent in those settings.

So I think it would be a useful project to get the best minds thinking about that more critically, and then addressing some of the more practical issues of Subpart D, such as the definitions of minimal risk and some other concepts in there.

INTERVIEWER:  Have you ever had any experience with a 407 kind of protocol?

DR. FOST: Not here.  So--no, no personal experience.

INTERVIEWER:  Is that one [of the areas] that you also think could use some more thought to it?             

DR. FOST: A colleague has just written a paper that's been submitted for publication, looking at the 407 reviews that have been done.  This is the first--as far as I know--comprehensive review of all the 407 reviews.  And I've seen a draft of that paper, and on the one hand I'm impressed; I think there's been some very thoughtful discussion, and I think appropriate decisions made.  On the other hand, there seems to be some inconsistency, in both the process and in the substance of the decisions.

So there's not a lot of experience with 407 questions.  And it's still sort of in its infancy, because there just hasn't been a large volume of protocols.

But I think, so far, it's working reasonably well.

This reminds me--as a supplement to your question earlier, about what needs to change--we talked earlier about over-regulation, and over-emphasis on documentation and compliance, which, as I said, I think is having a negative effect on protection of human subjects, because it's diverting the attention of IRBs, and IRB staffs, from things that would better serve human subjects protections.

There are two other things happening that are aggravating that situation.  The first is the shift toward accreditation--which is understandable, and probably will be a more efficient way of providing oversight of IRBs, because OHRP can never visit more than a small number of IRBs--but the accreditation agencies have their own ideas about compliance, and I'm familiar with the guidance standards that both of the present agencies are using.  They both go way beyond the Common Rule; that is, even what I consider to be excessive concern about compliance with dotting every "i" and crossing every "t" of the Common Rule is now going to get worse, because accreditation agencies are requiring IRBs and institutions to have standards in areas that don't even appear in the Common Rule--which will greatly increase--already has--the money and the resources going into compliance.  It's not strictly compliance, but accreditation is a code word for compliance.  And so it's diverting even more of the resources into areas that, in my view, have little to do with protection of human subjects.

So that's–accreditation, I think, is a good thing, but inappropriate regulation is not good.  And, in my view, some elements of these standards have gone too far.

Second, litigation is now becoming a growth industry.  There was very little litigation in human subjects research for the first 25 or 30 years since the National Commission.  But it's now becoming much more common--including litigation against IRBs, with IRB members as defendants--which, of course, greatly ratchets up institutional concern and resources to document things to reduce the risk of litigation--so-called defensive medicine, or defensive research oversight.

So that is changing the landscape also in a way that--again, in my view--has very little to do with protection of human--will not have much effect on protection of human subjects, and actually will have a negative effect.

INTERVIEWER:  Are there things that you think might be done to change the course of where accreditation is going, or where litigation is taking us?

DR. FOST: Well, the accreditation agencies need to calm down and focus on important things, and again, not be obsessed about things that have little relationship to protecting subjects.

Let me give a specific example that dramatizes this.  Some years ago, when OHRP began shutting down research at Duke and subsequent medical centers, it understandably looked at everything when it visited an institution.  So they would find things like failure to document quorum requirements, and they established a principle that goes beyond Robert's Rules.  What all IRBs had been using prior to that was Robert's Rules, under which you establish a quorum at the beginning of a meeting, and if it's not challenged, it's presumed to exist throughout the meeting.

OHRP felt that was not sufficient; that a quorum needed to be proven for every one of a hundred action items.  So IRBs now, as a result of that, have to record the actual vote and document the existence of a quorum.  Well, that requires an enormous amount of paperwork.  Our minutes are 150 single-spaced pages for a two-hour meeting, because of the need to document that and many things like that.

When the accreditation agencies came along, one of them--we were actually the first place to be visited on a sort of test run by one of the accreditation agencies--looked at our minutes about documenting the quorum, and noted that it didn't say whether the non-scientific member of the IRB was in the room at the time, because it didn't have the names of the members.  And since there's a requirement that IRBs have a non-scientific member, if he or she's not in the room, then you don't have a duly constituted IRB.

The result of that--at Johns Hopkins, to take one place that I'm familiar with--was they began passing a clipboard around the room for each protocol, and got the signatures of all 24 people in the room, for a hundred action items.  So, for one meeting, they had 2,400 signatures to document that the non-scientific member and the other appropriate members were in the room.

Now, has there ever in the history of the world been a research protocol that was approved, that shouldn't have been approved, because the non-scientific member was out of the room at the time?  I don't think so.  Does this improve protection of human subjects?  No.  Does it require an enormous infrastructure, and storage, and so on?  And that's just one of many examples.

But that's an example of an accreditation agency going beyond where OHRP had gone and, in my view, OHRP had gone already too far in terms of cost-benefit ratio of these sorts of compliance things.

So, accreditation agencies need to be more reasonable.  That requires effective leadership, appropriate members on their boards and so on.  I'm a member of the board of one of the two agencies, and I've been encouraged by the responsiveness of the agency in trying to cut back on some of these things.  But, in my view, both  of the accreditation agencies still have--are guilty of piling on in this area.  And, as I say, I think it's having a negative effect on protection of subjects, because it's diverting our attention away from much more productive things that we should be doing.

INTERVIEWER:  And on litigation?             

DR. FOST: It's a free country. [Laughs.] I don't know what to do about that.  I'm very uneasy about legislative proposals to reduce access to the tort liability system--to protect industry.  So I've been critical of those who advocate laws that would reduce access by aggrieved plaintiffs.

So it's a tough area in a country that has the tort liability system that we do.  Eventually--eventually--appellate courts will establish principles that may reduce the noise level in litigation, but that is decades and decades away.  And we've seen one very bad example of what can happen when tort litigation in human subjects research gets up to the appellate court level, where judges have little or no knowledge or experience with this.  This had to do with a lawsuit against Johns Hopkins--the Kennedy Krieger Institute at Johns Hopkins--involving a study of lead-infested homes in East Baltimore.  And apart from the merits of the case--the initial case was thrown out, and the plaintiffs appealed it to an appellate court in Maryland--and the judge, in the course of writing his opinion on the case, made a comment that all non-therapeutic research involving children is illegal--clearly, way beyond what the National Commission thought, or what the Common Rule says.

So, Hopkins had to immediately shut down all its non-therapeutic research involving children.  There was a tremendous amount of noise and furor from around the country, and after a relatively brief period, the judge, in an extraordinary comment--I've never heard of this from an appellate court, absent any hearing--just issued a sort of statement saying, "Never mind.  I didn't mean what I said"--or-- "I misunderstood what I said," and rescinded it, or modified it to something that was closer to what the Common Rule says: that you can do non-therapeutic research involving children if it's of minimal risk.  He didn't use that exact language, but that was sort of what he now clarified.

But it was just an astonishing sweeping comment, that had a devastating effect for a short period of time on a major research institution.  And it's just one example of the noise level that's going to occur with these cases--not just at the trial level--but beginning to get up to the appellate level, as judges start to think these things through.

But there are dozens and dozens of appellate court systems--hundreds--in the country-- state and federal.  And finding harmony on these very complicated issues--it may be 50 years before that gets sorted out.

So I think the noise of litigation--some of it obviously appropriate.  I mean, there are obviously wrongdoers, and there is negligence, as in any system.  And litigation needs to be available.  But there's also going to be litigation for inappropriate reasons, and I worry a lot about how long it's going to take us to have a coherent system of appellate law in this area.

INTERVIEWER:  Regarding research in international settings, do you think that the ethical standards that were developed for this country are appropriate for international settings, for international research?

DR. FOST: International research obviously is a much more complicated area.  I mean, we have all the same issues that involve research in the U.S., with all the complexities of different cultures and different regulatory systems and so on.

The Common Rule provides a good substrate for beginning to think about these things.  I think one of the problems in international research right now is the multiplicity of other guidance documents: the Declaration of Helsinki, the CIOMS guidelines, the Nuffield Council report from England--which, first of all, say different things--that is, provide different guidance--but, more importantly, say incoherent things; that is, have guidelines that are widely thought to be ethically incoherent.

So, for example, the Declaration of Helsinki, which famously prohibited placebo-controlled trials until recently, butts up against the Common Rule which obviously allows placebo-controlled trials, and the FDA, which practically requires placebo-controlled trials.  So you have the bizarre situation of the FDA requiring investigators to sign a Declaration of Adherence to the Declaration of Helsinki if you're doing international research, and simultaneously meeting their scientific requirement for appropriately designed studies, which often means placebo control.

It's incoherent.  I mean, investigators are simultaneously doing placebo-controlled studies and saying they agree to the Declaration of Helsinki--at least the former versions of it.  That's just one example of a morally incoherent principle--in my view and the view, I think, of most critics--so that international research is very difficult because you have to satisfy, in some cases, these different guidance documents that are subscribed to.  So people say, "Yes, I subscribe to the Declaration of Helsinki," but they don't really.

There are many other examples like that, of principles--aspirational principles--that sound good, that are in these documents, that I think have not received enough scrutiny, and that are definitely inhibiting and deterring research in developing countries that would be of enormous benefit.  I mean, I can't prove ahead of time that a specific study will be a benefit, but it would be in the interest of people of these countries to have research--ethically appropriate research--done, and much of it--a lot of it--is being inhibited or discouraged by over-attention to these international guidance documents.

So I don't think it's a Common Rule problem.  I think the Common Rule actually serves us very well in international research.  But something more than the Common Rule is needed, and right now there's not an equivalent of the Common Rule, in my view, that is a guidance document--certainly not a regulatory document--that makes as much sense as the Common Rule does.

INTERVIEWER:  Do you think something--a document like that--would be helpful in clarifying...?

DR. FOST: Yes.  And that will eventually happen.  I mean, the Declaration of Helsinki has been modified numerous times, and I think it does keep getting better.  And on very key issues, like placebo-controlled research, it now is tolerant of such studies under appropriate circumstances.

There are still other elements of the Declaration of Helsinki which, in my view, are incoherent and nobody really believes, but they're sort of in there, and they pretend they believe them; for example, a statement that the interests of the subject should always take precedence over the interests of science.  Well, that's exactly the opposite of what happens when you do research.  You're not doing it for the interest of the subject, you're asking him or her to sacrifice their interests in the name of science.  So it's just silly to sign on to a document that states a principle that is contrary to what you're doing.

I think, over time, there eventually will be more coherence to these documents.  But how long that will take is--it will be measured in decades.

INTERVIEWER:  You made an interesting statement--for me--about the interests of the subject being secondary to the interests of science.  In an earlier reference, when you talked about more education for potential subjects, do you think that that idea could be incorporated into an educational program without frightening off...

DR. FOST: The key phrase is "without frightening off."  I mean, you know, talking candidly about these things risks being misquoted, and there are many demagogues out there who will take critical thinking about this and use it in the wrong way.

Yes, you--I mean, there's been this fallacy over the last 30 years of pretending that the ethical principles that guide medical practice should be the principles that guide research.  So, in medical practice it's correct: the interests of the patient should always be primary, and the doctor should never--almost never--be pursuing other goals, other than the interest of his patients.

But in the research setting, the interest of the subject is never--[laughs]--the primary interest.  The primary interest is always to advance knowledge.  And the question is how to do that in a way that's ethically acceptable.

But saying that would be quite shocking.  It's just a simple principle of research, and there are other people--and I have pointed this out in the ethics literature, it's not new to me--but seeing it on the front page of the New York Times would be shocking, and would be used by critics of research as saying, "See, this is where we can't trust these people."

So, no--having candid discussions in an open democracy is always challenging, but I think it's critical that we move ahead.  And to move ahead, we can't just have scholars writing about these problems in scholarly journals, we need people at leadership levels educating the public and the Congress--and the press--to a more nuanced view of this area.

INTERVIEWER:  Is there any other topic, or any other subject that you would like to touch on?

DR. FOST: Well, we've gone far afield from the National Commission and the Belmont Report--so, no, not really.  I mean, I think you've given me a chance to spout off about most things that I feel are important in this area.

One thing I could say that you may figure out a way of splicing in at some point--we talked earlier about my view that the field is presently over-regulated, and I said that this was possibly having an adverse effect on protection of human subjects.  But I think a couple of examples of that might be helpful, if they can find their way into your permanent archive.

Consent monitoring--consent monitoring is encouraged by the Common Rule.  It's not required.  We've done it half a dozen times or so here, on studies that involve substantial risk.  And, of course, there was a lot at stake, in which we thought it was really critical that subjects understand what was going on, and really make an informed choice.  So, by "consent monitoring," I mean either a written test, a quiz, of subjects to see if they understand the most important elements of the study, or an interview by someone unconnected with the research team, to assess subject comprehension.  Every time we've done it, every subject we've interviewed or assessed by quiz got an "A"; that is, the comprehension was 100 percent--at least on the questions we asked.

I think that--due to a Hawthorne effect--that is, I think the fact that the investigator knew that his or her subjects were going to be quizzed, really focused their thinking on being very attentive, because you want your students to do well.  The teacher looks bad if the students all flunk the standardized test.

So I think it has a dramatic effect on improving the quality of consent, due to this Hawthorne effect.  And then, obviously, even if subjects don't do well, at least then you know that and you can go back and say, "We need to figure out a better way to inform these patients or subjects."

Well, it takes resources to do that.  It's very time-consuming.  IRB staffs would love to do things like that.  It's much more interesting, because it's much more productive, and it gets much more at the core of what you're trying to do.  But they don't have time to do it.  And there's not world enough, or time, and there aren't infinite resources.  So when they're obsessed about the upcoming accreditation, or fear of an OHRP shutdown, or fearful of litigation--since consent monitoring is not required by any of those entities--it's not going to get done.  The thing that is going to get done is getting 2,400 signatures at a meeting to make sure that the non-scientific member wasn't going to the bathroom when a protocol was being reviewed.  So, consent monitoring is one example.

A second is just the writing of consent forms.  Investigators aren't very good at it.  They're not written at an eighth-grade level almost anywhere.  IRB staffs are much better at it.  They're experienced at it, and they have a much better idea of how to get the language right.  But, again, it takes an enormous amount of time for them to do that.  The famous Maryland appellate judge commented that he thought it was inappropriate for IRBs to be helping investigators write consent forms; that they're not supposed to be helping the researchers, they're supposed to be regulating and overseeing the research.  So it's another example of a really ill-advised comment.

But it is an example of something that IRB staffs--if they hired the right people, people skilled in writing and so on, which most of them are--could do very well, but they simply don't have the time for it.

So those are two examples of the way the present resources could be used more effectively than taking quorum calls a hundred times a meeting.

INTERVIEWER:  Could the Tuskegee experiment occur again today?               

DR. FOST: Obviously not, is the short answer; that is, an IRB would not approve lying to people, it would not approve--there was no consent, obviously, in the Tuskegee study.  So the short answer is "no."

That said, there's a very interesting article which is stirring up a firestorm by an anthropologist named Richard Shweder at the University of Chicago.  And the point of his article is that the Tuskegee study, with a little bit of modification, not only could be approved today, but should be approved today, and that the men who were in it would very much have wanted to be part of it.

We don't have time here to go through all of Shweder's points, but one of them is--there are many critiques--many problems with the Tuskegee study, and one of the most basic ones is that effective treatment was withheld from men with syphilis.

It turns out that the Tuskegee study was about men with late latent syphilis, not acute syphilis.  This is not widely appreciated by many people who talk about it.  And the treatment at the time--arsenicals--probably had no benefit on latent syphilis.  So, point number one--and it was very toxic treatment.  Compliance with it was very poor because it was such arduous treatment and so uncomfortable, and probably had little, if any, benefit anyway.  So the withholding of arsenical treatment at the time almost certainly could be approved today--that is, if that was the only treatment available.  If you had syphilis--untreatable syphilis, no known effective treatment--why would you not want to be in a study where you were followed carefully, got some of the small ancillary benefits of being seen regularly--a few indirect benefits, funeral costs, and so on.  Other than the one spinal tap that was done, and some men--I mean, we have subjects today who agree to have spinal taps done for non-therapeutic reasons for studies, so they might or might not have agreed to that.  But I don't think that was a critical part of the study.  But just being followed without treatment--what should be so controversial about that?

When penicillin became available, there was, at the least, controversy again about whether it was effective for late latent syphilis.  And there was divided opinion about that.  There were no good studies in the early days of penicillin, obviously.  There were some risks to it.  There was something called the Herxheimer reaction: a sudden breakdown of spirochetes all over the body that produces an endotoxin, which had serious adverse effects.

So at least for a time after penicillin became available--while it was clearly curative of acute primary syphilis, its effect on latent syphilis--Shweder quotes medical literature to this effect--was ambiguous.  So, again, withholding it might not have been so deviant from the standard of care in the early days of penicillin.  The time by which penicillin clearly was thought to have some effect was the late '60s, early '70s, Shweder claims--right around the time when the Tuskegee study was exposed, and Congressional hearings were held, and it was suspended.

So, the argument he makes is not that it was okay--certainly deceiving people, lying to people is never okay, and nobody would approve that.  But his claim was that it wasn't necessary to lie to people; that if they had been told the truth, they might very well not only have joined the study, but stayed in it for quite a long time.

Now, there's some point at which that would not have been the case; that is, at which efficacy of penicillin, even for latent syphilis came to be documented and established, at that point it would have been obviously appropriate to suspend the study and try to get treatment to the men, and to inform them of whatever the knowledge was at the time.  But at least for the first 20-plus years of it, there's a reasonable argument that it could be approved, absent some of the obvious unethical parts of it, which the deception was obviously the worst.

There's another article that I know a colleague has written that he's trying to get published now, with a similar look-back at the Willowbrook studies.  I mean, the other great poster child of unethical research is the famous Willowbrook study.  This was a state institution in New York for profoundly retarded children and adults, in which hepatitis was endemic, and in which Dr. Krugman, the investigator, gave hepatitis--administered fecal extracts--to children who were being admitted.

Well, there's more to that story than has been told, and it's been mis-told in many ways over the years.   I don't want to name the colleague who's writing this paper because I don't know if he wants to be so identified until the paper's published, but it's a very thoughtful paper.  It has arguments in it that I've been making, actually, for quite a long time, but he does it better.  And I'm glad that he's written it up.  But it's--I mean, it's not clear that the Willowbrook study was contrary to the interests of the children who were enrolled in it.  It's often been presented as a non-therapeutic study and that's probably not correct.  Dr. Krugman saw it as a vaccine trial--a very primitive vaccine trial which eventually led to the development of hepatitis B vaccine.  And there were reasonable prospects of benefit in his view, and other informed experts in the field have agreed with that view.  Again, presented to an IRB today, knowing everything that was known then, it's not at all clear that it would be turned down, or that an informed parent wouldn't want their child in it.

One other interesting thing about the Willowbrook study that's well documented, but is not part of the sort of lore of Willowbrook: the consent standards that Dr. Krugman used exceed--exceeded--any study that I've known that's ever been done since.  And this was in the 1960s, before the Common Rule, before the National Commission; no peer review, and very primitive notions of informed consent.  But Dr. Krugman required any parent who was contemplating having their child enrolled in this study to come to a series of meetings--group meetings--and asked them to bring their pediatrician along, in which enrollment in the study was not discussed; just the study itself was being discussed.  He wanted them to be part of the design and the development of the study, and wanted them to understand as much as possible. And after a series of these meetings, then, if they were interested, they could come to a meeting--again, with their pediatrician present if he or she was available--get a consent form, have it explained to them, "Take it home, reflect on it," and only when they were quite certain that they understood it and really wanted to be part of it could they enroll.  That's a standard of informed consent that, off the top of my head, I don't know of anybody who's gone that far.

So this picture of Krugman as sort of a mad scientist, using children as a means to an end, and exposing them to great risk of harm is not a fair and balanced way to think about the Tuskegee study--I mean, the Willowbrook study.

I should say that I did research at Willowbrook as a medical student, and again didn't have to go through any committees, or get consent from anybody.  I drew blood samples from children for studies I was doing in medical school.  So I have some arguable conflict of interest in trying to defend research involving Willowbrook.  But others are starting to look at Willowbrook.

So I think Tuskegee and Willowbrook deserve at least a re-look.  It would be an interesting national conference to look at both of these studies in the light of these papers and see what the lessons really are.


Content created by Office for Human Research Protections
Content last reviewed on March 18, 2016