Thereza Imanishi-Kari, Ph.D., DAB No. 1582 (1996)

Department of Health and Human Services

Departmental Appeals Board


SUBJECT: Thereza Imanishi-Kari, Ph.D.
Docket No. A-95-33
Decision No. 1582

DATE: June 21, 1996


The Research Integrity Adjudications Panel appointed to hear the appeal of
Thereza Imanishi-Kari, Ph.D., from findings of scientific misconduct
concludes that the Office of Research Integrity (ORI) did not prove its
charges by a preponderance of the evidence. The Panel recommends that no
debarment be imposed and determines that no other administrative actions
should be taken.

ORI's charges relate to a paper co-authored by Dr. Imanishi-Kari
and published in the journal Cell in 1986. ORI found that Dr.
Imanishi-Kari had intentionally and deliberately fabricated and
falsified experimental data and results. Based on a report
issued by ORI, ORI and the Debarring Official for the Department
of Health and Human Services jointly proposed various
administrative actions, including a ten-year debarment from
receiving federal grant and contract awards.

The Panel included two members of the Departmental Appeals Board,
Judith A. Ballard and Cecilia Sparks Ford (who presided), and an
independent scientist, Dr. Julius S. Youngner, Distinguished
Service Professor Emeritus at the University of Pittsburgh School
of Medicine.

Dr. Imanishi-Kari's appeal presented complex and interrelated
arguments in a dispute which has a long and tangled history.
The Panel held a six-week hearing beginning in June 1995. We
amassed voluminous exhibits, including more than 70 original
laboratory notebooks, and a 6500-page hearing transcript. The
parties submitted lengthy factual and legal arguments, and ORI
proposed thousands of findings of fact and conclusions of law.
The record was completed in April 1996.

The Panel examined all the record evidence piece by piece,
including the original laboratory notebooks, as to each charge.
The Panel weighed testimony from numerous experts, not by
evaluating their credentials--all of which were impressive--but
by determining the basis for each opinion, any underlying
assumptions, and the probative value. The Panel also evaluated
the credibility of witnesses with firsthand knowledge of relevant
events, after observing their demeanor and comparing their
testimony to other evidence.

ORI presented forensic evidence using statistical and document
examination analyses. These analyses were based on unsupported
assumptions and were, at best, inconclusive. To determine what
inferences to draw from the evidence, the Panel considered the
parties' arguments about all of the evidence and evaluated
relevant factors (such as possible motive).

Our decision is organized as follows. First, we set out the
legal framework for our decision and the undisputed facts about
the history of the case. We then summarize a number of general
findings made after analyzing the evidence on all of the charges;
these findings are reflected, but not necessarily repeated,
throughout our analyses of individual charges. The general
reader may find these first three sections a useful overview of
the case. We next discuss general conclusions on the statistical
and document examination analyses, and why they are of limited
probative value. Finally, we address in detail the evidence and
arguments on each individual charge.

Legal framework


The Department of Health and Human Services has the authority to
protect the integrity of research it funds by taking
administrative actions against those who have engaged in
scientific misconduct. In accordance with the guidelines for
these cases, Dr. Imanishi-Kari requested an evidentiary hearing
in response to the joint notice (Charge letter).

The Departmental Appeals Board (DAB), in the Office of the
Secretary, hears appeals from findings of scientific misconduct
made by ORI, whether or not debarment is one of the actions
proposed. 1/ See 57 Fed. Reg. 53,125 (1992); 59 Fed. Reg. 29,809
(1994). Under the guidelines, a Research Integrity Adjudications
Panel is appointed to decide each appeal. For this case, after
consulting with the parties, the Board Chair appointed Dr.
Youngner, a virologist and immunologist, to serve on the Panel.

Where a debarment has been proposed, the Panel provides findings
of fact and a recommended decision for the Debarring Official.
The Debarring Official may reject the findings of fact only if
the Debarring Official determines them to be arbitrary and
capricious or clearly erroneous. The Panel's decision is the
final agency action with respect to other administrative actions
proposed by ORI.

Regulations at 45 C.F.R. Part 76 and 48 C.F.R. Parts 9.4 and 309
govern debarment actions. In accordance with those regulations,
the charge letter alleged these causes for debarment based on the
findings of scientific misconduct:

A history of failure to perform or of unsatisfactory
performance of one or more [grants or contracts]; or

* * *

Any other cause of so serious or compelling a nature that it
affects the present responsibility [of the individual or
entity].

45 C.F.R. § 76.305(b)(2), (d); 48 C.F.R. § 9.406-2(b)(1)(ii), (c).

The debarment regulations provide for proceedings that are as
informal as practicable, consistent with principles of
fundamental fairness. However, the respondent must be afforded
an opportunity to appear with a representative, submit
documentary evidence, present witnesses, and confront any witness
the agency presents. The Debarring Official's decision on the
proposed debarment is made after any fact-finding proceeding and
is based on the administrative record. 45 C.F.R. §§ 76.310
through 76.314 and 48 C.F.R. § 9.406-3.

The Panel provides a de novo review. What this means is that a
Panel decision is not a review of what ORI did during its
investigation or whether what ORI found was reasonable based on
the evidence ORI considered. In this case, the Panel held a
28-day evidentiary hearing during which both sides had an
opportunity to present testimony from witnesses and documentary
exhibits. This hearing was Dr. Imanishi-Kari's first opportunity
to confront and cross-examine witnesses against her and to test
the expert opinions on which ORI relied. 3/ Our decision is
based solely on the evidence admitted into the record before us.

Under the guidelines, the Panel is to determine whether ORI
proved any charges of scientific misconduct, and, if necessary,
to determine the appropriateness of the proposed administrative
actions. The guidelines and the debarment regulations require
ORI to prove scientific misconduct by a preponderance of the
evidence. This means evidence that is more convincing than the
opposing evidence and shows as a whole that misconduct was more
probable than not. 4/

The regulatory definition of scientific misconduct, first
published in 1989, states:

Misconduct or Misconduct in Science means fabrication,
falsification, plagiarism, or other practices that seriously
deviate from those that are commonly accepted within the
scientific community for proposing, conducting, or reporting
research. It does not include honest error or honest
differences in interpretations or judgments of data.

42 C.F.R. § 50.102. Findings of scientific misconduct, if
proved, carry serious consequences. Thus, a federal agency
contemplating taking administrative actions against an individual
on such grounds must have a fully adequate basis in law and fact.

The charges here relate to conduct allegedly occurring before
this definition was published. ORI said, however, that it was
charging only intentional and deliberate falsification and
fabrication. It is not disputed that such conduct could be a
basis for a prospective action to protect the integrity of
federal research, even if the conduct occurred before the
definition was published. As we have noted before, however,
"[t]he unquestionable duty of a scientist to seek accuracy . . .
does not necessarily extend to a duty to achieve perfection or
freedom from all inadvertent error." Dr. Rameshwar K. Sharma,
DAB No. 1431, at 13 (1993). Previous rulings in scientific
misconduct cases relating to conduct occurring before the 1989
definition was published have made clear to ORI that charges may
not be expanded beyond that definition to encompass honest errors
or honest differences in interpretations or judgments of data.

During and after the hearing here, ORI sought to expand its
charges beyond those stated in the charge letter, to encompass
conduct that would not amount to intentional and deliberate
falsification or fabrication but which ORI or its witnesses
considered bad practice or bad interpretations or judgments of
data. This raised questions of fundamental fairness.
Dr. Imanishi-Kari could not reasonably have been expected to
respond to charges not previously made, concerning conduct not
mentioned in the ORI Report or practice standards not previously
cited as a basis for a misconduct finding.

ORI also suggested in post-hearing briefing that a previous
decision had set a documentation retention standard for research
data. Contrary to ORI's suggestion, that case did not find
scientific misconduct based on lack of documentation, but was a
plagiarism case which merely referenced a regulatory standard for
record retention under federal grants as support for an inference
drawn from the lack of any original primary data for published
work. Dr. David C. Bridges, DAB No. 1232, at 83-87 (1991). We
also note that the retention requirement is a three-year
requirement, and there is no basis in the record here for
concluding that Dr. Imanishi-Kari had failed to meet it, to the
extent it applied. See 45 C.F.R. § 74.21 (1984-86). Similarly,
ORI's reliance on a 1989 publication as establishing standards
for research conducted before that time is misplaced.

Our analysis in this decision evaluates whether ORI proved by a
preponderance of the evidence intentional and deliberate
fabrication or falsification, as charged.

Background


The Cell paper and its central findings

The paper at issue here, entitled "Altered Repertoire of
Endogenous Immunoglobulin Gene Expression in Transgenic Mice
Containing a Rearranged Mu Heavy Chain Gene," was published in
1986. 45 Cell 247 (April 25, 1986) (Cell paper). 6/ This paper
resulted from an interdisciplinary collaboration between two
laboratories at the Massachusetts Institute of Technology (MIT):
one headed by Dr. Imanishi-Kari and the other by Dr. David
Baltimore. Stipulation (Stip.) 2, 3. 7/ Dr. Imanishi-Kari came
to MIT in 1981 after extensive experience in laboratories in
Finland, Germany, and Japan. Stip. 1; Ex. R31. 8/ Dr. Baltimore
had approached Dr. Imanishi-Kari about collaborating to study
antibody expression in mice after a gene derived from one strain
(BALB/c) was microinjected into a closely related strain
(C57BL/6). 9/

The central finding of the Cell paper was that the introduction
of the transgene changed the antibody repertoire. The Cell paper
reported that antibodies produced by the transgenic mice included
a high frequency of antibodies with the idiotype of the inserted
gene but heavy chains characteristic of the endogenous mice (into
which the gene was inserted). 10/ In order to study this
phenomenon, the authors used both serology and molecular biology.
Dr. Imanishi-Kari was primarily responsible for the serological
work. 11/

The main portions of the Cell paper involved in the charges
before us are Figure 1, Table 2, and Table 3. Figure 1 reported
experiments testing expression of idiotype-positive antibodies in
normal and transgenic mice using anti-allotype reagents (BET-1
and AF6). 12/ Table 2 reported frequency data about idiotype-
positive antibodies from hybridomas, comparing normal and
transgenic mice. Table 3 provided both molecular and serological
information about 34 individual hybridomas (only the serological
information is at issue).

The co-authors and others in the laboratories

The co-authors of the Cell paper, besides Dr. Imanishi-Kari and
Dr. Baltimore, were Dr. Weaver, Dr. Reis, Mr. Albanese, and Dr.
Costantini. 13/ The record shows that there were frequent
discussions between the researchers in the two laboratories about
their experimental results. Dr. Weaver was a post-doctoral
fellow in Dr. Baltimore's laboratory and wrote the first draft of
the paper. Stip. 18; FFCL II.BB.4. He conducted much of the
molecular analyses reported in the paper and assisted with some
of the serological work (particularly the first fusion
experiments). Dr. Baltimore supervised the molecular work and
was actively involved in the preparation of the paper. Dr. Reis
worked in Dr. Imanishi-Kari's laboratory briefly in 1982 and then
from February 1985 to January 1986. Stip. 5; ORI FFCL II.X.3.
Dr. Reis worked closely with Dr. Imanishi-Kari on the serological
work. Mr. Albanese was a research assistant in
Dr. Imanishi-Kari's laboratory and worked on some of the
molecular biology. Stip. 6; ORI FFCL II.A.1, 4. He was not
actively involved in drafting the paper.

In June 1985, Dr. O'Toole came to Dr. Imanishi-Kari's laboratory
as a post-doctoral fellow on a one-year appointment to work on
experiments to extend the findings in the Cell paper. Stip. 4;
ORI FFCLs II.W.2-4. Dr. O'Toole provided some data cited but not
shown in the Cell paper, and read and commented on the paper in
draft form. Cell paper 257-58. Among the other persons working
in Dr. Imanishi-Kari's laboratory at MIT were two then-graduate
students, Dr. Boersch-Supan (1981-85), who also read the draft
and was acknowledged in the Cell paper, and Dr. Maplethorpe
(1981-85), who had no direct role in the Cell paper.

Prior history of this matter

The dispute in this matter began in May 1986 when Dr. O'Toole
reviewed a laboratory notebook (now called R-1) that Dr. Reis had
left when she departed and had made available for Dr. O'Toole to
use in her work with breeding and testing mice. Dr. O'Toole was
concerned that data on some pages of that
notebook were inconsistent with data in the Cell paper. Dr.
O'Toole copied the pages which concerned her (the "17 pages") and
took them to Dr. Huber (a professor of pathology at Tufts and a
friend at the time of both Drs. O'Toole and Imanishi-Kari). Tr.
1773, 1801 (Huber), 1692-93 (Woodland). Dr. Huber sought the
assistance of two other immunologists who were part of a group of
Boston immunologists that met together regularly for many years,
both of whom had previously had friendly relationships with both
Drs. O'Toole and Imanishi-Kari and both of whom had expertise in
idiotypes. 14/ Tr. 1708 (Woodland), 1775-76 (Huber), 2787.

At that point, Dr. O'Toole "did not think that the matter was one
of fraud and wanted to keep it among friends." ORI Br. 11. On
that basis, Drs. Wortis and Huber met twice with Dr. Imanishi-
Kari (Dr. Woodland was unable to attend the second meeting) and
concluded, after discussion and review of data, that the concerns
raised were satisfactorily addressed. Ex. H230; Tr. 1782-83
(Huber). Dr. O'Toole did not agree, and went to MIT with her
concerns. Dr. Eisen at MIT asked her to put them in writing so
that he could understand them more clearly. See Exs. H265, R46; Tr.
1962, 1987 (Eisen). Dr. Eisen then met with Drs. Imanishi-Kari,
Baltimore, Weaver, and O'Toole to discuss Dr. O'Toole's concerns.
He felt that Dr. O'Toole raised questions about whether the data
justified the conclusions in the Cell paper and about possible
alternative interpretations. Tr. 1993 (Eisen). He concluded
that the dispute was of a kind "not uncommon in science," and
normally resolved by further work and debate, rather than one
involving misconduct. Exs. H233; R46, at 17; Tr. 1968-69.

Because concerns about this paper persisted, NIH conducted an
investigation of the issues. A panel of three scientists, Dr.
Joseph Davie, Dr. Hugh McDevitt, and Dr. Ursula Storb, was
established to assist in the investigation (NIH Scientific
Panel). The charge to the NIH Scientific Panel was broader than
ours, but the panel considered most of the same scientific issues
presented by ORI before us. 15/ They found errors in the paper
serious enough to call for corrections but "no evidence . . . of
fraud, misconduct, manipulation of data, or serious conceptual
error." Ex. H262 (Letter from Dr. James Wyngaarden, dated
January 31, 1989, at 1). In fact, the NIH Scientific Panel
concluded after an "intensive review" of the raw data in the
notebooks of Dr. Reis, Dr. O'Toole, Dr. Weaver and
Dr. Imanishi-Kari, that it was "impressed by the amount of work
done in support of the studies . . . by the completeness of the
records, and by the abilities of both Drs. Imanishi-Kari and
Weaver to find, accurately interpret, and present data" on the
experiments then three or four years in the past. Ex. H262 (NIH
Scientific Panel Report at 10).

The Subcommittee on Oversight and Investigations of the House
Committee on Energy and Commerce of the U.S. Congress
(Subcommittee), then chaired by Congressman John D. Dingell,
conducted hearings in April 1988, May 1989, and May 1990 relating
to the Cell paper. A subpoena for laboratory data relating to
the Cell paper was issued by Congress in June 1988, and
Dr. Imanishi-Kari submitted her records to NIH in July 1988. See
Exs. H248 and H249. 16/ The Subcommittee requested the United
States Secret Service to conduct forensic analyses (particularly
document examination testing) of some of the laboratory notebooks
of Drs. Imanishi-Kari and Reis. See Ex. H500; ORI FFCL III.J.1.

The NIH investigation was reopened in May 1989. 17/ Ex. H269.
The case was referred to the U.S. Attorney's Office for the
District of Maryland which declined to prosecute. Stips. 83, 84.
ORI issued a 231-page report (with attachments) on October 26,
1994, concluding that Dr. Imanishi-Kari had committed scientific
misconduct in relation to the Cell paper. Stip. 93; ORI Report
passim. The charges against Dr. Imanishi-Kari were set out in a
letter to Dr. Imanishi-Kari, also dated October 26, 1994. Charge
letter passim. On November 23, 1994, Dr. Imanishi-Kari requested
a hearing on ORI's charges against her. Stip. 95.

The case presented before us turns on the forensic attack on the
authenticity of data recorded in the I-1 notebook, since ORI has
stated that if the questioned data existed they were sufficient
to support the paper. 18/

General findings


In this section, the Panel explains how we evaluated the evidence
and, generally, why we do not draw the same inferences as ORI.
The findings we set out here are derived from the record as a
whole and from the detailed analyses of individual charges
addressed in other sections of the decision.

As noted above, several scientific reviews of the 1986 Cell paper
found that, although significant inaccuracies or misstatements
occurred, no scientific misconduct was involved. In reexamining
this case and at the hearing before this Panel, ORI presented
both complex scientific analyses and forensic evidence (not
considered by previous reviewers). ORI used statistical and
document examination techniques to reopen the question of whether
scientific misconduct did occur.

The sheer volume of material presented by ORI was enormous. The
number of exhibits and reports generated over the many years of
investigations and the lengthy hearing process required a massive
amount of time and resources to evaluate fairly. Weighing
evidence is not a mechanical process comparing the number of
pages or hours of testimony for each side; rather, the fact-
finder must assess the quality and probative value of all of the
evidence. This approach is in accord with that of courts
confronting large records. 19/

The main problem with ORI's case was not that more evidence was
needed but that less equivocal and more independently decisive
evidence was required for ORI to carry its burden of proof.

The Panel found that much of what ORI presented was irrelevant,
had limited probative value, was internally inconsistent, lacked
reliability or foundation, was not credible or not corroborated,
or was based on unwarranted assumptions. Of course, we were
unable to describe each and every assessment of evidence
individually because to do so would require a decision of
unmanageable length and complexity without altering the outcome.
20/ Where important, however, we have explained why certain
evidence was disregarded or discounted. We have also explained
generally in the next section why we conclude that (1) the
statistical analyses would support an inference of scientific
misconduct only if the assumptions underlying the analyses were
valid and the other evidence compelling (conditions not met
here); and (2) none of the document examination findings were
meaningful standing alone, and even collectively they were not
direct evidence of any act of fabrication.

Because the history of this case involved a direct attack on
Dr. Imanishi-Kari's honesty, we evaluated her statements
carefully and relied primarily on evidence in the record other
than her testimony, except where she was not contradicted. The
credibility of her testimony before us was bolstered, however,
when much of the evidence in the record, and in particular some
of the document examination evidence, corroborated her statements
and directly contradicted representations made by ORI.

Since neither party's case depended entirely on direct evidence,
each party sought to draw inferences from the evidence in the
record. To evaluate competing explanations of observed anomalies
or data disputes, the Panel was obliged to consider which
explanations are more likely and to weigh the reasonableness of
the inferences which the parties would have us draw. Evaluating
alternative theories of a case and drawing reasonable inferences
from the facts found is a proper and necessary part of any
decision-making process. Were we to decline to do this, on the
grounds that this constitutes speculation (as ORI suggested),
ORI's case would fail entirely.

After examining the entire record, we reached a number of general
findings about why it is not likely that falsification or
fabrication was the cause of any errors or anomalies presented to
us. These findings are reflected in our analyses of individual
charges, but not repeated there:

o  All of the scientists who looked at the questions raised
about the Cell paper over the preceding decade (at Tufts,
MIT, and on the NIH Scientific Panel) found no evidence that
scientific misconduct had occurred. 21/ While they found
errors in the paper, and the authors published corrections,
the dispute appeared to center on differences about how
judgment was exercised, how experiments might be
interpreted, and whether phrasing in the paper was correct.
Although ORI raised some new or differently-presented
scientific issues before us, these also proved to be largely
matters of interpretation, judgment, and confusing
laboratory jargon.

o  The one important aspect of the case which was not
considered by previous reviewers is the forensic attack on
the authenticity of some of the laboratory data based on
statistical and document examination methods. However, at
most, these analyses identified some possible anomalies, but
provided no independent or convincing evidence that the data
or documents were not authentic or could not have been
produced during the time in question.

o  Most of the data allegedly fabricated were not even included
in the Cell paper. Two-thirds of the data in the I-1
notebook were not challenged, including much of the original
data that were published.

o  Often, ORI found that Dr. Imanishi-Kari had misrepresented
data, without fully understanding how she had represented
data. We examined the Cell paper and her notebooks as a
first step in analyzing whether the experiments and results
were misrepresented.

o  Many of the questioned notebook pages contain material that
is not helpful in supporting the conclusions in the Cell
paper. 22/ ORI's own expert, Dr. Gilbert, testified that
"[p]eople provide, when they fake experiments, they provide
data that's generally too accurate. They provide data that
is too sharp." Tr. 1481-82. Yet in many cases the results
included on the questioned pages were conflicting or bizarre
in ways more likely to raise than resolve scientific
questions. Even if Dr. Imanishi-Kari were cleverly avoiding
too perfect a look, there is no demonstrated reason that she
would create bizarre and conflicting results which could
only call attention to problems rather than mask them.
Furthermore, in many instances, it was as hard to comprehend
how particular alleged anomalies could have been created by
any fabrication scenario as it was to understand how they
could have occurred as errors or artifacts in the
experimental process.

o  A degree of sloppiness in recording and maintaining data
certainly may warrant criticism of Dr. Imanishi-Kari's care
in recordkeeping (tempered with a recognition that standards
in this area have evolved since the mid-1980's). We find it
completely illogical, however, that Dr. Imanishi-Kari would
retain intact the most central data complete with flaws and
then intentionally create false data on more peripheral
points that are equally imperfect. 23/

o  Many of the most troubling forensic attacks are mounted
against the most trivial or peripheral data. For example,
the "January fusion" controversy, which was one of the most
troublesome, centers on the data from transgenic fusions not
used for any purpose whatsoever. The only explanation
offered for why such data would be fabricated was that the
addition of transgenic fusion data might lend verisimilitude
to a related normal control fusion used as one part of the
negative results reported on normals in Table 2 but about
which little direct forensic evidence was offered. See ORI
Br. 83.

o  ORI's descriptions of the forensic findings were not always
dependable. For example, as described by ORI, one type of
Secret Service analysis seemed to provide support (albeit
limited) for ORI's position on two important issues: what
coat was used for a certain isotyping assay and whether
certain control results related to a particular allotyping
assay or to a later retest. The actual results, however,
were not as described and were consistent with (indeed,
arguably substantiated) Dr. Imanishi-Kari's version of
events (which was also corroborated by other evidence).

o  ORI's descriptions of experimental results were similarly
not reliable. For example, ORI's findings relied on a
definition of "double producers" (a term used in the paper)
that was not confirmed by any of the witnesses, even ORI's
own independent experts. In evaluating whether double
producers existed, ORI treated +/- results on an assay read
by eye as "positive scores" definitively detecting double
producers. The results were equivocal, however, and were
given undue significance by ORI. Frequently, ORI
interpreted the same or similar results one way for one
purpose and another way for another purpose. This
circularity in ORI's reasoning made it difficult to sort out
ORI's various arguments.

o  In some areas, ORI's attempts at establishing a possible
motive for fabrication or falsification require very
convoluted reasoning. For example, Dr. Imanishi-Kari is
alleged to have fabricated a page with very poor results
using a reagent (BET-1) which ORI charged did not work as
well as reported, in order to bolster a story that BET-1
worked poorly for a while but later worked well in some
other instances. See ORI Br. 41-48. On the whole, most of
the alterations for which no reasonable motivation was
adduced look much more like the marks of untidy "real life"
than like suspicious efforts at some intentional laundering.

o  The Cell paper as a whole is rife with errors of all sorts.
Many of them are obvious to a careful reader and in no way
strengthen the authors' arguments. The authors addressed a
number of these errors in published corrections (including
an overstatement of the degree of specificity of BET-1 in
relation to Figure 1 and clerical errors in Table 3).
Exs. H2 and H3. There are additional errors evident on the
face of the paper, some of which, despite all these years
and layers of review, have never previously been pointed out
or corrected. 24/ Responsibility for the pattern of
carelessness in writing and editing of this paper must be
shared by all the participants, including the main drafter
(Dr. Weaver), the leading collaborators who shaped the
communication and drafting process (Dr. Baltimore and
Dr. Imanishi-Kari), the contributing authors who failed to
catch errors in their areas of expertise, those who read the
paper in draft form (including Dr. O'Toole and Dr. Wortis),
and the reviewers and editors who failed to pick up errors
in the original submitted text of the paper. While a high
rate of careless errors is no defense to intentional
falsification and fabrication, the presence of so many
pointless mistakes at least raises a question whether the
mistakes singled out as intentional (because they arguably
favor the authors) really represent conscious efforts to
deceive.

o  One of the strengths of the Cell paper was the combination
of serology and molecular biology in studying the effects of
the insertion of the transgene on the murine immune system.
However, the same interdisciplinary collaboration between
two laboratories that produced this strength also presented
apparent obstacles to communication and risks of
misunderstanding. In a number of areas, poor communication
may have contributed to confusion in interpreting and
presenting experimental results.

o  Most of the witnesses with personal knowledge of contested
events testified on behalf of Dr. Imanishi-Kari. While ORI
suggested that Dr. Imanishi-Kari's witnesses might be
biased, ORI did "not contend that [they were] lying." ORI
Reply Br. 22. Conflicting accounts of the events by ORI's
witnesses were uncorroborated.

o  Finding intentional and deliberate falsification and
fabrication as alleged by ORI would require a conspiracy of
authors and an intentional coverup by those to whom
Dr. O'Toole turned at MIT and Tufts. 25/ Yet, it is
unlikely that Drs. Weaver and Baltimore conspired with Dr.
Imanishi-Kari to create results used in the Cell paper when
that paper's conclusions significantly diverged from some of
their prior research and theories and presented surprising
results, rather than simply bolstering their prior work.
While Dr. Imanishi-Kari's colleagues might judge her more
generously based on friendship, there is no basis in the
record to suggest that all of these scientists would
jeopardize their own careers by lying to save hers. 26/

o  If Dr. Imanishi-Kari had plotted to publish false data in
the Cell paper or knowingly to interpret the data in
misleading ways, it seems hardly likely she would hire
Dr. O'Toole to extend the work reported in the paper, have
her review a draft of the paper, and give her access to a
notebook containing much of the problematic data.

o  Much of ORI's attack on the Cell paper seemed disconnected
from the context of science at the time the published work
was performed. We sought to evaluate Dr. Imanishi-Kari's
motivation and actions with as much understanding as
possible of the state of knowledge, preceding work, and
expectations of the authors as reflected in the progression
of their actual scientific work.

This context informs our consideration of specific charges, and
our evaluation of credibility of testimony and other evidence
throughout the decision.

Contrary to what ORI argued, we need not distinguish between the
parties in determining what inferences to draw. ORI argued that
its "account of what likely happened must necessarily be
speculation," since no eyewitnesses to any misconduct had been
found and "no confession of guilt" had been made. ORI Br. 19,
n.6. On the other hand, ORI implied, Dr. Imanishi-Kari could
have testified about precisely what she did do, so that her
"after-the-fact rationalizations" must be disregarded as "self-
serving and designed to mislead." Id. This approach thus begins
with an assumption that Dr. Imanishi-Kari is guilty of misconduct
and anything less than a "confession" of her methods constitutes
rationalization. The opposite assumption, however, does not lead
to a conclusion that all she has to do to set the record straight
is to tell the truth. An innocent person might well not know
precisely how anomalies came to exist in data generated and
organized years earlier. For example, the claim that certain
data display odd statistical patterns (when analyzed) arose only
in 1990, and Dr. Imanishi-Kari might quite innocently be unable
to recall if anything unusual occurred during the experiment done
over five years before or to test equipment or reagents to
ascertain what artifacts might have arisen. Similarly, when
questions are raised years later about why the intensity of ink
on two counter tapes might be different, Dr. Imanishi-Kari might
well be unable to recall precisely what printers were used, when
ribbons were changed, or even whether a particular counter tape
might have been mislabelled or misfiled. As a result, both
parties were equally compelled to explore alternative
explanations that might account for the present state of the
experimental record. No unfavorable inference can reasonably be
drawn from Dr. Imanishi-Kari's inability to provide definite
information about how each perceived anomaly occurred.

The Panel found that the "notebooks" and raw data were often
incomplete and difficult to interpret with certainty, even by
Dr. Imanishi-Kari. This is not particularly remarkable, in and
of itself. For someone who did not conduct the experiments to
parse through the raw data of another scientist is likely to be
extremely difficult even when the data are current. 27/ Data
from many years ago are even less likely to be complete and
transparent upon review. 28/

In some respects, a parallel to the difficulty of after-the-fact
interpretation of one scientist's data by an outside observer
arose at the hearing with respect to the original ink analysis
data of the Secret Service. Much of the raw data are missing or
destroyed at this point (many of the actual ink analysis plates
have been broken or have faded). Mr. Stewart, one of the
document examiners, testified that he does not always prepare
work sheets to document his ink analyses, relying sometimes on
his own "cryptic" notes. Tr. 3762, 4056. Such recordkeeping
decisions, he stated, are "really up to the examiner." Tr. 4056.
Work sheets recording the original reading of the plates by the
examiners are often too incomplete to support conclusions in
themselves or to permit another ink analyst to independently
review their interpretation. He testified that: (1) he knows
from his report the conclusions he drew from the original plates;
(2) while the work sheets may not be perfect, they are enough for
him to explain his interpretations; (3) no one else ought to even
try to interpret his work from the sheets (instead they ought to
take samples, rerun the experiments, and do their own analysis);
and (4) in light of the critiques and the damage to the
originals, he redid the critical ones and they confirm his
interpretations. Tr. 3926-28, 4040-62, 4072-73, 4075-76. We
note that Dr. Imanishi-Kari has long suggested that her
questioned experiments could be rerun by investigators to verify
that the reported results would be obtained and that she would be
willing to let somebody with doubts spend time in her laboratory
to repeat those experiments. See, e.g., Ex. H245, at 6. As far
as we know, this proposal has never been accepted. 29/ However,
the point here is not that the ink analysis results were
incorrect or that they were not verified by the later retests,
but that after-the-fact interpretations of another scientist's
records are fraught with difficulty.

For example, although the Secret Service examiners knew at the
time of their work that they were preparing for litigation
(unlike Dr. Imanishi-Kari), their records contain alterations of
the results or omissions of important information. 30/ Tr. 4065-
72, 4075-76, 4081 (Stewart); Exs. H530, H531, H533. 31/ Just as
Dr. Imanishi-Kari has testified that she is able at times to
interpret notations that do not suffice for others to reconstruct
what she did or how she exercised her judgment in constructing an
experiment, the examiners testified that they could reconstruct
what they did even from incomplete records, and they made choices
that shaped their experimental design. 32/ These observations do
not impugn the validity of the Secret Service findings. We do
not infer that the document examiners were incorrect in
explaining what they did, or in their exercise of judgment in
their tests. Overall, however, these observations highlight the
fact that records of scientific experimentation may be opaque to
anyone but the researcher and, after a number of years, even the
researcher may be forced to reconstruct what the researcher
"must" have done or meant. The point is that any scientist may
sometimes have to extrapolate from incomplete records to
reconstruct how work was done and may continually have to
interpose judgment in deciding what tests to run and how to
interpret them, and we must allow for that in reviewing
Dr. Imanishi-Kari's retrospective explanations of results that
may not be clear on their face.

We recognize that some of the apparent anomalies and
peculiarities in the records may be impossible to explain fully
at this point. No hearing process can absolutely rule out the
possibility of improper action. Here, in order to sustain the
charges against Dr. Imanishi-Kari, we would have had to find that
ORI proved that it was more likely than not that intentional
fabrication or falsification by Dr. Imanishi-Kari was the cause
of the anomalies, and we do not find that ORI met this burden as
to any of the charges.


A. Statistical analyses

ORI presented statistical analyses and testimony to support its
charges of fabrication with respect to certain sets of data (none
of which was used in the Cell paper). We note that the issue
before us is not whether statistical analyses provide a useful
tool for examining allegedly fabricated or falsified data.
Rather, the question is whether, in this particular case, ORI's
statistical analyses provide reliable evidence of fabrication or
falsification. We address ORI's analyses of specific data sets
in the context of individual misconduct charges below. Here, we
state general conclusions about why we do not find the
statistical analyses to be reliable, credible, probative evidence
of fabrication of data.

Based on our examination of the record as a whole (including our
own observations of the data), we conclude that--

 ORI did not establish that (even assuming the statistical
analyses are valid) it is reasonable to infer from that
evidence alone that the analyzed data are fabricated. While
ORI's statistical expert who did the analyses, Dr. Mosimann,
gave his opinion that the data were fabricated, we find that
his testimony was on the whole less persuasive than the
conflicting evidence. Contrary to what ORI argued, he did
not have more relevant experience than Dr. Imanishi-Kari's
expert (Dr. Speed), who testified without compensation and
who presented an opposing opinion. 33/ ORI's other
statistical expert (Dr. Barron, a colleague of Dr. Mosimann)
testified that he would not reach a conclusion of
fabrication based on the statistical analyses alone;
instead, Dr. Barron stated that "if the scientific and
forensic evidence is compelling, the statistical evidence
[in the ORI Report] . . . corroborates it." Tr. 5019
(emphasis added).

 Statistical results are valid only if based on valid
assumptions. ORI did not establish through a preponderance
of the reliable, credible evidence that the assumptions
underlying its statistical analyses were valid. The
testimony about the assumptions presented by ORI was
conclusory, and generally had no basis in any independent
empirical analyses nor any clear theoretical foundation.
The assumption underlying part of the analyses was an
expectation of randomness in fusion experiments, but Dr.
Mosimann's previous published work indicates that this
assumption is valid only if certain experimental conditions
are met. Unrebutted testimony indicates that it would be
unreasonable to expect these conditions to apply.
Generally, ORI did not relate its assumptions to the
specific experiments at issue. ORI relied primarily on
analyses of unquestioned data sets as "controls" to
establish its underlying assumptions, but this reliance is
misplaced, for reasons explained below. In particular, we
find that the unquestioned data sets are simply not
comparable to the data sets that ORI questioned.

 Even if we were to find that ORI's underlying assumptions
were valid, we would not find ORI's statistical evidence on
the whole to be persuasive. ORI did not establish that all
of the techniques it used were commonly accepted statistical
techniques, nor that it applied those techniques in an
appropriate way. Dr. Imanishi-Kari established that other
reasonable statistical analyses could lead to different
conclusions.

The fact that ORI's statistical calculations were in most
respects accurate is irrelevant in light of ORI's failure to
establish either the validity of its methodologies as applied
here or the reasonableness of the inferences it drew from its
analyses. Similarly irrelevant in a contested case such as this
is the fact that statistical analyses were used in investigations
that resulted in uncontested findings of misconduct.
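
For readers unfamiliar with analyses of the kind at issue, the
general approach can be sketched in illustrative terms. One common
technique compares the observed distribution of some feature of
recorded numbers, such as their insignificant terminal digits,
against the distribution expected under an assumed model. The
Python sketch below is ours, not ORI's actual procedure; the
function name, the simulated data, and the assumption that terminal
digits of raw counter readings should be uniform are all
illustrative assumptions.

```python
import random
from collections import Counter

def terminal_digit_chisq(counts):
    """Chi-square statistic for uniformity of the last digits of a
    list of integer counts.  Under the (contested) assumption that
    the insignificant final digits of raw counter readings are
    uniform on 0-9, the statistic is approximately chi-square
    distributed with 9 degrees of freedom."""
    digits = [abs(c) % 10 for c in counts]
    expected = len(digits) / 10.0          # equal share per digit
    freq = Counter(digits)
    return sum((freq.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Simulated counter readings: large random counts whose terminal
# digits should be roughly uniform if the assumed model holds.
random.seed(0)
simulated = [random.randrange(1000, 100000) for _ in range(500)]
print(terminal_digit_chisq(simulated))
# For 9 degrees of freedom, the 5% critical value is about 16.9.
```

As the discussion above emphasizes, such a statistic is only as
probative as its underlying assumption: if experimental conditions
make the digits non-uniform for innocent reasons, a large value
does not indicate fabrication.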

B. Document examination methods

The forensic examinations conducted by the Secret Service on the
data in the I-1 notebook were presented by ORI to support many of
its charges. We address ORI's analyses of specific pages from
the I-1 notebook in the context of individual misconduct charges
below. In this section, we discuss the nature of the document
examination tests undertaken, the limitations of those tests
generally, and the validity of the underlying assumptions.
Finally, we state general conclusions about why we do not rely on
the document examination findings as definitive or reliable
evidence of fabrication of data in this case.

Overall, the Panel found the Secret Service document examiners
who testified at the hearing, Mr. Stewart and Mr. Hargett,
professional in demeanor and informative as to the techniques
they used. We generally relied on the factual information which
they presented concerning the results of their experimental tests
of the documents and the nature of the analyses performed. The
examiners made clear that this case was unique in their
experience, involving hundreds of hours of work over a number of
years. Tr. 3559-60, 3983. 34/ There was no evidence that any of
their previous cases involved reviewing laboratory notebooks of
scientists. 35/ This case was unlike many challenges to the
authenticity of standardized documents (such as birth
certificates or currency), in that the counter tapes and other
items used to create the I-1 notebook (pads, pens, notebook
pages) were authentic materials available in the laboratory.
Instead, the issues raised about the documents went to when they
were produced. The forensic document examination evidence is
thus limited in focus to questioning the time at which some of
the pages in the I-1 notebooks were created (and in a few cases,
the time at which the counter tapes themselves were generated).
36/ Further, the Secret Service examiners did not testify that
any individual test conclusively dated any data or pages. Mr.
Stewart emphasized instead that, while none of the results of any
one test alone might cause him to find fabrication or
falsification, the convergence of the results demonstrated that
the conclusions were supported. Tr. 3931. While a series of
independently meaningful methodologies yielding mutually
supportive results may be reinforcing, it is not necessarily the
case that a series of tests, each admittedly insignificant on its
own, can be added together to claim significance.
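
The distinction can be made concrete. Statistics does provide
formal methods for combining several individually non-significant
tests into one overall result, but only under explicit assumptions:
independent tests, each summarized by a genuine p-value. The sketch
below is illustrative only (it shows Fisher's combined probability
method, not anything the Secret Service used or could have used on
qualitative observations).

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method for combining k independent p-values.
    The statistic -2 * sum(ln p) is chi-square distributed with
    2k degrees of freedom; because 2k is even, the survival
    function has the closed form
    exp(-x/2) * sum_{i<k} (x/2)**i / i!."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

# Three independent tests, none significant at the 0.05 level on
# its own, can combine to a significant overall result:
print(fisher_combined_pvalue([0.08, 0.10, 0.09]))  # about 0.025
```

The point is not that the document examination results could have
been combined this way; it is that a valid combination requires
quantified, independent results, whereas an informal impression of
"convergence" among tests that are each admittedly inconclusive
supplies neither ingredient.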

The Panel found the Secret Service's conclusions to be
unpersuasive because:

(1) they depended on the erroneous assumption that the I-1
notebook was created as a contemporaneous, chronological
record of experiments as they occurred,

(2) they were based on comparisons with other laboratory
notebooks that did not establish a standard or norm against
which the I-1 notebook could fairly be measured,

(3) they were reached in a vacuum of information about the
kind of experiments done in Dr. Imanishi-Kari's laboratory
and the substantive nature of her scientific work which
influenced the way her data were organized and presented,

(4) each of the analyses on which they were based had
inherent limitations (as the examiners themselves testified)
that undercut their reliability for the purpose of
supporting the conclusions which ORI reached. 37/

We discuss each of these reasons in more detail below.

The data in the I-1 notebook were organized over a number of
months and not compiled into notebook form until much later.
The I-1 notebook did not purport to be put together in this
form at the time of the experiments.

Since the question of time is central to the document
examination, the Secret Service's conclusions were especially
dependent on an understanding of when the I-1 notebook pages were
"purportedly" created. Mr. Hargett testified that the
understanding on which he based his work was as follows:

[T]he I-1 notebook was supposed to have been created in
1984, contemporaneously with experiments that were being
conducted during that period of time. And that the
notations appearing in this notebook, and the results of
experiments being run, were the results of testing that was
being conducted at that period of time.

Tr. 3499-3500. Further, he understood that the notebook was
compiled chronologically in the sequence in which the experiments
were done, and that dates on pages purport to be "within a fair
amount of time" of the dates on which the experiments were
conducted. Tr. 3500. Consequently, in examining the notebook,
the examiners would have considered as notable any deviation from
these expectations.

The examiners' expectations appear to have been based on their
reading of a letter from a former counsel for Dr. Imanishi-Kari
responding to a request from the Subcommittee to indicate if the
laboratory notebooks were "created contemporaneously with the
scientific experiments" and "prepared prior to the submission of
the Cell paper." Ex. H267, at 2. The March 1989 letter confirms
that this assumption is correct, and on that basis the examiners
expected all the pages to be "contemporaneous." However, the
author goes on to state that "[i]n this context" he has "some
difficulty interpreting the word 'contemporaneous.'" Id. He
then explains that:

Most of the original data presented in the Cell paper
are radio immuno assays . . . . Construing the word
contemporaneous strictly, the only entries in the Notebooks
created contemporaneously with the scientific experiments
are the gamma counter printouts . . . and the x-rays. All
hand written entries were made after the conclusion of a
particular experiment. Construing the word contemporaneous
in the broader sense, it is fair to say that all of the data
presented in the Cell paper were assembled contemporaneously
with the scientific experiments and placed in the Notebooks
in a timely fashion.

Id. (emphasis added). He also stated that the "data presented in
the Cell paper and recorded in the Notebooks were prepared prior
to the submission of the Cell paper." Id. at 3. He noted that
much of the data in the notebooks were not presented in the Cell
paper, being confirmatory or ancillary work, and that other
notations were entered after the paper was submitted. He
highlighted "in particular" some of the later notations. Id. In
addition, he pointed out that not all data are normally preserved
in a laboratory notebook. Id.

The language of the letter is ambiguous but was apparently
interpreted by the Secret Service examiners to mean that the
notebook existed in essentially its present form before the Cell
paper was submitted. The lawyer appears to have been focusing
mainly on the point that Dr. Imanishi-Kari did not create data
retrospectively to justify her reported findings but rather used
data created at the time by the gamma counters. However, the
reference to "data" having been "assembled" contemporaneously "in
the broader sense" can also be interpreted as consistent with
collecting the counter tapes into files, without necessarily
implying that they were immediately pasted onto pages of a
notebook. Also, the letter does not say that everything in the
notebooks except the listed notations was put there before the
Cell paper was submitted, only that the data published in the
paper were assembled prior to that.

Dr. Imanishi-Kari testified about her practices in collecting and
assembling data, about the source and meaning of dates in her
data records, and about the creation of the I-1 notebook. Her
testimony was corroborated by the observations of other witnesses
and supported by our review of the physical evidence. The
language of the lawyer's letter, while not offering a clear
explanation of these details (the significance of which may not
have been evident at the time), is consistent with her
descriptions, as are her statements earlier in this matter. We
describe this process below and explain why we find it generally
credible that the I-1 notebook was the product of this kind of
process (and hence never intended to be taken as
"contemporaneous" in the narrow sense in which the Secret Service
understood it).

At the time of the experiments later used in the Cell paper,
Dr. Imanishi-Kari kept a brown spiral notebook (of a kind common
at MIT) as an ongoing record of protocols and information such as
dates of certain fusions and assays. Tr. 5132-35, 5144, 5177-84,
5256-57 (Imanishi-Kari), 1699 (Woodland). As a rule, when an
assay was run and counter tapes were generated, she noted the
reagents or materials used in the experiment on the counter tape
itself and the tapes were either retained in loose form around
her office or placed in manila folders. 38/ Tr. 5123-24, 5131.
She did not routinely date the counter tapes, indicating that she
put dates on them only infrequently, but was able to identify
them based on the reagents and samples tested. 39/ Tr. 5122-23,
5132, 5193, 5256.

She would further organize the data at various times, often
substantially removed from the date when they were generated. 40/
The data could be entered on a page either by pasting in the
entire tape, or by cutting the tape into strips containing only
the counts per minute column, or by copying the information on
counts per minute by hand. 41/ Tr. 5080, 5124-25, 5128-29.
These organizational efforts were sometimes undertaken when she
was preparing a paper or grant or needed the data for a specific
purpose; at other times simply when she "couldn't stand the mess"
in her office or did a general "spring cleaning." R. Br. 4; Tr.
5124-28. The latter included organizing data when Dr. Reis left
the laboratory in January 1986 and when Dr. Imanishi-Kari herself
moved to Tufts in July 1986. R. Br. at 4; Tr. 5126-27
(Imanishi-Kari), 2545 (Reis).

A significant aspect of Dr. Imanishi-Kari's work was that
frequently data from different assays on the same or related
samples had to be compared. Such data might be pasted in strips
on a single page, cut in pieces, or handwritten in order to line
up related results. As a result, experiments from very different
dates might be reflected on the same pages. Tr. 5075-81, 5106-
07, 5127. Furthermore, it is clear from examining I-1 (and other
notebooks) that the paper was drawn from many sources including
writing on looseleaf sheets, removing paper from pads glued at
the top, and tearing pages out of one or more spiral notebooks,
so that one set of assays is not necessarily all organized on the
same kind of paper. See, e.g., I-1:83-87. Even when the data
were on pages with notations (sometimes on groups of pages which
were numbered consecutively), the pages were not necessarily
placed in a single notebook form.

In general, Dr. Imanishi-Kari's dating appears to have been
erratic. 42/ She attributed little importance to the dates on
which particular assays were run (as opposed to mouse screening
for which accuracy in dating had significance). Tr. 5123-24,
5151-55. The dates she wrote down when she organized the tapes
were estimates, sometimes retrospective, and might refer to any
of several points in experiments (which might extend over a
considerable time period, as, for example, when a fusion was
performed, cells were grown up, supernatants were recovered, and
several assays were run). See Ex. H103, at 70-71. At best, the
dates might generally differentiate sets of assays, but they do
not purport to specify the day a particular counter tape was
generated or that a page was composed. Ex. H286, at 6. She
relied at times on the dates recorded in protocols in her spiral
notebook in making or correcting a "guesstimate" date for
experimental results. Tr. 5222-23, 5256-57, 5263-64.

Dr. Baltimore testified that he knew of Dr. Imanishi-Kari's data
collection and recordkeeping practices at the time of their
collaboration and often discussed the ongoing work in her office
and reviewed her data. Tr. 2071. He described this as follows:

She had folders with notes in them. She had notes on her
desk, in her drawers. She had lots of computer print-outs -
- not computer, teletype print-outs. . . . which described
the data from a whole variety of experiments. Some of
them were yellowing on her window sill. They were all
over. . . . It was quite remarkable that in that seeming
chaos when a pointed question came and I said let's look at
the data and see what the numbers look like and try plotting
them out, whatever. As fast as if she had a notebook in
front of her she could find the relevant material and we
could go over them and there was never an issue. I might
say that that is not unique in my experience. I've seen
other people who can do that. But it's a rare talent.

Tr. 2071-72; see also Tr. 4578 (Igras), 2374-75, 2404-05
(Weaver), 2565 (Albanese), 1855-56 (Boersch-Supan). The uniform
observation that Dr. Imanishi-Kari did not promptly record or
organize data into notebook form, but rather collected raw data
in loose piles or manila folders, is uncontradicted by the two
ORI witnesses who worked in the laboratory with her--Dr. O'Toole
and Dr. Maplethorpe. Not one witness testified to observing
Dr. Imanishi-Kari entering data into a looseleaf notebook (like
I-1) while recording her experiments.

It is undisputed that the compilation of the loose pages into
notebook form was undertaken prior to their being provided to NIH
in July 1988, following a subpoena from the Subcommittee. The
compilation was done at the suggestion of an attorney who advised
her to put her data together so that others could better
understand them and so nothing would be lost. Tr. 5142-43
(Imanishi-Kari); Exs. H103, at 51; H248; H249. Dr. Imanishi-Kari
testified that the attorney suggested that she organize her notes
into three-ring binders, like those utilized by Dr. O'Toole and
Dr. Reis, after observing that Dr. Imanishi-Kari's notes included
loose pages and pieces of counter tape maintained in manila
folders. Dr. Imanishi-Kari testified that, following his advice,
she did not make changes or add information to the pages at that
point, other than reattaching tapes that came off while making
copies. Tr. 5143-44, 5149.

At that point, Dr. Imanishi-Kari disassembled her spiral notebook
and dispersed the contents through I-1 in an effort to connect
the protocols and summaries in the spiral notebook with the raw
data from related experiments. Tr. 5142-47. In the process,
experiments were grouped together topically, which resulted in a
rough progression of the research but not a strict chronological
presentation. Id. 43/

It is not believable that Dr. Imanishi-Kari intended to convey to
NIH and the Subcommittee a false representation that she placed
her data in a bound notebook in final form concurrently with the
dates on which she conducted experiments, when she knew that
virtually anyone who had set foot in her laboratory would be able
to contradict that assertion. Also, many people, including Dr.
Maplethorpe who plainly had no interest in lying to benefit her,
had seen Dr. Imanishi-Kari's spiral notebook in its prior form.
Tr. 948 (O'Toole). In addition, pages submitted to NIH in xerox
form before the date when Dr. Imanishi-Kari testified that she
interleaved the spiral notebook with other loose pages to create
the I-1 notebook show those pages with the spiral binding intact.
See, e.g., Ex. H250. It is unlikely that she would submit data
to NIH in one form and then only a few months later intend to
mislead NIH and the Subcommittee into believing that they had
always been in another form.

ORI acknowledged in its proposed FFCLs that the I-1 notebook "may
have been assembled well after an experiment was conducted, or
certain notebook pages could have been transcribed from scraps of
paper or other fragmentary sources well after the experiment was
carried out." ORI FFCL XII.E.2. ORI continued to deny that this
process could explain all the dating mistakes because five pages
(I-1:30, 41, 42, 43, and 113) were shown by a "confluence" of
techniques to have been produced on "the same pad, with the same
ink," and with impressions (from indentation analysis) showing
they were all created in 1986 but dated in 1984. However, this
assertion misrepresents the significance of the forensic
findings, since, as we discuss below, the techniques relied on
are not able to identify a single pad or pen as the source of
these documents or to ascertain with certainty the order in which
these pages were created. Even were the pages written in 1986,
this would not eliminate the possibility that the data were
collected in 1984.

ORI also argued that Dr. Imanishi-Kari's explanations were
inconsistent with earlier descriptions of experiments by
Dr. Imanishi-Kari which refer to the dates which appear on pages
in the I-1 and R-1 notebooks, allegedly implying that at that
point Dr. Imanishi-Kari took the dates on the pages more
seriously. ORI Reply Br. 3, n.4, referencing Ex. H245 (Letter
from Dr. Imanishi-Kari to Dr. Bick in 1988). However, this is
not inconsistent with Dr. Imanishi-Kari's practice of using the
dates loosely to identify and distinguish experiments without
placing much importance on the specific day on which a particular
sample was run through the counter. Also, since it appears that
the page numbers were put on later, referring to dates may have
been the only way to distinguish similar experiments in the
notebook in her letter to Dr. Bick. Thus, the letter does not
support ORI's inference that Dr. Imanishi-Kari's description of
her practices in record-keeping and data organization changed
after she learned of the Secret Service's findings. 44/

Notebooks of other MIT researchers do not establish a
standard by which the I-1 notebook can be judged fairly.

The Secret Service examiners testified that, in evaluating the
authenticity of a document, it is critical that a "comparative
norm" be established against which to judge the "normalcy" of the
questioned document and to weigh the importance of any deviation.
Tr. 3613 (Stewart), 3229-30 (Hargett). The document examination
work began with only three laboratory notebooks, all belonging to
either Dr. Imanishi-Kari or Dr. Reis (currently in the record as
I-1, I-3 and R-1). Consequently, the examiners had nothing with
which to compare these notebooks except each other. Tr. 3215,
3227 (Hargett) (We "didn't know what researchers' laboratory
notebooks were to look like, quite frankly . . ." Tr. 3229
(Hargett)). The examiners requested that the Subcommittee obtain
additional examples of notebooks from MIT for the relevant time
period. The Secret Service then received 26 notebooks produced
by various MIT researchers and used these for comparison. 45/ Tr.
3617 (Stewart).

However, Mr. Stewart noted that establishing a comparative norm
is sometimes difficult and that "this case is an example of one
that was difficult to do." Tr. 3613. We strongly agree that
ascertaining any norm for a laboratory notebook based on the
assortment of submissions called "notebooks" in this case is
indeed difficult, if not wholly futile. Some submissions conform
to the mental image that the phrase "laboratory notebook"
generally evokes, that is, they are indeed bound notebooks with
numbered pages proceeding in evident chronological order, or at
least looseleaf notebooks in some semblance of order. However,
many others consist of simple manila folders with loose papers
and tapes and a few scribbled notes and with dates in no evident
order and years apart, or of pieces of partially-used pads. The
I-1 notebook (and others from Dr. Imanishi-Kari's laboratory)
seems to fall between these two extremes.

The problem of determining what range of practices in recording
laboratory data would be considered authentic as opposed to
outside the norm is dramatically different from the sort of
document authentication issues normally presented to the Secret
Service. The usual approach is simply to use as a standard a
document about which no questions of authenticity are raised.

A standard could be handwriting of an individual, a standard
could also be a genuine currency, it could be a genuine
document of some type, perhaps a birth certificate, it could
be a genuine credit card.

Tr. 3556 (Hargett). When asked how to determine a comparative
norm in a situation where "there might be many kinds of documents
that would be considered genuine," Mr. Hargett testified as
follows:

Let me take it back to a typical case that we would work,
let's say. Let's say it is a false identification case.
Where we have to determine whether or not certain
identification is genuine. It's important for us to go back
to the manufacturer of that genuine document, and not only
we receive that genuine document, but also receive any
changes that have been made to that document, because we
don't know, for sure, what variation of that genuine
document we may be comparing against the questioned document.

Tr. 3560 (Hargett). In this case, the examiners ultimately
relied on representations made to them that the other notebooks
which they received as "samples" from other MIT researchers were
"typical" and "unquestioned." Tr. 3562 (Hargett). However, the
one fact which emerges most clearly from a review of the
"notebooks" in the record is that researchers here employed all
sorts of idiosyncratic methods to record data. While the other
researchers' work may well be unquestioned, in that no one has
suggested any reason to question them, we have no basis to
presume, for example, that dates in other notebooks more
accurately represented dates on which those experiments were
conducted. 46/ Certainly, none of the notebooks can be described
as "typical" of any researcher's products and, to a large degree,
there simply cannot be said to be any "norm." Cf. Tr. 3863
(Stewart) (no one form became the norm).

The initial use made of the comparison notebooks was in relation
to "trends" observed in the I-1 notebook. Some of the
observations were that counter tapes were cut into small pieces
and moved from one location to another, that changes were made to
dates, reagents, or other notations, and that some pages
contained writing from more than one color of pen or counter
tapes that differed from each other in color or intensity of ink.
Tr. 3603-04 (Stewart), 3228-29 (Hargett). Some of the same
problems were found in the R-1 notebook, which led the Secret
Service to disregard it as a "norm." 47/ Tr. 3614 (Stewart).
Yet most of the same observations also could be made about other
researchers' notebooks to a greater or lesser extent. 48/ Tr.
3863 (Stewart).

Finally, it is not evident that a norm based on notebooks from
MIT in the mid-1980's is relevant to evaluating the legitimacy of
Dr. Imanishi-Kari's methods of data collection and recordkeeping.
Dr. Imanishi-Kari's practices for recording data would not
necessarily have been shaped by MIT laboratory culture, since she
had already worked for a number of years in laboratories in
Europe and elsewhere which may have had different standards. No
evidence was presented that she received any instructions from
MIT regarding acceptable methods of recording or preserving data.
Therefore, we do not find evidence of a standard for laboratory
notebooks that would apply to expectations for her records.

The Panel concludes that the Secret Service did not have a basis
to establish a comparative norm against which to evaluate
Dr. Imanishi-Kari's notebooks.

The other researchers' notebooks and counter tapes used by
the Secret Service are a substantially incomplete set of
those produced in the laboratories at the time.

The Secret Service examiners operated under the assumption,
admittedly not verified, that they had obtained most of the
notebooks generated during the relevant time in the MIT
laboratories involved. Tr. 3503-04 (Hargett). 49/ However, it
is clear on the record before us that not even the larger sample
of about 60 notebooks which were eventually obtained, and
considered at least to some extent in the document examination,
comes even close to constituting a complete record of the way data were
recorded then, since only those which were still available from
MIT in 1989 or 1990 were obtained. Tr. 3864-66 (Stewart). 50/
In fact, MIT Provost Deutch informed the Office of Scientific
Integrity (OSI, ORI's predecessor) in 1989 that "the general
practice in research groups which were likely to use [gamma
counters] . . . is for students and postdocs to take their
laboratory notebooks with them when they depart from MIT.
Accordingly, we do not have access to the universe of past
laboratory notebooks." Ex. H278, at 5. He further indicated
that it was not feasible for MIT to seek information from persons
no longer associated with MIT. Id.

The Secret Service concluded that its sample nevertheless
represented an adequate selection of the notebooks. ORI argued
that this was sufficient because the notebooks represented 12
different researchers and there was testimony that only 15
researchers "regularly used the three counters on the first
floor" laboratory, while others in the building used these
counters rarely. ORI Br. 64-65, citing Tr. 5107 (Imanishi-Kari),
4587, 4611 (Igras). However, this calculation misstates in
several ways the number of researchers whose notebooks might have
contained relevant counter tapes.

First, the testimony was that 15 people from the first floor were
regularly using the counter machines for similar types of
experiments at any one time. Tr. 4587 (Igras). The relevant
time period covered several years during which it is clear that
turnover occurred, in students and others who might work in the
laboratory for relatively short periods. The record indicates
that the cumulative number of researchers using the machines
during the period was considerably more than 15. Second, while
others in the building may have used the first floor counters
less often, that does not mean that they did not have access and
make some use of them so that their notebooks may have had some
of the relevant counter tapes. Third, no evidence was presented
that, for those researchers who were represented among the more
than 60 notebooks ultimately obtained, these were all of the
notebooks which they generated during the relevant time period.

The Panel concludes that the "notebooks" reviewed by the Secret
Service did not represent a complete universe of notebooks
created in MIT during the relevant period.

The Secret Service did not have any information about the
scientific significance of any of the materials they
reviewed and hence attributed unwarranted suspicion to
characteristics of the data for which there were scientific
explanations.

The Secret Service examiners made very clear that they conducted
all their examinations without inquiring into the substantive
science or laboratory techniques related to the data in the I-1
notebook. Tr. 3497-98 (Hargett), 3601-02 (Stewart). This
approach was intended to insulate the examiners from preconceptions
about the significance of particular experiments or pages, so that
they could conduct their tests objectively. 51/ However, without
interviewing the persons who
generated the data or compiled the notebooks, the examiners made
assumptions about what would be expected in the notebooks. They
operated in an almost complete vacuum of information about the
nature or meaning of the research notebooks and data.

The substantive purpose of the data was relevant to some aspects
of the organization of the I-1 notebook which the Secret Service
considered outside the norm. For example, the use of an
unusually large number of cut strips and small segments of tape
(and some dating problems) may result from a frequent need to
directly compare results of several large assays which could not
have been performed in a short time and for which the samples
could not easily be lined up without much cutting or hand
transcribing of numbers. Thus, the overall assessments of the
examiners about the oddity of the I-1 notebook are of more limited
significance.

The detachment of the Secret Service analysis from scientific
context affected the weight to be given their conclusions in
another way, as well. Many of the conclusions in the Secret
Service reports are couched in terms of whether certain pages or
data are "consistent with" or "most consistent with" having been
produced on particular dates. See, e.g., Exs. H501, at 2, Ex.
H502, at 3. The phrase "consistent with" is defined to apply
when "a definite determination is not justified because of a lack
of individuality, but the evidence points toward an event as
having happened with no evidence to the contrary." ORI
Report 191, n.150. The phrase "most consistent with" is defined
to apply when "more than two events are possible, but one is more
likely than any of the others." Id. The Secret Service
examiners lacked any information about evidence other than their
own forensic findings that was contrary to the conclusions they
reached or that showed that they were wrong about the relative
likelihood of two different events. Therefore, their
conclusions failed to take account of the larger picture.

Evidence presented on the existence or absence of matches
between certain counter tapes in the I-1 notebook and those
in other researchers' notebooks is of limited probative
value.

First, we emphasize that the document examination tests do not
raise any suggestion that the counter tapes in the I-1 notebook
were not genuine counter tapes produced by machines of the sort
available at MIT throughout the period and regularly used by
Dr. Imanishi-Kari and her laboratory. Instead, the questions
raised were as to whether they were likely to have been produced
at the particular dates indicated on the pages in the I-1
notebook where presently located.

In that regard, several aspects of the tapes were given
significance. The greatest attention was directed at green tapes
on pages dated in 1985 in the I-1 notebook. The tapes which the
examiners called "green" were distinguished by eye from the other
tapes which were mostly yellower in color, but the shading
difference between "green" and "yellow" was subtle and there were
variations in precise color. The examiners concluded that these
tapes were not produced at the "purported" time period because
(1) no notebooks other than I-1 had green counter tapes with
dates later than January 1984; (2) Dr. Imanishi-Kari's green
tapes matched tapes from much earlier time periods thought to be
produced on the same printers (in ink formula and the intensity
of the printing); and (3) Dr. Imanishi-Kari's green tapes did not
match other tapes from the purported time period. See Exs. H502,
at 3, H503; ORI Br. 89. We discuss next why each of those
findings had limited probative value. 52/

Even if no other notebooks now contain green tapes dated
after 1984, such tapes may still have been generated in 1985
and not be in the record before us.

After careful review we find that no clear dividing line has been
established to support an inference that no green tape could have
been produced in 1985. In order to derive such an inference from
the absence of such tapes in other researchers' notebooks after
January 1984, it would be essential to find that a virtually
complete set of the counter tapes generated in 1985 had been
examined. 53/ Here, on the contrary, as explained above, we have
no indication that laboratory notebooks examined represented the
majority (much less a complete set) of notebooks generated by
researchers in the laboratory at the relevant time. Even if
these were most of the notebooks, we have no indication that most
counter tapes generated at the time were ever retained or placed
into notebooks. 54/ MIT Provost Deutch informed OSI that
"individuals enter the results from gamma counter measurements in
their laboratory notebooks in different ways including cutting or
recopying tapes and it is unlikely that a complete record for any
instrument can be obtained." Ex. H278, at 5. We know from
examining extant notebooks that many researchers tended not to
preserve original tape data at all, but merely wrote down their
results (in particular, Dr. O'Toole). ORI itself noted "the
apparent practice of many researchers to destroy tapes when
finished with them." ORI Reply Br. 13. Nevertheless, ORI argued
in effect that "a fairly large sample of green tapes [is]
available." Id. A sample, even if it were fairly large, would
not suffice to eliminate the possibility that matching green
tapes were produced in 1985 in relatively small quantities and
hence were not among those that happened to be preserved by
anyone other than Dr. Imanishi-Kari. 55/ ORI asked us to
conclude that green tapes were no longer used in the laboratory
in 1985 based in part on information obtained by the Secret
Service around 1989 from a source for the tapes (Western Union)
to the effect that they stopped seeing greenish tinted tapes
"around 1983 and 1984." Tr. 3738-39 (Stewart). 56/ While ORI
concluded that this information was consistent with the green
tapes being phased out in 1983 and therefore unavailable in the
laboratory by 1985, it is equally consistent with the tapes being
available for purchase at least through 1984 and thereafter being
gradually used up by customers like MIT from their stocks. 57/
Such a pattern would also equally well accord with the undisputed
observation of the notebooks in the record, which show relatively
frequent use of green tape in 1981 through early 1983, followed
by sporadic appearances thereafter, ending in mid-1985. ORI
Reply Br. 13; R. Br. 30. 58/

We cannot even be certain that none of the other green tapes that
are in the record were from 1985. Undated green tapes appear in
two Maplethorpe "notebooks" (C-2 and C-5) which could be from
after January 1984. The C-5 notebook, which contains one of the
loose undated green counter tapes, has pages dated from 1984-85.
ORI argued that there is some evidence that these tapes are not
from 1985 because they were paper-clipped to other tapes dated
from February 1982-January 1984. ORI Reply Br. 13. However, we
do not know whether the paper clips were put on by Dr.
Maplethorpe or by someone else in the investigative process, and,
in any case, the grouping of tapes from three different years
together is hardly proof that an undated tape in the same
grouping could not be from a later date. 59/

We therefore decline to infer that green counter tapes dated
after January 1984 are necessarily misdated. We turn next to the
significance of finding full matches in earlier time frames for
particular green counter tapes.

The existence of "matching" tapes from 1981-1984 does not
prove that green tapes in the I-1 notebook were not produced
in 1985.

A "full" match in this context meant two green counter tapes with
matching printer type font and matching ribbon ink (formula and
intensity). There were difficulties with both the printer and
the ribbon aspects of a "match." Unless it were conclusively
established that two counter tapes were produced by the same
printer, it would be irrelevant whether they were produced by
ribbons using the same formula of ink or at the same point in the
cycle of intensity. We therefore consider whether the Secret
Service investigation was able to establish whether two counter
tapes were produced by the same printer.

The investigation of the counters and printers used by MIT
researchers at the relevant time period (roughly 1981-1986)
depended largely on information provided by MIT (indicating that
six counters were on site during some part of that period) and a
visit to view the location of those that were still in operation
in 1989. Ex. H278; Tr. 3726-3734 (Stewart). Of the six in use
at the relevant time, it is not disputed that three were on the
first floor, where Dr. Imanishi-Kari's laboratory (among others)
was located in 1984-1985: a Beckman gamma counter, a Packard
gamma counter and a Packard beta counter. Each counter came with
a printer with which it was normally associated
thereafter. 60/ Tr. 4571-72 (Igras). The printers involved here
were teletype-style impact printers using ribbons like those in
old typewriters.

The counter tapes produced by different machines could be
distinguished by several characteristics, some of which were
attributable to the counter itself and some of which were
attributable to the printer being used. See Ex. R67; Tr. 4246-53
(Richards), 4586-98 (Igras). The counter determined the format
of the information on the tape. Id. For example, only the
Beckman gamma counter periodically inserted a register number on
the tapes which it produced, and its formatting produced
printouts with two-place decimal points and double line spacing
between counts. Tr. 4585-86, 4593-96 (Igras); Ex. R67. The
Packard gamma counter format was single-spaced with no decimal
points. 61/ The Packard beta counter format was also single-spaced
with no decimal points. Tr. 4586 (Igras); Ex. R67. The teletype
printers functioned as "dumb terminals" in regard to the
formatting of the printout. ORI FFCL IV.D.9 and cites therein.

The teletype printers were all the same model but they differed
in the "font" of the letters and numbers which they produced
(which was determined by the printer not by the information sent
to it by the counter to which it was connected). Tr. 4028
(Stewart), 4251-54, 4438 (Richards), 4571, 4586, 4593 (Igras).
The printers normally associated with the Beckman gamma and
Packard beta counters had fonts that were "virtually identical."
ORI FFCL IV.D.7, and cites therein. It is thus clear that the
same font can be found on more than one printer. The printer
normally associated with the Packard gamma counter had a
different font, most easily distinguished by a slash through the
zero which does not appear in tapes produced by the other
printers identified here. Tr. 4597-98 (Igras), 3731 (Stewart);
Ex. R67.

Since there are tapes in the I-1 notebook exhibiting the font
characteristics of both kinds of printers, the examiners could
safely conclude that at least two different printers were used to
generate tapes in the I-1 notebook. See Tr. 3727 (Stewart). In
addition, there were tapes exhibiting the format characteristics
of both of the gamma counters from the first floor laboratory
(along with the font characteristics of the printers usually
associated with those counters). Since Dr. Imanishi-Kari agreed
that she predominantly used the two gamma counters on her floor
in conducting the research contained in the I-1 notebook, it was
reasonable to infer that the source of each of those tapes could
be traced to the counter with the matching format and either the
usual printer or one with the same font attached to that counter.
Cf. Tr. 5097-98 (Imanishi-Kari). However, this evidence does not
establish either that all of those tapes were generated with only
two printers or that all of the tapes in the notebooks of other
researchers with matching format or font characteristics were
generated on the same counters or printers used in the I-1
notebook. 62/ See, e.g., Tr. 4552-53, 4557-58 (Verret) (if a
printer hooked to a counter broke down he would use a counter in
another department at MIT, most often in another building).

ORI disputed the possibility that matching tapes with identical
fonts could represent different printers as a result of switching
printers between counters or substituting back-up printers with
matching fonts during downtime. First, ORI relied on testimony
of Ms. Igras for the proposition that printer substitution
occurred only once while she was at MIT. ORI Br. 91. However,
it is clear in context that she testified more narrowly that she
was aware of only one complete replacement of a printer that
could not be repaired on site. Tr. 4623-24. She did not testify
that temporary or short-term use of a back-up during the much
more usual in-house printer repairs did not occur at other times.
63/ Mr. DeHaro (the building manager at the Cancer Center for 22
years) testified that such temporary changes did occur. 64/
Second, ORI's claims that printer breakdown was "a fairly rare
event" and "printer substitution even rarer" misstates the
record. ORI Br. 91. Ms. Igras did not testify that printers
broke down only two to four times a year, but rather that was how
often "you would get probably . . . outside service calls." Tr.
4573. On other occasions, the problems could be repaired in-
house. Id. Mr. DeHaro testified that he switched printers as
needed. Tr. 4448, 4469. He testified that printers broke
"pretty frequently" and that "three, four . . . times a year" he
would plug in another one that did work without keeping any
record of such substitutions. Tr. 4451. Further, Mr. Stewart
testified that he was told by Beckman representatives that it was
very easy to move printers between counters and that the printer
"broke down a lot" and "had to be repaired quite often." Tr.
3914-15. In addition, Mr. Stewart testified that he knew from
"talking to them at MIT that many times they made multiple calls
on repairing instruments, and that's why they had a stash of
extra machines to use in case these machines went down." Tr.
4103. ORI's position is also undercut by ORI's own argument that
printers were easily switched between counters without any
formality or record. 65/ In light of the testimony that printer
repairs and substitutions did not necessarily involve requests
for outside service or any formal recordkeeping, we find that the
service records for the counters and printers provided by MIT
represent only a part of the repair history, at least for the
printers which were "primitive machines" that could be worked on
by the users. Tr. 4573 (Igras). In addition, MIT indicated that
the service records were only those available as of fall 1989 and
MIT did not represent that they constituted a complete record.
Ex. H278.

Even assuming all of the I-1 notebook's tapes were generated by
the two counters on the first floor and that those counters were
using their normal printers to produce all those tapes, ORI did
not establish that counter tapes found in the comparison
notebooks were necessarily produced on the same machines, even if
the format and font on tapes were the same. MIT Provost Deutch
warned that, "in order to make an accurate association between a
specific tape and a specific machine, the individual student or
postdoc would have to remember which instrument was used for a
particular run." Ex. H278, at 5. 66/ There is no indication of
any researcher who was able to inform ORI or the Secret Service
of which counter and printer was used to produce a particular
tape. No log books were kept of the usage of the counters and
printers. Id. at 5. Rather, the examiners extrapolated from
matching formats or fonts that the source machines must have been
the same.

ORI argued that even if two different printers were used that had
the same font, the conclusion that the I-1 tapes matched much
older tapes "does not depend on the font in isolation, but rather
on the concurrence of several variables, including paper and
ink." ORI Br. 70. We discuss the findings on ink next ("paper"
means only that the tapes all showed the green tint), but it is
plain that the Secret Service findings connecting the I-1 green
tapes to earlier dates depended on their having been produced on
the same printer -- the use of the same formula of ink in a
ribbon used on another printer at another time is meaningless.

In fact, ORI recognized that the findings of "full match" mean
little and in post-hearing briefing put most weight on the
absence of matching tapes in other notebooks from the particular
time periods during which Dr. Imanishi-Kari's green tapes were
purportedly generated. Thus, ORI argued regarding one set of
green tapes, as follows:

We note, as an initial matter, that the match itself is not
particularly critical: the critical fact is that the 1985
green tapes do not match other 1985 tapes from the same
counter that were generated in the same time period as the
June subcloning tapes. This fact alone provides substantial
evidence of fabrication and falsification.

ORI Br. at 69. We therefore turn next to the findings that
particular tapes from similar time frames do not match in printer
ribbon ink or intensity.

ORI did not prove that the absence of "contemporaneous
matches" to certain questioned Dr. Imanishi-Kari green tapes
implies that the questioned tapes were falsified as to date.

The Secret Service examiners tested yellow counter tapes with the
same format in another notebook dated in a narrow time frame
around the contested green tapes to determine if those that
appeared to have been generated by the same counter as the
contested green tapes had the same ink formula and similar
intensity of ink. Based on not finding matches in this effort,
the examiners concluded that the questioned tapes were not
authentic as to date. This conclusion depended on the tapes
being compared having been produced on the same counter and
printer, since it would be meaningless to merely show that other
counters or printers in use at the same time produced tapes that
were different from Dr. Imanishi-Kari's. However, the examiners
were not able to establish this predicate.

First, two tapes with the same format could be assumed to come
from the same counter only if they were both generated in the
first floor laboratory of the Cancer Center. While
Dr. Imanishi-Kari testified that she used the first floor
counters predominantly in the work contained in the I-1 notebook,
we have no assurance that tapes in other notebooks were generated
on first floor counters. Therefore, the occurrence of the same
format on a tape in another notebook as that on a tape in I-1 is
not a guarantee that the two tapes were produced on the same
counter.

Further, even assuming the matching formats implied that the same
counter was used for both tapes, this would not establish that
they were produced on the same printer. As discussed above,
printer fonts were not unique and substitution was not unusual.

In addition, all the difficulties discussed in regard to
assessing the significance of not finding green tapes after
January 1984 also apply to assessing the significance of not
finding contemporaneous tapes from particular dates that match in
tape color or ribbon ink. Matching tapes may have existed that
were not preserved or not included in this record. Without
confidence that the record would include any matching tapes if
some were generated during the same time frame, we cannot
reasonably infer from the Secret Service findings that the
challenged tapes were not generated at the time represented.

The mere use of yellow counter tape on dates close to those on
which Dr. Imanishi-Kari's green tapes were purportedly generated
is meaningless. The testimony established that the counter tape
paper rolls could be changed at any time by almost anyone.
There was uncontradicted testimony that partial rolls were
sometimes removed before a long assay to ensure that paper would
not run out and then the leftover old rolls were used at a later
date. Tr. 5101 (Imanishi-Kari). Therefore, the appearance of
one color roll of tape on a certain printer would not necessarily
mean that it would be reasonable to expect that printer to
continue to generate tape of that color for the length of an
entire roll.

In relation to the ink formula tests, Mr. Stewart testified that
the Secret Service attempted unsuccessfully to pinpoint the
dating of the ink used on the printer ribbons. Tr. 4010. 67/ In
addition, the tests used by the Secret Service were not able to
identify whether print was produced by the same ribbon, but only
whether the ribbons used contained the same formula of ink which
might have been used on hundreds of thousands of ribbons. Tr.
4109-10 (Stewart). 68/ The tests could establish if ink samples
taken from two ribbons did not contain the same formula of ink,
and it was clear that at least two formulas of ink were used in
ribbons to produce the tested counter tapes. 69/ See Tr. 3759-61
(Stewart). It was likely that MIT purchased the ribbons in large
quantities which might all have contained matching ink formulas
or might not have. Tr. 4108-09 (Stewart). There was no evidence
that MIT researchers used up each batch of ribbons before using
ribbons from another shipment (which might have supported an
assumption that matching ink formula indicated a similar time
frame if the formula in each batch or even each ribbon were
always the same, which was not demonstrated). Tr. 4109
(Stewart). The dry ribbons could be stored and used after many
years. Tr. 4111 (Stewart). ORI did not establish that tapes
from surrounding dates matched green tapes in the White-Scharf
and Maplethorpe notebooks. Exs. H30, H31, H75. 70/ Thus, no
inference may reasonably be drawn from the observation that tapes
from different time frames had matching ink formulas or that
tapes from similar time frames had different ink formulas.

In relation to ink intensity, the Secret Service examiners
observed that in general the trend was that the intensity of ink
color over consecutively dated tapes showed a gradual decline in
intensity as the ribbon wore out and then a sharp change to dark
intensity when it was replaced, on a cycle of about 90 days. Tr.
3646-48, 3715-16, 4004 (Stewart). Mr. Stewart also noted that
ribbons that were worn might be reversed or raised laterally to
reach a clean area, rather than replaced, but assumed that this
would not affect the continuity of ink intensity, since it would
represent an abrupt return to very dark followed by a gradual
lightening, just as with a replacement. Tr. 3735-36. However,
Ms. Igras testified to another possible response to fading ink
intensity. She stated that sometimes "instead of replacing it,
people would rewind it" in order to "get their printout." Tr.
4580. The examiners did not indicate that they tested for the
possibility that such rewinding would result in print that was
more legible than print from the end of a ribbon but not as dark as
a clean section would be. There was testimony that ribbons were
easily changed and sometimes switched between printers (which
would result in an abrupt change in intensity of the output of a
printer without a return to a "new" dark stage). Tr. 5102-03
(Imanishi-Kari). Some inference might be drawn from a break in
the cycle of ink intensity if it were established that tapes came
from the same printer while using the same ribbon. Since ORI did
not establish those prerequisites, inferences drawn from ink
intensity observations would be unreliable.

For the reasons explained above, we conclude that the fact that
tapes dated around the same time as Dr. Imanishi-Kari's green
tapes do not match her tapes in color or ink formula or intensity
fails to establish that her green tapes are not authentic as to
date. 71/

The Secret Service conclusions that several tests
collectively establish that certain pages in the I-1
notebook were compiled out of order in 1986 (rather than
1984 as "purported") are not adequately supported by the
evidence in the record as a whole.

The Secret Service relied on several tests to suggest that pages
in the notebooks dated several years apart were actually created
around the same time. Particularly disturbing was the argument
that these pages were created in 1986, after the Cell paper had
been published. However, the significance of these findings is
greatly diminished by our conclusion that Dr. Imanishi-Kari never
asserted that the pages in the I-1 notebook were created in 1984.
All these tests go to creation of pages not the creation of data,
and, as explained above, Dr. Imanishi-Kari never denied that she
organized data at different times than she generated them,
possibly as late as 1986.

Furthermore, ORI injected a false sense of precision into its
claims about the dates on which these pages were created by
misstating what the tests employed by the Secret Service can
show. For example, ORI discounted scenarios other than
fabrication on the grounds that they would involve
Dr. Imanishi-Kari having used the same pad and pens two years
apart. ORI Br. 46. However, the Secret Service considered only
whether pages from the I-1 notebook were created using the same
kind of pad paper and same formulas of ballpoint ink as pages
from the I-3 notebook (also belonging to Dr. Imanishi-Kari) which
dated from 1986. In addition, the Secret Service looked at
impressions (made visible by a mechanism known as an
Electrostatic Detection Apparatus or ESDA) from the writing of
one page appearing on other pages which were hence presumably
underneath at the time of the writing. The document examiners
themselves agreed that none of these tests proves anything
significant independently, although they contended that the tests
were collectively meaningful. We consider next the limits of each of the
tests used, in weighing the conclusions which ORI would have us
draw from their results.

In regard to the paper used in the I-1 and I-3 notebooks, the
examiners studied the thickness and other characteristics of the
paper which was removed from pads of the kind glued at the top
(this analysis did not involve looseleaf or spiral notebook paper
which was also used for some pages in the I-1 notebook) and
certain identifiable, "reproducible" defects that occurred in
printing of the pads. Tr. 3683-87, 3693-95 (Stewart). By these
means, they determined that the pad paper in the I-1 and I-3
notebooks came from at least two different sources, based on
differences in thickness of paper and in specific defects.
However, the examiners testified that paper with a particular set
of defects could run through entire batches of pads. Also, the
evidence on the printing process for these pads showed that a
defect could occur on all or only some pages in a particular pad
depending on the cause of the defect and the practices of the
printer in interleaving pages to form pads. Tr. 3688, 3966-69
(Stewart). So again (at best) we can infer that some pages did
not come from the same pad but cannot say whether pages did come from the same
pad, only that they could have. 72/ Mr. Stewart himself
testified that in his own practice he sometimes used only a few
pages from a pad or used old pads with "cobwebs" so that he had
to look for some other forensic basis to tie together any
significance in the use of the different pads within a short time
or the recurrence of the matching pad paper in different years.
Tr. 3676, 3699-3700. He characterized the pad paper analysis "in
and of itself" as "marginally significant," but pointed to the
other findings described here as providing a weight of forensic
evidence to which the pad paper defect analysis contributed. Tr.

The examiners also analyzed the ballpoint pen ink used to make
notations on various pages, but they were not able to identify
particular pens used or to determine that the ink used was not
available at a purported time. No evidence was provided that any
ink formula found or pens containing any of the formulas found
were unavailable at any point during the relevant period. That
is, like the analyses of ribbon inks, these analyses did not
preclude production of the pages on the dates written on them.
73/ Again, the analysis permitted the examiners to say with some
certainty that two samples of writing were not done with the same
pen, but not to say if two matching samples were done with the
same pen, only that they could have been. Nothing suggests that
any of the ink formulations identified were in the least
uncommon. In fact, ORI did not dispute that pens (like the pads
of paper) were probably bought in bulk. ORI Br. 46-47. Common
experience suggests nothing unlikely about an office or
laboratory having lots of one or two sorts of ballpoint pens or
leftover pads from one or two shipments available over a two-year
period. Again, Mr. Stewart acknowledged that the use of pens
with the same ink formulas on pages from different years was not
remarkable "on the surface." Tr. 4188. Further, when asked if
the use of the same kind of paper and ink on documents dated two
years apart meant that one or both were fabricated, Mr. Stewart
testified that it did not. Tr. 3928.

Turning to the ESDA impression analyses, it is important to
recognize that this approach addressed only the order in which
pages may have been written and did not purport to identify the
time frame in which any page was written. Furthermore, the ESDA
analyses did not purport to say anything about when the
underlying data on the counter tapes (as opposed to the notations
on the pages) were created.

The examiners differentiated between impressions which appeared
askew on the page below and those impressions that were "in
registration." "Registration" was defined as a situation where
the impressions are aligned when the edges of the two pages are
lined up. Impressions that were askew were interpreted only to
mean that the page being written on happened to be on top of the
page receiving the impressions at the moment that the notations
which appeared on the ESDA were made. The examiners inferred
that finding an impression in registration in the I-1 notebook
meant that the page being written was still attached to a pad
above the page receiving the impression and was written prior to
the page below. ORI Report 191, n.150. Based on this inference,
the examiners concluded that, for example, when impressions in
registration of a page bearing a date in 1986 appeared on a page
dated in 1984, this was evidence that the "purportedly" 1986 page
was actually created first. See, e.g., ORI Report 194. This is
possible, but other possibilities also existed. The Secret
Service witnesses acknowledged that stacking pages that were
already detached from a pad could cause impressions to appear in
registration. Tr. 3466-67 (Hargett). However, they rejected
this possibility for most examples of registration found in the
I-1 notebook on the grounds that many other impressions were
skewed, which meant that Dr. Imanishi-Kari did not always stack
her pages neatly, and that they did not find marks of staples or
paper clips. Tr. 3539-40 (Hargett), 5568 (Stewart). Thus, Mr.
Hargett testified that "our logic . . . be it right or
wrong . . . was that it was on a pad, and not straightening of
the papers." Tr. 3540. This logic is obviously flawed, since
one can straighten pages on some occasions without doing so
constantly and one certainly need not staple pages in order to
stack them briefly while writing. 74/ The ESDA process does not
distinguish whether the page receiving the impression already
contained writing. Thus, folding a blank page on a pad back over
a page lower in the pad which was already written on would create
an impression in registration even though the page below was
actually written first (but not yet removed from the pad). Cf.
Tr. 3258-61 (Hargett). Since Dr. Imanishi-Kari testified without
contradiction that she often compared assays from different time
periods in organizing her data, it is possible that impressions
of later notations would appear on pages that were already
written or that pages with earlier data might be compiled and
written on at the same time as a much later assay was being
recorded. We therefore cannot conclude from ESDA results alone
that a particular page was fabricated because it contained an
impression in registration of a page reporting data from a later
experiment. 75/ Mr. Hargett, when asked how much reliance he
placed on the ESDA results, acknowledged that if that analysis
were "by itself, standing alone, . . . we wouldn't be here
today." Tr. 3499. Each of the other analyses which the
examiners relied on in combination with the ESDA results suffered
from limitations and weaknesses which similarly undercut them,

It is also important to consider, in seeking overall conclusions
about the document examination evidence, whether
Dr. Imanishi-Kari had any conceivable motive for the allegedly
false dating of the questioned pages. While some of the pages
involved contained relevant data (in particular, I-1:41 and I-
1:113, which were shown to Dr. O'Toole at a meeting), 76/ ORI
offered no possible reason to fabricate other pages for which the
same findings were presented.

The Panel concludes that, in reaching their opinions, the
examiners and ORI were misled by mistaken assumptions about what
the documents they were reviewing "purported" to be, failed to
establish a meaningful comparative norm, failed to put the
documents which they were reviewing into a realistic scientific
context, and overlooked the impact of the limitations of the
examination results in supporting their conclusions. As a
result, the inferences ORI drew from the document examination
results were not reasonably supported. Therefore, we conclude
that ORI failed to prove that the document examination findings
in themselves demonstrate that it is more likely than not that
the conditions found in the documents were the product of
deliberate and intentional fabrication or falsification.


A. Figure I

1. BET-1 Specificity

Figure 1, charge one: ORI charged that Dr. Imanishi-Kari's
"failure to disclose significantly discrepant results in the
experimental record concerning the specificity of BET-1 . . . was
a deliberate misrepresentation of scientific results constituting
a falsification of data and scientific misconduct." Charge
letter 2-3.

Issue: Figure 1 of the Cell paper compares the sera of
transgenic and normal (C57BL/6) mice. In each assay reported, a
coating of anti-idiotype reagent was used to capture idiotype-
positive antibodies and then two reagents were used to test for
variants or allotypes of IgM antibodies. 77/ C57BL/6 mice
normally produce IgM antibodies of the allotype μb (hence
considered endogenous). By contrast, the BALB/c mice from which
the transgene was derived produce IgM antibodies of the allotype
μa. The researchers were interested in testing for allotype in
the idiotype-positive antibodies produced by the transgenic mice.
They used two anti-allotype reagents, both monoclonal antibodies:
AF6 to identify the endogenous μb allotype and BET-1 to identify
the transgenic μa allotype. In addition to data showing the
pattern of reaction of the normal and transgenic sera to AF6 (top
panel of the figure) and BET-1 (bottom panel), the figure
included data showing how each reagent reacted to control
hybridoma proteins which represented standards for C57BL/6 mice
(P9.37.9, μb) and BALB/c mice (20.1.21, μa). Both P9.37.9 and
20.1.21 were NIP-positive and anti-17.2.25 idiotype-positive by
other tests. Normal C57BL/6 and BALB/c sera were also used as
controls. The tests of the reagents against the control proteins
served to establish the effectiveness of the reagents for
distinguishing the two allotypes. Charge one attacked the data
in the bottom panel which showed that BET-1 did not react
significantly with the μb control protein (or the normal sera)
but reacted strongly with the μa control protein (and the
transgenic sera). ORI alleged that results in the notebooks from
other experiments which used BET-1 showed much less specificity.
In addition, ORI alleged that the following statement in the text
of the Cell paper was inaccurate for the same reason:

The BALB/c anti-NP IgM antibody, 20.1.21, bound only to the
anti-μa allotype reagent, as it should, and the C57BL/6
anti-NP IgM antibody, P9.37.9, bound only to the anti-μb
allotype (Figure 1).

Cell paper 249 (emphasis added).

Analysis: Figure 1 and the accompanying text report results of
specific experiments which used BET-1 as one of the reagents, not
a survey of results using BET-1. To the extent that the actual
behavior of the control proteins with BET-1 was represented
correctly, it is not clear that it is fair to characterize them
as misleading even if there are some contrary results in other
experiments. ORI did not set out a charge asserting that the
data reported in Figure 1 were not authentic, but made
allegations to that effect in its proposed FFCLs. Therefore, we
deal in this section first with whether the Cell paper falsely
reported the specificity of BET-1 in the assays reported in
Figure 1. In the next section, we consider ORI's claim that
Dr. Imanishi-Kari was required to report other data that were
"representative" of the experimental record as a whole and
whether she deliberately failed to do so. ORI Br. 35-36.

All of the experiments presented in Figure 1 were performed by
Dr. Reis, who prepared the graphs of her results for publication.
Tr. 4691 (Imanishi-Kari), 2504 (Reis). The original graphs for
Figure 1, which form part of the raw data, are at R-1:65-68 (with
headings in Dr. Reis's handwriting). Unquestionably, Dr.
Imanishi Kari worked closely with Dr. Reis and reviewed her work.
Tr. 4906, 5462; Ex. H100, at 3. Dr. Imanishi-Kari did not deny
that she shared responsibility for Figure 1, but it must be
recognized that she relied on Dr. Reis as the original source of
the data.

Assuming that the data at R-1:37-40 and the graphs at R-1:65-68
are legitimate, as we conclude below, there can be little
question that Figure 1 is an accurate reflection of actual
experiments insofar as it represents how BET-1 performed in those
particular assays. The textual statement, however, if taken
literally in asserting that the control proteins "bound only" to
the respective allotype reagents, amounts almost to another
example of error obvious on the face of the paper; as Dr. Eisen
said, "antibodies don't bind only to something;" there is always
some cross-reactivity to other proteins. Tr. 2022; see also Tr.
2823-24 (Capra). Even Figure 1 itself could be considered to
conflict with this statement since AF6, and to a minor degree
BET-1, show some level of non-specific binding, whether through
stickiness or lack of specificity, as ORI itself noted. ORI Br.
37, n.20. Dr. Imanishi-Kari has long acknowledged that BET-1
does not perform with absolute specificity, but asserted that it
discriminated well enough between the two allotypes and performed
as reported in the Figure 1 assays. The text is most reasonably
read in conjunction with the figure to mean that the reagents
behaved as specifically as shown in the data (which quantify the
specificity obtained in those assays).

In evaluating this charge, it is essential to recognize that
BET-1 is not a standard off-the-shelf reagent. BET-1 is a rat
anti-μa monoclonal antibody produced by a hybridoma cell line.
Cell paper 257. Preparation of BET-1 required cloning of the
hybridoma and purification of supernatants in batches. In order
to use BET-1 as a probe, it was further necessary to label the
reagent with radioactive iodine (125I). The usability of each
batch was limited by radioactive decay. No records were kept to
identify or date the preparations of reagent or the specific
iodination of reagent used in a particular experiment. 78/
Therefore, some of the difficulties observed with data using
BET-1 may reflect variation in the batches of BET-1 rather than
failure of BET-1 to function as expected as a reagent. 79/
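
The limiting effect of radioactive decay can be quantified with a
simple sketch. The 60-day half-life used here is the standard
reference value for iodine-125; the record itself gives no figure:

```python
import math

# Iodine-125 half-life is roughly 60 days (a standard reference
# value; the decision gives no figure). Remaining activity after
# t days follows simple exponential decay.
HALF_LIFE_DAYS = 60.0

def remaining_fraction(days):
    """Fraction of the original 125I activity left after `days`."""
    return math.exp(-math.log(2) * days / HALF_LIFE_DAYS)

# After one month a labeled batch retains about 70% of its
# activity; after three months, only about a third.
print(f"{remaining_fraction(30):.2f}")
print(f"{remaining_fraction(90):.2f}")
```

On these assumptions, an iodinated batch of BET-1 would lose
sensitivity appreciably within weeks, which is consistent with the
Panel's observation that batch-to-batch variation could explain
some of the difficulties observed in the data.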

Figure 1 does not misrepresent experimental data.

As noted, nowhere in the charge letter is it expressly alleged
that the raw data supporting Figure 1 were fabricated. However,
in its presentation at the hearing and its proposed FFCLs, ORI
alleged that forensic findings concerning R-1:37-40 undercut the
validity of the data so that they do not constitute credible
support for Figure 1. Cf. ORI FFCL IX.C.i.1-11; see Tr.
1297-1301, 1311-12 (Davie). Since Dr. Imanishi-Kari did not
object that she lacked notice of this allegation, and her brief
discussed these data, we address the issue here. R. Br. 44.

The document examination findings are: that the 5/24/85 date on
R-1:37 was altered in a different ink than other entries on the
page; that the date on the AF6 portion of the protocols on the
preceding page (R-1:36) was changed from 5/9/85 and the BET-1
protocol was entered at a different time since it does not appear
on an ESDA of R-2:22, which contains an impression of R-1:36
(from R-2:22, Ex. H541); and that the counter tapes from the BET-
1 and AF6 data were generated weeks or months apart and not on or
about the same day, 5/24/85.

As discussed in the overview of the document examination issues,
we do not consider the 5/24/85 date to be a representation that
the entirety of both assays was completed within a few days of
that date. It would be entirely consistent with the practices
for dating experiments in Dr. Imanishi-Kari's laboratory for one
assay to have been performed around 5/24/85 and the other around
5/9/85 and the dates changed to the latter when the entire
experiment was complete. ORI suggested, instead, that the motive
for changing the date was to permit Dr. Imanishi-Kari to claim
falsely that the BET-1 used in Figure 1 was from a new cloning of
BET-1 producing cells described on I-1:121, dated 5/22/85, where
BET-1 was working well, but which ORI alleged was a fabrication.
According to ORI, the BET-1 actually available for the assays at
R-1:37-40 was the same batch used in assays reported at R-1:18,
dated 5/7/85, where the reagent failed to work specifically. As
discussed below in relation to charge two on Figure 1, we do not
find the evidence adduced to support the claimed fabrication of
I-1:121 to be persuasive. Furthermore, since we conclude that
there is experimental evidence throughout the notebooks that
BET-1 was capable of identifying the μa allotype with adequate
(but not absolute) specificity, we do not agree with ORI that
Dr. Imanishi-Kari had a motive to alter the dates to create an
appearance of having performed the assays after the experiment on
I-1:121.

This issue is complicated somewhat by confusion about precisely
which data represent the specific experiments reported as
Figure 1. The data at R-1:37-40 are in close accord with the
values on the graphs in Figure 1 (as Dr. Dahlberg himself
verified), except that the raw data for the fourth point for six
out of the eight curves shown in Figure 1 are missing. 80/ ORI
FFCL IX.C.i.7; see Dahlberg Decl., Att. 1. Dr. Reis indicated
that the missing dilution may have been completed on a different
day than the first three dilutions shown and simply added
directly to the graphs to complete them. 81/ Tr. 2639. Dr.
Imanishi-Kari testified that the data now in the notebook are a
retest of the same sera which yielded virtually identical values
(at least as calculated by percentage of maximum binding for the
figure) but that the raw data for that assay are missing except
for the curves themselves. Tr. 5296, 5462-63. ORI did not
demonstrate that standards of conduct for scientists in
Dr. Imanishi-Kari's field at that time would have required
retention of the counter tapes or a record of the actual cpms,
where the quantitative results were preserved in the original
graphs in Dr. Reis's notebooks. ORI did not demonstrate that the
experiment could not have been performed as the witnesses for
Dr. Imanishi-Kari testified, with an additional dilution done
separately or as part of the retest. The data on R-1:37-40
(whether the data directly reported in Figure 1 or a retest
confirming the results on the same sera) constitute experimental
support for the accuracy of the quantitative results reported in
Figure 1.

Since we conclude that Figure 1 accurately reflected actual data,
we must next consider whether the figure and text nevertheless
misrepresent the experimental results because Dr. Imanishi-Kari
failed to disclose other experiments in which BET-1 was not
equally specific.

The results reported in Figure 1 and accompanying text were
not so discrepant that reporting them constituted a
misrepresentation of the experimental record. 82/

It is not disputed that the wording of the quoted statement in
the text overstates the degree of specificity with which BET-1
generally performs, in that the words "bound only" may imply
absolute specificity if read to refer to its general performance.
Dr. Imanishi-Kari did not dispute that sometimes BET-1 did not
work well as an allotype reagent, which she attributed to
iodination or other batch-specific problems or to "stickiness."
Dr. Imanishi-Kari said, however, she could discern these problems
in particular experiments from the controls and that she did not
use allotyping results from those experiments. It is not
disputed that BET-1 cross-reacted (i.e., gave erroneously
positive results) to some degree with the μb endogenous protein
even when BET-1 was "working."

This issue was directly covered by the NIH Scientific Panel
report which agreed that BET-1's specificity was not absolute and
required that a correction be published, but concluded that the
overstatement constituted error not scientific misconduct. 83/
The NIH Scientific Panel examined many BET-1 assays done by
different researchers in Dr. Imanishi-Kari's laboratory and found
that, while in some assays BET-1 failed to distinguish by
allotype, "most of the assays seen by the Panel clearly showed
relative specificity for μa." Ex. H262 (NIH Scientific Panel
Report at
3). Those assays included one by Dr. O'Toole dated January 2,
1986 that showed 1,000-fold specificity and others of around 100-
fold specificity, all adequate to the purposes for which the Cell
authors used BET-1. Id. 84/ We agree with the NIH Scientific
Panel's assessment of the experimental record on BET-1
specificity because, as discussed below: (1) Dr. Imanishi-Kari
had no reason to lie about BET-1's specificity; (2)
Dr. Imanishi-Kari thought that BET-1 was adequately specific for
her purposes and was not attempting unsuccessfully to solve a
problem with BET-1 specificity in 1985; and (3) while the
implication that BET-1 was absolutely specific was erroneous, the
paper directed readers to the literature on BET-1 and was not
required to report all less successful results with the reagent.

It would not have been in Dr. Imanishi-Kari's interest to
overstate deliberately the specificity of BET-1 for μa.

Dr. Imanishi-Kari has long pointed out that she would have had no
motive to overstate the specificity of BET-1. If BET-1 failed to
discriminate well between transgenic and endogenous allotypes,
that would mean that endogenous antibodies were inaccurately
reported as transgenic. Since the central finding of the Cell
paper was the surprisingly high number of endogenous antibodies
bearing the idiotype, the authors had no reason to exaggerate the
level of exogenous antibodies. This position was forcefully
expressed by a former member of the NIH Scientific Panel, who
wrote:

[F]alse positive Bet-1 is not really a problem, for the
scientific question at hand. It would be a great problem if
Bet-1 did give false negative results with Balb/c (i.e.
transgenic) μ. AF-6 (anti-μb, i.e. anti-endogenous μ) is
the more important anti-allotype reagent. It clearly
discriminates well. In fact, focusing so strongly on the
Bet-1 problem makes one wonder whether the rest of the
accusations may be false. Dr. Imanishi-Kari had no valid
scientific reason to falsify the Bet-1 data.

Ex. H299, at 3 (Dr. Storb); see also Tr. 2772 (Wortis).
Interestingly, even Dr. O'Toole testified that she agreed from
the beginning that problems with BET-1 in themselves would not
much undermine the paper. Tr. 972.

ORI disregarded this argument, saying that such a "harmless
error" defense is irrelevant if someone has acted intentionally
to falsify or fabricate data. ORI Br. 34. 85/ However, the
point here is that the complete absence of any motive makes it
less likely that any overstatement of BET-1 specificity was
intentional and less likely that any anomalies observed on the
pages containing BET-1 data are the result of intentional
fabrication in an effort to support such an overstatement.
Motive is not a required element of ORI's proof, as it would be
scientific misconduct to lie deliberately in published reports
even against one's own interest. The absence of any motive to
deceive is nevertheless relevant to assessing intent.

A review of the complete experimental record does not
support ORI's view that Dr. Imanishi-Kari deliberately
suppressed discrepant results.

ORI undertook an analysis of experiments in the R-1 and I-1
notebooks which it interpreted to mean that BET-1 "generally did
not discriminate well" between the μ allotypes. ORI Report 63-
66. The experiments reviewed were at R-1:18 (from the 17 pages),
R-1:34; R-1:37; I-1:110; I-1:111-12; I-1:113; I-1:115-8; I-1:119;
I-1:121; I-1:125A; and I-1:128A. Each of these pages showed an
experiment in which the control proteins were bound to wells and
then probed with iodinated BET-1, except for I-1:110 (which used
non-iodinated BET-1). ORI calculated the percentage of maximum
binding for the largest values in the dilution series for each
experiment as a measure of the degree of cross-reactivity. Based
on this approach, ORI reported high percentages (above 20%) of
binding to the μb protein for the following tests: R-1:18 (93%);
R-1:34 (24%); I-1:111 (46%); I-1:113 (55%); and I-1:115 (34%).
The other experiments showed at least some cross-reactivity: R-
1:37 (the retest of the Figure 1 data was reported by ORI as
13%, although using the same calculation as for the other
experiments would yield 8%--ORI argued that the positive control
value was used as the maximum because it was lower than the
highest experimental value, which ORI suggested meant the control
was done separately 86/); I-1:110 (17%); I-1:119 (6%); I-1:121
(11%); I-1:125A (12%); and I-1:128A (8%). 87/ See ORI Report 58.
Based on this analysis, ORI concluded that BET-1 cross-reacted
with the μb control protein in the majority of the assays. ORI
Report 66. However, the same figures support the opposite
conclusion, i.e., that most of the assays demonstrate adequate
specificity, if not quite as good as that in Figure 1. (ORI
estimated the percentage of cross-reactivity in the graph as
about 5%, although this number cannot be read precisely by eye.
ORI Report 66; ORI FFCL IX.A.8.) 88/

The Panel notes that in his analyses in the ORI Report, Dr.
Dahlberg may have engaged in data selection and interpretation,
not unlike that for which he criticized Dr. Imanishi-Kari, in his
presentation of notebook data on the specificity of BET-1. For
example, although he acknowledged on cross-examination that it
was more accurate to compare assays after subtracting the
backgrounds (and other scientific testimony was that this method
was the proper approach), he did not subtract backgrounds in
comparing assays from the notebooks. Tr. 535-36, 545-46
(Dahlberg), 1910 (Eisen). Dr. Eisen testified that the result
was to overstate the degree of cross-reactivity by BET-1. Tr.
1910. In the case of the assay on I-1:110, for example,
subtracting the background counts would halve the cross-reaction
to a level in line with Figure 1 (from 17% to about 9%). Tr.
535-36 (Dahlberg). In addition, Dr. Eisen testified that the
percentage of maximum binding on which Dr. Dahlberg relied to
assess non-specific binding by BET-1 is not the only or even the
best way to measure cross-reactivity, nor is it clear that the
highest dilutions give the most significant information. Tr.
1903-17 (Eisen), 538 (Dahlberg). 89/ Dr. Dahlberg's approach
focused entirely on the degree to which BET-1 cross-reacted with
μb at high concentrations, but, as the authors reported in the
correction letter, even in those experiments where there was
cross-reaction, much higher concentrations of μb were required to
elicit the same level of reaction. For example, in two assays in
O-1, which ORI cited as showing very high levels of cross-
reactivity because at the highest concentrations BET-1 bound with
μb at 62% (O-1:35) and 86% (O-1:36) respectively, the binding at
the next lower concentration with BET-1 drops to 28% and 44% with
μb while remaining at over 94% of maximum binding for μa. ORI
FFCL IX.B.3. In other words, BET-1 still discriminated heavily
in favor of μa.
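
The arithmetic behind this methodological dispute can be sketched
as follows. The counter-tape values are hypothetical, chosen only
to reproduce the roughly-halving effect the Panel describes for
the assay at I-1:110 (17% falling to about 9%):

```python
def percent_max_binding(cpm, max_cpm, background=0.0):
    """Cross-reactivity as a percentage of maximum binding.

    Subtracting the assay background (nonspecific counts) from
    both the test value and the maximum before dividing is the
    method the testimony described as more accurate.
    """
    return 100.0 * (cpm - background) / (max_cpm - background)

# Hypothetical counts per minute (cpm), for illustration only.
background = 240.0   # nonspecific counts
mu_b_cpm = 510.0     # BET-1 binding to the endogenous (mu-b) control
max_cpm = 3000.0     # maximum binding (mu-a positive control)

raw = percent_max_binding(mu_b_cpm, max_cpm)
corrected = percent_max_binding(mu_b_cpm, max_cpm, background)

print(f"without background subtraction: {raw:.1f}%")
print(f"with background subtraction:    {corrected:.1f}%")
```

With these illustrative numbers, the apparent cross-reactivity is
17.0% uncorrected but about 9.8% after background subtraction,
showing how the choice of method alone can roughly double or halve
the reported figure.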

ORI claimed that the data showing good specificity occurred on
forensically questioned pages, whereas the data showing little or
no specificity occurred on "undisputed pages." ORI Report 87.
While it is true that ORI attacked the authenticity of only the
pages showing good specificity, Dr. Imanishi-Kari pointed out
scientific problems in the assays reported on the pages on which
ORI relied that caused her to disregard them as valid measures of
BET-1's specificity.

The data for the experiment at R-1:18-24 (part of the 17 pages)
are clearly aberrant, in that BET-1 reacted almost as well to the
μb control as to the μa control (93%). ORI did not point to a
single other assay with comparable results, and such results are
inconsistent with the literature on BET-1 and the experience of
other researchers discussed elsewhere. The evidence from Dr.
Reis is that this assay was an anomaly in which not only did the
controls indicate a problem with the BET-1 but cross-
contamination of the supernatants tested was observed when the
plates were thawed. Tr. 2514-16. Consequently, the assay was
run anyway "only to have an idea," but the results were
disregarded as unreliable. Tr. 2517.

The counter tape on page I-1:112 contains a handwritten notation
by Dr. Imanishi-Kari saying "Bad label BET-1!" which is an
apparent reference to problems with the iodination of that batch
of BET-1. See also I-1:113-14. The experiment at I-1:111 had a
very high background and subtracting it entirely eliminated the
evidence of cross-reaction, suggesting that the problem here was
stickiness of the reagent rather than cross-specificity to μb.
The experiment at R-1:34 had so high a background that Dr.
Dahlberg thought it inappropriate to subtract the background,
because it indicated something went wrong in the experiment
itself, but he nevertheless relied on the cross-reaction data
from that experiment, rather than disregarding the experiment.
Tr. 133-34, 537.

In addition, ORI's analysis ignored other examples in the
experimental record which supported the reported results as
within the range of those obtained in the laboratory using BET-1,
including some of those obtained by Dr. O'Toole. 90/ For
example, Dr. Dahlberg later acknowledged that O-1:22 and O-1:133
represent tests in which BET-1 showed specificity against the
control protein. Dahlberg Decl. 6, Att. 6. Dr. O'Toole agreed
that BET-1 clearly recognized μa better than μb in her competition
assay at O-1:174, and in fact that BET-1 was useful as a specific
reagent in competition assays. Tr. 897.

Dr. Dahlberg also acknowledged that BET-1 was acting with good
specificity on R-2:46 if it was being tested against the control
protein but contended that this assay could be disregarded
because he believed that the controls were performed with sera
rather than purified proteins. Dahlberg Decl. 5; ORI Br. 28. 91/
He pointed out that the control data referred to the strains of
mice (BALB/c and C57BL/6) rather than naming the control proteins
(20.1.21 and P9.37.9). However, Dr. Reis testified that R-2:46
was actually performed using control proteins. Tr. 2652-54. She
explained credibly that in her early days at the laboratory, when
she was still learning the names and sources of the various
reagents, she sometimes found it easier to remember which control
protein came from which strain by recording the source strain.
Tr. 2656-58. 92/

Furthermore, Dr. Reis testified that BET-1 generally performed as
a "good reagent" for her. Tr. 2655. While she agreed that it
was not working in the assay at R-1:18, she testified that the
lack of specificity in that instance was the result of
contamination. Tr. 2515, 2630. Dr. Imanishi-Kari testified that
she told Dr. Wortis and the others that the only experiment she
remembered where BET-1 failed to show any distinction between μa
and μb was the one in the 17 pages, i.e., R-1:18. Tr. 4966.

The testimony of Dr. Wortis supported the testimony of
Dr. Imanishi-Kari and Dr. Reis about the performance of BET-1 in
the laboratory. He testified that his experience with BET-1 in
his own work was that it discriminated between μa and μb, but that
there were sometimes certain "batches that did not discriminate
adequately for my purposes." Tr. 2784. In those instances, he
handled the problem just as Dr. Imanishi-Kari indicated that she
had, i.e., he did not use experiments in which the controls
showed BET-1 not working adequately, but he did not discuss those
difficulties in published reports of his work since he was not
writing about BET-1 but only using it as a reagent. Id.
Similarly, Dr. Kearney (an expert immunologist with considerable
idiotype experience) testified that BET-1 was allotype specific
when used on ELISAs, which test protein molecules. Tr. 2446.
Dr. Capra testified that, while the literature makes clear that
BET-1's specificity is not absolute, it also clearly
discriminates significantly, which is "as good as it gets" for
this kind of reagent. Tr. 2824.

ORI did not prove that BET-1's relative specificity was a
serious problem or focus of concern for Dr. Imanishi-Kari in
1985.

Dr. O'Toole's claims that the poor specificity of BET-1 was a
major problem on which Dr. Reis and Dr. Imanishi-Kari were
working are not persuasive. Cf. Tr. 905-06 (O'Toole). One of
the early concerns raised by Dr. O'Toole after copying the 17
pages from Dr. Reis's notebook was that BET-1 completely failed
to discriminate between μa and μb in the assay at R-1:18. She
testified that she was not surprised by this because that was in
accord with her experience at the time of that assay (May 1985).
Tr. 913. Before she copied the 17 pages, Dr. O'Toole had
commented on a draft of the paper. Tr. 914-15 (O'Toole). That
draft contained essentially the same statement that BET-1 bound
only to the proper control protein. Ex. H227, at 8-9. It is not
plausible that, if Dr. O'Toole believed at the time of reviewing
the draft or galleys of the Cell paper that BET-1 did not react
specifically and she had been complaining frequently to Dr. Reis
and Dr. Imanishi-Kari about it, she would fail to react to the
statement in the draft, if not to Figure 1 itself. Yet we find
no evidence that Dr. O'Toole tried to correct this statement in
the draft or made any notation about it.

Dr. Reis and Dr. Imanishi-Kari both deny that Dr. O'Toole
constantly complained to them about BET-1's specificity or
indicated that it never worked for her, as Dr. O'Toole said she
had. Tr. 4953-55 (Imanishi-Kari); 2512 (Reis); 898, 1064
(O'Toole). Dr. O'Toole's testimony made clear that her
particular experiments depended far more on the precise degree of
specificity than did those published in the Cell paper (because
she had to be able to distinguish carefully between a positive
reaction caused by a small amount of μa as opposed to a large
amount of μb). Tr. 893. However, BET-1 was known to be least
effective in cell staining experiments, in which Dr. O'Toole
apparently was engaged. Tr. 2438 (Swain), 2453-56 (Kearney).
She may well, then, have had some frustrations with BET-1 to
which the other researchers, who found it adequate for their
purposes, were not attentive.

ORI argued, however, that Dr. Imanishi-Kari not only knew that
BET-1 was unable to discriminate between μa and μb with the degree
of specificity reported in Figure 1, but that she "lamented" this
problem openly, sought with Dr. Reis to reclone the hybridoma
that produced BET-1 to try to solve the specificity problem, and
admitted that it could not be solved to Dr. Weaver, Dr. Eisen,
Dr. Baltimore, and Dr. O'Toole. See ORI FFCLs IX.D. We find
that Dr. Imanishi-Kari was aware that BET-1 was not absolutely
specific, but find that she had a basis for believing that it
discriminated adequately for her purposes. ORI did not establish
that she was concerned about solving a "BET-1 problem."

The claims that Dr. Imanishi-Kari "confessed" that BET-1 was
worthless turn out to reflect simply her immediate
acknowledgments that its specificity was not absolute, rather
than any denial that it adequately discriminated. This
misunderstanding was explained repeatedly by Drs. Eisen and
Baltimore, who initially were shocked and upset when they thought
Dr. Imanishi-Kari was telling them the former and then relieved
when they realized she was expressing the latter idea. Ex. H235;
Tr. 2084-86 (Baltimore). 93/

Dr. Maplethorpe testified that he overheard a conversation
between Dr. Imanishi-Kari and Dr. Weaver and a student working in
the laboratory in June 1985 in which Dr. Imanishi-Kari told them
that BET-1 was having problems in cross-reacting with μb, but
that Dr. Reis was recloning to solve the problem. Tr. 5714-16.
Dr. Maplethorpe claimed that he had clear recollections of this
overheard conversation because he tape recorded it. Tr. 5717.
We find that this testimony lacks credibility. Since Dr.
Maplethorpe and ORI failed to produce this supposed tape, to
explain its unavailability, or to produce the student who
allegedly participated, we infer that this tape either never
existed or does not support Dr. Maplethorpe's allegations. Dr.
Maplethorpe also claimed that BET-1 was never observed to act
with specificity in the lab and that it always cross-reacted so
that it could not be used to characterize the allotype of unknown
antibodies. Tr. 5757-58. We do not find this testimony reliable
since Dr. Maplethorpe testified that he based it entirely on his
memory of overhearing complaints about BET-1 and the fact that it
was no longer in use, but that he never reviewed any of Dr.
Reis's data (despite the fact that testimony demonstrated that
data were openly available in the laboratory for others to
review). Tr. 5757-58 (Maplethorpe), 2563-64 (Albanese), 1839
(Boersch-Supan), 2374 (Weaver), 2603-04 (Reis). As discussed,
our own review of the data belies his assertions, since there are
data showing that BET-1 behaves specifically in the notebooks
from Dr. O'Toole, Dr. Reis, and Dr. Imanishi-Kari.

ORI further argued that Dr. Imanishi-Kari's awareness of a
problem with BET-1 specificity was demonstrated by Dr. Reis's
efforts to reclone BET-1. See R-2:2-7. Dr. Reis and
Dr. Imanishi-Kari denied that problems with BET-1 specificity
were the reason for recloning. Tr. 2508-10 (Reis). Instead,
both testified that the purpose was to obtain additional
quantities. This testimony is supported by the fact that
hybridomas producing both allotype reagents were recloned at that
time, even though no allegation has ever been made that AF6 had a
problem with specificity. 94/ Tr. 4902 (Imanishi-Kari).

The literature cited in the Cell paper provided the reader
the available information on BET-1 specificity.

The BET-1 reagent was originally provided by Dr. Wortis, who
obtained it from the scientists who originally made it. Tr. 2783
(Wortis). The authors of the Cell paper included references to
papers in the literature about the allotype reagents. See Cell
paper 249. Scientific experts indicated that these papers
provided the knowledgeable reader with accurate information about
the specificity of BET-1. Tr. 1901-03 (Eisen), 2823 (Capra),
2784 (Wortis). It seems unlikely that Dr. Imanishi-Kari would
set out deliberately to mislead readers about the specificity of
a reagent and yet point them to literature disclosing the full
picture about the reagent.

The Cell paper did not materially mislead readers by
omitting more detailed reports of BET-1 specificity results
in other assays besides those in Figure 1.

ORI argued that the Cell authors were required to report
"representative" results of BET-1 specificity assays. ORI Br.
35-36. ORI quoted in support of this argument a statement from
the Bridges decision that standards for reporting research
require a scientist to "report conclusions which take all
available experimental data into account unless there is a
scientifically sound rationale for disregarding data." Bridges,
DAB No. 1232, at 81. While accurate, this language is
inapposite. First, as explained, we have found that the results
in Figure 1 were accurately reported. Second, Figure 1 did not
purport to represent a survey of tests of BET-1's behavior.
While BET-1 was used in some important experiments in the paper,
the focus of the paper was not on studying this reagent, and, as
noted, any difficulty in achieving complete specificity by
allotype would not have greatly affected the central conclusions
of the paper. Third, several scientists testified that one does
not report every time a reagent fails to work, although a
researcher should not use experimental results when a reagent
does not appear to be working properly. Tr. 3135 (Marrack), 2069
(Baltimore). 95/ ORI did not allege that Dr. Imanishi-Kari used
results where BET-1 did not discriminate adequately. Fourth,
scientific journals have limitations on space for articles, so it
is not reasonable to expect that every experiment using or
testing every reagent would be described. See generally Tr. 2069
(Baltimore). Finally, and most importantly, we do not agree as a
matter of fact that Figure 1 and the results obtained with BET-1
there were seriously out-of-line with the experience of the
laboratory with the reagent or that contrary results were
disregarded without reason.

The Panel concludes that ORI did not prove charge one, Figure 1,
by a preponderance of the evidence.

2. Alleged fabrication of related data

Figure 1, charge two: ORI charged that Dr. Imanishi-Kari's
"deliberate creation and reporting of falsified and fabricated
notebook data supporting Figure 1 at I-1:111-19; I-1:121 and I-
1:124-28A" constituted scientific misconduct.

Issue: The listed pages were challenged on the basis that Dr.
Imanishi-Kari submitted them as part of the I-1 notebook to
support her "story" about the specificity of BET-1. None of the
data from these pages were published in the Cell paper. Data
from I-1:121, however, were published in the letter of correction
to evidence BET-1's specificity. Ex. H3. ORI argued that the
forensic and statistical evidence demonstrated that the listed
pages were not authentic data from the purported dates. Since
these data were submitted to support the claims made about BET-1,
ORI alleged that fabrication of the listed pages constituted a
false report to investigators.

It is not clear that Dr. Imanishi-Kari ever identified these
pages as support for claims about BET-1 specificity. See
Response to findings in ORI's October 26, 1994 letter at 1
(January 31, 1995). These pages were submitted as part of the I-
1 notebook in response to a subpoena from the Subcommittee for
all data for the Cell paper and related experiments. Ex. H248.
Nevertheless, Dr. Imanishi-Kari did not dispute that intentional
fabrication of data on these pages would be scientific
misconduct; rather, she denied that she fabricated the data.
Id.; R. Br. 45-48.

Analysis: ORI alleged that these pages collectively presented a
"carefully wrought scenario" in the form of a story in which BET-
1 was problematic in March 1985 (I-1:111-113) but a new batch was
prepared and relabelled (I-1:114); somewhat better results were
obtained (I-1:115-118); 96/ and then by March 22, 1985, the
problem was solved and BET-1 was functioning reasonably
consistently with Figure 1 (I-1:119, 121, 125A, and 128A) and did
so from then on. Tr. 168 (Dahlberg); ORI Br. 41-42. The purpose
of fabricating this "story" was, according to ORI, to discredit
the assay in the 17 pages in which BET-1 failed utterly to
discriminate by allotype (R-1:18, dated May 7, 1985). ORI Br.
42. ORI asserted that, in reality, Dr. Imanishi-Kari told
Dr. O'Toole that BET-1 was not specific in March 1985 because the
clonal line was contaminated by a clone recognizing μb, but that
she would solve this by recloning. ORI theorized that Dr. Reis
attempted to reclone but the effort failed and the new clone was
not specific, and this lack of specificity demonstrated at R-1:18
could no longer be blamed on contamination of the clone. ORI
Report 89-90. Premised on this scenario, ORI presented
statistical and document examination findings about the listed
pages to support its claim that they were not authentic, at least
as to the dates when the data were purportedly generated.

Dr. Imanishi-Kari denied that these pages "purport to identify
and solve the problem of `bad Bet' in a five-day period in March
1985." R. Br. 46; ORI Report 89-90, 185.

It is not disputed that Dr. Reis was in the process of developing
new clones to produce BET-1 around April/May 1985. R. Br. 46;
ORI Report 89-90; R-2:2-7; R-2:20. As discussed above, Dr. Reis
and Dr. Imanishi-Kari deny that this recloning was an effort to
solve problems with BET-1 specificity. Hybridomas producing AF6
and BET-1 were each recloned to obtain additional quantities of
the reagent. BET-1 from the new clones was used in subsequent
assays. Since the assays using BET-1 that appear in the
published paper dated from May not March 1985, Dr. Imanishi-Kari
had no reason to create a false impression that specificity
problems were eliminated in March 1985. We fail to see how such
an impression would discredit the assay on R-1:18 where BET-1
completely failed to discriminate, since that page is also dated
in May 1985.

Thus, the allegations that the dates were altered to place the
experiments on these questioned pages in March are not plausible.
The more successful experiments in the questioned pages are
simply more typical of the results obtained in the laboratory
throughout the time frame with BET-1, while R-1:18 is an anomaly
(in its complete lack of specificity) and an example of recurrent
instances where BET-1 was unreliable to varying degrees under
certain conditions. 97/ The questioned pages include examples of
experiments where BET-1 was relatively unspecific (with the
notations about bad labeling) (I-1:111-13); others where it was
specific to some extent (I-1:115-18); and some where it was quite
specific (I-1:119 and 121). The sequence would not have
established that all batches of BET-1 thereafter performed with
complete specificity since the assay at R-1:18 came later, but
only that, when conditions were right, BET-1 could perform well
and that control tests could successfully distinguish experiments
where BET-1 was not working so that the results of those
experiments could be discarded. If Dr. Imanishi-Kari were
attempting to discredit the R-1:18 assay by tampering with the
data record, it is more likely that she would remove other assays
in which BET-1's specificity was problematic and simply treat R-
1:18 as an anomalous failure. 98/ Conceivably, Dr. Imanishi-Kari
might also have annotated the pages with BET-1 specificity
problems to attribute the problems falsely to labelling. 99/
However, the idea that Dr. Imanishi-Kari took data from
unsuccessful experiments with BET-1 from months before or after
and intentionally misdated them to represent falsely that they
were performed in March 1985 is not believable.

Turning to the specific forensic findings, ORI pointed to the
register numbers on the counter tapes appearing on I-1:112
(continuing an assay begun on I-1:111 which is dated March 20,
1985) and I-1:116-17 (part of another assay running from I-1:115-
118 and dated March 21, 1985). The pages contain full-size
counter tapes with the following register numbers: 01447 (I-
1:112); 02799 (I-1:116); and 02800 (I-1:117). Mr. Stewart
testified that he did not see an advance of 1353 counter numbers
in one day in any of the other notebooks in this case. Tr. 3719;
see also ORI Report 70. Mr. Stewart also testified that the ink
intensities on the pages differed in that I-1:111-12 had quite
light ink and I-1:115-118 had medium dark ink. He did not
attribute this change to simply replacing the ribbon because the
ink on the former pages was not light enough for most people to
change and the ink on the latter pages was not dark enough to
represent a brand-new ribbon. Tr. 3718. Dr. Imanishi-Kari did
not disagree that these observations establish that the counter
tapes on these pages were probably not produced one day apart,
assuming they did come from the same counter. R. Br. 45; Ex.
H103, at 115-16. 100/

These tapes were preserved in full-sheet form with register
numbers whereas most of the tapes in the I-1 notebook were cut
into strips. ORI did not provide any explanation why
Dr. Imanishi-Kari would retain the register numbers on these
pages. Removing them would not only avoid providing any basis
for suspicion but would be the more typical pattern. 101/

As discussed before, the dates on the pages in the I-1 notebook
do not purport to represent necessarily the days on which the
assays were performed or the tapes generated. They may represent
a reconstruction of the time frame from protocols while
organizing the raw data at a later date or may be the dates the
pages were put together (possibly more by topic, since they were
obviously related in dealing with tests of BET-1) instead of the
dates of the assays. Dr. Dahlberg acknowledged that the tapes
may have been compiled over a one-day period, although they could
not have been generated in one day. Tr. 160. It is entirely
possible that the date on one or both of the assays is simply
wrong, and Dr. Imanishi-Kari conceded as much. R. Br. 45.

However, we do not think that an inference of intentional
falsification of the dates can reasonably be drawn from these
findings. If Dr. Imanishi-Kari were attempting to falsify data
to discredit the failed BET-1 experiment in the 17 pages, it
would be logical for her to construct a scenario where the "BET-1
problem" was solved in May, after the failed experiment and
before the experiments used in the paper (dated later in May).
By contrast, Dr. Imanishi-Kari had no plausible motive to
intentionally misdate these assays in March (nor any plausible
motive at all to include so many examples of "bad BET" in
supposedly fabricated pages).

The Secret Service attempted to guess the correct dates for these
pages on the basis of finding that the tapes did not match
examples found in other notebooks that bore dates around March
20, 1985. In addition, the I-1:111-12 tapes were found to be
"most consistent" with tapes from January 1985 and the I-1:115-18
tapes were "most consistent" with tapes from December 1985. Tr.
3744-46 (Stewart). While these findings suggest that the dates
on one or both of these assays are in error, we do not find the
effort to date them more accurately based on "consistency" of
ribbon ink very persuasive.

As discussed above, the meaning of "consistency" is only that a
tape happened to survive among those retained at MIT that had a
matching format and printer font and that had ink using the same
formula and at a reasonably similar intensity as the ink on a
questioned tape. If such a tape occurs in any other researcher's
notebook, the Secret Service and ORI extrapolated from the date
in that notebook to find the "correct" date for the questioned
tape. Yet the "consistent" tape may have come from a different
counter or printer, may have been produced with a different
ribbon, and may have been dated incorrectly by the other
researcher. The absence of a "consistent" tape in a given time
frame does not demonstrate that such tapes did not exist.

We therefore find that the tapes at I-1:111-12 and I-1:115-118
were probably dated incorrectly, but that ORI did not show that
these tapes did not represent authentic experiments or that they
were intentionally misdated in a deliberate effort to
misrepresent the chronology of BET-1 experiments.

ORI offered different forensic arguments as to the remaining
pages. We address first the arguments about I-1:113 and 119
(which concern document examination evidence on when those pages
were compiled). 102/ We then address the document examination
evidence offered in regard to the handwritten BET-1 data on I-
1:121, I-1:125A and I-1:128. However, the main attack on the
last three pages was statistical and is discussed in detail in
relation to the statistical analysis of the June subcloning data
and the charges regarding Table 2.

Page I-1:113 is not dated, but ORI assumed that its placement in
this series of pages was intended to imply a date in March 1985
for the experiments recorded there. 103/ However, the Secret
Service offered several findings regarding this page to support
the conclusion that it was created after February 1986. These
findings did not address the raw data on the counter tapes. They
are based on characteristics of the page onto which the data were
compiled, such as the pad paper and ink used (which were used in
other pages dated in 1986) and the connection of ESDA impressions
of the writing on this page with 1986 pages.

We consider the details of these findings below, but even
assuming their accuracy and the validity of the conclusion drawn
that this page was compiled in early 1986, we would find no basis
to infer that the data were fabricated or intentionally falsified
as to date.

The findings about I-1:113 were based on linking it to three
other pages from I-1: 30, 41 and 43 and to a page from another
Dr. Imanishi-Kari notebook: I-3:25. 104/ The Secret Service
found that these pages were connected in that they used pad paper
that contained the same defects 105/ and had writing using the
same black ballpoint pen ink formula. 106/ Ex. H500, at 3. As
discussed above, the Secret Service found nothing unique about
the ink formulas or pad defects that would demonstrate that they
came from a specific time period 107/ or that a single pad or pen
was the source of pages that happen to have the same defect or
ink formula. 108/ Tr. 4185 (Stewart) ("You cannot determine if
it's the same pen. That's impossible."). In addition, the
Secret Service found nothing remarkable "on the surface" about
the same ink formula appearing on 1984 and 1986 documents. Tr.
4188 (Stewart).

However, the Secret Service attributed significance to the
collective findings in light of ESDA results which also linked
some of these pages as having been produced close in time, or at
any rate on top of, each other. Tr. 3974-75, 4188 (Stewart); Ex.
H500, at 2; ORI Br. 45. No direct ESDA connection existed
between I-1:113 and any page from I-3, but the Secret Service
found that an impression of I-1:41 appeared in registration on an
ESDA of I-1:113 (leading to a conclusion that I-1:41 was produced
"prior to" I-1:113). 109/ In addition, an impression of I-3:25
was found on an ESDA of I-1:30, so the Secret Service concluded
that the I-3 page from 1986 was produced before the page from I-1
dated in 1984. Tr. 3276-81, 3284-85 (Hargett). Since I-1:30 was
connected in turn to I-1:41, by the pad and pen commonalities,
the Secret Service extrapolated that all these pages were in fact
produced at around the same time. Tr. 3313-14 (Hargett); Ex.

In addition, Dr. O'Toole testified that I-1:41 and I-1:113 were
shown to her in May 1986 when she met with Drs. Wortis, Woodland,
Huber and Imanishi-Kari. She described I-1:41 as "brand spanking
new" and testified that Dr. Imanishi-Kari told her that she
"deeply resented having to take time to generate" the data to
satisfy Dr. O'Toole. Tr. 985-87. We find it implausible, were
Dr. Imanishi-Kari guilty of having fabricated these pages
immediately before the meeting in order to satisfy Dr. O'Toole,
that she would promptly complain to Dr. O'Toole of the trouble it
involved to fabricate the data. 110/ We find it far more likely
that Dr. Imanishi-Kari was resentful of having to take time to
locate the data, or perhaps to organize the raw data on to the
pages for Dr. O'Toole. 111/ As for the supposedly suspicious new
condition of page I-1:41, that may have resulted from its
relatively recent compilation or its storage in a manila folder.
R. Br. 87. 112/

After reviewing the contents of these pages, we find it difficult
to credit the claim that Dr. Imanishi-Kari fabricated these data
out of whole cloth to satisfy Dr. O'Toole at the Wortis meeting
(presumably along with I-1:30, 41, 42 and 43, since these are
grouped together by the Secret Service). Page I-1:113 shows
BET-1 behaving in a very sticky fashion, and it is not disputed
that Dr. Imanishi-Kari wrote on the page at the meeting that this
BET-1 had a "bad label." If she were fabricating data, she could
as easily have fabricated only the AF6 data showing endogenous
idiotype-positive hybridomas and omitted the problematic BET-1
results. 113/ She could also have fabricated clear-cut examples
of idiotype-positive endogenous γs that were negative for μ
(which would have been even stronger evidence for her position).
If it was so much trouble to generate data for this meeting, the
data on these challenged pages were certainly not worth the
trouble or risk.

ORI attacked the authenticity of the date (March 22, 1985) on
page I-1:119 as well. Dr. Imanishi-Kari testified that I-1:114,
which is a page from the spiral notebook, was probably the
protocol for I-1:119 which contains results of assays of samples
that are mostly listed on I-1:114. Tr. 5260-61. However, she
said that she believed that she dated I-1:119 with reference to
I-1:114, so that the date might not be accurate but only
estimated based on the assumption that this experiment was the
one referenced on I-1:114. Id.; see also Tr. 5263-65 (on I-
1:115-20); R. Br. 46-47. The Secret Service found that
impressions of parts of I-1:119 showed up on ESDAs of I-1:7, 8,
and 9 (all dated in August 1984), from which they concluded that
I-1:119 was created before the other three pages. ORI Br. 47;
Tr. 3461 (Hargett); Ex. H500, at 1. This evidence might at most
suggest that I-1:7, 8, and 9 were compiled some time after the
dates noted for the experiments on them but does nothing to
demonstrate that I-1:119 could not have been compiled in Spring
1985. 114/

ORI also pointed out that the date on I-1:119 was changed from
12/16 to 3/22/85. ORI charged that the counter tapes on I-1:119
were not consistent with other researchers' tapes from either
"the original or changed date." ORI Br. 47; Exs. H501, at 2 and
H502, at 2-3. As explained elsewhere, we attach no particular
importance to the absence of matching tapes in any particular
time frame among the sample of counter tapes reviewed. Nor do we
see any special significance to a change from December of 1984 or
1985 to March 1985 -- Dr. Imanishi-Kari may have begun writing
the page with one idea of the probable date and either changed
her mind about which experiment the tapes recorded or used the
page for a different experiment than she initially planned to
record. If Dr. Imanishi-Kari were changing the date in an
intentional attempt to mislead, it is far more likely she would
have changed the date to one in May (between the failed
experiment in the 17 pages and the important experiments using
BET-1 that are reported in the paper).

We turn next to I-1:121, which was dated in May 1985 and was
indeed published in support of BET-1 specificity. ORI made a
similar argument that the ESDA of I-1:5 (not dated but associated
with pages dated in August 1984) showed that I-1:121 was produced
first. Dr. Imanishi-Kari pointed to internal evidence for
I-1:121 which ORI acknowledged was "certainly consistent with
the" date on the page, i.e., a reference to "Pool 1" which was
the new cloning of BET-1 prepared by Dr. Reis in late April and
early May 1985. ORI Br. 48. Furthermore, Dr. Imanishi-Kari
explained in her testimony before Congress that these two pages
were connected scientifically in a way that made it perfectly
logical that she "would be reviewing, comparing and/or recording
the data from these two separate experiments at the same time,"
an explanation which she verified at the hearing, noting that she
might have wanted Dr. Reis to have access to normal mouse data on
I-1:5. Ex. H120, at 148; Tr. 5169-71. Mr. Hargett rejected the
possibility of an innocent explanation because taking notations
from page 5 to put on 121 or shuffling the pages to copy data was
not consistent with the finding that I-1:121 was produced on top
of I-1:5. Tr. 3396 (Hargett). Yet Mr. Hargett admitted that, of
all the data appearing on I-1:121, very little actually appears
on the ESDA of I-1:5. Tr. 3453-54. Mr. Hargett acknowledged
that if I-1:121 were actually created on top of I-1:5 (the
assumption that underlies any inference that I-1:121 was produced
"prior to" I-1:5), he would have expected more of the contents of
I-1:121 to show in the impression. Id. Asked if it was possible
that these were loose pages that were simply placed together in
registration briefly when this small portion was written,
Mr. Hargett answered that "[o]n this particular page, that could
be a possibility, yes." Id.

Although ORI also listed I-1:124-128A in this charge because some
BET-1 data appears on I-1:125A and "minimal BET-1 specificity
results" are included on I-1:128, the bulk of the arguments
concerning these pages (known as the "June subcloning" data) are
more relevant to Table 2. ORI Br. 48. The statistical challenge
to I-1:121 is better discussed in the section on the June
subcloning. The premises of the statistical attack are the same
although the page is not part of the same data set. We note here
only that Dr. Imanishi-Kari also pointed out that only 12 of the
72 entries on I-1:121 are related at all to BET-1 and the
statistical analysis did not consider those entries separately.
R. Br. 48. We explain in the later section why we conclude that
ORI did not prove that these pages were fabricated or falsified.

The Panel concludes that ORI did not prove charge two, Figure 1,
by a preponderance of the evidence.

B. Table 2

The authors of the Cell paper present Table 2 as support for a
dramatic difference in the frequency of idiotype-positive
antibodies between hybridomas from transgenic and normal mice.
In addition, Table 2 shows that a high proportion of the
idiotype-positive antibodies produced by the transgenic
hybridomas probably were derived from endogenous genes. Since
the transgene carried the μa allotype, any idiotype-positive
antibodies which were not μa could be assumed to be derived from
genes other than the inserted transgene. Thus, for example, the
report in Table 2 that only nine out of 43 transgenic spleen
hybridomas produced idiotype-positive antibodies with the μa
allotype implied that the remaining 34 hybridomas producing
idiotype-positive antibodies were endogenous (one being μb, and
the remainder presumably of other isotypes such as γ or α). 115/
ORI charged that Table 2 exaggerated the frequency of idiotype-
positive endogenous antibodies in the transgenic mice in two
ways. First, that the cut-off used to determine a positive
result in the assays was set too low so that a misleadingly high
number of transgenics were considered positive. Second, that the
results were presented as if they had been obtained with
antibodies from individual clones, whereas in fact the assays
were performed on supernatants from wells which might contain
more than one clone. In addition, ORI challenged the data on
normal hybridomas in several respects.

1. Cut-off questions

Table 2, charge one: ORI charged that Dr. Imanishi-Kari
"deliberately set the cut-off values to produce artificially high
and misleading results" and thereby produced "misleading
statements of the experimental results reported in Table 2."
Charge letter 4.

Issue: Table 2 compared the frequency of idiotype-positive
hybridomas derived from spleen and lymph node cells of normal
mice with those from spleen and lymph node cells of transgenic
mice. In addition, for each category of hybridomas, the
idiotype-positive hybridomas were grouped by testing them for
anti-NIP response (using kappa and lambda light chain probes) and
for μa and μb allotypes. Table 2 reported that 43 (28%) of the
transgenic spleen hybridomas and 129 (68%) of the transgenic
lymph node hybridomas were idiotype positive. By contrast, only
one idiotype-positive hybridoma (less than 1%) was reported for
either set of the normals. 116/ This difference, along with the
allotype information showing that only about a quarter (42/172)
of the transgenic idiotype-positive hybridomas produced the μa
(transgenic) allotype, was critical to the central point of the
paper about the high frequency of endogenous idiotype-positive
transgenic hybridomas. Tr. 2771-72 (Wortis).

The data for idiotype positivity reported in Table 2 for the
transgenic hybridomas came from R-1:31-34 (included in the 17
pages copied by Dr. O'Toole). The determination that a hybridoma
was idiotype positive in that assay was based on a cut-off of
1000 cpm. 117/ The data at R-1:31-34 show the cpm only for those
hybridomas which exceeded 1000, so that negative results were not
retained. ORI argued that this cut-off was too low because the
listed readings cluster around 1000 cpm; ORI contended that the
cut-off should have been chosen at the low point of a bimodal
distribution dividing positives from negatives. Tr. 288-295
(Dahlberg); ORI Report 104-06. 118/ Dr. Imanishi-Kari responded
that she used a different but generally accepted method to set
the cut-off by roughly doubling the background counts. 119/ ORI
admitted that the scientific witnesses, including its own,
considered this method "perfectly appropriate . . .
theoretically." ORI Br. 56. However, ORI denied that this
method was actually applied, because control data on R-1:34 show
a background value of 738, which would yield a cut-off of about
1500-1800 and a much lower number of idiotype-positive transgenic
hybridomas. Dr. Imanishi-Kari argued that the controls
referenced actually applied to a different experiment, also
reported on the same pages, where some of the hybridoma
supernatants were retested ("June 6th retest"). Tr. 4862.

Analysis: As an initial matter, ORI suggested that there was a
dispute about the degree of Dr. Imanishi-Kari's responsibility
for problems with Table 2. Dr. Imanishi-Kari denied that she was
primarily responsible for this table, and ORI charged that this
disavowal was inconsistent with Dr. Imanishi-Kari's
acknowledgment that she helped set the cut-offs, supervised and
worked with Dr. Reis, and provided the serology data in the
drafting process. Tr. 4692, 4869, 5422 (Imanishi-Kari); ORI Br.
54-55. We find that Dr. Imanishi-Kari had overall responsibility
for the presentation of the serological results and that she
worked closely with Dr. Reis, but that this role is not
inconsistent with her relying on Dr. Reis's representations and
interpretations of results from experiments which Dr. Reis
performed.

Interpreting the results of a particular assay is at the heart of
scientific judgment. The definition of scientific misconduct
expressly excludes honest differences of interpretation, and ORI
represented that its charges here are premised on intentional and
deliberate acts of misrepresentation by Dr. Imanishi-Kari.
Therefore, we would sustain this charge only if we found that the
cut-off used was inappropriate and was selected in an intentional
and deliberate effort by Dr. Imanishi-Kari to mislead readers,
and not if we found only that other methods of selecting a cut-
off might be acceptable or even preferable. 120/

Because Dr. Reis wrote down only counts of 1000 cpm or above, an
after-the-fact reevaluation of the appropriateness of the cut-off
selected based on the distribution of results is difficult. 121/
Fortunately, the argument that a bimodal distribution is the
correct way to set cut-offs is largely moot in light of the
overwhelming weight of the scientific testimony that a low
multiple of the background count was an acceptable method. See
Tr. 1547 (Gilbert), 2772-76 (Wortis), 1943-44 (Eisen). ORI seems
to have recognized this as well since this argument is not
pressed in its post-hearing brief. See ORI Br. 55-60. The real
issue, therefore, is whether the method was in fact used here, as
Dr. Imanishi-Kari testified that it was. 122/ Tr. 5428.

The basis for ORI's conclusion that Dr. Imanishi-Kari did not use
a low multiple of the background count is that control data on
the bottom of R-1:34 show a background value of 738. ORI
rejected Dr. Imanishi-Kari's assertion that these controls
related to the June 6th retest (and that the controls for Table 2
are missing), for several reasons. First, the Table 2 assay was
larger so controls would have been more important for it. By
contrast, ORI argued, the retest would not require so many
controls, especially the controls for AF6, which was not used in
the retest.

While ORI is obviously correct that the main assay would have
required controls, as would the retest, we do not find a basis to
conclude that if one set of controls was lost inadvertently, it
was more likely to be the controls for the retest than those for
the main assay. As for the inclusion of AF6 controls,
Dr. Imanishi-Kari answered that they might well have decided not
to proceed with an AF6 assay in the retest because of the
extremely high background count. Tr. 4861. Dr. Reis
corroborated this testimony. Tr. 2532-33. ORI disputed this
possibility on the grounds that the BET-1 background was also
high and yet that assay proceeded. ORI Reply Br. 28, n.36.
However, the background for the AF6 control was more than twice
as high as the BET-1 background (although the latter was also
higher than ideal). Dr. Imanishi-Kari argued (and ORI did not
dispute) that no other assay in her own or Dr. Reis's notebooks
had a comparable background for AF6, whereas BET-1 had had more
history of high backgrounds. Therefore, it is plausible that
Dr. Imanishi-Kari's and Dr. Reis's scientific judgment was that
it was worth proceeding with the BET-1 retest under the
circumstances, but not with the AF6.

The results reported in Table 2 are consistent with the testimony
that a different set of controls, now missing, was used for the
main assay. In particular, using the high background shown for
BET-1 would eliminate most of the positive results recorded in
Table 2 for μa (dropping from 41 to 7). Yet, Dr. Imanishi-Kari
had no motive to inflate the number of BET-1 positives. ORI
acknowledged that these backgrounds were not used to set cut-offs
for the main assay, but argued that perhaps Dr. Imanishi-Kari and
Dr. Reis just ignored these controls and that, in any case,
applying these high backgrounds would have reduced the number of
μb (i.e., endogenous) idiotype positives (which Dr. Imanishi-Kari
did arguably have a reason to inflate) even more, from 10 to 0.
ORI Reply Br. 28-29. It does not seem plausible that
Dr. Imanishi-Kari and Dr. Reis retained the records of these
backgrounds (and left them in an accessible place in the
laboratory)
if they knew they applied to the main fusion assay but were
deliberately ignoring them to set arbitrary cut-off levels. If
they were setting the cut-offs arbitrarily to oversell their
results, they had no reason to report higher levels of μa
positive than required even by the recorded background. Had the
controls on R-1:34 in fact applied to the main assay, the
reported results for both the AF6 and BET-1 portions of the main
assay would have been very different.

Dr. Imanishi-Kari also pointed out that Dr. Reis used the same
cut-off in assessing positivity in the "normal" mouse portion of
the assay. R-1:27-30; R. Br. 49. The authors certainly had no
reason to inflate the number of idiotype positives in a normal
mouse. 123/ That the cut-off successfully distinguished real
positives is evidenced by the fact that the supposedly normal
mouse was discovered to be transgenic because its rate of
positives in this assay was indistinguishable from the high rate
found in the transgenics. 124/ This evidence supports the
testimony of Dr.
Reis that the controls were for the retest, especially based on
the higher counts in the controls which were more in keeping with
the results of the retest than of the initial screen. Tr. 2530,
2617. 125/

ORI made two other arguments, however, to bolster the position
that the recorded controls related to the main fusion assay.
First, Dr. Dahlberg and ORI claimed that ESDA results indicated
that the June 6th retest data were not entered on R-1:34 at the
same time as the results for the main fusion assays or the
control data. ORI Br. 58-59; Tr. 252-53. 126/ This assertion is
repeated in ORI's proposed FFCLs. ORI FFCL X.G.6. Not only did
the Secret Service make no such finding, an examination of the
ESDA for R-1:35 directly contradicts this claim. 127/ The ESDA
of R-1:35 contains impressions of both the June 6th retest
results and portions of the control data, but none of the results
of the main fusion assays. This independent observation
strongly supports Dr. Imanishi-Kari's and Dr. Reis's memory that
the controls related to the June 6th retest and not the main
fusion assay. Second, ORI pointed out that the controls included
tests on a NIP coat. An independent assay on a NIP coat was also
done on the main fusion products while no NIP coat assay was
performed as part of the retest. Dr. Imanishi-Kari argued that
these controls on the NIP coat could represent those for the
independent NIP assay of the main fusion experiment or
preparation for a further assay on the retest that was not
performed. R. Br. 51, n.61.

The Panel's review of other assays in the I-1 notebook showed
that many are consistent with Dr. Imanishi-Kari's assertion about
the method she used to set cut-offs (as the NIH Scientific Panel
also noted about the immunoassays which it reviewed). For
example, in the assay at I-1:3-6, the controls showed background
counts of 267-456. It is evident that a 1000 cpm cut-off was
used, since three positives were recorded, with the lowest
positive at 1105 cpm. In support of its position that
Dr. Imanishi-Kari used a 1000 cpm cut-off here arbitrarily in
order to misrepresent the percentage of idiotype-positive
transgenic hybridomas, ORI alleged (in its report, although the
point is not pressed in its brief) that other assays in
Dr. Imanishi-Kari's and Dr. Reis's notebooks demonstrated that a
1000 cut-off was not always used. Tr. 302-04, 552-53 (Dahlberg);
ORI Report 107-10. Of course, Dr. Imanishi-Kari did not claim
that she always used 1000 cpm as a cut-off (which would be rather
arbitrary) but rather that she generally used roughly twice
background but not lower than 1000. Therefore, assays in which a
cut-off above 1000 was selected would be potentially inconsistent
only if they reflected background levels of below 500.

As an example, ORI pointed to the assays on I-1:101-09 (known as
the January fusion). 128/ ORI questioned the authenticity of
these data but nevertheless argued that they showed
Dr. Imanishi-Kari consciously using a different cut-off in that
the totals for the experiment shown on I-1:105 were initially
based on a 1000 cpm cut-off and then were crossed out and new
totals were written based on a 2000 cut-off. It is odd for ORI
to rely on this assay for any purpose while denying its
authenticity. In any case, the revision of the totals, if
anything, supports Dr. Imanishi-Kari's version of how she set
cut-offs. The background counts on I-1:104 ranged from 248 to
820. Hence, relying on the lowest background would yield a 1000
cut-off (since she testified she did not go below 1000) while two
and one-half the higher background would yield 2000. On
reflection, Dr. Imanishi-Kari apparently decided to use the more
conservative figure even though it reduced the level of idiotype-
positivity on the transgenic screen from 79% to 60%. See ORI
Report 107-08; R. Br. 51-52.

ORI also pointed to R-1:71-76, where Dr. Reis performed four
fusions using mice immunized with chicken gammaglobulin (three
with transgenic mice and one with a normal mouse), and a fifth
fusion with a normal unimmunized mouse, the last of which was
part of the normal data reported in Table 2. Cf. ORI Report 108-
09. 129/ The cut-off in the transgenic fusions was allegedly
5000 cpm while 1000 was used for the normal unimmunized portion.
130/ Dr. Imanishi-Kari responded that these transgenic assays
were not analogous to the transgenic fusion assay at R-1:31-34
because, in the experiments on mice immunized with chicken
gammaglobulin, "Dr. Reis was looking for transgene expression,
which would be manifested by very high cpms," whereas in the R-
1:31-34 assay, she "was looking for idiotype positives even from
endogenous genes," which would be manifested by lower cpms and
therefore only be picked up by a more sensitive cut-off level.
R. Br. 52, n.63. (We discuss the relation of the immunized
transgenic and unimmunized normal fusions further in relation to
charges concerning Table 2.) ORI did not contest this
representation in its reply brief, so we do not infer from the
use of a different cut-off in R-1:71-76 that the cut-off in R-
1:31-34 was not set by the method explained by Dr. Imanishi-Kari.

The Panel concludes that ORI did not prove charge one on Table 2
by a preponderance of the evidence.

2. Wells v. clones issue; "June subcloning" data

Table 2, charges two and three: ORI charged that
Dr. Imanishi-Kari's failure to disclose that "the frequency
values reported in Table 2 were derived from wells, not
individual hybridomas" was "a deliberate misrepresentation of
data." Charge letter 4. Further, ORI charged that
Dr. Imanishi-Kari committed scientific misconduct by her
"deliberate creation and reporting of the fabricated and
falsified data at I-1:124-128A [June subcloning] to support the
prior published results in Table 2. . . ." Charge letter 4.

Issues: As explained in relation to the first charge, Table 2
presented data comparing the frequency of certain characteristics
in the products of normal and transgenic "hybridomas." In the
text discussing this table, the paper stated: "Of the 172
idiotype-positive hybridomas, only 53 were IgM secretors.
The remaining 119 clones produced other Ig heavy chain
isotypes . . ." Cell paper 250 (emphasis added). It is not
disputed that the assays were performed on the supernatants of
parent wells of a fusion without further cloning to ensure that
they represented individual cell lines. This was clarified by
the authors in two letters of correction. Exs. H2 and H3. 131/
Dr. Imanishi-Kari contended that this was a problem in
terminology that was, at most, an error, but which was not
intended to mislead. ORI contended that not only was the
terminology a deliberate misrepresentation, but that
Dr. Imanishi-Kari submitted to NIH fabricated and falsified data
to show that the results of further subcloning supported the
frequency reported in Table 2.

Analysis: We deal first with whether the use of the terms
"hybridoma" or "clone" in the paper constituted a deliberate
misrepresentation. Next, we turn to whether the data ORI
referred to as the "June subcloning" data were intended to
support the Table 2 frequency results and, if so, whether ORI
proved that they were fabricated or falsified.

We note at the outset that the term "June subcloning" data used
by ORI is not technically accurate since the data are presented
as related to the first level of cloning from parent wells. This
shows how easily such terms can be misused. We use the term
"June subcloning" data below to avoid confusion since the parties
and their witnesses repeatedly used that term. We specifically
disclaim any implication that we consider this to be correct
usage of the term.

The terminology used for fusion products was erroneous and
somewhat overstated the frequency results, but was not
deliberate misrepresentation.

The NIH Scientific Panel reviewed the issue relating to
terminology in Table 2 and noted that the data, "contrary to the
impression given in the Cell paper, are the result of studies
performed on supernatant culture fluids from wells containing
hybridoma fusion products prior to limiting dilution cloning."
Ex. H262 (NIH Scientific Panel Report at 5). The effect of
reporting data on uncloned hybridoma fusion products was that the
percentage of idiotype-positivity was "likely lower" than
reported for the transgenic mice, perhaps between 20 and 40%.
Id. at 5-6. The NIH Scientific Panel concluded that the
presentation in Table 2 was "incorrect," but felt that the
results of the June subcloning data supported "the trend seen in
Table 2." Id. (NIH Scientific Panel Report at 6). Therefore,
NIH required the authors to publish the second correction and
include the subcloning data. Ex. H262 (Report to NIH Director,
January 18, 1989, at 13).

Another basis for estimating the significance of the error is to
consider the probable number of clones per well, since the more
wells that had only one clone, the closer the idiotype
frequencies reflect characteristics of individual clones. ORI
stated that "it is unrebutted that [the wells for the Table 2
transgenic fusion] contained, on average, about 1 clone per well
for the spleen wells and 1.5 clones per well for the lymph node
wells." ORI Br. 63; ORI Report 122-24. 132/ ORI's own
calculation was that the reported frequencies of 28% and 68% for
transgenic spleen and lymph nodes respectively would drop to 20%
and 50%. ORI FFCL X.J.12. A number of scientific experts
testified that a reduction of this magnitude would not affect the
central point of the paper (which is in accord with the NIH
Scientific Panel's conclusion that its calculation of the lower
percentages did not undercut the paper). Tr. 2770 (Wortis), 3116
(Marrack); Ex. H299, at 2. 133/ Since the difference in
frequency values reported was not so substantial as to affect the
main point of the paper, we conclude that it is not likely that
the terminology was chosen with an intent to mislead.
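The direction of ORI's correction can be illustrated with a toy model. This is our sketch, not ORI's actual calculation (the record before us does not set out the model behind the 20% and 50% figures); it assumes the number of clones in a growing well follows a zero-truncated Poisson distribution and that a well scores positive if any one of its clones is positive.

```python
import math

def per_clone_frequency(per_well_freq, mean_clones_per_well):
    """Invert a well-level positive frequency to a clone-level frequency,
    assuming (toy model) that clone counts per growing well follow a
    zero-truncated Poisson distribution and a well is positive if at
    least one of its clones is positive.  mean_clones_per_well must
    exceed 1."""
    # Find the Poisson rate lam whose zero-truncated mean matches the input.
    lo, hi = 1e-9, 10.0
    for _ in range(200):
        lam = (lo + hi) / 2
        if lam / (1 - math.exp(-lam)) < mean_clones_per_well:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    # P(well negative) = E[(1-f)^N | N >= 1]
    #                  = (exp(lam*(1-f)) - 1) / (exp(lam) - 1)
    lo, hi = 0.0, 1.0
    for _ in range(200):
        f = (lo + hi) / 2
        well_pos = 1 - (math.exp(lam * (1 - f)) - 1) / (math.exp(lam) - 1)
        if well_pos < per_well_freq:
            lo = f
        else:
            hi = f
    return (lo + hi) / 2

# With ~1.5 clones per well (the lymph-node estimate), an observed 68%
# well frequency implies a lower per-clone frequency (about 0.58 in this
# toy model; the record does not show how ORI reached its 50% figure).
print(round(per_clone_frequency(0.68, 1.5), 2))
```

Under these assumptions the implied per-clone frequency always falls below the observed per-well frequency whenever wells average more than one clone, which is the qualitative point both ORI and the NIH Scientific Panel made.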

Dr. Imanishi-Kari explained the reasoning process of the authors
in believing the frequency data on the products of the primary
hybridomas were significant. Even though "each well might
contain more than one clone . . . the striking finding was that
there were many wells that were idiotype positive and Bet 1 (μa)
negative. This indicated that these wells contained at least one
hybridoma clone producing idiotype and no clone producing
transgene." Ex. H255, at 2. Thus, from this point of view, an
additional clone in one of these wells was insignificant since it
must either be endogenous idiotype negative (and basically
irrelevant to the argument that idiotype-positive antibodies from
the transgenic hybridomas were mostly encoded from endogenous
genes) or endogenous idiotype positive (and hence further support
the argument). While reasonable scientists might differ in
interpreting these data, this summary persuasively demonstrates
that the decision to publish these frequency data was not
intended to deceive readers. 134/

This conclusion is bolstered by our independent review of the
notebooks generated in Dr. Imanishi-Kari's laboratory at the
relevant time. Not only Dr. Imanishi-Kari and Dr. Reis, but also
Dr. Weaver and Dr. O'Toole, frequently were inconsistent or
imprecise in the language used in their laboratory notebooks to
designate the level of cloning dilution with which they were
working in various fusion experiments. Thus, the terms "hybrid"
or "hybridomas" were used often to denote wells potentially
containing mixtures of clones; "subclone" and "clone" were used
indiscriminately, often to describe the same experimental
products. One example observed in Dr. O'Toole's O-2 notebook
uses the term "hybridomas" as a heading, even though products of
primary hybridoma wells are involved; and another experiment in
Dr. O'Toole's notebook dated August 26th (no year) is headed
"First cloning from parent hybridoma wells" and thereafter called
"subclones." 135/ Dr. Baltimore testified that he was aware that
Table 2 did not report on single clones before the paper was
published, but that, in the process of the collaboration in
drafting, he "certainly let it slip by." Tr. 2087-88.
Therefore, the error in terminology may be an outgrowth of
"laboratory jargon" that was not properly edited out. 136/

The uncertainty about precise terminology in this area was
highlighted by the disagreement between scientific expert
testimony and ORI's claim that the term "wells" should have been
used instead of "hybridomas." Dr. Marrack, asked what
"hybridoma" means, testified as follows:

A A hybridoma is a fusion between one cell and
another . . . .

Q And does the term hybridomas necessarily connote single

A No. Because I think we use the word hybridomas, at least
in my lab we do, to describe the process where, for example,
one's done a fusion then we are screening the wells visually
to see whether or not they've got hybridomas in them, and
you will say there's a well with a hybridoma in it, if
something is growing in it at that time. . . . And at that
time the object would certainly not have been cloned.

Tr. 3113-14. Asked if it would have been better to say
"idiotype-producing wells," Dr. Marrack testified that the term
"wells" would be wrong because "the wells are not the growing
organism that you're looking at. The cells themselves that are
in the well are the things that are producing the
antibody . . . . So the word hybridoma applies to the growing
fused cells that are living in the well." Tr. 3114. The point
is not that the terminology used was necessarily correct (at
least, the use of "clones" in the text of the Cell paper was
clearly inaccurate). Rather, the point is that, contrary to
ORI's contentions, it is plausible that mix-ups and differences
in the use of this terminology might occur even among experts in
the field and that the precise use of terms may vary in different
laboratories. Cf. ORI Br. 60-61.

Although ORI alleged that a "typical scientist reading" the Cell
paper would have believed that the frequencies reported were for
individual hybridomas, the scientists testifying at the hearing
were divided on this question. ORI FFCL X.J.8; see, e.g.,
Tr. 3115-16 (Marrack). Dr. Marrack testified that an assay on
this scale could never have been done with individual clones, and
"anybody with common sense would have known that you couldn't
possibly have cloned that many." Tr. 3115-16; Ex. H255, at 2.
ORI rejected this argument as unreasonably assuming that "all
possible readers of the paper are experts in the particular field
of serology . . . [and] have or expect resource limitations" like
those faced by Dr. Imanishi-Kari. ORI Br. 61. While such an
assumption would indeed be unwarranted, ORI's response misses the
point. Not all possible readers might be in a position to
discover the error, but it is less likely that the incorrect
terminology would be selected intentionally to deceive when the
high numbers of hybridomas being reported upon could in itself
cause any expert immunologist to question the table's
plausibility, especially since Dr. Imanishi-Kari can be assumed
to have known that many such experts would be in the audience of
this journal.

Since we do not find the June subcloning data to have been
fabricated and falsified (discussed separately below), we see no
reason to go beyond what the prior NIH Scientific Panel did in
correcting the misimpression created by the terminology in Table
2 and the text. Our review of the science supports the
conclusion that the language used in the paper was confusing and
the percentages were overstated by the implication that the
references were to single clones. Our review also supports the
conclusion that the error did not undercut the central point of
the paper and did not indicate intentional or deliberate
misrepresentation.

ORI did not prove that the June subcloning data were
fabricated or falsified.

Description of the questioned data: The results at issue which
have become known as the "June subcloning" data, as presently
organized, consist of nine pages of the I-1 notebook (I-1:124-
128A) which present the results of tests of supernatants of sets
of 12 "subclones" derived from each of 16 primary hybridoma wells
(from Table 2). The supernatants were tested on an anti-idiotype
coat with four detecting probes: anti-kappa/lambda; BET-1; AF6;
and anti-μ. The actual results are contained in four columns,
one for each detecting reagent, over seven pages. Page I-1:125A
presents results of tests against control proteins. Page I-
1:128A summarizes the assay results. The results for the tests
with the anti-light chain reagent are in the form of strips of
green counter tape in the left-most column. The results in the
other three columns are written by hand.

Scientific arguments

The authenticity of the June subcloning data became an issue
because they were submitted to the NIH Scientific Panel and
published at their recommendation in the correction letter to
confirm the frequency results. Ex. H3. ORI apparently agreed
that, if authentic, the June subcloning data support the Table 2
claim that the "vast majority of idiotype-positive hybridomas
were of endogenous isotype." ORI Br. 63.

The scientific significance of the data set is limited. First,
as noted above, internal evidence of the pauci-clonality of the
Table 2 wells confirms the trend in the results without reference
to any subcloning. In fact, Dr. Dahlberg testified that the
reason that the NIH Scientific Panel was distressed by the
terminology issue in Table 2 was "because they didn't appreciate
at the time what the level of clones per well was." Tr. 323-24.
Second, subcloning from selected primary hybridomas from wells
does not give direct information about the frequency of
characteristics in the parent population. The authors indicated
this in their response to the draft report of the NIH Scientific
Panel. Ex. H259, at 16 (November 28, 1988). While not objecting
to publishing the data, the authors stated that subcloning data
should be a supplement to, not a replacement for, the Table 2
data because the "subclone data are less significant, because
they are a subset." Id. If Dr. Imanishi-Kari were fabricating
data in 1988 to satisfy NIH's concerns, it would seem likely she
would either fabricate data that did represent a population, or
at any rate would not point out the limited value of the data she
had created. 137/ The data and her handling of them (as an
interesting follow-up experiment but not a substitute for the
Table 2 data) are more consistent with the view that she was
exercising scientific judgment than with fabrication.

Dr. O'Toole has asserted in several fora that Dr. Imanishi-Kari
told her that no subcloning of the Table 2 wells was ever done,
and that Dr. O'Toole therefore believed (before any forensic
testing) that the June subcloning data were fake. Tr. 6121-22
(O'Toole); Exs. H276, at 9, H258, at 3, H296, at 1-2, H282, at
12, H290, at 9. Dr. Imanishi-Kari has contended that Dr. O'Toole
"misconstrued" what Dr. Imanishi-Kari told her, "which is that no
serological tests on those wells other than those contained in
the `17 pages' were performed." Dr. Imanishi-Kari's Statement in
Response to Dr. Margot O'Toole's Record of Events at 4 (Part of
Ex. H298). ORI itself apparently was not persuaded by
Dr. O'Toole, since it asserted that "ORI does not contend that no
subcloning of these clones ever occurred; ORI simply contends
that the data at issue is not authentic subcloning data, if and
when such subcloning did occur." ORI Reply Br. 29. 138/

The conclusion that "subcloning" did take place is amply
supported on the record, not only by testimony of
Dr. Imanishi-Kari and Dr. Reis that they performed the
experiments, but also by the testimony of Dr. Baltimore that he
discussed the results with Dr. Imanishi-Kari before the paper was
published. Tr. 2105-08 (Baltimore). In addition, there is
credible evidence that the data now known as the June subcloning
data were available the first time Drs. Woodland, Wortis, and
Huber met with Dr. Imanishi-Kari on May 16, 1989.
Dr. Imanishi-Kari testified that she showed the June subcloning
data to them at that meeting, and all three testified that they
remembered seeing those data. Tr. 4963 (Imanishi-Kari), 2800-01
(Wortis), 1805-06 (Huber), and 1699 (Woodland). Dr. Huber
testified that she "definitely" remembered "that the subcloning
of these hybridomas from Table 2 was discussed" and that "Thereza
showed us radioimmunoassay data from subclones from these
hybridomas," and "to me, that was completely satisfactory." Tr.
1781. Dr. Woodland stated that they were focusing then on
whether the "original typing of the wells was a true depiction of
the constituents of the wells," which seems clearly to refer to
the issue of whether the use of data on parent wells was a fair
representation of the individual hybridomas in them. Tr. 1698.
Further, he specifically remembered that the data, when he saw
them, were in an "MIT notebook" with "brown covers and ruled
pages" which "look like graph paper." Tr. 1699. 139/ ORI
attacked Dr. Woodland's testimony on the basis that he had been
unable to identify the June subcloning data shown to him in 1990
by an investigator. Cf. ORI FFCL III.E.10; Tr. 1739 (Woodland).
However, by that time, the data were compiled into a completely
different format in the I-1 notebook. 140/ Tellingly, however,
the pages are still identifiable as being graph paper from an MIT
notebook of the kind Dr. Imanishi-Kari referred to as her "spiral
notebook," the brown cover of which is in the record. 141/

Furthermore, the interrelation of the June subcloning results
with other data in the notebooks supports Dr. Imanishi-Kari's
contention that these data did represent the results of that
cloning. Thus, certain clones are marked with red dots on R-
1:31-34 and then a note on R-1:35 explains that the red dots
represent selections for further cloning. Many of these selected
clones are the same ones then described on I-1:124. This
correlation supports testimony that the subcloning was a joint
effort by Dr. Imanishi-Kari and Reis. Tr. 4873 (Imanishi-Kari),
2535-36 (Reis).

The subcloning was triggered, in part, by the puzzling
inconsistency between the results of the assays on the main
fusion and the tests of selected supernatants assayed in the June
6th retest, in which some subclones that had been negative on
BET-1 gave positive results on the retest (and the difference
could not be attributed to sensitivity differences). Tr. 4879-
80. 142/ The further subcloning turned out to produce more
inexplicable results. Thus, Dr. Dahlberg pointed out that, of
the 16 parent wells from which subcloning was performed, 12 had
subclones producing antibodies showing either two or three
different isotypes. 143/ Since each well was
unlikely to have more than one or two hybridomas (because the
clonality of the wells was estimated, as mentioned above, at 1-
1.5 hybridomas per well), these isotype results seem unlikely.
See ORI Br. 63. However, from the estimation technique, it was
predictable that some of the wells would have two or more clones,
which would explain at least some of the diversity in isotype.
Also, Dr. Imanishi-Kari suggested that many of these strange
results may be explainable today as a result of the later
research mentioned, as well as work showing a lot of reverse
binding by antibodies from transgenic mice, which was not
available in 1985. Tr. 4876-79; Ex. H148; R. Br. 55. In any
case, whether or not these explanations are correct, the
likelihood of fabricating data that on their face appeared
impossible at the time seems remote.
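The point that the estimation technique itself predicts some multi-clone wells can be made concrete with the same kind of toy model (ours, not one from the record): if growing wells average about 1.5 clones under a zero-truncated Poisson assumption, a substantial minority of wells are expected to hold two or more clones.

```python
import math

def prob_multiclone(mean_clones_per_well):
    """P(a growing well holds two or more clones), assuming (toy model)
    that clone counts are Poisson and wells with zero clones do not
    grow (zero truncation).  The mean must exceed 1."""
    # Solve lam / (1 - exp(-lam)) = mean by bisection.
    lo, hi = 1e-9, 10.0
    for _ in range(200):
        lam = (lo + hi) / 2
        if lam / (1 - math.exp(-lam)) < mean_clones_per_well:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    # P(N >= 2 | N >= 1) = 1 - P(N = 1) / P(N >= 1)
    return 1 - lam * math.exp(-lam) / (1 - math.exp(-lam))

# At ~1.5 clones per growing well (the lymph-node estimate), over a
# third of growing wells would be expected to hold more than one clone.
print(round(prob_multiclone(1.5), 2))   # 0.37
```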

Dr. Dahlberg highlighted the improbability of the diversity of
the subclones derived from the original pauci-clonal wells as
further evidence of the lack of authenticity of these data.
Tr. 346-47; ORI Br. 62-63. However, no suggestion has been
offered why creating these then-inexplicable results could
possibly strengthen either the Cell paper or Dr. Imanishi-Kari's
position in the investigation (since ORI claims the June
subcloning data were created in that process in either 1986 or
1988 for the latter purpose). We find that the inexplicable
nature of the results is, if anything, evidence that it is
unlikely that Dr. Imanishi-Kari would fabricate such results.

Thus, a consideration of the scientific issues and the nature of
the data leads us to conclude that the results are likely to be
what they purport to be, absent clear forensic proof to the
contrary. We turn therefore to the forensic arguments presented
by ORI, first the statistical analysis of the data and then the
document examination of the counter tapes by the Secret Service.

Statistical arguments

ORI's statistical analyses of the June subcloning data and
other data are not evidence from which it is reasonable to
infer fabrication here. In general, ORI's statistical
analyses were flawed.

ORI relied on certain statistical analyses which ORI said were
compelling evidence, entitled to great weight, that the
handwritten counts from the June subcloning data were fabricated
and that other "questioned" data sets were also fabricated. ORI
Br. 72. We discuss these analyses primarily with respect to the
June subcloning data, but the key reasons why we find this
evidence unreliable apply to all of the sets of questioned data.

The statistical analyses were performed by Dr. Mosimann, a
statistical expert who was employed by ORI and OSI as a
biostatistician and consultant and who, as a member of the OSI
Scientific Review Panel, interviewed Dr. Imanishi-Kari and other
witnesses during 1990. See, e.g., Exs. H103, H104, H116, H118.
Dr. Mosimann's analyses consisted primarily of two types:
testing whether "low background counts" (defined as counts below
600) "fit" a Poisson distribution mixture model and performing a
uniform digit distribution analysis. Other tests, such as
fitting data to a negative binomial distribution model and using
a "spikiness index" (developed by Dr. Mosimann specifically for
this case) were not independent tests. See, e.g., Tr. 845
(Barron), 1633-34 (McClure), 2219-24, 2244-45, 2247-49 (Speed).
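For readers unfamiliar with the technique, a uniform digit distribution analysis asks whether the trailing, non-significant digits of recorded counts are evenly distributed, as counts generated by an instrument ordinarily would be. The sketch below illustrates the general method with a Pearson chi-square statistic and made-up readings; it is not a reconstruction of Dr. Mosimann's actual procedure.

```python
from collections import Counter

def terminal_digit_chisq(counts):
    """Pearson chi-square statistic for uniformity of the last digit of
    each recorded count.  Large values suggest non-uniform trailing
    digits.  A sketch of the general technique only."""
    digits = [abs(int(c)) % 10 for c in counts]
    n = len(digits)
    freq = Counter(digits)
    expected = n / 10.0
    return sum((freq.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Made-up cpm readings, not data from the record.  Compare the statistic
# against a chi-square critical value with 9 degrees of freedom (about
# 16.9 at the 5 percent level).
readings = [1105, 1342, 2087, 1519, 1733, 1248, 1891, 1456, 1620, 1374]
print(terminal_digit_chisq(readings))   # 0.0: perfectly uniform last digits
```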

ORI's other statistical expert, Dr. Barron, is an associate
professor at American University, where Dr. Mosimann is an
adjunct professor. Dr. Barron performed only limited analyses,
generally using data and software programs provided by ORI. Tr.
837, 842-45, 1192, 1202-04 (Barron). He confirmed that Dr.
Mosimann's calculations were generally accurate, and the
calculations are not in dispute. Dr. Barron did not, however,
agree with Dr. Mosimann's opinion that the statistical analyses
alone support a conclusion of fabrication; instead, Dr. Barron's
opinion was that the statistical analyses would support a
conclusion that the data were not generated randomly, but were
fabricated, only if the scientific and forensic evidence is
compelling. Compare Tr. 801-02 (Mosimann) with Tr. 830-37, 854,
5019 (Barron).

Moreover, Dr. Barron indicated that the validity of the analyses
depended on the validity of the underlying scientific
assumptions, and he expressed no opinion on the validity of those
assumptions. Tr. 835-36, 1187, 1198, 1212 (Barron). As we
explain below, ORI did not prove that its underlying assumptions
were valid, either through its testimony, through its use of
unquestioned data sets as alleged "controls," or through other
evidence.

Dr. Imanishi-Kari's expert, Dr. Speed, is a professor at the
University of California at Berkeley, and appeared for her
without compensation. He contradicted Dr. Mosimann on the
validity of the assumptions underlying the analyses, on the
validity of some of the techniques, and on the reasonableness of
the inferences drawn from the analyses. Both Dr. Mosimann and
Dr. Speed have considerable experience in applying statistics in
the context of experimental science. Contrary to what ORI
argued, we see no reason to accord Dr. Mosimann's testimony
greater weight based on his experience. (See our note on this
above.) Generally, we found Dr. Speed's testimony to be more
persuasive than Dr. Mosimann's.

To a certain extent, ORI's statistical analyses all flow from
characteristics evident on the face of the June subcloning data
which distinguish these data from the other data selected as
"controls." In this section, we discuss the different
statistical analyses and related evidence in detail and conclude
that there were flaws in the analyses and problems with relying
on the "controls." We emphasize, however, that ORI's conclusions
are premised on treating the handwritten counts in the June
subcloning data as though they purport to be a complete set of
counts transcribed precisely as they appeared on gamma counter
tapes, except for rounding to the nearest tens. This presumption
is unwarranted. Evidence elsewhere in the notebooks indicates
that Dr. Imanishi-Kari was erratic in her data recording and
rounding patterns, so it is unreasonable to assume that if
transcribing real data, she would have consistently rounded to
tens (and, indeed, two background counts in the handwritten data
end in five, rather than zero, and others may have been rounded
to the nearest hundred). Numbers appear to be missing, moreover,
in a few places for one or two of the μ probes where values are
given for the other reagents. Also, the print is sometimes very
obscure on the counter tapes, so the tapes may have been
difficult to read.

More important, Dr. Imanishi-Kari would have no apparent
incentive for transcribing the results precisely, particularly
with respect to the low counts used for most of the statistical
analyses. Dr. Mosimann acknowledged that a researcher recording
values in this type of experiment would be interested in the
relative magnitude of the readings, rather than the precise
values. Tr. 733, 739, 811, 1143. In this particular experiment,
there are marks such as X's or asterisks indicating particular
values, and this suggests that not all of the values were of
interest to her. Other researchers in the laboratory transcribed
counts from tapes without purporting to transcribe precise
values, so Dr. Imanishi-Kari was not unique in this respect. 144/
In other words, there are equally plausible explanations other
than fabrication for why these data do not look like numbers
precisely transcribed from a gamma counter tape.

Thus, even if the statistical analyses were more reliable than we
find them, we would not infer from those analyses alone that the
June subcloning data were fabricated. In light of the evidence
as a whole in this case, the more reasonable inference is that
Dr. Imanishi-Kari was recording real data from a counter tape,
without intending to be exact, rather than that she was
inventing fake values.

We note that we do not consider it suspicious here that the data
in question are handwritten and that the low values might not be
precisely transcribed. Arrows indicating subclones with positive
values are handwritten to the left of the counter tape on these
pages (the column indicating a screen for antibody with a
kappa/lambda probe). The handwritten data in the three
additional columns (labeled BET-1, AF6, and anti-μ) correspond to
the subclones with arrows. It makes sense to interpret the
arrows as indicating which subclones were producing idiotype-
positive antibodies and which therefore were subsequently tested
with the three anti-μ reagents. If only supernatants from the
highlighted subclones were tested with the three μ probes, it
would be reasonable to handwrite the μ results so they would line
up with the kappa/lambda results for those subclones, rather than
cutting and pasting many small pieces of counter tape. 145/

ORI did not establish that the Poisson distribution mixture
model is a valid model, properly applied here.

The first part of ORI's statistical analyses of the June
subcloning data assumes that gamma counter readings under 600
from different wells of a "fusion experiment" are background
counts which should follow a mixture of Poisson distributions.
146/ Based on this assumption, Dr. Mosimann used a test (the
chi-square goodness-of-fit test) to determine probabilities of
whether certain data sets fit Poisson distribution models of up
to nine mixtures. He applied this test to 265 low script counts
of the June subcloning data (on the theory that these were
background counts), low counts from other questioned data, and
low counts from six sets of unquestioned data from notebooks of
Dr. Reis and Dr. Imanishi-Kari (five sets from tapes and one that
is handwritten) that he treated as "controls."

For the following reasons, we conclude that ORI did not prove the
validity of the use of a Poisson distribution mixture model:

 ORI relied primarily on evidence showing that it is a well-
settled principle that repeated readings from a single,
constant radioactive source over a period of time will
follow a single Poisson distribution. Tr. 629, 633-36
(Mosimann), 1481 (Gilbert), 2189 (Speed); ORI Report, App.
B, at B-7. While this principle is generally accepted, the
issue here is whether a collection of single readings from
numerous sources (such as the wells on a microtiter plate in
a radioimmunoassay) should be expected to follow a mixture
of Poisson distributions. 147/

 Contrary to what ORI stated, the Beckman gamma counter
manuals do not state that the counter uses the Poisson
distribution model for counts from multiple sources. ORI
Br. 73, citing Ex. H525, at 41, 51. Rather, the manuals
simply refer to a unique characteristic of the Poisson
distribution: the fact that standard deviation of a Poisson
distribution is the square root of the mean of values in the
distribution. 148/ The user of the gamma counter can thus
estimate the error of a count, the amount by which a single
reading from a sample (as in one well of a microtiter plate)
can deviate from the actual, average, counts-per-minute
value of that sample. Ex. H526, at 31; Ex. H525, at 51.
The manuals do not state that low, single counts from the
multiple wells of a microtiter plate will follow mixtures of
Poisson distributions.

 Dr. Mosimann admitted that he was not aware of any studies
to determine whether readings from the multiple wells on a
microtiter plate fit Poisson mixture models. Tr. 5903-04,
5912. While Dr. Mosimann described the Poisson distribution
as quite standard, he was referring to an experiment showing
that multiple readings from a single, constant radioactive
source fit a single Poisson distribution. Tr. 633-36.
Dr. Barron admitted that whether the low counts from the
questioned pages should follow Poisson distributions was a
serious question. Although he had looked in some textbooks
to "see what I could find," he described the mixture version
of the Poisson distribution as "hard to find anywhere" and
he cited no scientific source establishing the validity of
the mixture model. Tr. 1198. ORI referred to a two-mixture
model in statistical literature; Dr. Speed described this
reference as the only example of a Poisson mixture model of
this kind that he has seen in the literature, and stated
that he did not believe that statisticians commonly use
Poisson mixture models as ORI used them in this case. ORI
Br. 73; Tr. 2195.

 Dr. Speed testified that there was no compelling scientific
reason why gamma counter readings from different wells
should fit a Poisson mixture model, in contrast to the
strong empirical and scientific reasons why repeated counts
from a single radioactive source should fit a single Poisson
distribution. Tr. 2216-18, 6084-88. We do not find his
testimony to be outweighed by that of ORI's scientific
witnesses. While Dr. Gilbert is a physicist and a Nobel
prize-winning biologist to whose opinion we would ordinarily
give great weight, the testimony elicited by ORI from Dr.
Gilbert about applying the Poisson mixture model to the
assays here did not clearly set out either an empirical or
theoretical basis for his opinion that directly relates to
the issues here. 149/ Dr. McClure's testimony on this point
was conclusory, without a clear foundation. See, e.g., Tr.
1622-23 (McClure).

 The testimony of ORI's witnesses as a whole exhibited
considerable uncertainty about the theoretical underpinnings
for the use of the mixture models. For example, Dr. McClure
testified that mixtures were used because there were three
different assays performed and different reagents used. Tr.
1622-23 (McClure). Dr. Mosimann discussed mixtures in the
context of different readings resulting from different
"cooking" times of different wells on the same plate
containing the same biological preparation. Tr. 738-39
(Mosimann), see also Tr. 1540 (Gilbert). Also, Drs.
Mosimann and Gilbert disagreed over whether non-background
counts from multiple sources would be expected to follow
Poisson distributions. Tr. 1132-34 (Mosimann), 1542 (Gilbert).
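
The distinction the witnesses drew can be illustrated with a short
simulation, a sketch in Python whose rates and sample sizes are
illustrative and not taken from the record: repeated readings from
one constant source follow a single Poisson distribution, whose
standard deviation equals the square root of its mean (the property
the counter manuals cite), whereas single readings from many wells
of differing activity form a mixture with no preset number of
components.

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    # Knuth's method: multiply uniform draws until the product falls
    # below exp(-lam); the number of multiplications is Poisson(lam).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Undisputed case: repeated readings from ONE constant source.
reads = [poisson(400) for _ in range(5000)]
m = statistics.mean(reads)
s = statistics.stdev(reads)
# Poisson property cited in the counter manuals: stdev ~ sqrt(mean).
print(f"mean={m:.1f}  stdev={s:.1f}  sqrt(mean)={math.sqrt(m):.1f}")

# Disputed case: ONE reading each from many wells whose true
# activities differ -- a mixture with no preset number of components.
well_means = [random.uniform(50, 550) for _ in range(265)]
single_reads = [poisson(mu) for mu in well_means]
```

Nothing in such a simulation forces the single-well readings to fit
a mixture of nine or fewer Poissons; that is precisely the
assumption the Panel found unproven.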

Dr. Mosimann's conclusion that low counts from certain data sets
from Dr. Reis and Dr. Imanishi-Kari (which ORI called
unquestioned data) "fit" Poisson mixture models of nine or less
mixtures does not establish the validity of the model, for the
following reasons:

 Only one of the controls "fit" a Poisson mixture model with
a chi-square probability greater than 50% (specifically,
50.7%), and the others had chi-square probability values
less than 20% (including two which had chi-square values
less than 5%, although these sets were later found to fit
mixture models through computer simulations). Exs. H413,
H414, H754, H806; Tr. 5671 (Mosimann); ORI letter enclosing
statistical tables supporting Exs. H413-H414, September 28,
1995. While chi-square probabilities greater than 5% were
sufficient so that Dr. Mosimann's definition of "fit" would
apply, ORI did not show that a statistician would ordinarily
conclude from these probability values that the underlying
model was valid. 150/ See, e.g., Tr. 6004-06 (Speed).

 Historical controls such as those used here are generally
not as trustworthy as independent contemporaneous controls.
Tr. 6006-10 (Speed). Also, control data sets should be
comparable data sets in order to be valid controls. Tr.
1199-1201 (Barron). Each of the "controls" here is data
from a gamma counter reading of a radioimmunoassay, as the
June subcloning data purports to be. Only one of the
controls is transcribed from a tape, however, and that
transcription purports to be an exact transcription, with no
rounding at all. R-2:114-124. Some of the controls would
have involved fewer preparations and reagents than the June
subcloning assays; if we adopt ORI's various theories of
reasons to expect Poisson mixtures, this would suggest that
these control data sets would more likely fit a mixture
model of less than nine mixtures than the June subcloning
data, even assuming the mixture model is reliable generally.

 Dr. Mosimann provided no scientific explanation for why he
selected particular data sets for controls. With respect to
a different analysis, he indicated he chose "fusion
experiments" from the same period and same laboratory. Tr.
717-18. The controls here, however, include two assays on
mouse sera, rather than supernatants from fusion products.
I-1:48-58; R-2:114-124. Also, the record includes other
unquestioned data sets for assays of fusion products for
which no Poisson mixture analysis was presented to us (such
as Dr. Weaver's data used as a control for different
statistical analyses). Ex. H402. ORI did not explain why
analyses of these data sets using the Poisson distribution
mixture model were not presented to us.

Even assuming that low counts from different wells in all
radioimmunoassays should follow Poisson mixture models, we would
not find ORI's analyses reliable as a basis for inferring
fabrication. Dr. Speed's testimony, which we found convincing,
raised other significant questions about the reliability of
ORI's analyses.

Dr. Speed testified that a significant amount of subjective
judgment is involved in determining how to "bin" data to perform
the chi-square goodness-of-fit test used by Dr. Mosimann and in
deciding how many mixtures of Poissons to use. Tr. 2193-94,
2205-12. ORI did not rebut this testimony directly, but defended
the reasonableness of Dr. Mosimann's judgments, in part with
testimony from Dr. Barron. Tr. 5033-37 (Barron). Dr. Barron
also testified, however, that selection of bins is "to some
extent . . . arbitrary" and that he could not be sure he would
have made the same choices as Dr. Mosimann. Tr. 1182-83, see
also Tr. 5657 (Mosimann). Dr. Barron also said that one of the
difficult problems is knowing how many mixtures to use in the
model and that, while Dr. Mosimann tried to fit the data to
possible mixtures from one to nine successively, the reason Dr.
Mosimann did not try more than nine mixtures was because of
computer limitations. Tr. 1189-90. ORI did not establish that
using only nine mixtures or less was scientifically appropriate
in this case.

Even assuming the judgments made here were reasonable, however,
the record indicates that a different exercise of reasonable
judgment may have given different results. Dr. Mosimann defended
the judgment to "bin" the data by tens because most of the low
counts appeared to have been rounded to tens. Tr. 1233-34.
Dr. Speed said this binning was inconsistent with the general
rule that numbers must be binned so that there is a minimum
expectation of five values in each bin, and that even a variant
of the rule cited by ORI (80% of the bins should be greater than
or equal to five values and all should have at least one value)
was violated with respect to nine out of ten data sets analyzed
by Dr. Mosimann. 151/ Tr. 2334, 2939-40; Ex. H414. Dr. Speed
presented evidence that if the rule were followed and bins of 30
were used instead, one of the questioned data sets would fit the
Poisson distribution model. Tr. 2207-10; Ex. R59. In rebuttal,
Dr. Mosimann (1) asserted that it is not always necessary to
follow the rule of 5 and argued that applying the rule of 5 here
lessened the effectiveness of the chi-square test; (2) provided
evidence of computer simulations to overcome the problems the
rule was intended to address; and (3) questioned Dr. Speed's use
of bins of 30. Tr. 1226-29, 5657-60, 5671-80, 5900-02; Exs.
H414, H754, H806. This rebuttal was not as persuasive as Dr.
Speed's response. More important, it did not effectively
undercut the reasonableness of following the general rule and
binning at 30, to achieve a minimum expectation of five.
Moreover, with respect to two control sets, the computer
simulations reached results different from the chi-square
analyses. The fact that different reasonable analyses can reach
different results raises substantial questions about relying on
such analyses to establish fabrication of data. 152/
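
The sensitivity to binning can be sketched as follows (a Python
sketch; the counts are synthetic stand-ins rather than the notebook
data, and the flat expected distribution is a deliberate
simplification of the fitted Poisson-mixture expectations, used
only to show how the choice of bin width changes the statistic and
the rule-of-5 check):

```python
import random
from collections import Counter

random.seed(2)

# Synthetic stand-in for low counts rounded to tens (NOT record data).
counts = [max(10, round(random.gauss(300, 80) / 10) * 10)
          for _ in range(100)]

def chi_square_stat(values, width):
    """Pearson chi-square statistic against a flat expectation,
    one bin per `width` units (a simplified null, for illustration)."""
    lo, hi = min(values), max(values)
    nbins = (hi - lo) // width + 1
    observed = Counter((v - lo) // width for v in values)
    expected = len(values) / nbins
    stat = sum((observed.get(i, 0) - expected) ** 2 / expected
               for i in range(nbins))
    return stat, expected

stat10, exp10 = chi_square_stat(counts, 10)
stat30, exp30 = chi_square_stat(counts, 30)

# The "rule of 5": every bin's expected count should be at least 5.
print(f"width 10: stat={stat10:6.1f}  expected/bin={exp10:.2f}")
print(f"width 30: stat={stat30:6.1f}  expected/bin={exp30:.2f}")
```

Widening the bins raises the expected count per bin (toward
satisfying the rule of 5) and changes the statistic for the very
same data, which is the element of subjective judgment Dr. Speed
described.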

Dr. Mosimann tried to buttress the chi-square analysis of whether
the June subcloning data fit to Poisson mixture models by
examining whether these data "fit" a negative binomial model, by
developing a "spikiness index," and, later, by his computer
simulations. None of the additional tests performed goes to the
validity of the underlying assumptions.

Dr. Mosimann's rationale for using the negative binomial was not
any independent basis for believing that the data should "fit"
the negative binomial model. Rather, it was based merely on the
fact that the negative binomial is related to the Poisson
distribution. Thus, we find that it is not an independent test
and is based on the same questionable assumptions as the Poisson
mixture model. We also note the following about the negative
binomial model:

 Dr. Speed testified that there is no scientific reason to
think that the data here should fit a negative binomial
model. Tr. 5998.

 While ORI's witnesses suggested that data with any number of
Poisson mixtures should fit the negative binomial, Dr.
Speed's testimony was that use of this model was not
equivalent to testing the June subcloning data against
possible mixtures with more than nine components. Tr. 759,
1139-41 (Mosimann), 1189-90 (Barron), 5996-6000 (Speed).
Indeed, ORI's own analysis would belie such a conclusion
since one of the unquestioned data sets (R-2:114-124) fit a
Poisson mixture, but not the negative binomial. Ex. H414.

 Dr. Mosimann's own testimony raises questions about his use
of the negative binomial model. He characterized the
Poisson mixture models as multimodal, and criticized Dr.
Speed's use of larger "bins" in analyzing the data on the
grounds that it could cause a multimodal distribution to
become unimodal. Tr. 5656-60. He described the resulting
unimodal distribution as not descriptive of the process
going on in the microtiter plates. Tr. 5900-02. However,
he also testified that the negative binomial distribution is
unimodal. Tr. 5812.

On the whole, we consider the negative binomial analyses an
unsuccessful attempt to compensate for the problem that ORI's
computer could generate Poisson distributions only up to nine
mixtures. ORI thus did not effectively rule out the possibility
that Dr. Imanishi-Kari's data would fit a Poisson distribution
model of more than nine mixtures.

Similarly, Dr. Mosimann in effect acknowledged that he used the
spikiness test because he was concerned with certain aspects of
the use of the chi-square goodness-of-fit test (which gives
credence to Dr. Speed's concerns). Tr. 766, 1232, 5668, see also
2341 (Speed). Dr. Mosimann's "spikiness index" measures the
exposed "sides" or "walls" in a histogram of values such as gamma
counter readings; a graph with marked, separated peaks and
valleys will have a higher relative spikiness index than one with
a flatter, more even distribution of values. Tr. 748-53
(Mosimann). Dr. Mosimann reported that the questioned data
exhibited spikiness greater than either the unquestioned data or
simulated data based on Poisson mixture and negative binomial
models. Tr. 767-71; Ex. H416.

The spikiness test is an untested method invented specifically for
this case, not a commonly accepted statistical technique. Dr.
Barron said that he considered spikiness an "interesting"
statistic and "an imaginative, reasonable way of getting around
the notion that the arbitrariness of the chi-square test might
lend some criticism" to Dr. Mosimann's work. Tr. 1190. Dr.
Barron said that he had not heard the term "spikiness" until he
read the ORI Report and that the spikiness test was not a
standard statistical test. Tr. 843-44, 1181. In any event, as
Dr. Speed noted, the spikiness index merely quantifies a feature
evident from graphs of the June subcloning data.
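
Because the record does not give Dr. Mosimann's formula, the Python
sketch below is a hypothetical reconstruction of what such an index
might measure: the total "exposed wall" length of a histogram, which
is larger for tall, separated peaks than for an even spread of the
same values.

```python
from collections import Counter

def spikiness(values, width=10):
    """Hypothetical 'spikiness': sum of absolute height differences
    between adjacent histogram bins, counting the outer edges.
    This definition is an assumption, not Dr. Mosimann's formula."""
    lo, hi = min(values), max(values)
    nbins = (hi - lo) // width + 1
    h = Counter((v - lo) // width for v in values)
    heights = [h.get(i, 0) for i in range(nbins)]
    walls = [0] + heights + [0]       # pad so outer walls are counted
    return sum(abs(a - b) for a, b in zip(walls, walls[1:]))

# Spiky data: isolated, repeated values -> tall separated bars.
spiky = [100] * 20 + [250] * 20 + [400] * 20
# Flat data: the same range covered evenly.
flat = list(range(100, 401, 5))
print(spikiness(spiky), spikiness(flat))
```

Under this reconstruction the index does no more than quantify the
peaks and valleys already visible in a graph of the data, which is
the point Dr. Speed made.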

The computer simulations, moreover, do not compensate for basic
flaws in applying the Poisson mixture model to the June
subcloning data. The simulations do not establish the underlying
assumptions. Moreover, the simulations were based on using a
five-mixture model for these data, which Dr. Mosimann had chosen
as the closest fit to the June subcloning data based on his chi-
square results, which, as discussed above, involved considerable
subjective judgment. Exs. H754, H806.

ORI's uniform digit distribution analysis is not evidence
from which we would infer fabrication.

The second set of analyses that Dr. Mosimann performed on the
June subcloning data was called "uniform digit analysis" or
"uniform digit distribution analysis." This analysis assumes
that randomly generated digits will follow a discrete uniform
distribution. Studies co-authored by Dr. Mosimann found that
people trying to generate three-digit lottery numbers that would
look as though they were randomly generated could not effectively
do so, because their numbers reflected a digit preference. For
this case, Dr. Mosimann analyzed the frequency of the appearance
of digits from 1-9 in the June subcloning data (high and low
counts) and in the "control" data, and determined that, unlike
digits from the control data, digits from the June subcloning
data did not fit the uniform digit distribution model. He
concluded that the June subcloning data were fabricated because
they showed a preference for digits 1, 3, 7, and 8 (although
another of Dr. Imanishi-Kari's handwritten data sets showed
different preferences). Exs. H409, H410, H411.
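
The digit-selection rule described in the record (the rightmost
digit that is neither a zero nor the leftmost digit) can be
sketched in Python as follows; the sample counts are invented for
illustration and are not taken from the notebooks:

```python
from collections import Counter

def analyzed_digit(n):
    """Rightmost digit that is not a zero and not the leftmost digit,
    per the description of the analysis in the record; returns None
    when no such digit exists."""
    s = str(n).rstrip("0")
    return int(s[-1]) if len(s) > 1 else None

# The quirk the Panel noted: the 8 in 800 is excluded as a leftmost
# (significant) digit, while the 7 in 27,000 is analyzed as an
# uninformative terminal digit.
print(analyzed_digit(800), analyzed_digit(27000))

# Tallying analyzed digits across an invented set of counts; a
# uniform digit test would compare this tally to a flat expectation
# over the digits 1-9.
counts = [137, 283, 471, 318, 27000, 800, 193, 251]
tally = Counter(d for n in counts
                if (d := analyzed_digit(n)) is not None)
print(dict(sorted(tally.items())))
```
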

While ORI presented this analysis as a commonly accepted
statistical technique for determining fabrication of data, the
statistical authorities referred to in the record do not support
this conclusion, for the following reasons:

 These authorities support only a conclusion that a departure
from uniformity, in many cases, indicates a personal
preference for digits and therefore some form of human
intervention. Ex. H200, at 3. As we discussed above, the
handwritten counts in the June subcloning data do not
purport to be data with no human intervention, and we do not
find from the circumstances here that the obvious human
intervention makes it more probable than not that the data
were fabricated.

 These authorities, as described in Dr. Mosimann's own
article, are based on an expectation of uniformity in what
are referred to interchangeably as "terminal" or "error" or
"rightmost" digits that do not contain any information. Ex.
H200; Tr. 1141-43 (Mosimann). The analyses performed here,
however, were not on the rightmost digits, but were on the
rightmost digits that were not a zero and that were not the
leftmost digit. Exs. H636, H637, H638; Tr. 784-85 (Mosimann).
ORI presented no evidence that analysis of these digits (as
opposed to "rightmost" digits) is a commonly accepted
statistical technique.

 The previous analyses by authors other than Dr. Mosimann are
described as analyzing digits that do not contain
information. The judgments made here about what digits were
not significant digits (that is, which digits did not
contain information) are questionable, since this analysis,
for example, would consider the 8 in 800 to be significant,
but not the 7 in 27,000. See, e.g., Tr. 786 (Mosimann),
2234 (Speed).

 ORI did not provide testimony from Dr. Barron, ORI's other
statistical expert, to support a conclusion that the
particular analyses used by Dr. Mosimann were a commonly
accepted statistical technique for determining fabrication
of data. Instead, Dr. Barron testified that he would not
infer fabrication from the statistical tests used here
unless there was other compelling evidence. Tr. 830, 5019.

ORI's reliance on "control" data sets here is also misplaced,
since most of those sets are data directly from counter tapes
which do not contain transcribed or rounded numbers, as the June
subcloning data do. Ex. H420. Dr. Mosimann tried to overcome
the rounding problem by first subjecting the data to a rounding
protocol. The record shows, however, that this rounding protocol
does not in fact mimic Dr. Imanishi-Kari's rounding behavior, as
shown by a set of data which are handwritten, but for which the
counter tape is also available (I-1:20-21; I-1:12-19). 153/ Tr.
2226-30 (Speed); Exs. H419, R61. Dr. Mosimann's assertion that
the computer rounding protocol was not intended to emulate
Dr. Imanishi-Kari's rounding behavior exactly and was not, for
example, concerned with whether she rounded up or down, was not
consistent with his testimony that the effects of the computer
rounding protocol were quite consistent with Dr. Imanishi-Kari's
known rounding behavior. Tr. 1125, 5694.
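
The mismatch can be illustrated with a toy protocol in Python; both
the round-to-tens rule and the "handwritten" values below are
hypothetical, since the record does not reproduce the actual
protocol or the underlying counts:

```python
def protocol_round(n):
    # Hypothetical round-to-nearest-ten rule standing in for the
    # computer rounding protocol described in the testimony.
    # (Python's round() sends exact halves to the even multiple.)
    return round(n / 10) * 10

tape_values = [247, 315, 1264, 88]   # invented counter-tape readings
protocol = [protocol_round(v) for v in tape_values]
# An idiosyncratic transcriber might round some values to fives and
# some to hundreds -- the behavior the Panel found in the notebooks.
handwritten = [245, 315, 1300, 90]
mismatches = sum(p != h for p, h in zip(protocol, handwritten))
print(protocol, "mismatches:", mismatches)
```

A protocol that does not reproduce the transcriber's actual rounding
can make genuinely transcribed data look anomalous under a digit
analysis.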

In sum, there are some questions about the uniform digit analysis
as applied here, but, even accepting this technique as valid, we
would not infer from this analysis alone that the questioned data
are fabricated. These statistical analyses fail to show that the
handwritten data were the result of intentional fabrication
rather than idiosyncratic behavior in recording or rounding data
where the digits studied were admittedly of little scientific
significance.

The Panel concludes that the statistical analyses are not
reliable evidence that the June subcloning data or other
questioned data were created by fabrication or falsification as
alleged by ORI.

Document examination arguments

ORI's only document examination attack on the June subcloning is
related to the green counter tapes on the left-hand side of the
pages (except I-1:124A and 128A which have no tapes). 154/ The
Secret Service found no match for the printer ribbon ink with
tapes from dates around those on the pages in the I-2 notebook
which they believed came from the same printer. 155/ On the
other hand, the Secret Service found a "full match" between these
tapes and tapes derived from a Maplethorpe "notebook" with dates
between November 26, 1981 and April 19, 1982 (as well as with the
green tape at I-1:103 discussed in relation to the January fusion
below). ORI Br. 63; Exs. H501, at 2, H502, at 3, H503.

Fabrication would have been more likely as an explanation
for the green tapes in the June subcloning if such tapes did
not appear in other researchers' notebooks until 1986 or
later rather than gradually disappearing much earlier.

ORI's argument that tapes containing part of the June subcloning
data were probably from before 1985 and most likely as old as
1981 or 1982 is less than persuasive. Obviously, it would be
much more probative of fabrication of data purporting to be from
mid-1985 if ORI had shown that such tapes did not become
available until after the experiments were supposed to have
occurred. 156/

ORI had two different possible scenarios for the timing of the
alleged fabrication of these pages. One theory was that
fabrication might have occurred in 1988. This was based on the
fact that the question of whether the Table 2 hybridomas had been
subcloned was raised during the first day of a 1988 interview of
Dr. Imanishi-Kari by NIH representatives in Boston and the June
subcloning data were produced the next day. ORI Br. 62 and n.43.
A second theory was that the fabrication occurred in 1986 to deal
with Dr. O'Toole's charges when meeting with Dr. Wortis (although
ORI suggested that this was less likely because Dr. O'Toole was
not shown the June subcloning data then). Id. at 23-24, 62,
n.43. In either case, ORI argued that Dr. Imanishi-Kari
fabricated the June subcloning (and the January fusion experiment
discussed later) "from green tapes she had lying around" and
looked for "tapes with numerical values that she could use, in
the process paying no attention to the color of the tapes or the
numerical interrelationship of the numbers within the tapes."
ORI Reply Br. 20. Then, she "made up" the handwritten numbers in
the June subcloning data "probably from scratch." Id. This
description which ORI called "straightforward" does not explain
why Dr. Imanishi-Kari would not simply write down all the
"numerical values that she could use" instead of allegedly doing
so for some assays and then searching for tapes to construct
others. Cf. ORI Reply Br. 20. Nor does it address the somewhat
peculiar result that what allegedly came to hand in such a search
for tapes undertaken in 1988 (or even 1986) were so many tapes of
a color allegedly not available after 1984.

It was not disputed that Dr. Imanishi-Kari stored loose counter
tapes, pages, and folders in piles around the laboratory and
office, on desks and window sills. It was, of course, not
impossible that she would find tapes five or more years old among
this assortment. 157/ However, there was no testimony that
subclonings and fusions did not occur in her laboratory during
the intervening years, so that tapes useable to show fusion
results from much more recent times were probably available, if
she had been attempting to find tapes to use for this purpose.

The so-called "full match" with counter tapes from 1981 or
1982 is meaningless.

The finding of a "full match" projects a misleading confidence
that a clear connection exists between the questioned tapes and
the "match" tapes and that the possibility of other, more timely
matches has been excluded. Neither proved to be the case.

The "full match" finding does not prove any definite link between
the June subcloning tapes and the older tapes. The fact that a
"full match" is supposedly made to tapes covering as wide a time
span as six months evidences that this matching process does not
yield any precise information about dating. 159/ ORI stated that
this imprecision was not "surprising, particularly given the
testimony . . . that supplies are bought in bulk, and batches of
a particular kind of supply typically will have the same or
similar composition. . . . Thus, one would expect the same ink
formula to show up throughout a particular batch of ribbons,
while tapes from the same production run would appear to be very
similar." ORI Reply Br. 16-17. Hence, ORI stated, "recurrent
matches" over a "fairly extended period" could occur because "the
ink, tape, and font variables would stay the same over a long
period, and, necessarily ink intensity would match on a recurring
basis." ORI Reply Br. 17. However, if the same batch of ribbons
and rolls of green tape could be used recurrently over 21 months,
there is no basis to doubt that ribbons with the same ink formula
and full or partial rolls of green tape might recur over the
succeeding months. 160/ ORI acknowledged that at most the
evidence on matches "identifies the era" when the tape might have
been generated, but relied mainly on finding no contemporaneous
matches as evidence of fabrication. ORI Reply Br. 17.

The absence of a "contemporaneous match" is not probative
because most of the tapes from the contemporaneous period
were not available to be tested and the dating of the June
subcloning is not precise enough to make the comparison that
was tested meaningful.

The evidence on the absence of a match to contemporaneous tapes
is also unpersuasive, however. Mr. Stewart's conclusion that the
tapes from I-2 demonstrated that the I-1:124-128 tapes could not
have come from the same time period (June 1985) depended entirely
on both sets of counter tape having been generated using the same
printer (not merely the same type of counter). This assumption
was based on tapes showing characteristics of "what I'm calling
the same printer, like the same font and I have no evidence that
there was an additional printer that would produce a font like
that." Tr. 4155 (Stewart). This assumption was unwarranted,
however, as discussed in detail elsewhere. While Dr. Imanishi-
Kari may well have used only two counters from her laboratory in
generating most of the data in I-1, it does not follow that
printers were never substituted that had similar fonts, nor does
it follow that tapes in notebooks other than I-1 were generated
only by those two counters. (Dr. Imanishi-Kari stated that she
sometimes used other counters and that counters were available on
the second, third and fifth floors of the same building. Ex.
H101, at 53.)

In assessing the significance of not finding a "contemporaneous
match," the points discussed elsewhere about the limited set of
tapes available to search for a match and the limitations of the
search conducted are even more important. First, as discussed
above, even had all the tapes in all the notebooks been tested by
ink analysis, it is unlikely that most of the tapes generated
during the relevant time frame were retained to begin with, and,
of those retained by researchers, most were probably not
available through MIT in 1989. Second, the existence of ink
matches from the general time frame of the June subcloning
experiments was not excluded; we have no evidence that all other
existing tapes generated at the relevant time were compared. In
searching for a match, the Secret Service did not test all the
tapes in all the more than 60 notebooks presently in the record,
but rather looked first at yellow tapes in the I-2 notebook from
dates that narrowly framed the dates on I-1:124-128. Finding
that the ink on those tapes was not a match to that on the green
tapes on I-1:124-128, the examiners "as an afterthought" tested
green tapes 161/ in the 26 notebooks which it had obtained when
the ink analysis tests were conducted, looking for a match. Tr. 3775,
4015-17 (Stewart).

Dr. Imanishi-Kari argued persuasively that the actual tapes in
the June subcloning were probably generated after the dates (June
20-22) on the pages. Specifically, she testified that these
dates were probably when the supernatants were collected, since
subcloning could not begin until 10-14 days after the June 6th
retest, and the size of the assay indicated the experiment
results would not be produced for some time after that. R. Br.
56-57; Tr. 5268-69. It is thus possible that the tapes were
generated for the June subcloning after the I-2 tapes either with
a change of ribbon and (for a short time) paper or using a
substitute printer with its own ribbon and paper. Also,
attributing significance to the failure of tapes dated in June in
the I-2 notebook to match the June subcloning dates requires an
assumption for which no evidence was presented: that the dates in
the I-2 notebook were precise.

Mr. Stewart testified that the examiners later looked through the
additional notebooks, but did not find any green tapes subsequent
to January 1984. Tr. 3811. He did not indicate whether any
additional yellow tapes from June, July or August 1985 (besides
those in I-2) were found in those notebooks or tested to
determine if they might have had the same ink used in the same
printer (based on font).

Thus, it is possible that other tapes with matching ink formulas
produced on the same printer existed but were not found. It is
also possible that the use of a substitute printer could account
for the failure of the June subcloning tapes to match yellow
tapes from nearby dates. As mentioned, a back-up printer was
used during down time three or four times a year, according to
Mr. DeHaro who was responsible for connecting it. Tr. 4450-52.
It is possible that the substitute printer continued to use a
roll of green tape after most of the tape had been used up from
the main stores and such a substitute would obviously have had a
different ribbon which could have had a different ink formula
from that in the regular printer.

ORI rejected this scenario because a match was also found between
the ink formula in the June subcloning green tapes and one of the
green tapes used in the January fusion experiment discussed in
the next section. ORI argued that to account for these matches
(and the finding that the ink formula on other green tapes in the
January fusion experiment does not match these tapes) would
require "not one, but two, printers sitting in the basement," of
which "Printer 1" was used to generate one set of January fusion
green tapes and then broke down and was replaced by "Printer 2"
to generate the other January fusion green tapes which then broke
immediately, since it was used only for that part of the January
fusion tapes. Then, in June 1985, Printer 2 is brought back up
to generate the June subcloning tapes. Neither is otherwise used
again, at least with the same green tape. See ORI Br. 90-91.
162/ ORI calls this scenario "preposterous," but it is also
entirely overstated. What is required is only that a substitute
printer with green tape was used in the January fusion and the
ribbon changed in the midst of that experiment and then used
again in the June subcloning with a ribbon whose ink matches one
but not the other of the January tapes (by that time it may or
may not have been the same ribbon). 163/

The convergence of less common tape color between two questioned
experiments and the divergence of ribbon ink formula within one
of those experiments is certainly peculiar, but the theory that
Dr. Imanishi-Kari created this circumstance by using only old
green tapes to fabricate the two experiments would also be
peculiar. ORI calls this the "one simple explanation that
accounts for all of the evidence," but in fact fabrication
requires an equally odd set of coincidences and motivations,
e.g., in Dr. Imanishi-Kari's finding these unusual tapes to use,
unnecessarily creating bizarre anomalies in the data which could
have been tailored instead to what was needed to support her
position (in the case of the June subcloning results), and using
tapes from several sources to create an elaborate and interlinked
experiment used for nothing else, just to lend verisimilitude to
a few negative normal results on unquestioned tapes that could
have simply been presented as a separate normal experiment (in
the case of the January fusion). ORI Br. 91.

In light of all the other evidence making fabrication unlikely,
we find that the forensic evidence is not sufficiently persuasive
to demonstrate that the June subcloning tapes are more likely
than not the product of intentional fabrication.

For the reasons explained above, and considering all the relevant
evidence, we conclude that ORI did not prove charges two and
three, Table 2, by a preponderance of the evidence.

3. Data on normal hybridomas; January fusion

Table 2, charges four and five: ORI charged that the normal
control mouse data reported in Table 2 were "deliberately
falsified and fabricated in part" which constituted scientific
misconduct. Charge letter 5. In addition, ORI charged that
Dr. Imanishi-Kari committed scientific misconduct by fabricating
"experimental data reported at I-1:101-109" and submitting those
data to "PHS investigators as support for the results reported in
Table 2." Id.

Issues: In addition to presenting frequency data on transgenic
hybridomas, Table 2 compares those results to figures for spleen
and lymph node hybridomas from normal mice. 164/ The very low
idiotype positivity figures (1/144 and 0/100 respectively) shown
for the normal hybridomas were important in that they provided
assurance that the results for the transgenic hybridomas
represented a meaningful phenomenon. According to
Dr. Imanishi-Kari and Dr. Reis, the normal data were derived from
two separate experiments currently located at R-1:76-80 and I-
1:106-107 (after the original mouse used as a control for the
Table 2 experiments proved to be mistyped as a normal mouse when
in fact it was transgenic). 165/ Tr. 4847-48 (Imanishi-Kari),
2546 (Reis). ORI challenged both sets of data. In relation to
R-1:76-80, ORI argued that the assay was actually run on a
different idiotype reagent than that reported in the Cell paper
and that the reagents were not interchangeable. In addition, ORI
pointed out that the raw data at R-1:76-80 showed 12 unreported
results above the cut-off. Dr. Imanishi-Kari testified that, as
she later noted on the page, these results were discarded because
they were false positives caused by preexisting radioactivity in
the counting tubes ("tube counts"). Tr. 4851. ORI also alleged
that the results reported for allotype (i.e., that none of the
normal hybridomas were positive for either μa or μb) were false
because there was no record of testing the normal hybridomas by
allotype. ORI Br. 81. In relation to I-1:101-109, the bulk of
the evidence of fabrication related to the transgenic fusions
(which were not published) based on statistical and document
examination arguments, but ORI inferred that the related normal
data at I-1:106-107 must also be fabricated by association, and
also presented some arguments in relation to the normal data
pages. All of these arguments are discussed together in the
section dealing with the January fusion data.

Analysis: We first address whether the normal control data at
R-1:76-80 were deliberately falsified or fabricated in Table 2.
Next, we turn to whether the data called the January fusion,
I-1:106-07, were fabricated as part of the support for the normal
control data in Table 2.

One of the initial triggers for Dr. O'Toole's concerns was the
discovery in the 17 pages of the results for the "normal" control
mouse. This concern was satisfactorily addressed because
Dr. Imanishi-Kari showed that the mouse had been proved (after
molecular analysis) to have been a mistyped transgenic and this
explained why it had levels of idiotype-positivity similar to
other transgenic mice. See Tr. 203-04, 301 (Dahlberg); I-1:133.
While apparently accepting the evidence of mistyping, ORI
continued to argue that normal mice might show higher levels of
idiotype-positive antibodies than the authors represented and
that the difference might be significant in light of findings
suggesting that the reported level of idiotype-positive
endogenous antibodies in transgenics was overstated. Narrowing
the gap between the transgenics and the normals from both
directions might undercut the Cell paper's conclusions.

Dr. Imanishi-Kari testified that the source for the normal data
to replace those discarded because of the mistyping was normal
data from two other experiments: R-1:76-80 and I-1:106-07
(January fusion). Tr. 4847-48. Dr. Imanishi-Kari previously
stated that Dr. Reis constructed Table 2 earlier in 1985 in
preparation for a talk (before the Cell paper was drafted) and
that Dr. Reis "was the one actually who collected data for
table 2 . . . [a]nd she went over the data that was in my
file . . . and that was the data she took to put in this
table 2." Ex. H103, at 112; see also Tr. 2666 (Reis). This
account is supported by the testimony of Dr. Reis that she
incorporated the data from the two sources and by a notation on
R-1:76 in Dr. Reis' handwriting totaling results from both
sources. Tr. 2546-47; see also Tr. 4848 (Imanishi-Kari). Dr.
Reis wrote these totals before she left the laboratory in January
1986 and returned to Brazil. Tr. 2502-03, 2520, 2545-47 (Reis),
4848 (Imanishi-Kari). ORI attacked both sets of normal data as
falsified or fabricated.

ORI did not prove that the normal data at R-1:76-80 were
intentionally falsified as to the anti-idiotype reagent.

ORI challenged the data from R-1:76-80 on the grounds that the
Cell paper said that the only anti-idiotype reagent used was an
anti-17.2.25 guinea pig antibody reagent. Cell paper 257. ORI
argued that a different anti-idiotype reagent was actually used
in the assay at R-1:76-80, and the experimental record then
altered to misrepresent this. ORI Br. 76. There is no question
that the entry after "anti-idiotype coat" on R-1:76 has been
changed and the notation "GP [guinea pig] anti-17.2.25" entered
in Dr. Imanishi-Kari's handwriting. R. Br. 58. The question is
whether ORI proved that the change represented a deliberate and
intentional effort by Dr. Imanishi-Kari to falsify the coating
reagent used in the published normal data. We conclude that ORI
did not, for the following reasons.

The Secret Service used infrared technology to produce a blow-up
of the writing under "GP anti-17.2.25" in an effort to determine
what was originally written in that space. No previous entry was
legible, and Dr. Dahlberg acknowledged that the images do not
look consistent with ORI's theory that the rabbit reagent was
previously listed there. Ex. H549; Tr. 367-68 (Dahlberg). In
fact, he testified that the altered entry might not have been a
coating designation at all, since Dr. Reis tended to put her
reagent notations at the end of experiments, and might well have
been "an entirely different irrelevant notation that was replaced
with the guinea pig anti-id coat." Tr. 368-69. 166/

Although the original entry could not be discerned, ORI argued
that it must have been rabbit anti-124/40, because that anti-
idiotype reagent was used in other experiments in the laboratory
including transgenic fusion assays at R-1:70-75. ORI suggested
that these assays constituted the "transgenic portion" of the
same experiment so that it would be surprising to have different
coating reagents used. ORI Br. 77-78; Tr. 1363-64 (Davie);
compare R-1:73 and 75 with R-1:76. Dr. Imanishi-Kari denied that
the assays at R-1:70-75 were part of the same experiment as the
assays at R-1:76-80. R. Br. 59, n.70. We found that the R-1:70-
75 assays related to transgenic and normal mice immunized with a
different antigen (chicken gammaglobulin or CG) from that used in
the idiotype system involved in the Cell paper (NIP). Accord Tr.
2542-43 (Reis). By contrast, the assays at R-1:76-80 related to
an unimmunized mouse. Tr. 2544 (Reis).

Dr. Imanishi-Kari argued that, even assuming that the coat was
altered from rabbit anti-124/40 to guinea pig anti-17.2.25, the
change was simply a correction. R. Br. 57-59. Both Dr. Reis and
Dr. Imanishi-Kari testified that they went over Dr. Reis's
results almost daily, and that Dr. Imanishi-Kari would commonly
write in corrections and observations. Tr. 2553-54, 2602 (Reis),
4849 (Imanishi-Kari). Dr. Imanishi-Kari argued, and Dr. Wortis
agreed, that the corrected coating was more scientifically
credible in light of the results on the controls which showed
reactivity almost twice as great with the 17.2.25 antigen as with
P5.40.3 (the antigen to 124/40). 167/ R. Br. 57-59; Tr. 2778
(Wortis), 4850 (Imanishi-Kari).

Dr. Imanishi-Kari argued that, for the purposes for which she
used the two idiotype reagents, they could be substituted without
any substantive effect on the results of the assays. R. Br. 64-
65, n.79. Consequently, she contended that she would have no
reason to alter the coating reagent, since it would make little
difference which one was actually used. ORI presented a great
deal of testimony to the effect that the two reagents were not
identical and could be expected to have differing patterns of
reactivity. See, e.g., Tr. 249, 371-72 (Dahlberg), 1343-45
(Davie). However, Dr. Imanishi-Kari did not claim that they were
not different but that both reacted strongly to the idiotype and
thus could identify idiotype-positivity effectively. R. Br. 64-
65, n.79. 168/ ORI argued that this position was inconsistent
with the claim that the very high counts recorded for the 17.2.25
control protein in the assay confirmed the coating reagent as
guinea pig anti-17.2.25 rather than rabbit anti-124/40. ORI
Reply Br. 31. However, Dr. Imanishi-Kari's point with regard to
the controls at R-1:76 was not that the absolute counts for
17.2.25 were very high but that the 17.2.25 control elicited the
strongest reaction, whereas rabbit anti-124/40 would be expected
to give its strongest reaction to its own antigen protein.

Whether the interchangeable use of two anti-idiotype reagents was
a good practice scientifically or was based on sufficient
demonstration of comparable specificities, it is clear on the
record before us that Dr. Imanishi-Kari's laboratory used both
reagents frequently and may have interchanged them
unintentionally at times. For example, ORI agreed that
Dr. Imanishi-Kari and Dr. Reis used the rabbit anti-124/40 as a
"proxy" for the guinea pig anti-17.2.25, citing as an example I-
1:12-21 where the rabbit reagent was noted in the heading to the
experiment but the data were summarized as on VH 17.2.25. ORI
Br. 78, n.55. Dr. Dahlberg testified that many examples could be
found in the notebooks of use of one idiotype reagent in one
assay and the other in the next, which supports the
interpretation that the laboratory did treat the two as
interchangeable (or that great care was not always taken in
specifying which was used). Tr. 93. Even if we found that any
of the experiments published in the Cell paper definitely used
the rabbit anti-124/40, we would not find that ORI proved that
the failure to disclose that a second anti-idiotype reagent was
used was anything more than honest error. 169/ The strong
evidence that both could be used to identify idiotype-positivity
(even though their reaction to particular determinants differed)
and the absence of any persuasive demonstration that the use of
the rabbit anti-124/40 would have significantly altered the
interpretation of the data in the Cell paper make it unlikely
that the omission reflected any intention to mislead readers and
also make it unlikely that Dr. Imanishi-Kari intentionally
tampered with the coating reagent notation on R-1:76 to disguise
the use of rabbit anti-124/40 in one particular instance.

The Panel concludes that ORI did not prove that the anti-idiotype
reagent was intentionally misrepresented in relation to the
normal control data in Table 2.

ORI did not prove that Dr. Imanishi-Kari intentionally
misrepresented the true number of idiotype-positive normal hybridomas.

Table 2 reports only one idiotype-positive result among the
normal hybridomas, but 12 additional counts over the cut-off
level of 1000 appear among the normal spleen hybridoma results on
R-1:76-80. Dr. Imanishi-Kari testified that Dr. Reis informed
her that the other positive results turned out to be the result
of contamination of the tubes by radioactivity. R Br. 52; Tr.
4850-52, 5093-94 (Imanishi-Kari). When questions arose,
Dr. Imanishi-Kari added a note to R-1:80 explaining that
"according to Moema [Reis] only no. 68 turned out to be real
positive. The others were tube counts." Since Dr. Reis put
Table 2 together, with Dr. Imanishi-Kari reviewing it, the
inclusion of only one positive in Table 2 evidences that Dr. Reis
(who had done the actual experiments on these hybridomas)
believed that the others were not real positives. Cf. Tr. 4688.
ORI did not show any reason that Dr. Imanishi-Kari could not
reasonably rely on Dr. Reis's interpretation of these results.
On the contrary, Dr. Imanishi-Kari's experience with normal
hybridomas in other fusions gave her a reasonable basis to expect
a very low rate of idiotype-positivity and hence to find the tube
count explanation credible.

Furthermore, it is undisputed that Dr. Reis wrote the totals in
the notebook showing only one positive result from this normal
fusion before she left the laboratory in January 1986. In order
to believe that Dr. Imanishi-Kari was making a "spurious" claim
of tube counts, we would have to conclude that Dr. Reis conspired
with Dr. Imanishi-Kari months before any questions were raised
about the Cell paper to disregard results which she knew were
true positives. See R-1:80. We do not find this plausible,
especially in light of the relatively slight effect that
reporting them as positive would have had (i.e., raising the
number of idiotype-positive normal hybridomas from 1/244 to
13/244). We find it more likely that the contamination occurred
as Dr. Imanishi-Kari noted, or at least that Dr. Reis believed
that it had, especially since several witnesses corroborated that
such tube counts occur frequently in the laboratory. See Tr.
4582-83 (Igras), Tr. 3118-19 (Marrack) (always caused by
"somebody else's graduate student"); cf. Tr. 376 (Dahlberg)
(admitted tube counts do happen, although alleged it was "very
rare"). Our conclusion is further supported because ORI did not
point to any scientific research suggesting that the results
which Dr. Reis reported were not consistent with the very low
level of idiotype-positive antibodies generally found in tests of
normal hybridomas. Ex. H230, at 2.

ORI alleged that the claim that all but one of the positive
readings for the normal hybridomas at R-1:76-80 was due to tube
counts was inconsistent with the absence of any evidence that the
transgenic results in R-1:71-75 were retested for tube counts.
As noted above, the assays on those pages are not a transgenic
portion of the same experiment, so the conditions resulting in
tube counts in the normal unimmunized assay may not have applied.
Dr. Reis testified that they did not feel it necessary to check
the transgenic results because the transgenics had been
prescreened, so that only those which had positive scores on two
screens were counted as positive. Since it would be "quite
impossible that you have by chance" a tube count problem twice
independently causing false positives for the same samples, they
did not retest the transgenics again. Tr. 2550. Her testimony
was ambiguous as to which transgenic fusion she was referencing,
but, in context, appears to refer to why they were not concerned
about tube counts in the transgenic portion of Table 2, the data
for which were at R-1:31-34 (and which she proceeded to explain
was a rescreen). Tr. 2550, 2612-14 (Reis). As noted, it does
not appear that the transgenic fusions at R-1:71-75 are part of
the same experiment as the normal fusion at R-1:76-80.

The Panel concludes that it is more likely than not that
Dr. Imanishi-Kari reported what she believed to be the real
number of idiotype-positive normal hybridomas and did not
intentionally misrepresent them.

ORI did not prove that the reporting of negative allotype
results for the normal hybridomas in Table 2 was an
intentional misrepresentation.

ORI argued in its brief that Table 2 presented intentionally
falsified data in another respect. In addition to indicating the
total number of idiotype-positive hybridomas, Table 2 presented
the total number of hybridomas of each category (normal and
transgenic spleen and lymph node) that were idiotype positive
"plus" positive for μa or μb (allotypes). No positives were
reported for any of the normal hybridomas for these allotype
categories. It is not disputed that the normal hybridomas were
not tested for allotype. ORI argued that the listing of
negative results for allotype for the normal data in Table 2 was
therefore "clearly false," and was too explicit a representation
to have been honest error or inadvertent omission, so that it
must be found to be intentional falsification. ORI Br. 81.

We do not agree. The table did not report allotype alone but
only the number of hybridomas that produced antibodies that were
both idiotype-positive and positive for one of the μ allotypes.
It is obvious that those hybridomas which were found to be
negative for idiotype could not fit this category even if tested
in an allotyping assay. Moreover, Dr. Imanishi-Kari admitted
that this one category should have been marked as "not
applicable" rather than "0/144." R. Br. 59.

While Dr. Imanishi-Kari did not test the one idiotype-positive
normal spleen hybridoma for μ allotype, it is understandable that
there would be little reason to ascertain the allotype of the
rare idiotype-positive normal antibody, since it was known
already that μ antibodies in the normal mice were only μb. Cell
paper 247-48. Thus, the evidence barely supports a conclusion
that Table 2 was misleading in this respect, much less that it
was intentionally false.

The Panel concludes that ORI did not prove that this late-
discovered error resulted from intentional falsification and fabrication.

ORI did not prove that the January fusion data were fabricated.

ORI charged that the data set it called the "January fusion" was
fabricated. As described above, Dr. Reis and
Dr. Imanishi-Kari stated that Dr. Reis obtained from
Dr. Imanishi-Kari the results from the normal fusion and combined
it with her own later normal results (R-1:76-80, discussed
above). Dr. Reis specifically identified the data on I-1:106-107
as the "experiment that I used to, to sum the results with my
fusion results." Tr. 2547. ORI's allegation that
Dr. Imanishi-Kari submitted the Cell paper with these specific
numbers and only when challenged by Dr. O'Toole felt compelled to
create a set of data with the matching totals for negative normal
results to add to those on R-1:76-80 is implausible. This theory
would require both Dr. Reis's blind acceptance of numbers to add
to her own totals at the time and active conspiracy in
affirmatively recognizing the pages at the hearing.

As noted above, the only connection that these data had with the
Cell paper was that some of the normal data were used in Table 2.
The data ORI found to be fabricated, however, are data reported
as a series of assays on transgenic lymph node and spleen
hybridomas. In other words, ORI's theory is that
Dr. Imanishi-Kari fabricated these data merely to lend some
credence to normal data, which ORI thinks are also fabricated.
ORI's statistical and document examination findings on these data
go almost exclusively to the transgenic data, however. Moreover,
the statistical findings relate to the yellow tapes on I-1:102 to
104, and the document examination findings relate mostly to the
green tapes on those pages.

ORI's statistical analyses of the January fusion data (yellow
tapes) are based on misconceptions of what that data purported to
be. ORI's understanding of the data was inconsistent with
Dr. Imanishi-Kari's explanation of the assays which she had
provided in March 1988, and with internal evidence in her data as
a whole. Ex. H245, at 3; I-1:102-104. In particular, the
statistical analyses depended on viewing these data as comparable
to control sets of data from Drs. Reis and Weaver, but these data
sets were initial tests for particular antibodies, whereas the
yellow tape results are presented as a "rescreen" of supernatants
from wells that previously tested positive on the same coat with
the same probe. The statistical analyses assumed that, in any
"fusion experiment" (irrespective of the type of experiment or
the experimental conditions), positive and negative values should
be randomly dispersed. The evidence shows that this expectation
was not valid. Moreover, much of the testimony was based on the
conception that a cyclical pattern shown in a plot of assigned
values measuring correlation of the counts according to their
positions on the yellow tapes (which is stronger if the tapes are
rearranged) shows an unexpected pattern in the fused cells as
arranged in wells. The plating of the supernatants from selected
cells in wells for testing, however, would not necessarily
correspond to the original plating of the cells or to any
particular position on the tapes. Once the supernatants were
plated on a 96-well plate (as the I-1 notebook and the parties'
arguments indicate they likely were), it would be logical to test
them in an orderly fashion, arranging them in either eight or 12
well groups, working from one side of the plate to the other. In
either of the most likely orders of the supernatant wells
relative to the tape counts, the relationship of the actual
values to values of neighboring wells does not show a pattern
inconsistent with how the data are presented.

The document examination findings relate largely to the finding
that one piece of green tape containing results for one of the
transgenic fusions has ink of a formula different from the ink on
the rest of the tapes for that fusion. While we agree that this
is odd, we find that innocent explanations are possible and that
fabrication would be a farfetched explanation in light of the
insignificance of the data involved and the indicia that the
entire fusion experiment is genuine. We find that the minor
points made about page I-1:107 are of no consequence in
establishing fabrication.

In this section, we first describe the January fusion data and
related experiments, and explain why there are internal indicia
from the notebook that these data are real data rather than
fabricated data and that the test on the yellow tapes is a
rescreen. We then discuss the statistical analyses and why we
conclude that these analyses are not reliable evidence of
fabrication. Finally, we discuss the Secret Service document
examination analyses and why we conclude that they are not
persuasive evidence from which we would infer fabrication.

The January fusion data on the yellow tapes were a rescreen,
and the January fusion data have internal indicia that they
are real, rather than fabricated.

The notations on the notebook pages are consistent with
Dr. Imanishi-Kari's description of the data in 1988, but ORI
ignored that description in some of its analyses. An analysis of
the data itself supports a conclusion that it is more likely that
these are real data than that they were fabricated as ORI
alleged. Further, we find that the RIA data on the yellow tapes
on I-1:102-104 were a "rescreen."

The January fusion data pages questioned by ORI do not appear in
the notebooks as reporting an isolated experiment, but are
clearly part of an extensive series of interrelated experiments.
ORI presented no forensic evidence of fabrication of the rest of
these experiments (and, indeed, used some of the data sets as
"unquestioned" controls for other statistical analyses). Nor did
ORI elicit testimony from any of the witnesses who appeared who
were also working in the laboratory in this period that Dr.
Imanishi-Kari did not in fact do these experiments. The pages
with related experiments include the following:

 An extensive series of tests of different mouse sera which
indicate whether mice offspring were transgenic or normal.
I-1:48-58; I-1:75-79.

 A retest (dated 1/17) of some of these mice "for fusion."

 A fusion protocol (dated 1/19) showing: (1) four sets of
fusions, each of which involved both spleen and lymph node
cells from mice, specifically, two mice which tested as
clearly transgene positive in the retests, one which tested
normal in those tests, and one mouse (BG/Jack) which is
described as a normal in the protocol, but not clearly
identified as a mouse that was retested; and (2) notations
about the results of the fusions (which we discuss in part
below). I-1:101.

 A protocol (dated 2/4-2/7) stating: "Test hybrids with
α 17.2.25 α [kappa/lambda]* when there is too many and then
α [kappa/lambda]* α μ*." I-1:101.

 Six pages of data (that ORI calls the January fusion data)
presented as RIAs dated (2/5 to 2/7) of supernatants of some
of the hybridomas identified as coming from the spleen or
lymph node fusions from the mice identified on the fusion
protocol, or of controls. These pages have two double
columns, each with yellow tapes for a test with an α
[kappa/lambda]* probe on a "G.P. α 17.2.25" coat and green
tapes for a test with an α μ* probe, on a "G.P. α 17.2.25"
coat. I-1:102-107.

 Two pages with an ELISA (dated 2/7/85) for isotype of (1)
hybrids from the January fusion, (2) control proteins, and
(3) hybrids from a different fusion. I-1:108-109.

These experiments are interrelated in numerous ways. For
example, the fused cells were from mice identified as transgenic
or normal in the sera retest and/or initial sera test. The four
sets of fusions which the protocol indicates resulted in
uncontaminated growth are assigned the following designations in
the protocol: I-1 (lymph nodes from the first transgenic mouse),
I-2 (spleens from the second transgenic mouse), II-1 (spleens
from the first normal mouse), and II-2 (lymph nodes from the
second normal mouse). The first three of these designations are
noted in red on the sera retest. All of the designations appear
on both the RIAs and the first two appear on the ELISA. The
total number of growth wells for particular sets of hybridomas
(noted in the fusion protocol) either show up as notations on the
RIAs or are the total number of wells tested in the corresponding
assay. The fusion protocol states that the fused cells were
plated in "24 wells (PE)," and the controls for one of the
radioimmunoassays include "No growth wells" with "PE" in a
circle beside it. The numbers for the hybridomas tested on the
ELISA correspond to numbers for transgenic hybridomas with very
high results for antibody production on the radioimmunoassay.

In addition, there are various markings for particular results of
the assays, such as stars, asterisks, checks and question marks.
Some of the pages are very neat and others have numbers crossed
out or written over. One notation indicates that well number 199
was not done in order on one assay, but the number is assigned to
the last value on a tape.

We also note that the ELISA on I-1:108-109 has some isotyping
results on the January fusion hybridomas that are ambiguous in
similar ways to the ELISA on Table 3 hybridomas that we discuss
below. Dr. Imanishi-Kari explained that she did not use the
experiments related to the January fusion because there was
contamination with fungi. Tr. 5249-51 (Imanishi-Kari); Exs.
H245, at 3, H103, at 109-10. The fusion protocol on I-1:101
indicates that spleen hybrids from one fusion and lymph node
hybrids from another were contaminated.

Overall, the Panel finds that the interrelatedness of the
questioned January fusion data with other pages, the complicated
nature of the fusion, and the various markings on the pages are
indicia that the data are real. These indicia do not fit with
ORI's scenario that Dr. Imanishi-Kari faked the RIAs on the
transgenic hybridomas at I-1:102-104 (and by implication the
normal results at I-1:106-107) because she needed 36 normal mouse
results in addition to Dr. Reis's normal data. We find it
implausible that Dr. Imanishi-Kari exerted a level of shrewdness
and effort beyond that which a fabricator would likely devote to
data of no importance to the paper, other than to lend some
validity to the data for 36 normals reported in the paper. The
ambiguity of the normal data itself undercuts the idea that the
data were faked using such a degree of shrewdness and effort.
Moreover, ORI's scenario would require us to believe that
Dr. Imanishi-Kari had the foresight to fabricate notations
relevant to questions about why she did not ever report the
results from the transgenic fusions and about whether the lymph
node hybridomas tested on I-1:102-104 were prescreened for
idiotype-positive antibodies. As we discuss next, the
prescreening issue arose at the hearing.

The question of whether the RIA results on the yellow tapes on I-
1:102-104 were an initial screen of the hybridoma wells for
particular antibodies (as ORI's statistical analysis assumed) was
raised at the hearing by Dr. Speed. Dr. Speed stated his
understanding that this test was a rescreen only of wells that
had previously tested positive on a screen with the same probe,
and that the removal of no-growth wells preceded this screen.
Tr. 2175-80. ORI disputed this (although part of ORI's analysis
assumed some type of prescreen). Internal evidence in the
notebooks is consistent with Dr. Imanishi-Kari's description that
the fusion wells were screened for growth, and that the
supernatants from the wells with growth were screened on an anti-
idiotype coat with a kappa/lambda probe, before the similar
screen on I-1:102-104. Our reasons are:

 The fusion protocol indicates that each set of spleen or
lymph nodes from the fusion was plated on "12 plates 24
wells (PE)" which would be a total of 288 wells for each
set. I-1:101.

 The protocol for the RIA indicates that the hybrids should
be tested with a kappa/lambda probe on an anti-idiotype coat
"when there is too many and then" with a kappa/lambda probe
and a µ probe. I-1:101.

 The fusion protocol has the following notation for the set
of hybridomas tested on I-1:102-104: "260 wells growth."

 A notation in the lefthand margin on I-1:102 indicates "260
hybrid" and "138 [kappa/lambda] +." This notation and the
numbering of the January fusion data are consistent with
prescreenings for growth and for idiotype-positive
antibodies (using a kappa/lambda probe), followed by testing
on an anti-idiotype coat with a kappa/lambda probe and with
a µ probe.

 The pages at I-1:102-104 contain: (1) a total of 138 counts
(rather than the 260 for which growth was indicated), all
with high values on the yellow tapes in the kappa/lambda
column, except a few counts; (2) script numbers between 1
and 260 assigned to the counts (which ORI agreed were well
numbers), with some numbers skipped; (3) five "negative
hybrids" used as controls on I-1:104 which have well numbers
that are skipped in the 138 counts; and (4) "no growth
wells" used as controls on I-1:104, with the notation "PE"
(which refers to the original fusion growth plates) and with
no assigned well numbers.

Thus, we conclude that the internal evidence in the notebook as a
whole indicates that the January fusion data are real, and that
the RIA with the kappa/lambda probe was a rescreen of only
hybridomas which had previously tested positive on the same coat
with the same probe, as Dr. Speed said. We next turn to ORI's
statistical analyses of the January fusion data.

ORI's statistical analyses are not reliable evidence that
the January fusion data were fabricated.

ORI performed two statistical analyses of the data on the yellow
tapes on I-1:102-104. ORI examined the correlation of the counts
on the tapes, and also evaluated the likelihood of a run of 15
positive wells at the end of an experiment.

In the first analysis, Dr. Mosimann applied a technique called
serial correlation analysis. Serial correlation analysis
attempts to quantify the relationship of numbers in a series.
Tr. 695 (Mosimann), 2162 (Speed). Dr. Mosimann plotted the
counts from the yellow tapes and from two sets of "control" data,
as well as the serial correlation "coefficients" he determined
from his analysis of each of these three data sets. Plots of
the serial correlation coefficients for the data on the yellow
tapes from I-1:102-104 show patterns in the sequential
relationships of the counts on the tapes that are generally
cyclical; the cycle was stronger when I-1:102 and I-1:104 were
analyzed together than when the counts from I-1:103 were
included. Ex. H401. The control data sets were from W-2:33-38
(Weaver data) and R-1:27-29 (Reis data). No cyclical patterns
were found in the serial correlation coefficients for the Weaver
and Reis data. Dr. Mosimann said that one would expect in a
"fusion experiment where cells are just sort of combining
randomly" that there will be "no relationship of numbers in the
series with numbers following." Tr. 703-709. ORI concluded that
the yellow tapes were not from a fusion experiment and that the
yellow tape from I-1:103 had been inserted between the tapes on
I-1:102 and 104.
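The general technique of serial correlation analysis can be sketched briefly. The function below is a generic lag-k autocorrelation, not Dr. Mosimann's exact procedure, and the counts are hypothetical values constructed to have a rough 12-value cycle; they are not the notebook data.

```python
def serial_correlation(series, max_lag):
    """Lag-k serial correlation (autocorrelation) coefficients
    of a numeric series, for lags 1 through max_lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    coeffs = []
    for lag in range(1, max_lag + 1):
        cov = sum((series[i] - mean) * (series[i + lag] - mean)
                  for i in range(n - lag))
        coeffs.append(cov / var)
    return coeffs

# Hypothetical gamma-counter readings with a rough 12-value cycle
# (rise to a peak, fall to a valley); illustrative only.
cycle = [200, 400, 800, 1500, 2500, 3200, 3000, 2200, 1300, 700, 350, 220]
counts = cycle * 10

coeffs = serial_correlation(counts, 24)
# For a cyclical series, the coefficient peaks again near the cycle
# length and is strongly negative near the half-cycle.
print(coeffs[11])  # lag 12: strongly positive
print(coeffs[5])   # lag 6: strongly negative
```

A series with no relationship among neighboring values would instead yield coefficients near zero at every lag.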

Second, Dr. Mosimann conducted a permutation analysis which
showed that the probability of seeing a run of 15 consecutive
positive wells at one particular place (the end) was small. ORI
Report 121; Tr. 724 (Mosimann). Dr. Mosimann's premise of a run
of 15 positives at the end is based on the well numbers (246-260)
assigned to counts at the end of the yellow tape. The actual
counts on the tape show that one of the wells (number 250) is
clearly negative for idiotype positivity, so the run of positive
values at the end of the tape is only 10. Dr. Mosimann thought
the well numbers from 1 to 260 indicated wells from the original
24-well fusion plates, and that the skipped numbers were supposed
to be numbers of wells negative for growth. Tr. 708, 713-715
(Mosimann). He expressed his opinion that, in "constructing" the
data, Dr. Imanishi-Kari needed to get 138 positives on the assay
out of the 260 wells, and at the end of the construction suddenly
found out she was running out of numbers, and that explains why
the last 15 well numbers are consecutive. Tr. 722.
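The order of magnitude of such a probability can be illustrated with a simplified model: if 138 positive wells were placed at random among 260 numbered wells, the chance that the last 15 numbers (246-260) are all positive is a hypergeometric ratio. This is a sketch of the kind of calculation involved, not a reconstruction of Dr. Mosimann's actual permutation analysis.

```python
from math import comb

n_wells, n_pos, run_len = 260, 138, 15

# Ways to place the remaining positives outside the last 15 wells,
# divided by all ways to place the 138 positives among 260 wells.
p_run_at_end = comb(n_wells - run_len, n_pos - run_len) / comb(n_wells, n_pos)
print(f"{p_run_at_end:.6f}")  # a small probability, below .0001
```

Under this model the event is rare at any one predesignated location, though as the text notes, the premise of an unbroken run of 15 positives at the end is itself contradicted by the negative value at well 250.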

The Panel finds that neither of these analyses is reliable. The
basic flaw in both analyses is that ORI found that Dr. Imanishi-
Kari had misrepresented data, based on a misconception about how
she had represented the data. The yellow tape counts are
represented as being from a "fusion experiment" only in the sense
that the supernatants analyzed in the assay are from cells
resulting from a fusion. Counts are represented as being from an
assay where most or all of the counts would be positives, because
the supernatants had been prescreened on the same coat with the
same probe. No representation is made about any particular order
in which the cells from the fusion were plated, the well numbers
were assigned, the supernatants were plated, or the wells for the
assay were put into tubes for counting. Dr. Speed recognized the
importance of the assay being a rescreen and expressed his
opinion that the analyses did not show that the data were
fabricated. Tr. 2179-81. The testimony of ORI's witnesses is
not directly relevant to the specific assay at issue here.

ORI argued, however, that we should nonetheless conclude that the
yellow tapes are from a different experiment because: (1) Dr.
Mosimann has more expertise than Dr. Speed in analyzing "fusion
experiments"; (2) testimony from ORI's scientific experts and
analysis of the "controls" established that gamma counter
readings of individual wells in "fusion assays" should be
independent of readings for other wells, no matter what reagent
is used; (3) the serial correlation analysis shows a cyclical
pattern inconsistent with a fusion assay, even if one would not
expect random dispersion; (4) this pattern is also evident in the
counts themselves; and (5) the run of 15 consecutive well
numbers shows that the data were constructed.

ORI's arguments lack merit, for the following reasons:

 Dr. Mosimann testified that his expectation of randomness
was based on his experience with "fusion experiments." Tr.
703, 705-09. However, when asked whether he would expect
the result in each well of a fusion experiment to be random
with respect to the neighboring well, he admitted that he
was "not necessarily qualified to speak in detail on this."
Tr. 708. Moreover, his articles on "fusion experiments"
relate to cell growth in wells and indicate that an
expectation of randomness depends on experimental
conditions. Exs. H794-A, B. The analysis in the articles
is premised on certain assumptions, including the following:
"After the fusion, the cells must be dispersed (this should
be verified by microscopical observation) and randomly
distributed in the original cell suspension by thorough
mixing." Ex. H794-B, at 37. This statement contradicts
(rather than supports) the notion that randomness should be
expected, regardless of experimental conditions. Dr.
Mosimann admitted that he did not know whether the cells in
the January fusion were thoroughly mixed. Tr. 5829.

 Dr. Imanishi-Kari presented unrebutted testimony that such
thorough mixing was difficult in her experiments, given the
delicate nature of the hybrids. Dr. Marrack, who indicated
that she has probably done 30 or 40 fusion experiments,
provided a detailed description of the physical process of
conducting a fusion. She explained how and why clumps could
occur and stated that things never look random on individual
plates because of the way things distribute themselves.
ORI attacked Dr. Marrack's testimony on the grounds that she
never saw the January fusion data or analyzed the plates on
which the assays were conducted. None of ORI's witnesses
analyzed the plates either, however. Dr. Marrack's
testimony that clumping of cells in the flask can lead to
non-random behavior on the plates undermines ORI's general
assumption, whether she saw the January fusion data or not.

 In contrast to Dr. Marrack's detailed description of the
fusion process, the testimony ORI presented was very
general. ORI's witnesses did not address the mechanics of
the process or the physical realities described by Dr.
Marrack or contradict her description.

 The testimony ORI presented from its scientific experts on
randomness in "fusion assays" (as opposed to plating of
fused cells) was conclusory and appeared to be based merely
on general impressions. Tr. 392 (Dahlberg), 1415-16
(Davie), 1506 (Gilbert), 1618-19 (McClure). Dr. Dahlberg
noted that in Weaver's assay "there are relatively few
positive values . . ." and testified that, based on his
experience with fusion assays, he would expect results with
negatives interspersed like the Weaver data. Tr. 390-91.
Similarly, Dr. Davie contrasted the January fusion with
"other fusions that have been looked at" which are
characterized by stretches of negatives followed by very
high values, because there is no relationship. Tr. 1369.
Thus, their expectations were related to fusion assays that,
like the Weaver data, were an initial screen for a
particular antibody characteristic, not a rescreen assay of
wells already scored as positive in a prior identical screen.

 Dr. Speed testified that he had observed non-random-
appearing behavior in assays on microtiter plates. Tr.
5955, 5966-67. His opinion that he would not expect
randomness in fusion assays was also based on analyzing the
Reis data, which showed that the three highest values were
each 12 positions apart. Tr. 2166. Dr. Mosimann
suggested that these were "outlier" values that a
statistician might disregard. Tr. 5624-26, 5831. Even if
these high values may be characterized as outliers and
disregarded (which is questionable), it is significant that
the highest values appear in positions that have some
relation to the plating of the cells in 24-well plates and
the testing of the supernatants in 96-well plates. Contrary
to what ORI suggested, Dr. Speed did not claim that the Reis
data had the same serial correlation pattern as the yellow
tape counts from the January fusion, but only that his
analysis of the Reis data displayed a statistically
significant serial correlation that is not consistent with
random data. Tr. 2166-75 (Speed); Ex. R57. ORI did not
effectively rebut that point.

 Even if ORI had shown that the "control" data sets were from
assays like the one in question (which ORI did not show),
any differences between the data sets could mean nothing
more than that the fusions leading to the Reis and Weaver
data were done under the experimental conditions assumed in
Dr. Mosimann's articles, and the January fusion was not.

 Contrary to what ORI suggested, Dr. Mosimann's serial
correlation analysis does not necessarily show an unexpected
relationship between the wells in a fusion experiment or
assay. The analysis merely shows a cyclical pattern in the
serial correlation coefficients of counts on tape, based on
their sequential position on the tape. ORI presented no
serial correlation analysis showing the relationships
between the values based on the various possible placements
of the corresponding wells on a plate. Dr. Mosimann's
results are inconsistent with an assumption of no
relationship between wells in a fusion experiment only if
the sequential position on the tape necessarily corresponds
to a meaningful position on a well-plate.

 Contrary to what ORI argued, the January fusion data on the
yellow tapes do not "demonstrate a strong cyclical pattern
in which the cell counts in sequential wells steadily rise
to a peak, then steadily fall to a valley . . . in a cycle
of 12 radioactivity values." ORI FFCL X.L.iii.11. Our own
visual observation and examination of the data show that,
while there is some general trend of rising and falling,
there are a significant number of interruptions in this
trend, and the placement of the peaks is not consistently in
the middle values of a cycle of 12.

ORI's theory that Dr. Imanishi-Kari took the yellow tapes from a
different experiment was also undercut by the fact that ORI
provided no satisfactory evidence of another type of experiment
that would have led to the serial correlation pattern observed on
the yellow tapes. Dr. Dahlberg ultimately proposed a theory that
the pattern could have been caused by drying on the edges of
plates in an experiment under experimental conditions with an
expectation that every well would be the same. Tr. 6345-47; Ex.
H690. He contrasted this with what he would expect from a
fusion--a random pattern of growth and a random pattern of high
and low values. As explained above, however, the January fusion
data analyzed were presented as a rescreen, for which one would
expect all high values. We fail to see why the experimental
conditions described by Dr. Dahlberg could not as reasonably have
been present in the fusion assay as represented, as in any other
type of experiment. Thus, we simply do not find ORI's
arguments persuasive.

The Panel also finds faulty ORI's reasoning, based on Dr.
Mosimann's permutation analysis, that "the unusual compression of
positive well numbers at the end of the series is consistent with
the need to assign a predetermined number of positive wells
(i.e., 138/260) to the numbers on the tapes." ORI Br. 88. This
argument depends on an assumption that there was a predetermined
total so that Dr. Imanishi-Kari was forced to insert more
positives at the end in order to arrive at the required totals.
But there is no basis to think that, if she had been fabricating
these data, she would have had to match any "predetermined
number." These were results for transgenic hybridomas from the
January fusion (not the normals used in the paper) and were not
published anywhere. There was no external constraint that
compelled her to use a predetermined total rather than simply
count the number of positive counts from the tapes and use that
total. If she fabricated the data on these pages, she also
fabricated the totals and had no need to create a run of 15
consecutive well numbers at the end.

In any event, we find that ORI's permutation analysis is not
reliable evidence that the data were not genuine, because:

 That analysis again assumes a random distribution of fusion
products into the growth wells, and fails to consider the
testimony of Dr. Marrack, which indicated that clumping of
cells could have contributed to a run of 15 positives by
increasing the likelihood that the last wells would receive
multiple cells per well. Supernatants from these wells
would thus be more likely to contain idiotype-positive
antibodies.

 Dr. Marrack's testimony supports the possibility of non-
random behavior anywhere on the plates. In its brief,
however, ORI assumed that such clumping would be more likely
to cause runs at the end of the plate, and then attempted to
show that this would not have been the case with the January
fusion. ORI asserted that the run of 15 positives was
followed by 28 negatives, because the original fusion was
plated into 24-well plates, for a total of 288 wells. ORI's
position assumes that the wells in the 24-well plates were
numbered 1 to 288, and these numbers stayed with
supernatants from those wells throughout the remainder of
the assays. This was contradicted by Dr. Imanishi-Kari's
testimony that well numbers were assigned only to wells
positive for growth. Tr. 4889. More important, page I-
1:104 shows "No growth wells" used as controls with no well
numbers assigned to the counts for these controls and a
circled "PE." The PE refers to the 24-well plates mentioned
in the fusion protocol on I-1:101.

 Dr. Speed found statistically significant runs of positives
and negatives in the Weaver data and a statistically
significant occurrence of 15 positive wells out of the last
18 wells in the Reis data. Tr. 2182, 2974, 3014-18; Ex.
R62; ORI Report 113. ORI discounted these findings, arguing
that (1) the Weaver data had "very few negatives," so a run
of positives would not be surprising, and (2) that the
probability of finding 15 positives in the last 18 wells of
the Reis data was <.001, and thus much more likely to occur
than the run of 15 at the end of the January fusion data,
which had a probability of <.0001. ORI Br. 88, n.66; ORI
Reply Br., App. C, at xvii. ORI's arguments ignore that
these occurrences were still statistically significant at
the commonly accepted .05 threshold, the standard ORI
employed to analyze Dr. Imanishi-Kari's data.
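The kind of tail probability at issue in the Reis comparison can be sketched under a simple random-placement model. The well totals below are hypothetical, chosen only for illustration (the actual Reis totals are in the record and not reproduced here); the point is that such a tail probability can fall well below the conventional .05 threshold.

```python
from math import comb

def p_tail_window(n_wells, n_pos, window, min_pos):
    """P(at least min_pos positives among the last `window` wells)
    when n_pos positives are placed at random among n_wells wells
    (hypergeometric tail probability)."""
    total = comb(n_wells, window)
    return sum(comb(n_pos, k) * comb(n_wells - n_pos, window - k)
               for k in range(min_pos, window + 1)) / total

# Hypothetical totals for illustration: 96 wells, 40 positives,
# at least 15 positives among the last 18 wells.
p = p_tail_window(96, 40, 18, 15)
print(p)  # well below the conventional .05 threshold
```

A probability of .001 and a probability of .0001 differ in magnitude, but both fall far below .05, which is the comparison the text describes.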

The Panel concludes that the statistical evidence relating to the
January fusion is not reliable evidence that the data are
fabricated.

ORI did not prove that the document examination findings
relating to the January fusion data demonstrated that the
data were intentionally fabricated or falsified.

ORI acknowledged that the "bulk" of its document examination
evidence on the January fusion data "relates to the transgenic
portion" at I-1:102-104, although a few document examination
findings related directly to I-1:107 (which contained a part of
the normal data). In fact, the findings deal only with an
inconsistency in ink formula on the green tapes in one of the
transgenic fusions, that for the lymph nodes. We therefore
address first the findings that ORI considered most significant,
i.e., that one portion of green tape on I-1:103 was not
consistent with the green tapes on I-1:102, 104, 106, 107 and
the rest of 103. We then address the more minor findings on I-
1:107 (i.e., that an ESDA of page I-1:31 contained impressions
from I-1:107 and that the two tapes on I-1:107 differed in the
intensity of ink).

The finding of a different ink formula on one of the green
counter tapes on I-1:103 is puzzling but does not prove

As explained in part earlier, the results of the assays on the
transgenic lymph node fusion are presented in two double columns
across three pages. The left side of each double column contains
yellow tape with results on the anti-kappa/lambda assay. The
right side of each double column contains green tape with results
on the anti-µ assay. The yellow and green tape results are lined
up to show results on the same 138 samples side-by-side. The
Secret Service examiners tested ink samples from multiple
locations on pieces of green tape from the left and right side
double columns on each page. Their central finding was that two
formulas of ribbon ink occurred on the green tapes for the
transgenic lymph node anti-µ assay. One formula was found only
on the green tape on the left side double column of I-1:103. The
green tapes on the remainder of I-1:102-104 all used a second
formula of ink, which also matched that on the normal fusion
tapes.

The green tape with the different formula on I-1:103 is not, as
the Secret Service described it, one piece of tape. Instead, we
find that the last three readings are on a separate piece of
green tape, the bottom edge of which exactly matches the top edge
of the green tape in the right side double column on that page.
The Secret Service findings that the green tape in the left and
right side double columns on I-1:103 have different ink would
thus imply that the ink changed in the middle of a single
continuous strip of counter tape without a break in ink intensity
to show a new ribbon. This raises the possibility, which was
not addressed at the hearing, of inconsistency in ink formula
within a ribbon.

The implication drawn by ORI from the different ink formula used
on a page in the middle of the presentation of data is that such
a difference could not innocently occur as it might if the ribbon
or printer were changed at the end of an assay. But the order in
which the samples are presented here need not reflect the precise
order in which the samples were tested in the gamma counter.
Testing may have been interrupted; one plate may have been run as
a whole and the other tested later; or a portion may have been
saved after an interruption and completed at the end of the
assay. See R. Br. 36, n.43. There is no particular reason to
assume that the normal assay occurred after the transgenic or the
transgenic spleen after the transgenic lymph nodes simply because
of the current order of the tapes or pages.

The Secret Service found other instances of green tapes that
matched each of the two ink formulas found at I-1:102-107. For
the reasons discussed in the overview of document examination
methods, we do not find that these matches provide useful
information about the likely dating of these experiments. They
simply demonstrate that both ink formulas were present in ribbons
used at the laboratory over a wide time frame. As discussed
above, ORI posited changes in ribbon or substitution of printers
to explain similar anomalies in unquestioned data, so innocent
explanations of the puzzling findings on ink formula are
available here as well.

We find such innocent explanations more plausible in light of the
nature of the data being challenged (again one of two transgenic
fusions submitted only because they were associated with a normal
fusion which was the source only of some of the negative results
in Table 2). In focussing so narrowly on the details of ink
formula, ORI also loses the bigger context of the substantial
evidence from surrounding data (some of which is detailed above)
that this entire experiment was genuine.

The document examination findings concerning I-1:107 do not
prove fabrication.

Mr. Hargett testified that I-1:107 (dated February 7, presumably
1985) was produced prior to I-1:31 (which was dated October 30,
presumably 1984) because they were in registration. Tr. 3311.
However, he acknowledged later that only a few notations from I-
1:107 actually appeared on the ESDA of I-1:31 and that, as to
this page, it was "a possibility" that the registration was
"produced as a result of placing the paired pages in registration
by compiling the individual loose pages." Tr. 3464-66.

ORI also pointed out that the intensity of the ink across the
green tapes on I-1:106 and 107 (the normal spleen and lymph node
fusions) differs; the numbers on 106 and in the first column on
107 are fairly light in ink intensity, and those in the three
right columns on 107 are somewhat darker. See ORI FFCLs. All
of the values for the samples tested were
negative, but Dr. Dahlberg argued that the negative values on the
green tape with the lighter ink intensity averaged higher than
the negative values on the tapes with the darker ink intensity.
Tr. 6329-32; Dahlberg Decl., Att. 7. The difference in ink
intensity is noticeable, but not dramatic, and again, we cannot
be certain of what the order was in which these samples were
tested, whether the testing was performed consecutively or
interrupted, or whether some toggling or manipulation of the
ribbon occurred to try to get a little more use from it. As for
the difference in background reading levels for negative samples,
Dr. Imanishi-Kari testified credibly that they can vary as an
effect of whether samples were tested using new or previously-
used tubes or of whether the immediately preceding experiment had
very high radioactive readings. Tr. 6457-58. Thus, their
difference in background values could support a conclusion that
part of the experiment was counted at a different time.

In light of the evidence as a whole, the Panel concludes that the
document examination evidence relating to the January fusion is
inadequate to support a conclusion that the data were more likely
than not to have been fabricated.

The Panel concludes that ORI did not prove charges four and five,
Table 2, by a preponderance of the evidence.

C. Table 3

1. Idiotype-positivity of Table 3 hybridomas

Table 3, charges one and two: ORI charged that including a
hybridoma in Table 3 implied that it produced idiotype-positive
antibodies but that Dr. Imanishi-Kari had "experimental evidence"
that eight of the 34 hybridomas listed in Table 3 were not
idiotype-positive. Charge letter 5-6. Therefore, ORI charged
that their inclusion in Table 3 constituted falsification by
deliberately failing to report that fact. Id. In addition, ORI
charged that, for three of the eight allegedly idiotype-negative
hybridomas, the Cell paper contained false affirmative statements
that they were idiotype positive. Id. at 6.

Issues: Table 3 listed 34 specific hybridomas and then provided
information on the serological results (in a column labelled "Ig"
for immunoglobulin) and molecular analyses (in four columns)
for each hybridoma. Cell paper 252. Dr. Imanishi-Kari provided
the information on isotype in the Ig column. Tr. 4693. For some
of the hybridomas, both heavy and light chains are identified,
while for others only a heavy chain isotype is listed. In
addition, four hybridomas are not identified as to isotype at
all; the notation "n.d." is listed in the Ig column for these
four hybridomas.

No explicit assertion appears in the Cell paper that all the
hybridomas listed in Table 3 produce idiotype-positive
antibodies, nor does Table 3 report any findings as to idiotype.
ORI argued that, in light of the central point of the paper that
a high frequency of endogenous idiotype-positive antibodies was
found in transgenic mice, it would have been pointless to include
molecular and serological characterizations of idiotype-negative
hybridomas and that readers would therefore have assumed
inclusion in Table 3 implied idiotype-positivity. ORI Br. 93-94.
In addition, the Cell paper contained two statements that
expressly referred to specifically named hybridomas as idiotype
positive. Of the hybridomas mentioned in those statements,
three were among the eight in Table 3 for which ORI alleged that
the experimental record did not support the claim of idiotype-
positivity.

Dr. Imanishi-Kari denied that it was essential to the paper that
every hybridoma in Table 3 be assumed to be idiotype-positive,
but asserted that she believed all of them were shown by
sufficient experimental evidence to be idiotype-positive, except
those which she listed as "n.d.," or not determined. R. Br. 63-

Analysis: We look first at whether a claim of idiotype-
positivity was in fact made in the paper as to the eight
hybridomas at issue. We then examine the experimental record to
determine if Dr. Imanishi-Kari could have interpreted the data
scientifically as supporting such a claim, if made.

No claim about the serology of the n.d. hybridomas was
implied in the paper; all but one of the four other
questioned hybridomas were expressly identified as idiotype
positive.

ORI argued strongly that, despite the absence of any express
claim to that effect, the paper implied that all the hybridomas
in Table 3 produced idiotype-positive antibodies. ORI Br. 93;
ORI Reply Br. 33-34. In part, ORI relied simply on the emphasis
in the paper on the significance of finding idiotype-positive
antibodies in hybridomas which did not express the transgene.
See, e.g., ORI Br. 92-93 and ORI Reply Br. 33-34, and cites
therein. While it is undisputed that this finding was central to
the paper, it does not necessarily follow that every hybridoma
discussed in Table 3 must therefore be understood to be
represented as idiotype positive. The fact that in several
locations, as ORI pointed out, the authors specify the idiotype-
positivity of individual hybridomas could be read as implying
that those not specifically mentioned were not necessarily
idiotype positive.

ORI also argued that, "while Table 3 does not expressly state
that the 34 hybridomas are idiotype-positive, neither the table
nor the paper make any sense if they are not read as being
idiotype positive." ORI Br. 93; Tr. 1490-91 (Gilbert).
Dr. Gilbert also testified that the table did not identify which
hybridomas were not idiotype-positive, if some were not, and the
molecular analyses on those which were not would have little
meaning. Tr. 1490-96. However, Dr. Dahlberg testified that the
significance of Table 3 would not be impaired if a few of the
hybridomas were not idiotype-positive, so long as these were so
labelled. Tr. 515-16; see also Tr. 1407 (Davie). Dr.
Imanishi-Kari took the position that experimental data supported
idiotype positivity for all the Table 3 clones except those which
were identified as "n.d." in the Ig column.

The Panel finds that the dispute about whether Table 3 implied
idiotype positivity is of very limited importance. Of the eight
hybridomas listed in the first charge, four had the n.d. notation
(L3.6.3, L3.13.6, L3.18.5, S1.3.2). We discuss these hybridomas
in more detail in relation to charge three for Table 3.
Generally, we find that the n.d. notation meant that little or no
serological information was available about these particular
hybridomas. Both Dr. Gilbert and Dr. Dahlberg testified that
the table would not lose its meaning by including some idiotype-
negative hybridomas if they were labelled. We conclude that the
n.d. label, while less than explicit, was adequate to make any
implication that these hybridomas were idiotype-positive
relatively meaningless in relation to these four.

Of the remaining four hybridomas listed in charge one, three are
among those expressly named as idiotype-positive (L4.2.6, L3.3.2,
S1.2.6), as noted in charge two. Consequently, the only
hybridoma for which a claim of idiotype positivity might be
implied but is not expressly made is L4.4.3. Since we find
experimental support for idiotype-positivity for all four
hybridomas, we need not reach a final conclusion about whether
inclusion in Table 3, in and of itself, was intended to imply a
claim of idiotype-positivity. We discuss these hybridomas below.

The experimental record included data which
Dr. Imanishi-Kari could scientifically interpret as
demonstrating idiotype-positivity in all four hybridomas
remaining at issue (not listed as n.d.).

All four of the relevant hybridomas tested positive in an ELISA
assay which Dr. Imanishi-Kari indicated had been done on an anti-
idiotype coat using isotype probes. I-1:83-88; R. Br. 64. In
addition, each of the hybridomas tested as positive for idiotype
in other experiments. R. Br. 64. ORI nevertheless contended
that experimental evidence did not exist, based largely on two
premises: (1) the four hybridomas tested as idiotype negative in
ascites experiments at I-1:48-58, which ORI treated as
definitive and as overriding any conflicting results; and (2)
forensic evidence undercut the reliability of the ELISA and
other assays supporting idiotype-positivity.

The negative results on one set of ascites assays did not
conclusively establish that these hybridomas were idiotype negative.

On the first point, ORI stated that the "assumption underlying
these charges is that tests on ascites assays are a much more
definitive indicator of . . . idiotype positivity than are tests
on supernatants." ORI Br. 95. It was not disputed that ascites
fluids can contain a much higher concentration of antibodies and
yield clearer results on assays than supernatants containing
antibodies of cultured hybridomas. ORI Br. 95; Tr. 2743
(Wortis). Our review of the ascites results at I-1:48-58 found
negative or at least borderline results for these four
hybridomas. There are two problems with ORI's use of the
results on I-1:48-58 as conclusive in ruling out idiotype-
positive antibody production in these hybridomas.

First, the ascites fluid for each hybridoma requires individual
preparation and expansion, which may not successfully produce
antibody at all or in sufficient quantity. Tr. 2743 (Wortis),
4760-61 (Imanishi-Kari), 1407 (Davie). ORI acknowledged that
ascites results were more definitive than supernatants only
"assuming a properly working assay." ORI Br. 95. Dr.
Imanishi-Kari asserted that the antibody production in the
ascites experiments on the hybridomas at issue in the assay at I-
1:48-58 may have failed. R. Br. 65. However, in its reply
brief, ORI asserted that the I-1 notebook contains evidence of
only one instance of ascites production (on I-1:36) and that it
must have been successful for these four hybridomas because
ascites tests on those hybridomas were done repeatedly (as shown
on a number of pages including I-1:31, 35, 41, and 43).

However, our review of the I-1 notebook does not support ORI's
assumption that ascites were made only once and that these
ascites produced antibody. Page I-1:36 lists 12 hybridomas for
which ascites were prepared, including the four at issue, and
describes protocols (dated 10/26-10/30) for two experiments to be
performed on the ascites fluids on particular subsets of those
hybridomas. The experiments on I-1:31-35 (dated 10/30-10/31)
appear to correspond to those protocols. Later pages, including
I-1:40 (dated 11/23/84) and I-1:44 (dated 11/20-11/21), contain
some evidence of further preparations or at least purification of
ascites from these hybridomas. These preparations may have been
unsuccessful in terms of antibody concentration and may have been
used in the ascites experiments at I-1:48-58. Because the
records are not adequate to track a specific preparation or
purification of ascites used in a particular assay, we have no
basis to conclude that the products of the same ascites
preparation were used for all the ascites assays.

Second, positive results for idiotype were obtained for the same
hybridomas in other assays at I-1:31, I-1:34-35, I-1:41, and I-
1:43. R. Br. 64; Tr. 4759-69 (Imanishi-Kari). ORI disregarded
these results, arguing that some of the results were obtained on
a rabbit anti-124/40 coat rather than the guinea pig anti-17.2.25
and that some of the readings which Dr. Imanishi-Kari interpreted
as positive should be considered negative or ambiguous. ORI
Br. 96-97. We find that these assays, assuming the data are
authentic, provide experimental evidence adequate to support the
conclusion that it was more likely that Dr. Imanishi-Kari made a
judgment at the time that all four hybridomas produced idiotype-
positive antibodies than that she intentionally misrepresented
these hybridomas as idiotype positive. We therefore turn next to
the question of the authenticity of the data.

ORI did not prove by a preponderance of the evidence that
the assays supporting idiotype-positivity for these
hybridomas should be disregarded for forensic reasons.

Finally, ORI argued that the assays which conflicted with its
claim of idiotype negativity for these hybridomas should be
disregarded for forensic reasons. ORI Br. 96. First, ORI has
charged that the ELISA was actually done on an anti-
immunoglobulin coat which would provide no information about
idiotype-positivity. We discuss this charge in detail in
relation to charge six for Table 3 and conclude that it is
without merit. In regard to these charges, ORI argued that even
if the ELISA was a genuine assay for idiotype, its results would
be less reliable than ascites results. ORI Br. 95, n.70. We
turn next, therefore, to ORI's charge that ascites results at I-
1:41 and 43 were fabricated in 1986 and that ascites results at
I-1:31, 34 and 35 are "problematic." ORI Br. 96.

The main document examination attack on I-1:41 and 43 is based on
their supposed relation, along with I-1:30 and I-1:113, to
certain pages in the I-3 notebook, dated in 1986. This
association was based on the pages having been written on pad
paper with the same defect and containing pen inks in some
entries that match in formula. In addition, I-1:41 was linked by
ESDA to I-1:113 (and an ESDA linked I-1:30 to I-3:25, which
reinforced the claim of a relation between this group of pages in
I-1 and some pages in I-3). We addressed above the flaws in
ORI's effort to use these commonalities as a basis to infer that
the I-1 pages were fabricated.

ORI's claim that I-1:31 was "grossly out of order" was based on
Secret Service findings that impressions of I-1:107 (dated
February 7, 1985) appear on an ESDA of I-1:31 in registration,
while impressions of I-1:31 appear on an ESDA of I-1:98. ORI Br.
96; Ex. H500, at 2. ORI concluded that I-1:31 was prepared after
I-1:107. The material in registration is only a heading entry
and little else from I-1:107 appears on I-1:31. The fact that so
little of the writing was impressed on I-1:31 suggests that the
registration does not necessarily indicate that the pages were
written in order on a single pad. Tr. 4212-15 (Richards). The
few entries from I-1:31 that appear in impressions on I-1:98 are
not in registration. As discussed elsewhere, conclusions about
the order in which these pages were written cannot reasonably be
drawn from the sketchy ESDA impressions, in light of the
limitations of the ESDAs in determining whether a page receiving
impressions already contained writing so that impressions were
made merely by laying one page on top of another while adding later entries.

In any case, the Secret Service offered no evidence that the raw
data (i.e., the counter tapes) on I-1:31, 41 and 43 were not
genuine or could not have been generated in October 1984. In
light of our discussion about Dr. Imanishi-Kari's practices in
organizing data, there is no reason why the pages might not have
been put together in 1985 or even later without affecting the
authenticity of the data on them. Furthermore, the protocol on
I-1:36 (an unchallenged page on which ORI relied in discussing
ascites preparation) ties in closely to the experiment on I-1:31
and suggests that the October 1984 date is probably correct even
if the page was compiled retrospectively.

We conclude that ORI did not prove by a preponderance of the
evidence that the data supporting the idiotype-positivity of the
four hybridomas named in the charge letter were fabricated.

The Panel concludes that ORI did not prove charges one and two,
Table 3, by a preponderance of the evidence.

2. Claims about "n.d." notations

Table 3, charge three: ORI charged that Dr. Imanishi-Kari's
"deliberate making of false statements" by noting "n.d." (meaning
"not determined") in the column on isotype for four hybridomas in
Table 3 when data showed them to be negative for any antibody
production was scientific misconduct. Charge letter 6.

Issue: ORI argued that including these four hybridomas in Table
3 implied that they had been determined to be producing antibody
(and, in fact, as discussed above, to be producing idiotype-
positive antibody). ORI Br. 98. ORI interpreted the notation
"n.d." to mean that the antibody from these hybridomas had not
been tested for isotype.

ORI treated the inclusion of the n.d. hybridomas as making two
positive assertions: that these clones produce antibody and that
the antibody is idiotype positive. We find that n.d. may just as
reasonably be read as a negative assertion: nothing is
determined about the serology of these hybridomas. This reading
was supported by Dr. Wortis who testified that the n.d. notation
implied that these hybridomas were simply irrelevant because "we
don't know what is actually being produced by those clones." Tr.

However, even if we read the n.d. notations as asserting that
antibodies were present and that what could not be determined was
only the isotype of those antibodies, we do not find clear
evidence that the assertion is false, much less intentionally and
deliberately falsified. ORI's claim that the hybridomas were
proven not to produce antibody was entirely based on an ELISA
assay at I-1:83-88. The summary of the results states that the
ELISA was performed on an anti-idiotype coat using probes to test
for various heavy and light chain isotypes. I-1:83.
Consequently, the ELISA would provide information on the isotypes
of all antibodies captured by the anti-idiotype coat and would
provide no information about the isotype of idiotype-negative
antibodies. The four hybridomas listed in Table 3 as n.d. for
isotype were tested in the ELISA as negative for all isotype
probes. Based on this assay, therefore, it would not be
possible to determine whether the hybridomas were producing no
antibody or whether the antibody they produced was simply not
captured by the anti-idiotype coat.

ORI believed that this assay demonstrated that the hybridomas
produced no antibody because ORI accepted the assertion by
Dr. O'Toole that this ELISA was actually performed using an anti-
immunoglobulin reagent which would capture all antibodies. If
that were the case, negative results on all the heavy and light
chain probes would seem to rule out the presence of antibody
(assuming no problems in the preparation of the samples or the
functioning of the reagents). However, for reasons discussed in
detail in a later section, we find that the ELISA was performed
as recorded on an anti-idiotype coat. We therefore conclude that
this assay did not show that the four hybridomas were not
producing antibody.

Dr. Dahlberg agreed at the hearing that the charge that the use
of n.d. was incorrect was "based totally" on the assumption that
an ELISA using an anti-immunoglobulin coat had found no antibody
production. Tr. 519-20. In its brief, ORI also intimated that
the charge should be sustained because including the four n.d.
hybridomas "suggested a level of idiotype positivity that was not
present." ORI Br. 98; see also ORI FFCL XI.C.ii.12.
Dr. Imanishi-Kari testified that these four were not idiotype-
positive, based on the ELISA results. Tr. 4754, 4779. We
discuss elsewhere the dispute about whether Table 3 generally
implies idiotype positivity for those hybridomas not specifically
referenced in the text as idiotype positive. If the reader
understood these four additional hybridomas to be idiotype-
positive antibody producers, the reader might draw an implication
from the molecular results in Table 3 that the heavy chains were
endogenous since the molecular results indicated that the
transgene was not being expressed. ORI did not prove, however,
that the central point of the paper would have been undercut if
no information at all were provided on these hybridomas.
Overall, we do not find it plausible that Dr. Imanishi-Kari was
intentionally misleading readers to believe she had obtained four
more hybridomas that produced idiotype-positive antibody while
listing them as n.d. and therefore offering the reader no
information about their serology. ORI understated the
pointlessness of such a misrepresentation in acknowledging that
it would make the hybridomas "less interesting." ORI Br. 98. If
Dr. Imanishi-Kari deliberately intended to mislead readers, it is
more likely that she would have put an endogenous Ig for these
hybridomas and explicitly stated that they were idiotype positive.

The Panel therefore concludes that ORI did not prove charge
three, Table 3, by a preponderance of the evidence.

3. Double producers

Table 3, charges four and five: ORI charged that
Dr. Imanishi-Kari failed to report in the Ig column of Table 3
that nine hybridomas reported as expressing only an endogenous
heavy chain also expressed a μ heavy chain, and this was
deliberately misleading and a falsification. ORI also charged
that the statement in the Cell paper that "[a]ll of our
hybridomas did secrete Ig, yet none produced two Ig heavy chains"
was a deliberate false statement since there was experimental
evidence that 13 hybridomas were double producers. ORI asserted
that these acts constituted scientific misconduct. Charge letter

Issue: ORI claimed that Dr. Imanishi-Kari had experimental
evidence that 13 of the Table 3 hybridomas were double producers,
because two heavy chains (μ and either γ or α) were scored
positive in the ELISA at I-1:83-88 for these 13 hybridomas and
some of the hybridomas (reported in Table 3 as endogenous) also
scored positive for transgene μ in the assay at R-1:19-24 or
other assays. Dr. Imanishi-Kari said that she did not draft the
double producer language in the text, but agreed with it, so long
as the proper definition of double producer is used. She said
she had ruled out the existence of double producers by subcloning
or relying on the molecular data. Dr. Imanishi-Kari said that
she did not rely on the assay at R-1:19-24 because BET-1 was not
working specifically in that assay and because there was a strong
possibility of contamination. ORI argued that she was
responsible; that some of her subcloning data were fabricated;
that, in any event, the subcloning data are ambiguous or
nonexistent for five of the clones; and that she could not rely
on the molecular data. ORI argued that Dr. Imanishi-Kari should
have reported in Table 3 that both heavy chains scored positive
in the ELISA.

Analysis: For reasons we explain in part elsewhere, we conclude
that ORI did not prove that any subcloning data were fabricated.
As we explain in this section, the record as a whole supports a
conclusion that Table 3 and the statement about double producers
reflect an honest difference in interpretation or judgment of
data, rather than that they are deliberately false, for the
following reasons:

o ORI's findings ignore the textual and experimental context
in which the statement about double producers was written
(possibly in editing of the paper and likely not by
Dr. Imanishi-Kari). The text relates to whether more than
one heavy chain was "stably transcribed," and does not imply
that no other heavy chains were ever detected. Draft
language supports testimony of Dr. Imanishi-Kari's witnesses
that, at the time, she would have approached the question
from the viewpoint that she would need proof of stable
double producers before reporting more than one heavy chain
for a hybridoma.

o ORI's proposed definition of "double producer" would include
a well containing more than one cell line. This definition
was not supported, even by ORI's own experts. Dr.
Imanishi-Kari's definition was supported by key witnesses,
as was her view that subcloning would be the primary way of
determining whether there was a true double producer or
merely more than one cell line per well.

o Internal evidence in the data supports the view that
Dr. Imanishi-Kari did not disregard evidence of possible
double producers but that she and Dr. Reis did considerable
work to more fully characterize hybridomas tested by the
ELISA. Dr. Reis corroborated Dr. Imanishi-Kari's testimony
about why they did not interpret the assay at R-1:19-24 as
ORI did. When Dr. O'Toole first asked questions in 1986
about double producers, she was told by Mr. Albanese that
subcloning was done.

o ORI did not establish that Dr. Imanishi-Kari had a motive to
deliberately misrepresent whether she found double
producers. There is conflicting testimony on whether such a
finding would undercut the central thesis of the paper
(depending on whether the endogenous heavy chains were
associated with idiotype-positivity). Even if the thesis
would have been undercut, however, data proving the
existence of double producers would have been worth publishing.

o The data as a whole are consistent with Dr. Imanishi-Kari's
assertion that she made a judgment based on the serological
and molecular data that she did not have proof of double
producers. ORI raised questions about interpretation of
some of the subcloning and other data and about the
soundness of relying on the molecular data for some of the
hybridomas. Some of ORI's interpretations have no merit,
while others raise serious scientific questions. The issue
here is not whether the judgments were sound, but whether
this is indeed a case where honest judgments were made,
rather than a case of deliberate falsification. In
evaluating the latter issue, ORI ignored the fact that some
of the hybridomas which ORI questioned vigorously were
chosen for gene sequencing and analyzed in detail in the
Cell paper, which shows that the authors did not simply
disregard seemingly conflicting results.

o The dispute about double producers was confused by lack of
clarity in what Dr. O'Toole was alleging, but we agree with
others who have sought to evaluate the dispute that it is at
heart a scientific one.

We discuss the basis for each of these conclusions below.

ORI's findings ignored the textual and experimental context.

The paragraph in the Cell paper that addresses double producers
is in the "Discussion" section and states:

Many transgene hybridomas from either M95 or M54 transgenic
mice retain the transgenic DNA but do not stably transcribe
the transgene (Figure 3 and Table 3). One might have
expected that even if many clones expressing endogenous
genes were selectively expanded, they would be double
producers of heavy chains, as has been observed with TNP-
specific μ + κ transgenic mice (Rusconi and Kohler,
1985). All of our hybridomas did secrete Ig, yet none
produced two Ig heavy chains. Apparently, the transgenic
DNA transcription is repressed by some mechanism that spares
the endogenous rearranged Ig heavy chain genes.

Cell paper 256 (emphasis added). An earlier draft of the paper
(shown to Dr. O'Toole for her comments) reads much differently.
Ex. H227, at 21. This draft contains a description related to
Dr. Weaver's earlier work, in which he found "that the μ
transgene is expressed in all transgenic A-MuLV-transformed pre-
B cells. . . ." Id. This draft also states that it "is probably
significant that we have never observed double-producers." Id.
ORI provided no specific evidence about who drafted either
version; changes may have been attributable to editing to make
the discussion more succinct. We find it significant that the
earlier version is consistent with what Dr. Imanishi-Kari
testified her approach would have been at the time: that she
needed proof of double producers to report more than one heavy
chain for a hybridoma.

Dr. Imanishi-Kari's explanation of the experimental climate was
also supported by Dr. Baltimore, who testified that the statement
about double producers was made in the context of discussing
allelic exclusion (the concept that the expression of one Ig
heavy chain prevents the expression of a second chain). Tr.
2061-63 (Baltimore), 4695-96 (Imanishi-Kari). This concept was
based on previous work by Dr. Baltimore (a Nobel prize winner)
and supports a conclusion that Dr. Imanishi-Kari would have
evaluated her data as showing double producers only if she had
conclusive evidence. Also, the paragraph as a whole indicates
that the kind of double producer being discussed is a very
specific one: one cell producing two antibody heavy chains, one
a μ and another a γ or α.

Finally, ORI's analysis of this question completely ignores the
reference in the paragraph at issue to whether the transgene was
"stably transcribed." Based on their previous work, the
authors would have expected the transgene to be transcribed, and
Dr. Reis worked with subclones looking for stable μ producers.
Cell paper 255; Tr. 2522 (Reis); Dahlberg Decl. 13. Dr.
Gilbert's testimony supports the idea that the focus should be on
"cells with fixed properties as opposed to changing ones." Tr.
1552-53 (Gilbert).

ORI's reliance on certain data as showing double producers
is based on an erroneous definition of double producers and
on a misreading of the data.

ORI proposed that we use the following definition of double
producers in evaluating these charges:

Operationally, a culture or clone of a cell producing two
different heavy chains; may be due either to co-expression
by a single cell clone of two different heavy chains, or to
the presence of more than one clone of cells in the culture.

Glossary 12 (unnumbered). Dr. Imanishi-Kari proposed the
following definition:

A clone (or single cell) producing two different heavy
chains or two different light chains. Double producer and
co-expressor are synonyms. Finding more than one clone in a
culture does not amount to having a double producer. A
culture (well or flask) may be "double positive" due to the
presence of more than one clone of cells. This should not
be confused with true double positives.

Id. ORI's key scientific experts, Drs. Davie and Gilbert,
supported Dr. Imanishi-Kari's definition rather than ORI's. Tr.
1400 (Davie), 1524-25 (Gilbert); see also Tr. 1957 (Eisen), 2058-
59 (Baltimore), 2369-70 (Weaver), 2749 (Wortis), 511-12
(Dahlberg).

ORI determined that 13 hybridomas from Table 3 were double
producers because they "scored positive" on the ELISA at I-1:83-
88 for either γ or α and "scored positive" for μ. Results of
the ELISA are scored in columns on I-1:83-86 for 146 hybridomas,
some of which were tested twice. Column headings include the
following heavy chain reagents: anti-μ, anti-γ1, anti-γ3, anti-
γ2b, anti-γ2a, and anti-α. In each column, readings are recorded,
ranging from +/- to +++. These scores indicate
Dr. Imanishi-Kari's judgment about whether the degree of color
she observed in each well indicated a reaction.

Nine of the 13 hybridomas ORI said were double producers on the
ELISA were reported in Table 3 as expressing endogenous heavy
chains (L3.10.6, L3.4.4, L3.9.4, L4.13.2, L4.5.2, L4.7.2, L4.8.2,
S1.3.1, and L4.10.1), and four were reported as expressing μ
heavy chains (S2.14.4, L3.14.5, L4.14.3, S2.15.6). ORI said it
did not dispute that the ELISA was authentic isotyping data
(although alleging that the coating reagent for the ELISA was
falsified). ORI also said that seven of the nine hybridomas
reported as having endogenous heavy chains scored positive for
BET-1 on the assay at R-1:19-24. ORI also cited other data for a
few hybridomas, which ORI said indicated that they were transgene
μ producers. ORI Br. 101, n.79.

Contrary to what ORI suggested, a positive score on the ELISA for
more than one heavy chain would not by itself be evidence of a
double producer, under the definition agreed to by Drs. Davie and
Gilbert. The possibility would exist that the well (although
assigned a clone number such as L4.7.2) contained more than one
cell line. Moreover, contrary to what ORI's findings implied,
some of the hybridomas were scored +/- for one of the two heavy
chains, not as an unambiguous positive. A few of the hybridomas
at issue were retested in the ELISA, with different results the
second time.

Dr. Imanishi-Kari has testified that she did not consider her
ELISA results to be fully reliable because she had to read the
assay by eye since she did not have an ELISA optical scanner.
Tr. 4679-80. Dr. Dahlberg accepted as true that she did not have
a scanner. Tr. 492-94. To read the ELISA, she had to determine
the degree of color intensity, which involves judgment. The fact
that there are many +/- or + results indicates that her initial
reading was very conservative. A summary column on each of the
ELISA pages (the next-to-last column) indicates one or
more heavy chains for each hybridoma; these notations were
apparently made some time before Dr. Reis did her experiments in
May 1985. R-1:19; Tr. 2518-20 (Reis). While two heavy chains
are recorded in this column for some of the 13 hybridomas at
issue, for others only one heavy chain is recorded. Some of
these are hybridomas with only +/- or + for the second heavy
chain. Thus, her evaluation of the ELISA results some time prior
to May 1985 was not the same as Dr. Dahlberg's.

ORI also is mistaken in relying on the assay at R-1:19-24 as
showing double producers for seven hybridomas which ORI said
showed positive for μ on this assay. ORI Br. 101. First, a μ
result alone would not establish a true double producer, as
discussed above. Second, Dr. Imanishi-Kari testified that she
did not consider the results of this assay reliable because BET-1
was not working well and because there was possible
contamination. Tr. 4827-29. Dr. Reis corroborated this
testimony. Tr. 2514-18. ORI tried to rebut the contamination
explanation through testimony by Dr. Dahlberg that contamination
would have had different effects. This testimony made
assumptions about the type of contamination that are not
warranted. More important, however, the reagent problem alone
provides a reason not to rely on the positive results for
transgene μ.

ORI did not show that the subcloning data are fabricated or unreliable.

The testimony of virtually all of the scientific witnesses
recognized subcloning as the technique that would ordinarily be
used to distinguish between a single clone that was a double
producer and multiple clones in a well. Tr. 511-13 (Dahlberg),
1401 (Davie), 1531, 1553 (Gilbert), 2750, 2757 (Wortis), 1781
(Huber), 2059 (Baltimore). The record supports a conclusion that
Dr. Imanishi-Kari recognized this and did subcloning of all but
three of the 13 clones ORI said were double producers. ORI did
not challenge all of the subcloning data, but challenged the
following pages based on document examination evidence: I-1:97-
99, 119, and 121. ORI also relied on statistical evidence for I-
1:121. We discuss elsewhere the evidence on I-1:119 and I-1:121,
and why we conclude that ORI did not prove that those pages were
fabricated. In this subsection, we discuss testimonial and
documentary evidence that supports a conclusion that subcloning
was in fact done and other reasons why we conclude that ORI did
not prove that pages I-1:97-99 were fabricated.

As noted above, ORI did not claim that the ELISA was not real
isotyping data. On the ELISA, red circles appear around certain
of the hybridoma numbers on the left of the pages, including
eight of the 13 at issue. The following notation appears in red
ink on I-1:86: "Subclone again the circle over." Thus, there is
internal evidence that Dr. Imanishi-Kari recognized the
importance of subcloning for most of these clones to determine
whether the two heavy chains were from one or more cell lines.
Dr. O'Toole claimed that she saw the ELISA at the Wortis meeting,
but she did not claim that the red circles or the notation were
not on the ELISA then. The red circles correspond to hybridomas
listed as being subcloned, frozen, and transferred in 24 wells on
pages I-1:95, 95A, and 96. There is other evidence that
subcloning was in fact done. The notation "subclone" is next to
hybridoma L3.9.4. on I-1:25 and 27. Also, Dr. O'Toole testified
that, when she questioned Mr. Albanese in May 1986, after
discovering the 17 pages, about whether he had detected some μ
related to hybridomas reported as having endogenous heavy chains,
he indicated yes, but that subcloning was done. Tr. 1007-09.
Dr. O'Toole's contemporaneous notes of her conversation with Mr.
Albanese refer to subcloning. Ex. H19 (Dr. O'Toole notebook) at

The analysis of subclones is shown in the notebooks as done in
part by Dr. Reis and in part by Dr. Imanishi-Kari. Dr. Reis was
trying to find stable μ producers. Tr. 2522 (Reis); Dahlberg
Decl. 13. Dr. Imanishi-Kari's work included using reagents to
detect γs. ORI did not allege that the Reis data were
fabricated and, in fact, relied on them as showing that some
hybridomas were double producers. If the Reis data on subcloning
were the only subcloning data, and Dr. Imanishi-Kari's were
fabricated, this would suggest that Dr. Reis conspired with
Dr. Imanishi-Kari to hide evidence of double producers--a
scenario which we find on the whole to be implausible,
particularly since Dr. Reis left her notebook in an accessible
place in the laboratory when she left for Brazil.

The evidence that the need for subcloning was recognized at the
time, and that Dr. Reis did some of the subcloning, undercuts a
scenario of fabrication of the subcloning data in general.

The evidence on which ORI relied as showing that pages at I-1:97-
98 are fabricated (and which ORI said was unrebutted and
undisputed) is based on the Secret Service analyses.
Specifically, ORI relied on findings that: (1) the original date
on page 97 was 10/12, but it was changed to 1/10/85 (and there
were similar date changes for related pages); (2) the tapes on
these pages are "most consistent" with the tapes being produced
in October 1984; and (3) page 96 (which ORI called a "summary of
results") was "produced prior to" I-1:36, which is dated 10/13.
ORI said that this showed that the data were fabricated because
the original date was 10/12. Based on pages I-1:28-29,
ORI concluded that hybridomas L3.5.2 and L4.13.2 were not
available for subcloning until 10/8 and L3.10.6 was not available
until between 10/8 and 10/13. ORI then concluded that supernatants
from subclones could not have been available by 10/12, citing
Dr. Imanishi-Kari's testimony that it takes at least 10 days to
grow up a subclone for testing. See ORI Br. 103-04; Tr. 4821

This evidence is unpersuasive, for the following reasons:

o The numbers such as L3.5.2 and L3.10.6 derive from cloning
of the parent wells shown as being done starting 8/30/84 and
9/18/84, so there is no reason to believe that subclones of
these clones could not have grown sufficiently to be tested
by 10/12. The mere fact that the records indicate no
freezing of a few of these clones until around 10/12 does
not necessarily imply that they had not grown sufficiently
to subclone 10 days prior to this date.

 The date changes on I-1:96-98 are obvious. ORI briefs
suggested no motive Dr. Imanishi-Kari would have to falsify
the dates other than to suggest that she first put 10/12,
then realized that this was not credible because of the
freezing dates, so she then changed it to a later date. Since
the freezing theory has no merit (and, in any event, would not
explain why a fabricator would make an obvious date change
to these pages instead of redoing the freezing lists), ORI
did not establish any reason why it makes a difference when
the tests were done.

 While ORI asserted that the Secret Service concluded from
the ESDA of I-1:36 that page I-1:96 was "produced prior to"
I-1:36, this assertion was wholly unsupported. Mr. Hargett,
one of the Secret Service examiners, testified that the
statement in their report to that effect was an error and
the correct finding, which was also in the report, was that
I-1:96 was produced "on top of" I-1:36, which implies
nothing about the order in which they were produced. Tr.
3238; Ex. H500, at 2. This is clearly the correct
conclusion. Page I-1:36 is a spiral notebook page, and
therefore not from the same pad as I-1:96. An independent
review of the ESDA shows that impressions from I-1:96 that
appear on the ESDA of I-1:36 are not in registration with
the page or each other. Thus, there is no basis for
concluding which of these pages was produced before the other.

 ORI also relied on other ESDA analyses, which ORI said
showed that I-1:98 was produced after a page dated 10/30/84.
ORI FFCL XI.B.iii.b.5. If true, this would be consistent
with the January dates now on the pages. We do not,
however, rely on these analyses. ORI misstates what the
ESDA for I-1:98 shows, since the impressions do not appear
in registration. Compare ORI FFCL XI.B.iii.b.5 with Tr.
3479-80 (Hargett). We find they are inconclusive as to the
date or dates the data were in fact recorded on I-1:97-99
(which, as noted above, could differ from the date or dates
the tests were actually performed).

 The Secret Service finding that the tapes are "most
consistent with" other tapes produced in October 1984 is
similarly inconclusive.

Thus, we conclude that ORI did not prove that subcloning data
were fabricated or falsified.

The data are consistent with a judgment having been made
that there were no double producers.

Dr. Imanishi-Kari has said that she made judgments based on a
combination of the serological and molecular data. Contrary to
what ORI argued, we do not find her statements on this to be
inconsistent, nor do we agree that her statements suggest that
her decisions were necessarily based on a "convergence of both
types of data, each leading to the same conclusion." ORI Br.
106. She has also been consistent in stating that she found some
tests to be more reliable than others.

ORI effectively admitted that, if the subcloning data are not
fabricated, the data would support a conclusion that some of the
13 hybridomas were not double producers. ORI Br. 103. ORI
argued that for some of the hybridomas there is no evidence they
were subcloned, and that, for others, the subcloning results are
ambiguous or incomplete. ORI challenged whether Dr. Imanishi-
Kari could reasonably rely on the molecular data, generally and
with respect to particular hybridomas. For other hybridomas, ORI
argued that the laboratory results were "discrepant." ORI FFCL

Dr. O'Toole and others posited that a low-level expression of μ
might not give a clear result on a molecular assay and that the
molecular assays were not as sensitive as the serological assays.
ORI said the key issue, however, was the timing of the subcloning
results relative to the molecular results. ORI pointed out that:
(1) it is undisputed that hybridomas tend to lose chromosomes
over time and through subcloning; and (2) Dr. Imanishi-Kari and
Dr. Wortis had said that other phenomena--class switching (also
termed "translocation") and transactivation--can change isotype
expression over time. ORI Br. 102, citing Tr. 4716-17, 4723-24,
4807, 4877 (Imanishi-Kari), 2780-81 (Wortis), 2108 (Baltimore),
2827 (Capra), 1503 (Gilbert), 345 (Dahlberg); Ex. H150 (Journal
article at 3311, 3323).

The testimony shows some difference of opinion over the extent to
which it would be reasonable generally to rely on molecular data
in making judgments about double producers. Dr. Davie noted that
Dr. Weaver was simply assuming all of the hybridomas were "clean,
single heavy chain expressors" and was not doing his studies to
address the double producer issue. Tr. 1379-80. Dr. Gilbert
expressed concern that low level μ transgene expression might be
missed in a molecular assay if a small amount of μ on the
molecular test were ignored. Tr. 1496-97. On the other hand,
Dr. McClure testified that a combination of serology and
molecular analysis could tell "what protein of a particular
idiotype is being produced, and where that protein could be
coming from in terms of its gene expression." Tr. 1639-40. Dr.
Baltimore said that there is no general answer to the question of
whether the molecular assays would be more or less sensitive than
the serological assays, because it depends on a variety of
technical things, such as the nature of the probes and the amount
of radioactivity. Tr. 2060. He described the S1 nuclease test
as a "precise measure of whether the transgene is expressed" and
the 17.2.25 molecular test as a "somewhat less precise measure
[that] should overlap" the S1 nuclease test. Tr. 2055-57. Dr.
Eisen said there was no reason here to think that the molecular
assays were less sensitive than the serological ones. Tr. 1955-

Overall, the record supports a conclusion that, in some
circumstances, it might be reasonable to rely on a combination of
molecular and serological data in evaluating whether double
producers exist. Whether a connection could properly be drawn
between the particular molecular and serological assays here is a
more serious scientific issue, given possibilities such as loss
of chromosomes and class switching. ORI did not establish,
however, that Dr. Imanishi-Kari would have fully understood these
issues and their implications at the time of the Cell paper, and
the record indicates to the contrary. The papers discussing
these phenomena submitted by ORI were not published until the
early 1990's. See, e.g., Ex. H151; Cf. Tr. 420-22 (Dahlberg).
Moreover, Dr. Wortis indicated that it would be possible to trace
cloning and subcloning and to connect it to the molecular data to
make determinations of whether there were double producers. Tr.

In any event, the soundness of making judgments about double
producers based on a combination of serological and molecular
data is not what is at issue here. The issue before us is
whether the Cell paper reflects judgments or interpretations of
data which, if in error, were honest error rather than deliberate
falsification. Based on our examination of the hybridomas for
which ORI said (in one place or another) that the subcloning did
not exist or was ambiguous, we conclude that this is a case where
honest judgments were made based on the data, not a case of
deliberate falsification.

Our examination differed from ORI's for various other reasons,
including the following:

 In evaluating the data for these hybridomas, ORI examined
the data from the perspective that Dr. Imanishi-Kari was
obliged to report a hybridoma as a double producer even if
she had only an ambiguous reading for a second heavy chain
or unambiguous readings that could indicate two cell lines
in a well, unless she had definitive evidence to rule out
the second heavy chain. As discussed above, this approach
ignores the textual and experimental context which suggests
that she likely had a different approach.

 As discussed below, ORI treated a +/- or a + on the ELISA as
a "positive score" for μ, whereas the record indicates that
Dr. Imanishi-Kari considered the ELISA readings as not
wholly reliable and these scores may have been reevaluated
as negatives for some hybridomas, including for some where
there was consistency in results for all hybridomas from one
parent well.

 ORI relied on positive BET-1 results for the assay on R-
1:19-24, whereas the record shows that Drs. Imanishi-Kari
and Reis likely disregarded these results because they
considered them unreliable.

 ORI disregarded the subcloning results on pages that it
found to be fabricated but did not prove were fabricated.

 ORI relied on results of assays by Dr. Reis that showed some
production of μ after subclones had grown up, but may have
been interpreted as not showing stable transcription of μ in
a single cell line.

 ORI disregarded a few molecular results because Dr. Weaver
no longer has the underlying data. Based on the record as a
whole, we concluded that Dr. Imanishi-Kari likely either
relied on Dr. Weaver's representation of his results at the
time, or merely misunderstood them.

 ORI failed to take into account all of the information in
the record, including the S1 nuclease results and sequencing
data for some of the hybridomas, which appear in the Cell
paper, and interpreted other data in a way which is not the
only reasonable way (such as using a cut-off for ascites
results different from what Dr. Imanishi-Kari likely used).

We do not consider it necessary to discuss each of the hybridomas
at issue in detail, since our conclusions are consistent with
that of the NIH Scientific Panel and since ORI conceded that it
would not have found scientific misconduct if it had not found
that data were fabricated. We make a few general observations
here about subsets of hybridomas that ORI particularly
questioned, and why we conclude that the data are consistent with
Dr. Imanishi-Kari having made judgments, as she said she did,
based on a combination of the serological and molecular work.

For hybridomas which were not subcloned, the μ results on the
ELISA were ambiguous and the isotypes listed in Table 3 were
consistent with molecular results, including negatives on the S1
nuclease. Where the hybridomas were subcloned, the subcloning
could be read as confirming either that a weak result for the
second heavy chain in the ELISA was a false positive or that
there were two cell lines in the well. The molecular data either
as maintained or as previously reported are consistent with a
judgment having been made that there was no stable double
producer, and that the molecular results corresponded to the
hybridoma cells with the serological results for the heavy chain
reported. For all of the hybridomas where ORI alleged that the
laboratory results were discrepant (L4.13.2, L3.4.4, and L3.9.4),
subcloning separated out a stable γ producer, which was
consistent with the molecular results, and sequencing was done
which is reported in the paper. See Cell paper, Figs. 5, 6,
and 7.

ORI particularly questioned the reporting of only a γ heavy chain
for hybridoma L4.13.2 (even though the serological results seemed
to definitively rule out μ production) because there was some μ
detected on a molecular assay. ORI ignored the fact that this
hybridoma was not only sequenced, but was described in detail in
the Cell paper, including the fact that the RNA hybridized to the
VH portion of the μ transgene. Cell paper 252-53, Figs. 4, 5, 7.
Sequences of other hybridomas at issue, including L4.5.2, were
also reported. Id. This reporting further undercuts ORI's
finding of intentional falsification.

ORI did not establish that Dr. Imanishi-Kari had a motive to
deliberately falsify the identification of the heavy chain
in Table 3 or the statement about double producers.

ORI did not support its position that Dr. Imanishi-Kari had a
motive to deliberately misrepresent the data on whether there
were double producers. ORI's position that the central thesis of
the paper would be undercut by evidence of double producers is
contingent on there being no evidence of idiotype-positive γs,
which was based on viewing the ELISA as being done on an anti-
immunoglobulin coat and other data being fabricated. Drs. Davie
and Eisen testified that the central thesis would not be undercut
by finding double producers if there were also evidence of
idiotype-positive γs. Tr. 1405-06 (Davie), 1957-59 (Eisen). Dr.
Gilbert also seemed to recognize that finding idiotype-positive
endogenous heavy chains on an anti-idiotype coat would be the
"killer experiment" crucial to the paper's central thesis,
although he appeared to suggest that this would not be enough.
Tr. 1492-98. Dr. Gilbert's opinion on this whole issue, however,
was influenced by the fact that he considered the forensic and
statistical evidence to be evidence of fabrication; moreover,
Dr. Gilbert testified that he "reviewed a small amount of
laboratory notebook data in this matter," so his scientific
judgments might have been different if he had reviewed all of the
data. Tr. 1465, 1504-05. Dr. Baltimore testified that finding
double producers would not undermine the central thesis of the
Cell paper that introduction of the transgene altered the
endogenous repertoire of gene expression, and Dr. Dahlberg
agreed, on the condition that "there are id positive γs in high
frequency." Tr. 528-29 (Dahlberg), 2058-59 (Baltimore); see also
Tr. 4806 (Imanishi-Kari).

Moreover, in the experimental context, a finding of double
producers might itself have been worth reporting.

This dispute is, and always was, a question of scientific
interpretation or judgment.

The question of whether the idiotype-positivity observed for the
fused hybridomas was coming from the transgene or endogenous
genes was raised by Dr. O'Toole early on, and is related to the
question of whether Dr. Imanishi-Kari was detecting μ heavy
chains and ignoring them. These questions may not have been
fully analyzed from Dr. O'Toole's perspective because the
scientists who first considered Dr. O'Toole's allegations thought
that she was positing μ/γ heterodimers. It is uncontroverted
that such heterodimers do not exist. Dr. O'Toole has alleged
that these scientists have misrepresented her point, that the μ/γ
heterodimer theory was "concocted to discredit her," and thus
implied that they are involved in a conspiracy against her.
Tr. 1060. In her memorandum to Dr. Eisen, however, Dr. O'Toole
specifically referred to heterodimers, and the memorandum can be
reasonably read to raise the possibility of μ/γ heterodimers.
Ex. H231. Based on hearing all of the testimony presented here,
we reject the notion of a conspiracy and conclude that these
scientists honestly felt Dr. O'Toole was raising only scientific
issues (even if they misunderstood what those issues were), that
a central issue to Dr. O'Toole was μ/γ heterodimers, and that
Dr. Imanishi-Kari had satisfactorily responded.

ORI took the position that it was misleading not to report a
second heavy chain detected at any time. We determine, however,
that Dr. Imanishi-Kari made a valid point that such reporting
could have been misleading. In context, reporting second heavy
chains without further explanation might have suggested that the
authors had evidence of stably transcribed transgene μs and γs
from a single cell clone. A complete explanation of the ELISA
results and why she did not consider them reliable, as well as of
all of the further serological and molecular testing, would have
taken substantial additional space in the Cell paper. While Dr.
Gilbert said he considered it "bad practice" not to report the
second heavy chain, he explained that he viewed such failure to
report as the authors substituting their "subjective judgment"
for that of the readers. Tr. 1553-54. Even if we were to find
that the failure to report the second heavy chain were bad
practice here, that would not be tantamount to a finding of
deliberate falsification.

In sum, we think that this dispute is, and was, a scientific one.

The Panel therefore concludes that ORI did not prove charges four
and five, Table 3 by a preponderance of the evidence.

4. ELISA assay

Table 3, charge six: ORI charged that Dr. Imanishi-Kari had
deliberately reported falsified evidence (I-1:83-88) to
investigators to support prior published research and that this
false reporting constituted scientific misconduct. Charge letter
at 7.

Issue: ORI found that Dr. Imanishi-Kari had falsified the ELISA
results for the Table 3 hybridomas by changing the date on I-1:83
(from 10/12 to 12/14) and by adding the notation in the heading
on that page (GP α 17.2.25 coat) to indicate that the assay was
done on a guinea pig anti-idiotype (anti-Id) coat, when in fact
the assay was done on an anti-immunoglobulin (anti-Ig) coat. ORI
relied on forensic evidence that the date was changed and that
the heading was added at a different time from the data. ORI
also relied on testimony from Dr. O'Toole that she had seen the
ELISA in May 1986 without the heading and had been told that this
was an isotyping assay on a rabbit anti-mouse immunoglobulin
(RAMIg) coat. ORI presented testimony from scientific experts
(primarily Dr. Dahlberg) about why they would conclude that the
ELISA was done on an anti-Ig coat, rather than an anti-Id coat.
Dr. Dahlberg also testified about why he thought data on idiotype
and NIP frequency in the January 1985 grant application were
inconsistent with the ELISA having been on an anti-Id coat.
Dr. Imanishi-Kari alleged that it does not matter when the
experiment was done or when the heading was added. She asserted
that the heading was correct, as shown by internal evidence in
the data, especially the controls. Dr. Imanishi-Kari denied that
Dr. O'Toole saw the ELISA in May 1986 and was told it was on a
RAMIg coat. Dr. Imanishi-Kari said that, as indicated in the
grant application, the frequency data ORI cited were from an RIA,
not from the ELISA, and that she would not have trusted the ELISA
for this purpose.

Analysis: The Panel concludes that ORI did not prove that the
ELISA was falsified. Specifically, we conclude that: (1) the
ESDA analysis does not show that the anti-Id heading was false;
(2) the experimental record and context support a conclusion that
this ELISA was done on an anti-Id coat; (3) Dr. O'Toole's
testimony on this issue is not credible; (4) ORI did not prove
that the controls were falsified; (5) ORI's scientific arguments
about the ELISA coat are based on faulty premises; and (6) the
date change on I-1:83 does not show that the ELISA was falsified.
We explain the basis for these conclusions below.

The ESDA analysis does not show that the anti-Id heading on
the ELISA was false.

Page I-1:83 has the heading "GP α 17.2.25 coat (Eliza J-K)."
This page and the next three pages are summaries of an ELISA with
isotype probes on supernatants of 146 hybridomas (some of them
tested twice). Pages I-1:87-88 are grids corresponding to wells
on a 96-well plate; some of the spaces on the grid have various
types of markings in what appears to be a gold marker pen (open
or closed circles--some of them smeared, + or +/-) or pluses in a
blue pen. Red marks indicate what parts of the grids relate to
tests for various heavy chains. Pages I-1:83-86 summarize these
results in columns with various heavy chain headings. Each well
was graded, with - meaning no color and other gradings (+/-, +,
++, or +++) according to the color intensity. Other columns
summarize some other data. Pages I-1:89-92 summarize isotyping
results for the same hybridomas on a companion ELISA presented as
being on a NIP-BSA coat.

After reviewing the entire original I-1 notebook, Dr. Weaver's
notebook, and all of the testimony and related documents, we
conclude that the ELISA summarized at I-1:83-88 was done on an
anti-Id coat, rather than on an anti-Ig coat.

We note first that ORI did not allege that the ELISA ever had a
heading showing that it was on an anti-Ig coat. ORI alleged that
ESDAs of page I-1:84 indicate that the heading was put on the top
line of I-1:83 at a "later time" than the data (or as Dr.
Dahlberg also put it "after the inquiry process was initiated,"
implying that it was added after the Wortis meeting). Tr. 443,
446. As discussed above, the ESDA analyses are very limited
and cannot determine whether the heading was added before or
after the rest of the data, or how soon before or after. See Tr.
3322 (Hargett). Contrary to what Dr. Dahlberg stated, the Secret
Service did not find that I-1:83 and 84 were in registration when
the data on the hybridomas were entered. Tr. 447-48 (Dahlberg).
Mr. Hargett testified merely that the heading was done in two
separate movements or at two different times and that the heading
was put on separately from the body of the page. Tr. 3322, 3532.

After examining the ESDAs, we find that: (1) the eight column
headings for isotype are in registration (possibly entered when
the pages were on the same pad); (2) the data entries for the
hybridomas are not in registration; (3) when the "GP α 17.2.25
coat" was entered, I-1:83 and 84 were in a position relative to
each other that was close to registration; and (4) the pages were
clearly out of registration when "(Eliza J-K)" was entered.
Thus, the forensic evidence is consistent with the coat heading
being added at the same time as the data; at most, the evidence
supports an inference that the heading and the data were added
after the column headings and after the page was torn off the
pad. In any event, no matter when the heading was added, the key
issue is whether it is false.

The experimental record and context support a conclusion
that this ELISA was done on an anti-Id coat.

The experimental record and context support a conclusion that
this ELISA was done on an anti-Id coat, for the following reasons:

 The protocol for the ELISA appears on I-1:94. ORI did not
question this page. The protocol is in pencil and is
unreadable on the xeroxed copies. The protocol states:

Test on Eliza J-K AP Ab on α 17.2.25.
96 wells supernatants (Tereza's sup)
Test also on NIP-BSA plates.

Thus, unchallenged entries directly support the way both
ELISAs (anti-Id and NIP-BSA) are presented.

 ORI presented testimony to show that it would be a "logical
early step in the investigation of the panel of hybridomas"
to do isotyping on an anti-Ig coat. ORI Br. 109, citing Tr.
1494 (Gilbert). That it might be logical to do an assay one
way, however, does not make it illogical to do it another
way. Moreover, the following make it more plausible that
the ELISA would have been done on an anti-Id coat: (1) Dr.
Weaver had done an initial screening of the parent wells for
the hybridomas on an anti-Ig coat that showed virtually all
wells producing antibodies; (2) Dr. Weaver's experience with
pre-B cells created an expectation that all transgenic
hybridomas would express the transgene, and therefore that
an anti-Id coat would capture virtually all antibodies of
interest; and (3) Dr. Imanishi-Kari had screened the parent
wells for idiotype-positive antibodies, and the parent wells
for the vast majority of the hybridomas tested in the ELISA
were highly idiotype-positive. W-2:22-30 (unnumbered);
Tr. 5325 (Imanishi-Kari); I-1:11-21 (unquestioned pages).

 The ELISA assay would have been very time-consuming, and the
results would have been difficult to read. (It is
undisputed that Dr. Imanishi-Kari did not have an ELISA
optical scanner, so she had to read the results by eye.)
Since Dr. Imanishi-Kari's primary interest was in idiotype-
positive antibodies and since the parent wells had been
prescreened, we think it unlikely that she would have done
such a complicated and difficult-to-read experiment on an
anti-Ig coat, when that would not have given her very useful
additional information.

 Dr. Reis testified that when she was in the laboratory, the
plate coatings mainly used were the anti-17.2.25 or the
antigen, and that they also sometimes used NIP or chicken
gammaglobulin. Tr. 2525. ORI proposed that we find that
anti-mouse Ig was "frequently in use" in Dr. Imanishi-Kari's
laboratory, but ORI's record citations do not support this.
ORI FFCL XI.C.iii.19. The only tests ORI cited were tests
using a goat anti-mouse Ig coat (not a rabbit anti-mouse Ig)
by Dr. Weaver, who did most of his assays in Dr. Baltimore's
laboratory. W-2:19-20; Tr. 2380 (Weaver).

 Dr. Imanishi-Kari presented unrebutted testimony that the
control data on I-1:86 showed that the main assay was done
on an anti-Id coat, specifically, the use of a NIP coat for
the controls (which would not have been necessary if an
anti-Ig coat had been used), and the fact that no reaction
with the α probe was recorded for J558 (an α that is a
negative control for NIP). Tr. 1944-55 (Eisen), 2739-42
(Wortis), 2831-33, 2843 (Capra), 3126-29 (Marrack). Dr.
Imanishi-Kari's brief also pointed out that hybridoma
L3.13.3 tested positive for μ on a Weaver Northern blot
(with a result too strong to be a sterile μ transcript) but
negative on the ELISA, and these results are inconsistent
with the ELISA being on an anti-Ig coat. R. Br. 78, citing
Ex. H22, at 107. ORI did not address this point in its reply
brief.

The authenticity of the ELISA is further supported by the very
ambiguity of the heavy chain results on which ORI relied in part
in claiming that double producers existed. If Dr. Imanishi-Kari
was willing to tamper with the ELISA heading, she would certainly
have been more likely to remove the problematic results with the
heavy chain probes.

Dr. O'Toole's testimony on this issue is not credible.

ORI contended, however, that Dr. Imanishi-Kari was stuck with
this page and could make only small alterations because
Dr. O'Toole had seen it at the May 1986 Wortis meeting.
Specifically, Dr. O'Toole testified that (1) she remembered
Dr. Imanishi-Kari showing the ELISA during the Wortis meeting;
(2) the ELISA was represented as being on a RAMIg coat at that
meeting and at other meetings; and (3) after a discussion at the
Wortis meeting to the effect that idiotype-positivity and γ heavy
chains would have to be shown in the same assay to support the
Cell paper's claims and that the ELISA did not do this, Dr. Huber
had "groaned" and said this was a "serious problem." Tr. 989-91.

Even if things happened as Dr. O'Toole described them, we find it
hard to understand why Dr. Imanishi-Kari would risk falsifying a
coating reagent, but hesitate to change the heavy chain results
for a few critical hybridomas. In any event, we do not credit
Dr. O'Toole's claims that she saw the page without the coating
reagent and that Dr. Imanishi-Kari said that the ELISA was done
on a RAMIg coat. Our reasons are as follows:

 As discussed above, there is internal evidence that the
ELISA was done on an anti-Id coat, and use of this coat
would be more consistent with the experimental record as a whole.

 After hearing Dr. O'Toole and the other witnesses testify
and examining all of her statements over the years, we
question the accuracy of Dr. O'Toole's memory and her
increasing commitment to a partisan stand. She had
previously asserted that she saw only two pages of data (I-
1:113 and 41) at the Wortis meeting. Tr. 1060-63. She did
not mention seeing the ELISA until after she likely had
heard about ESDA results, but attempted to explain this by
saying that questions about the ELISA simply did not arise
until later. Ex. H276; Tr. 1038-41, 1064-67. We find it
hard to believe that she examined the ELISA carefully enough
to notice the heading, but did not refer to the ELISA
results when positing her theory in her memorandum to Dr.
Eisen that idiotype-positivity was due solely to the
transgene. See H231, at 3.

 No one corroborated Dr. O'Toole's testimony that
Dr. Imanishi-Kari had admitted at the Wortis meeting and
other meetings that the ELISA was on a RAMIg coat. Such a
statement clearly would have been against Dr.
Imanishi-Kari's interest. None of the other attendees at
the Wortis meeting recalled seeing the ELISA at that
meeting, and Dr. Imanishi-Kari denied showing it. Compare
ORI FFCL XI.C.iii.14 with Tr. 2802 (Wortis), 1787 (Huber),
4975 (Imanishi-Kari). Dr. O'Toole attempted to bolster her
assertion by referring to the coffee stain on I-1:87. Tr.
990. ORI pointed out that the stain is not visible in the
xeroxed copies, and Dr. O'Toole testified that she had not
seen the original since the Wortis meeting. Id. We find it
implausible that she never saw the original laboratory
notebooks during the many years she collaborated with ORI,
its predecessor, and the Subcommittee. In fact, she later
admitted under questioning that she had seen the originals
at the U.S. Attorney's Office and in meetings with either
OSI or ORI. Tr. 1074-75.

 ORI asserted that, if the ELISA were on an anti-Id coat, it
would have been presented as such at the Wortis meeting to
respond to Dr. O'Toole's concern about whether there was
evidence of idiotype-positive γs (as what Dr. Gilbert called
"the killer experiment"). ORI Reply Br. 40. We find it
credible that the ELISA was on an anti-Id coat, but not the
focus of the Wortis meeting, for the following reasons: (1)
the participants of that meeting understood Dr. O'Toole to
be positing μ/γ heterodimers and countering this theory
would require evidence of hybridomas that were not only γ
positive on an anti-Id coat, but were clearly μ negative;
(2) the other scientists had found subcloning to be critical
in resolving the question of whether idiotype-positivity was
coming from γs or μs; and (3) Dr. Imanishi-Kari had reason
to regard the ELISA results as not definitive.

ORI did not prove that the controls were falsified.

ORI acknowledged that Dr. Imanishi-Kari's point about the
controls on I-1:86 was "scientifically correct." ORI FFCL
XI.C.iii.63. ORI attempted to counter this point with (1) Dr.
Dahlberg's interpretation that the control results were in a
particular place on the grid at I-1:88, and were not accurately
transcribed to I-1:86; and (2) his interpretation of the ESDA of
I-1:87. Dahlberg Decl. 11 and Att. 12; Tr. 6371, 6436
(Dahlberg). Neither of these interpretations is well-founded.

First, after comparing the summary data and grids, we reject Dr.
Dahlberg's interpretation that the last line of the lower
righthand grid on I-1:88 was for the controls, but the controls
were not accurately transcribed. This particular grid is
difficult to read because it is smeared, with smudges of gold
marker ink that suggest some recording of results that were not
highlighted with pluses in blue pen, and a few spaces with blue
pluses. An analysis of the pattern of the grids on I-1:87-88
overall, however, indicates that there are summarized data for a
set of 12 hybridomas for which there is no corresponding raw data
grid; there is one grid which corresponds to part of the kappa
light chain results; and there are lambda light chain results
summarized, for which there is no grid. There is nothing that
clearly connects the lower righthand grid or any part of it to
any of the summarized data, including the controls. Since the
summarized data that clearly correspond to extant grids are
accurately transcribed (as Dr. Dahlberg acknowledged), we see no
reason to posit that one part of the largely unreadable grid
corresponds to the controls but was not accurately transcribed.
Dahlberg Decl. 11.

Concluding that grids are missing for the controls does not
undercut Dr. Imanishi-Kari's position that the controls were done
as shown on the data summary on I-1:85. Some of the grids for
other results are missing (probably because they were destroyed
by the coffee spill evident on pages I-1:86, 87, and 88). Also,
the exact same controls are recorded at I-1:92 for the ELISA on
the NIP coat, and these controls include the J558. Moreover,
even under Dr. Dahlberg's theory about where the controls appear
on the grid, the notations would be consistent with the use of
six controls, including one for α, and the readable results
appear inconsistent with the pattern that would be expected if an
anti-Ig coat had been used.

Second, after carefully examining the ESDAs, we decline to infer
from the ESDAs that the box and the indication of NIP coat were
placed on I-1:86 after the page was originally created and that
some of the controls (including J558) were added at a much later
time than the other entries. ORI FFCL XI.C.iii.a.21, 24 and 25.
Page I-1:87 is one of the pages of raw data grids, recorded on a
type of paper different from the summaries, so there is no reason
to expect them to have been in registration during the
transcribing of various parts of the data. The ESDAs of I-1:87
show faint impressions of: (1) some of the data for heavy chain
probes and light chain probes, not in registration; (2) the
third, fourth, and fifth of the six control proteins listed on I-
1:86 (that were clearly needed no matter what the original coat
was); (3) the box that surrounds all six control proteins; and
(4) the arrow and "NIP coat" notation. The position of these
entries shows only slight movements of the two pages relative to
each other. The three control proteins are in the box, but not
in registration with each other or the surrounding box. The
impressions of the controls that do appear are very faint, thus
suggesting that other control entries simply made no discernible
impression. The box, which did appear in the impressions, was
drawn in a size and shape to fit the full set of controls.

As discussed above, ESDAs never indicate how much time has passed
between the recording of particular entries, nor do they always
reveal every entry made at the same time. Positing a meaningful
passage of time is even more tenuous if no entries are made on a
page while in registration with another page from the same pad.
The ESDAs here do not show any data being added to pages from the
same pad in registration. Nor do the ESDAs show only the
critical entries (the J558 control, the box around the six
controls, and the "NIP coat" notation) being added when pad page
I-1:86 was on top of grid page I-1:87, which would be more
consistent with ORI's scenario. The particular ESDA here is
consistent with a contemporaneous recording of the critical
entries. Thus, we decline to draw the inference of fabrication
of the controls that ORI would have us draw.

ORI's scientific arguments about the ELISA coat are not
persuasive.

We reject ORI's other scientific arguments about the ELISA coat
because they are based on ORI's questionable interpretations of
the data. We do not address all of the parties' rather
convoluted arguments on these questions. The key reasons we do
not adopt ORI's interpretations are as follows:

 Dr. Dahlberg testified that the ELISA results on I-1:83-86
could not have been results on an anti-Id coat because they
are too similar to the ELISA results on the anti-NIP coat,
and this correlation is inconsistent with the frequency data
in Table 2 of the Cell paper. See Tr. 444-45, 449-50. He
subsequently acknowledged in effect that if the first part
were done on an anti-Ig coat, and the second part on an
anti-NIP coat, the results would be even less likely to be
so parallel. Tr. 6355. His reliance on the fact that the
Table 2 results did not show such high correlation between
NIP-binding and idiotype positive antibodies is misplaced,
because the tests were not on comparable sets of antibodies.
As discussed above, the Table 2 RIAs were on supernatants
from a complete set of pauci-clonal parent wells from a
fusion. These ELISAs were on supernatants from a set of
hybridomas derived from only four plates of multi-clonal
parent wells, and supernatants from the parent wells for
most of these hybridomas had previously tested positive for
both idiotype and NIP. I-1:11-21; Tr. 429-30 (Dahlberg).
Testimony by Dr. Imanishi-Kari and other expert witnesses
confirms that there are other plausible explanations of why
a correlation between idiotype-positivity and NIP-binding
might be higher in this experiment than in other
experiments, in particular, because different detecting
reagents were used. Tr. 1400 (Davie), 3129-30, 3152-53
(Marrack), 4736-41 (Imanishi-Kari); see also Ex. H299, at 3

 Dr. Davie testified that he questioned the ELISA being on an
anti-Id coat because there are positive results in the tenth
(rightmost) column, testing the hybridomas with the
idiotype, and he questioned how one could have idiotype-
bearing antibodies that were also anti-idiotypic. Tr. 1375-
76, 1426-28. ORI asserted that this would be a startling
result that Dr. Imanishi-Kari would have mentioned in a
January 1985 grant application, but did not. Cf. ORI FFCL
XI.C.iii.42, 43. Dr. Imanishi-Kari presented convincing
evidence about why she would not have considered such a
result to be "startling." Tr. 2478-79 (Kearney), 2843-44
(Capra), 4730 (Imanishi-Kari); see also Tr. 2031 (Eisen).
Contrary to what ORI argued, the fact that she called the
results "interesting" does not mean she would have directly
referred to them in the grant application. The grant
application did report that she had detected 29 hybridomas
that reacted to the idiotype, and indicated her interest in
doing more work on whether the hybridomas would react with
various idiotype reagents. Ex. H6, at 141. This
application does not specifically state that 29 hybridomas
reacted to both idiotype and anti-idiotype reagents.
However, ORI's argument that she would have mentioned any
finding that some hybridomas were positive for both the
idiotype and anti-idiotype is undercut since the grant
application: (1) refers to findings in the ELISA that 29
hybridomas reacted to the idiotype; (2) lists four of these
hybridomas as reacting to the anti-idiotype; and (3) does
not mention the dual reactivity as a startling result.
Ex. H6, at 130; I-1:93-94.

 Dr. Dahlberg testified that, if the ELISA was on an anti-Id
coat, all of the hybridomas scored positive on the ELISA
would be idiotype-positive and the frequency would be 80%,
which is much higher than the frequency reported in Table 2
of the Cell paper. Tr. 444, 456-61. This alleged
discrepancy is not significant, however, since: (1) the
Table 3 hybridomas were not a complete set of hybridomas,
but only a subset grown from four thawed plates, and most of
the subset came from idiotype-positive, multi-clonal parent
wells; (2) reading the ELISA by eye required judgment; (3)
Dr. Dahlberg calculated frequencies for the ELISA by
considering hybridomas that scored as merely + or +/- as
hybridomas that "scored positive" on the ELISA.

 Dr. Dahlberg also suggested that, if the ELISA was done on
an anti-idiotype coat, Dr. Imanishi-Kari would have reported
in the Eisen grant application the 80% idiotype positivity
shown on the ELISA, rather than the numbers from an RIA
indicating 40% idiotype positivity. Tr. 456-57. Given the
judgments involved in reading the ELISA by eye and the
ambiguous results shown for it, however, we find credible
Dr. Imanishi-Kari's assertions that she considered the ELISA
less reliable for this purpose than the RIA. Moreover, if
only the approximately 70 hybridomas with +++ results on the
ELISA for at least one heavy chain are counted as positive
for idiotype, the ELISA results are not significantly
different from the results in the grant application.

Finally, we find that, in determining that the ELISA was a test
for isotype only, ORI placed too much weight on Dr. Weaver's
statement in June 1990 that "the isotype analysis preceded the
idiotype analysis which preceded the allotype analysis . . . . "
Ex. H116, at 29. This statement does not address the issue of
what coat was used for any isotyping work. Moreover, Dr. Weaver
noted that he did not do the serological work, that this was just
his recollection, and that he was not sure. Ex. H116, at 29-30.

The date change on I-1:83 does not show that the ELISA was
deliberately falsified.

Based on Secret Service analyses, ORI alleged that Dr. Imanishi-
Kari had falsified the date on I-1:83. Mr. Hargett testified
that he found that "what appears to be a 12 for the month was
really a 10, appears to be a 10" and that it was his opinion that
the date was altered from a 10 to a 12. Tr. 3321-22 (Hargett),
3674-75 (Stewart). Dr. O'Toole testified that the date was not
on the page when she saw it in May 1986. Tr. 991.

We first note that, as Mr. Hargett acknowledged, the date change
from 10/12 to 12/12 is "obvious." Tr. 3533-34 (Hargett). The
ORI Report questioned why, if Dr. Imanishi-Kari did the ELISA on
an anti-Id coat in October, she would not have reported it in the
grant application submitted October 31. ORI Report 157. The
implication was that she had a motive to falsify the date because
she falsified the anti-Id coat heading, and then needed to
explain why she did not report the ELISA in the October
application. This motive depends on the allegation of
falsification of the heading, which we rejected above. Moreover,
the ESDA of I-1:84 shows both the "GP α 17.2.25 coat" heading and
the 10/12 date, but not the date change, so this theory is
unsupported.

Dr. Imanishi-Kari testified (apparently based on the protocol on
I-1:94 for the ELISA on the idiotype coat reported in the tenth
column on I-1:83-86) that it is possible that she did the main
assay in October and the assay on the idiotype coat in December.
Tr. 5214-20. We do not need to resolve this question because ORI
did not prove that it is more likely that the date change was a
deliberate falsification, than an honest change. We note,
however, that the following indicate that the ELISAs may have
been done or summarized in December: (1) the protocol has a
12/12 date; (2) the 12/12 "Summary of Subcloning" at I-1:80-82
does not mention ELISA results; (3) the notation in red ink on I-
1:86 ("Subclone again the circle over") may indicate she had
already done some subcloning; and (4) the date on the protocol on
I-1:95 for subcloning the circled hybridomas (which is in red
ink) is 12/14.

ORI relied on similar obvious date changes for the NIP-coat ELISA
to bolster its arguments. Specifically, ORI relied on
indentation analyses which ORI said showed that I-1:89 was
"produced prior" to I-1:32-34, which were dated 10/30 to 10/31.
ORI FFCLs XII.C.xi.1-6. The indentation analyses show the
opposite, however. Ex. H500, at 2. Thus, contrary to ORI's
assertion, the indentation analysis is consistent with the
December dates.

The Panel concludes that ORI did not prove charge 6, Table 3, by
a preponderance of the evidence.

D. Other charges

1. I-1 Notebook

I-1 Notebook, charge one: ORI charged scientific misconduct,
alleging that Dr. Imanishi-Kari deliberately created, compiled,
and reported certain fabricated and falsified data in the I-1
Notebook. Charge letter 7.

Issue: In addition to the specific pages discussed above, ORI
asserted that there were other pages of the I-1 notebook which
were fabricated or falsified. ORI relied on the forensic
document examination and statistical evidence for the pages that
we have already discussed with regard to allegedly fabricated or
falsified pages as well as on document examination and
statistical evidence generally for certain additional pages of
the I-1 notebook.

Analysis: ORI asked us to conclude (1) that Dr. Imanishi-Kari
was attempting to create an experimental record that was more
convincing than it actually was (ORI FFCL XII.E.6); and (2) that
the scientific, forensic, and statistical evidence presented as a
whole supports a conclusion that at least 33% of the I-1 notebook
was falsified or fabricated with respect to time (ORI FFCL
XII.F.1.). To buttress its assertion as to the overall
fabrication and falsification of many pages of I-1, ORI relied on
its assertions that there was no scientific basis for the
serological results reported in the Cell paper.

In our analyses of ORI's allegations related to the Cell paper,
we have rejected many of the specific allegations related to
pages of the I-1 notebook that are also included in this charge.
We have discussed at length the complex questions of
interpretation of scientific data raised by ORI's charges and by
its evidence. We have also discussed at length why we found
unpersuasive the results of the various documentation and
statistical techniques employed to analyze the pages of the I-1
notebook. With regard to the additional pages added in this
charge, ORI relied on the same types of forensic findings. Even
if true, these forensic findings are not sufficient to prove the
fabrication and falsification charged. For reasons analogous to
those explained at length earlier, we conclude that the forensic
evidence adduced by ORI is simply not probative of its charge.

The Panel concludes that ORI did not prove charge one related to
the I-1 notebook by a preponderance of the evidence.

2. Letter of Correction

Letter of Correction, charges one and two: ORI charged that
Dr. Imanishi-Kari included in Table 1 of the Letter of Correction
(Ex. H3) data that were fabricated to establish the specificity
of BET-1. In addition, ORI charged that the data presented in
Table 2 of the Letter of Correction (on selected hybridomas from
the primary wells reported in Table 2 of the Cell paper) were
also fabricated. Charge letter 8.

Issue and analysis: These charges were entirely based on
allegations that the underlying data at I-1:121 and I-1:124-128A
(June subcloning) were fabricated. We discussed and rejected
these allegations in our analysis on Figure 1 and Table 2 of the
Cell paper. Therefore, we summarily reject these charges.

The Panel concludes that ORI did not prove charges one and two
related to the Letter of Correction.

3. Grant applications

Grant Applications, charges one and two: ORI charged that
Dr. Imanishi-Kari committed scientific misconduct by deliberately
making false and misleading statements in two grant applications
-- 1 RO1 AI22632-01 and 2 PO1 CA28900-06. Charge letter 8-9.

Issues: ORI charged that these applications, one of which was
funded, contained deliberately "false discussions of idiotype-
specific hybridomas in transgenic mice that were material and
critical to the grant applications." ORI Br. 118.

Analysis: It is undisputed that intentional falsification of
material statements in a grant application would be scientific
misconduct. See DAB 1431, at 2 (1993). Here, the allegations
relating to the grant applications are largely subsidiary to the
primary charges relating to the Cell paper and the I-1 notebook.
The applications did not receive close scrutiny until relatively
recently; we regard these allegations as largely an afterthought
to add additional weight to support the proposed ten-year
debarment and other administrative actions should the primary
charges be upheld. On the whole, the grant application charges
derive either from the rejected Cell paper charges or an overly
harsh parsing of the application text. While the language
excerpted by ORI suffers generally from the same type of
imprecision we saw in the Cell paper, we reject ORI's charges of
deliberately false and misleading statements in general
because --

some charges rely on an unreasonable reading of particular
language or on only one possible reading of ambiguous
language that in context is better read in another way; and

some charges derive from charges of fabrication and
falsification related to the Cell paper, which we concluded
above were not substantiated.

We discuss each of the applications below.

ORI did not prove that Dr. Imanishi-Kari deliberately made
false and misleading statements in Grant Application 1RO1

This application, "Regulation of Idiotype Expression," was
submitted on October 31, 1984 and funded for $68,252 in direct
costs. Dr. Imanishi-Kari admitted in her response to the charge
that she prepared this application and was responsible for its
contents. Stips. 30, 40; Tr. 5496, 5498-99. The disputed text
appears as part of the "[P]reliminary results and background
studies" section following research proposed to "[d]etermine
whether expression of a particular heavy chain on the surface of
B cells disrupts the normal T and B cell repertoire." ORI
asserted that this text was material to the decision to fund the
application.

The disputed text, with the allegedly falsified parts in bold and
designated by capital letters, states as follows --

Spleen and lymph nodes of mouse 54.28.5 as well as from
normal littermate mouse [A] was fused to HAT sensitive
myeloma X63.6.5.3. Hybrids were subcloned and analyzed
[B] for NIP binding as well as for 17.2.25 idiotype.

The number of hybrids obtained from normal littermate
spleen and lymph nodes were the same as that obtained
from mouse 54.28.5. Some of the hybrids of normal
littermate, did secrete anti-NP antibodies associated
with [kappa] light chains, however, none expressed
17.2.25 idiotype. [C]

The results obtained from the analysis of 37 hybridomas
from 54.28.5 mouse is shown in table 2. There were 10
hybrids that bound strongly to NIP hapten and had
[kappa] light chains and two were [lambda]1 light
chains. . . . [D]

We first discuss each of the allegedly false statements, and then
discuss ORI's general argument that these statements were
material to the decision to fund the application.

A. ORI asserted that the reference to a normal littermate mouse
fusion was falsified since Dr. Weaver's notebook contained no
contemporaneous normal mouse fusion during the spring of 1984
when he conducted the transgenic fusion. ORI Br. 119; ORI FFCLs
XIV.A.7, 15. However, we find that the disputed text neither
states nor implies that Dr. Weaver carried out a contemporaneous
normal fusion; moreover, as noted by Dr. Imanishi-Kari, there
were clearly normal fusions appearing in her data books prior to
the submission of the grant application, such as the one at I-
1:3-5. We find Dr. Imanishi-Kari's assertion that the specific
normal assays referred to in the application were no longer
available to be reasonable under the circumstances. We find that
ORI presented no credible reason why Dr. Imanishi-Kari would
deliberately misrepresent work done on normal mice. Since the
application was submitted in October 1984 by Dr. Imanishi-Kari,
who worked with Dr. Weaver on the fusions, and who carried out
independent work on fusion assays, there is no reason to read her
grant application as referring only to data now in Dr. Weaver's
notebook.

B. ORI asserted that this statement was false since the "data
were not obtained from subcloned hybrids," and that
Dr. Imanishi-Kari had misrepresented her work as further along
than it was since the underlying data reported in the application
corresponded absolutely with the data at I-1:12-21 (a rescreen
dated August 24, 1984 of Dr. Weaver's original 24-well plates
(dating from April and May of 1984)). ORI Br. 119-120. Dr.
Imanishi-Kari attributed the alleged falsification to confusion
as to her use of the terms clone and subclone. While there is
certainly confusion in terminology manifested throughout this
record, we find that the charge for this statement is also based
on a misreading of the sentence itself. The sentence at issue
states that the "Hybrids [parent wells] were subcloned and
analyzed . . ." The sentence does not state that the subcloned
(or cloned) hybrids were analyzed. The underlying experimental
data, which ORI acknowledged are accurately presented, are for
the parent wells. Therefore, while the wording of the disputed
sentence is not particularly well-drafted, we find no reason to
read the sentence as ORI does to refer to data on subclones. We
resolved above allegations of scientific misconduct related to
use of the term hybridomas when referring to the cells in a
parent fusion well. This common laboratory jargon also appears
in the application. Thus we conclude that the wording of this
sentence is confusing, not deliberately falsified.

C. ORI asserted that this sentence was false because "Dr. Weaver
did not carry out a fusion with spleen and lymph nodes of a
normal mouse in parallel with his fusion of the M95.18 mouse."
ORI Br. 120; ORI FFCL XIV.A.4. As Dr. Imanishi-Kari noted, the
application does not represent that the fusion was
contemporaneous or represent that the underlying data are from
Dr. Weaver's notebook. Therefore, we reject this allegation
since the fact proved by ORI does not show that the
representation in the application was false.

D. ORI asserted that this statement is false and misleading
because "the quantitative results claimed were not obtained by
Dr. Imanishi-Kari" since I-1:20-21 shows that the results given
for hybridomas were for parent wells not for clones. ORI argued
that Dr. Imanishi-Kari knew that because the data were "derived
from multi-clone wells, the numbers reported are clearly
inaccurate and overstated." ORI Br. 120-121; ORI FFCLs XIV
passim. We have discussed above the Table 2 Cell paper charges
related to the "hybridoma" wells versus clones issue. We
similarly reject the charge that this term is used here
intentionally to mislead. The application statement is clearly
imprecise. However, the laboratory jargon "hybridomas" when
referring to parent wells was used even in the Cell paper. The
grant application text does not purport to report on definitive
results. Furthermore, Table 3 of the application, which displays
the data on the 37 "hybridomas," uses the nomenclature for the
parent wells, i.e., L3-1, L3-2, L3-3, etc. We do not find it
credible that Dr. Imanishi-Kari would deliberately misrepresent
her data by using the term hybridoma yet accurately display her
parent well data and the nomenclature in the associated table.

ORI also argued that the disputed language was material to the
decision to fund this application. ORI asserted that it "seems
very likely that the [alleged] misrepresentations contributed to
the favorable review." ORI Br. 121. ORI's argument makes only a
speculative connection between the disputed statements and the
favorable review. ORI relied on the laudatory comments made by
the NIH Study Section, which, it is undisputed, were highly
complimentary. Tr. 1648-50 (McClure). The Study Section opined
that the proposed study of NP gene expression in collaboration
with Dr. Baltimore's group was "extraordinary." Ex. H5 (Summary
Statement at 3). The Study Section comments do not rely
specifically on the statements at issue and state that the "vital
task is the sequencing and identifying of the Vh genes
expressed". Id. The Study Section emphasizes the collaborative
efforts on the molecular work with Dr. Baltimore. Thus, ORI's
reliance on the favorable review to show materiality of the
disputed language is misplaced.

The Panel finds that ORI did not prove that Dr. Imanishi-Kari
deliberately made false and misleading statements in the
October 1984 grant application.

ORI did not prove that Dr. Imanishi-Kari deliberately made
false and misleading statements in Grant Application 2 PO1
CA28900-06.

This application, "Control of Antigen-Specific T-cell Responses"
was dated January 31, 1985; Project III was prepared by
Dr. Imanishi-Kari and was not funded. The disputed text is part
of the description of a table and appears in the portion of
Dr. Eisen's application that was prepared by Dr. Imanishi-Kari.
Ex. H6, at 130. ORI also charged that portions of
Dr. Imanishi-Kari's Figure 1 were "false, fabricated, and
misleading." ORI Br. 123-124.

Table 3 presents data on 11 hybridomas (not from parent wells),
some of which are included in Table 3 of the Cell Paper. The
disputed text for Table 3, with the allegedly deliberately
falsified material in bold, is --

. . . This table is depicting only the most representative
hybridomas obtained from the lymph nodes and spleen of a
transgenic mouse. Out of a total of 150 hybrids analyzed,
40 were [kappa]-bearing NIP and 17.2.25 positive, and 11
were 17.2.25 positive non-NIP binders; 9 were [lambda]-
bearing NIP and 17.2.25 positive. There were 34 hybrids
that were [kappa]-bearing NIP binders but non-idiotype

. . . . Out of 150 hybridomas only one may be expressing
the exogenous gene. . . .

ORI argued that the phrase "most representative" was deliberately
falsified because it represented the hybridomas in the table as a
"population of representative hybridomas obtained at random from
a mouse . . . rather than only a limited population of 150
hybridomas representing sets of clones obtained from just 29
wells of a fusion." ORI FFCL XIV.B.9. Dr. Imanishi-Kari argued
that the language simply means that she was presenting some of
the results obtained. R. Br. 91. In light of the complexity and
difficulty of these types of assays, it is unclear whether one
could ever obtain a representative sample of hybridomas such as
ORI described. Therefore, we reject ORI's strained reading to
support its charge.

ORI also asserted that there were 146 clones, not 150, and that
Dr. Imanishi-Kari's description implies that 150 have been
completely analyzed, with only one expressing the transgene.
While the disputed language is no model of clarity, ORI's
assertion of deliberate falsification goes too far. As
Dr. Imanishi-Kari pointed out, her discussion in the text, while
not well-drafted, makes it reasonable to infer that her point is
that only one of those analyzed "so far" for mRNA is expressing
the transgene. The text does not assert that all the work was
complete; indeed, Dr. Imanishi-Kari indicated that another
application covered funding requested to do such further
analysis. Therefore, despite ORI's urging, we decline to infer
simply from the general references to analyses of the 150
hybridomas that Dr. Imanishi-Kari was representing that all the
work on these hybridomas was complete and only one expressed the
transgene. In addition, ORI did not provide evidence that
Dr. Imanishi-Kari did not have some additional hybridomas from
this fusion besides the 146 tested in the ELISA. Therefore,
there is no reason to find the reference to 150 to be false. On
balance, the other alleged deliberate falsifications are properly
described as imprecise language not reasonably read to be
deliberate falsification.

ORI also raised in its proposed FFCLs the allegation that
Dr. Imanishi-Kari's "claim that the data in Table 3 was obtained
using RIA is false." ORI FFCL XIV.B.7. ORI asserted that
"Dr. Imanishi-Kari has presented no data to support the claim in
Table 3 . . . that 40% of the clones tested positive by RIA."
ORI FFCL XIV.B.7. ORI stated that the Table 3 results show a
different percentage (40%) of the hybridomas to be idiotype
positive than the ELISA data at I-1:83-88 (80%). ORI discounted
Dr. Imanishi-Kari's statement that both show a high percentage of
idiotype-positivity as supporting her position that both
experiments were done. This allegation goes beyond the charges
stated in connection with the grant applications; therefore, we
reject it summarily. We note, however, that ORI argued that
Dr. Imanishi-Kari was obliged to maintain and provide these RIA
data in 1988 upon request. It is not clear to which request ORI
refers, since we do not read the Dingell request cited above to
cover data underlying the applications. Moreover, the data
pertain to a 1985 application which was not funded. While ORI
stated that any such RIA was related to an earlier grant, ORI
provided no evidence that the three-year record retention
requirement applied, or that, if it applied, it would not have
expired by 1988. See 45 C.F.R. §§ 74.20-22. Thus, the
requirement does not support any inference adverse to
Dr. Imanishi-Kari from the unavailability of such data. We also
note that, as we discussed above, ORI calculated the 80%
idiotype-positivity on the ELISA by treating +/- and + as
positive scores, which is a questionable interpretation of that
assay.

ORI also asserted that Figure 1 (Ex. H6, at 131) of the grant
application was deliberately falsified. This figure contains
four panels showing analyses of sera from normal and transgenic
mice. This figure is at I-1:68. ORI asserted that Figure 1 was
deliberately falsified since some experimental data at I-1:50 are
the same as the graph in some respects, but different in others.
ORI said that Figure 1 was deliberately falsified by adding four
data points representing a fifth dilution and by misplotting
several points. Dr. Imanishi-Kari asserted that Figure 1 was
based on data other than I-1:50 and that the similarity between
I-1:50 data and the Figure 1 graph (and I-1:68) is not surprising
since Figure 1 contained percentage binding data, not actual
counts, and was a reassay. ORI did not rebut Dr. Imanishi-Kari's
testimony concerning significance of the use of percentage
binding rather than specific counts per minute. See ORI Br. 123-
24; ORI FFCLs XIV.B.18-24. ORI advanced no reason why
Dr. Imanishi-Kari would deliberately misrepresent the Figure 1
data. Therefore, we find no reason to conclude that Figure 1 was
deliberately falsified.

The Panel concludes that ORI did not prove either charge one or
charge two related to certain grant applications by a
preponderance of the evidence.


Conclusion

For the reasons explained above, we conclude that ORI did not
prove by a preponderance of the evidence that Dr. Imanishi-Kari
engaged in scientific misconduct as charged. We conclude that
ORI's findings are not supported and the proposed administrative
actions are not justified. We recommend no debarment.

Judith A. Ballard

Dr. Julius S. Youngner

Cecilia Sparks Ford
Presiding Panel Member

Senior Staff Attorneys assigned to case: Leslie A. Sussan
Jeffrey A. Sacks

Appendix A

Witnesses for ORI:

John E. Dahlberg, Ph.D., Senior Scientist Investigator, Office
of Research Integrity, Public Health Service.

James E. Mosimann, Ph.D., Adjunct Professor, The American
University, Department of Mathematics and Statistics.

Austin M. Barron, Ph.D., Associate Professor, The American
University, Department of Mathematics and Statistics.

Margot O'Toole, Ph.D., Staff Scientist, Genetics Institute,
Cambridge, Massachusetts.

Joseph M. Davie, M.D., Ph.D., Vice President, Department of
Research, Biogen, Inc., Cambridge, Massachusetts.

Walter Gilbert, Ph.D., Carl M. Loeb University Professor,
Department of Cellular and Developmental Biology, Harvard
University.

William R. McClure, Ph.D., Professor, Department of Biological
Sciences, Carnegie Mellon University.

John W. Hargett, Chief Document Examiner, Forensic Services
Division, U.S. Secret Service, U.S. Dept. of the Treasury.

Larry F. Stewart, Senior Document Examiner-Instrumental Analysis
Section, Forensic Services Division, U.S. Secret Service, U.S.
Dept. of the Treasury.

Charles Maplethorpe, Ph.D., Medical Officer, Center for Biologics
Evaluation and Research, Food and Drug Administration, Public
Health Service.

Leiko Dahlgren, Ph.D., Tech Services Team Leader, Beckman

Witnesses for the Respondent, Thereza Imanishi-Kari, Ph.D.:

Thereza Imanishi-Kari, Ph.D., formerly Research Associate, Tufts
University School of Medicine.

Robert T. Woodland, Ph.D., Associate Professor of Molecular
Genetics and Microbiology, University of Massachusetts Medical
School, Worcester, Massachusetts.

Brigitte T. Huber, Ph.D., Professor of Pathology, Tufts
University School of Medicine.

Martina E. Boersch-Supan, Ph.D., Freelance Translator and
Scientific Editor.

Herman N. Eisen, M.D., Professor Emeritus, Department of Biology;
Senior Lecturer, Center for Cancer Research, Massachusetts
Institute of Technology.

David Baltimore, Ph.D., Ivan R. Cottrell Professor of Molecular
Biology and Immunology, Massachusetts Institute of Technology.

Terence Paul Speed, Ph.D., Professor, Department of Statistics,
University of California, Berkeley.

David T. Weaver, Ph.D., Associate Professor, Dana Farber Cancer
Institute.

Martin E. Dorf, Ph.D., Professor of Pathology, Harvard University
Medical School.

Susan L. Swain, Ph.D., Professor, Department of Biology and
Cancer Center, University of California San Diego.

John F. Kearney, Ph.D., Professor of Microbiology, University of
Alabama at Birmingham.

Norman W. Klinman, M.D., Ph.D., Member, Scripps Institute;
Adjunct Professor, University of California San Diego.

Moema H. Reis, Ph.D.

Christopher Albanese, Supervisor, Laboratory of Molecular Biology
and Molecular Medicine, Northwestern University Medical Center.

Henry H. Wortis, M.D., Director, Division of Immunology;
Professor, Department of Pathology, Tufts University School of

J. Donald Capra, M.D., Director of Molecular Immunology Center;
Professor of Microbiology and Internal Medicine, University of
Texas Southwestern Medical Center.

Edward B. Reilly, Ph.D., Research Investigator, Abbott
Laboratories, North Chicago, Illinois.

Philippa Charlotte Marrack, Ph.D., Investigator, Howard Hughes
Medical Institute; Professor, Department of Microbiology and
Immunology, University of Colorado Health Sciences Center.

Gerald B. Richards, Independent Forensic Scientist, Richards
Forensic Services; Adjunct Professor, Forensic Science
Department, George Washington University.

Elliott W. DeHaro, Billing Manager, Center for Cancer Research,
Massachusetts Institute of Technology.

Robert L. Kuranz, Forensic Ink Analyst.

Reynold Verret, Ph.D., Associate Professor, Department of
Chemistry, Clark University.

Vivien Igras, Research Technician, Dana Farber Cancer Institute.

William P. Fitzgerald, Sales Representative, Robin Scientific
Corporation, Sunnyvale, California.

1. A debarment is an exclusion from eligibility for federal grant (non
procurement) and contract awards for a specified period of time. A finding
that a cause for debarment exists does not mandate debarment. Debarment is a
discretionary, not punitive, action taken to protect the public interest.
Gonzalez v. Freeman, 334 F.2d 570 (D.C. Cir. 1964); 45 C.F.R. § 76.115 and
48 C.F.R. § 9.402(a) and (b).

2. The guidelines permit appointment of an unbiased and disinterested
scientist or other expert to the Panel.

3. See Appendix for a list of witnesses.

4. In proposed findings of fact and conclusions of law (FFCLs) submitted
after the hearing, ORI misstated the evidentiary standard in this case, by
suggesting that merely a "proposed finding" by ORI was sufficient to shift the
burden to Dr. Imanishi-Kari to rebut the finding with credible evidence. ORI
FFCL I.A.6. A shift in burden of proof (as distinct from a burden of going
forward) occurs only after a prima facie case is presented, and such a case
must itself be made based on reliable, credible, probative evidence. Here,
much of the evidence which ORI presented does not meet this test. For the
most part, ORI relied on its findings, from which it asked us to draw a
particular inference. ORI sometimes failed to prove the factual basis for the
inference ORI asked us to draw. Other times, ORI failed to demonstrate that
the inference was a reasonable one in light of the proven facts, or failed to
show that other equally reasonable inferences, in Dr. Imanishi-Kari's favor,
should not be drawn, based on the evidence as a whole.

5. The history of the matter now before us has been long, tangled, and
heated, stretching over more than a decade. We do not attempt a comprehensive
exposition nor do we undertake to resolve all the disagreements among various
participants about events and motives. Rather, we set forth here only a
summary of the facts needed to understand the conclusions that we have reached
about the charges before us, as we find them based on our review of the entire
record.

6. The Cell paper was initially submitted in December 1985 and accepted
after revision in February 1986. Stip. 41.

7. During 1984 to 1986, when the Cell paper research was done, the National
Institutes of Health (NIH) awarded grants totaling $207,971 to Dr. Imanishi-
Kari as principal investigator. Stips. 22, 23, 32, 33, 40.

8. Dr. Imanishi-Kari left MIT in July 1986 and moved to Tufts University
where she remained until December 1994. ORI FFCLs III.G.1, 2.

9. This research on antibody production in B-cells followed earlier work in
Dr. Baltimore's laboratory which indicated that the transgene was expressed
without exception at the pre-B cell stage.

10. Antibodies are composed of light and heavy chains. Both light and
heavy chains can be found in a variety of types, known as isotypes. The two
types of light chain are kappa and lambda. The heavy chain isotypes of most
relevance to the Cell paper are μ (mu), γ (gamma), and α (alpha).
Characteristic forms of the heavy chain isotypes, called allotypes, can be
identified in inbred strains of mice. The allotypes of interest in the Cell
paper were for the μ heavy chain. The BALB/c mice, from which the transgene
was derived, expressed the μa allotype, while the C57BL/6 mice normally
expressed the μb allotype. An additional attribute of individual antibodies
that is of interest here is idiotype, which may vary even among antibodies of
the same isotype. The idiotype refers to structural features of the antibody
and can be defined by other antibodies which react to one or more of those
features or determinants. The idiotype of the transgene was called 17.2.25
and could be detected by anti-idiotype antibodies.

11. The serological work reported in the Cell paper involved primarily the
products of fusions, in which B cells from the spleen and lymph nodes of mice
(which may produce antibody) were fused with myeloma cells (which have the
capacity for unlimited replication) in an effort to create hybridomas which
are immortal and produce antibodies.

12. The procedure used in these experiments (and elsewhere in the Cell
paper) is a "sandwich assay," which essentially involves a coating reagent
adsorbed to a plate, to which the supernatant or sera of interest is added,
and then a reagent of some kind (depending on what is being studied) is used
as a probe to determine the extent to which the supernatant or sera bound to
the coat (was "captured"). If the probe is labelled with radioactive iodine,
the binding can be measured by a device called a gamma counter which counts
radioactive emissions, i.e., a radioimmunoassay (RIA). Alternatively, an
assay system can use an enzyme coupled to the antibody to detect binding,
i.e., enzyme-linked immunosorbent assay (ELISA). ELISA assays can be read by
eye based on the intensity of the resulting color change or by an optical
scanning device. Dr. Imanishi-Kari did not have such a device during the time
period relevant for this case.

13. Dr. Constantini was not involved in the drafting of the Cell paper but
rather developed and supplied the strain of transgenic mice. Stip. 19; ORI

14. The group included, among others, Drs. Huber, Woodland, Wortis,
Imanishi-Kari, and Broder (Dr. O'Toole's husband, who was at Tufts
University). Tr. 1747 (Huber); ORI Br. 24, n.8.

15. The charge was to "(1) determine whether the published paper is
scientifically accurate, based on the original data; (2) if inaccuracies are
found, describe the nature and extent of the inaccuracies in the published
paper, including a statement as to whether misrepresentation or other
misconduct is involved; and (3) recommend appropriate corrections in the
scientific literature, if inaccuracies are found." Ex. H262 (NIH Scientific
Panel Report at 1).

16. Dr. Imanishi-Kari compiled her records into a binder referred to as
the "I-1 notebook."

17. Two additional scientists joined the NIH Scientific Panel members in
serving as advisors: Dr. Stewart Sell and Dr. William R. McClure.

18. ORI stated that if the I-1 notebook was authentic, "its component
parts existed in 1986 and should have been enough to satisfy Dr. O'Toole that
her concerns about the experimental support for the Cell 45 Paper were largely
groundless." ORI Br. 23. Yet, since the forensic results are inconclusive,
ORI often depended on its interpretation of the notebook data to buttress its
conclusions. This compelled us to address scientific issues, to evaluate ORI

19. Thus, for example, the Customs Court wrote that --

our concern here is with the quality of the evidence in the record. In
ascertaining the proper weight to be accorded to the various reports and
affidavits, which were received in evidence in this case, no
significance can be attached to the mere number of such reports and
affidavits that one party or the other may have adduced in support of
their respective contentions.

D.C. Andrews & Co., Inc. v. United States, 1963 WL 9768, at *5 (Cust.Ct. March
5, 1963). We strove here to avoid the error, which has been observed to be
common among advocates, of confusing "quantity of evidence with probative
quality." United States v. Reaves, 636 F. Supp. 1575, 1579 (E.D.Ky. 1986);
accord United States v. Hildebrand, 1996 WL 262823, No. CR 95-2014 (N.D.Iowa
May 16, 1996).

20. ORI also submitted 272 pages of proposed findings of fact and
conclusions of law, numbering in the thousands. We have not addressed each
proposed FFCL individually in this decision for the same reason. However, we
carefully reviewed each and every FFCL in light of the record as a whole.
Many of the proposed FFCLs are undisputed, are irrelevant in light of the way
we have resolved the issues in this case, misstate evidence in the record, or
are implicitly rejected in our discussion. Where we found the substance of a
proposed FFCL supported by the evidence in the record as a whole and necessary
to the resolution of the issue, we reflect the substance of the FFCL in the
text of the decision but do not necessarily refer to the numbered proposed
FFCLs.

21. We note that testimony before us from Dr. Davie (as well as from some
of ORI's other scientific experts) was based on the assumption that the Secret
Service conclusions about the authenticity of the I-1 notebook were well-
founded and on accepting information in the ORI Report as accurate. See,
e.g., Tr. 1312, 1392 (Christ), 1505 (Gilbert) and 1576, 1635 (McClure).

22. For example, in regard to two pages of data (I-1:41 and I-1:43), ORI
claimed they were fabricated to support Dr. Imanishi-Kari's claims that
certain hybridomas were idiotype-positive, yet stated that the data on those
pages were equivocal as to some of the challenged hybridomas and better for
some that were not at issue. See ORI Reply Br. 96. ORI acknowledged that
these results "do not show as positive a story as one would expect" if they
were fabricated to support Dr. Imanishi-Kari's claims, but suggested that this
might "simply reflect haste" in the fabrication process. ORI Br. 96, n.72.
It is hard to see why haste would explain faking less than helpful results --
it is as easy to put down a "good" number as a bad one for a hybridoma likely
to be important to Dr. O'Toole's challenge. Similarly, the "June subcloning
data," discussed in relation to charges concerning Table 2 of the Cell paper,
were allegedly fabricated very late in the process (possibly even during a
1988 NIH visit to interview Dr. Imanishi-Kari, according to ORI), yet they
present results that were inexplicable at the time (although undisputed
testimony indicates that they may have been explained by later scientific
work). Cf. ORI Br. 62-63; Respondent's (R.) Br. 55-56. Haste does not
explain creating new scientific questions.

ORI argued that Dr. Imanishi-Kari was constrained by the need to fit her
fabrications to the unchallenged data and to the data on pages she had already
shown to Dr. O'Toole soon after she raised her concerns. ORI's Reply Br. 4-5.
These alleged constraints, however, really do not explain why Dr. Imanishi-
Kari would not have discarded any unfavorable or inconsistent pages that were
not shown to Dr. O'Toole, especially if Dr. Imanishi-Kari were undertaking the
sort of wholesale laundering of the record that Dr. O'Toole has alleged.
Furthermore, such constraints would not explain why Dr. Imanishi-Kari would
not, at least, ensure that the data that she did fabricate were uniformly
helpful. ORI argued that the allegedly false data were good enough to satisfy
the earlier inquirers until being subjected to a degree of scrutiny by the
Secret Service that Dr. Imanishi-Kari could not have expected. ORI Reply Br.
5. The fact that the other scientists who looked at the data accepted them as
looking "real" and as adequately supportive of the reported results, despite
the evident imperfections, could also be interpreted as meaning that the data
are real and that this unprecedented level of scrutiny might uncover similar
inexplicable oddities in any real data.

For example, Table 3 contains the notation "NP" after several hybridomas,
which had no meaning in the context of this table and was apparently carried
over accidentally from other work. Table 2 contains an evident numerical
error: it shows a total of 1/144 normal spleen hybridomas as 17.2.25 idiotype
positive, yet counts a total of 3 hybridomas under subcategories of idiotype
positives, with a footnote indicating that one of these was not 17.2.25
idiotype. Figure 3, which presents molecular data and was not
challenged in this case, refers in the legend to a Part C which does not
appear in the figure. Figure 4, which also presents molecular data and is not
at issue, shows the hybridization probe constructed from a 236 bp restriction
fragment whereas the legend to the figure cites a 233 bp fragment.

25. Dr. O'Toole appeared to subscribe to this conspiracy theory, at least
as to the period after her challenge, having stated that "at the meeting with
Dr. Eisen and co-authors Drs. Baltimore, Imanishi-Kari and Weaver, Dr.
Baltimore decided on the course of action for the authors (i.e., a cover-up)."
Ex. H265, at 3. She also accused Drs. Huber and Wortis of having "declared
their intention of orchestrating a cover-up." Ex. H284, at 4; see also Ex.
H263, at 2. She accused the NIH Scientific Panel of bias, accepting "totally
impossible explanations," "hypothesizing totally ridiculous and
unsubstantiated theories," and "performing contortions of logic and denying
the undeniable." Id. at 7-8.

26. For example, the people that Dr. O'Toole has accused of making false
statements in this matter include Drs. Baltimore, Eisen, Wortis, Huber,
Woodland and Imanishi-Kari. See, e.g., Exs. H265, at 4; H270, at 3; H272, at
2; H276, at 2-3; H290, at 6, 9.

27. Reacting to a suggestion that he could have evaluated Dr. O'Toole's
concerns by reviewing Dr. Imanishi-Kari's laboratory notebooks, Dr. Eisen
testified that the idea was ridiculous:

Ridiculous because it doesn't take into account what looking at
notebooks entails. It entails an investment of weeks and weeks and
weeks and weeks. I couldn't do that. I'm paid by the NIH to do
research. Not to investigate notebooks endlessly.

Tr. 2025. Having in fact undertaken a detailed review of the laboratory
notebooks, the Panel can confirm the time and difficulty involved.

28. Dr. Baltimore testified that today scientists "keep records with an
eye to their perhaps becoming public," whereas in the mid-1980's, most
scientists "kept notes largely for themselves as a way of providing the
information they needed when they came to writing a paper so they could
describe in detail what they had done" and "notes in that period of time were
more a personal crutch than a public document." Tr. 2070. In fact, he
testified that he had known "scientists who kept surprisingly skimpy notes on
surprisingly unofficial pieces of paper. Like paper towels." Id.

29. We do not imply that scientific misconduct investigators are obligated
to invest the time and resources to repeat all questioned experiments, nor
that such a procedure would resolve all questions. Nevertheless, it is
interesting that even the Secret Service records do not preserve results in a
form allowing retrospective re-interpretation by others without reproducing
the experiment.

Mr. Stewart pointed out that no findings were made about numerous alterations
in the I-1 notebook which appeared to be natural changes, as opposed to "a
change with some kind of intent." Tr. 4070. He testified that if he were to
try to make such a determination about the alterations in the work sheets, he
"would probably want to run ESDAs" (a test to detect impressions from writing
done above the page as discussed later); he would "probably want to do ink
analysis"; he might "try to see if this was from a pad;" in short, he would do
the "same type of things I did in this case." Id. The trouble with this
reasoning is that all of these tests would address the timing of the
alteration and the source of the materials (ink and paper) used to make the
alteration, but without knowing the substantive significance of the change and
the possible motivation for making such a change, the tests would say nothing
about intent. It is not self-evident that a change made at a later time or
using a different pen is less likely to be an honest or accurate correction
than one made immediately or in the same ink.

31. Mr. Stewart testified that this case was "a new experience" as far as
the "time lapse," and that "it would have been nicer in hindsight if we had
kept better work sheet records, but we didn't have any anticipation that it
was going to take this long to culminate this case." Tr. 4055-56.

Reviewing an ink analysis which did not include a test of the contribution of
the paper from which one of the ink samples was taken, Mr. Stewart testified
that that indicated to him that he had already tested the contribution of that
paper on another plate. The following exchange ensued:

Q Oh, really. Where does it indicate that?

A It's my own knowledge. It makes no sense why I would have a plate
with just two paper blanks when I have three different tapes there
unless I had previously run them at some point.

Tr. 4087, discussing Ex. H532; see also Tr. 4067 (Stewart) ("guess" about
meaning of note). Similarly, Mr. Stewart testified that a lane on an ink
analysis plate that appears empty may have had a paper blank sample, that he
"can usually tell after the fact, but I wouldn't expect that you could be able
to tell." Tr. 4076. Asked if one should be able to reconstruct the findings
from the work sheet, Mr. Stewart testified that: "In a perfect world that
would be nice. If we could go back to the work sheet and know exactly what
occurred for that analysis. It doesn't always happen that way." Tr. 4046-47.
In evaluating the possible contribution of certain pages, Mr. Stewart elected
to test a sample of the paper only from one page because he reasoned that the
other pages "appeared to be a continual run" so that he "had confidence" that
the one page could represent the group. Tr. 4088-89, 4091; Exs. H529 and

Both Dr. Mosimann and Dr. Speed have extensive experience in applying
statistics to the field of biology. Exs. H81; R37. ORI argued that we should
give more weight to Dr. Mosimann's opinions than to Dr. Speed's because Dr.
Mosimann has a scientific degree and has published articles regarding
statistical analysis of "fusion experiments." Dr. Mosimann's scientific
degree, however, is a bachelor's degree in zoology, and ORI did not show that
his studies included work related to radioimmunoassays. Both of the "fusion
experiment" articles Dr. Mosimann co-authored relate to essentially the same
analysis: determination of the expected average number of cells distributed
to each of the wells after a cell fusion, based on the number of wells
exhibiting growth. Exs. H794A; H794B. Neither article related to the testing
of supernatants from cells of a fusion to determine the properties of
antibodies produced by the cells.

34. Mr. Hargett testified that: "This whole case was unusual. This is
the first time, that I've ever known of, that we had a request from Congress.
It was a whole different thing for us. And -- would things have been worked
better if -- you know, yes it could have been done better, no question about
it." Tr. 3559-60. He further testified that this case was "not typical," in
that most of their cases are "not as complex as this, and certainly not as
large as this as far as documents are concerned." Id. While Mr. Stewart
recalled that the Secret Service had been asked to assist in investigations by
Congressional committees in the past on a few occasions, he noted that none of
those investigations anywhere approximated the size of this one. Tr. 3983-84.

Mr. Stewart, who has worked with the Secret Service as a document examiner
since 1982, reported that the range of forensic investigations in which he
worked included "murder investigations to Nazi war criminal investigations to
nuclear armament transfers from one country to another; disputed documents of
all types, counterfeit currency, threats to the President, many different
types of cases." Tr. 3593.

Thus, Mr. Hargett stated that his conclusions were as to "authenticity as to
the respective times," but that "[a]s far as falsification and fabrication,
that is not . . . my terminology." Tr. 3496. He further clarified that his
conclusions were that, based on all their tests, "some of these pages have
been manufactured" but that "[i]f you want to call that fabrication, I don't
know. I just don't know." Tr. 3498. Asked if what he meant by
"manufactured" would include a situation where someone would "take a page and
sit down with a bunch of data that had been collected over a period of time
and select pieces of that data . . . and somehow put it together, after the
data had been created originally," Mr. Hargett agreed that that "could be a
manufactured page, yes." Tr. 3498.

As a result, many of the inferences which they drew from their findings were
not adequately supported. For example, because we find that the material now
in the I-1 notebook was collected and organized at many different times and
did not purport to have been put in the present format or order at the time
the experiments were done, we could not infer fabrication from evidence
regarding the order in which pages were created or regarding the dating of
experiments. We discuss below the evidence relating to individual pages. The
general limitations of the forensic analyses of the I-1 notebook that we have
discussed here, however, inform our assessments of the implications of various
anomalies in production of or alterations to notations on those pages. It
must be noted that we are not able to rule out in every case the possibility
that some changes were made by Dr. Imanishi-Kari in 1988 or made with an
awareness of the issues under investigation. The limitations of the forensic
documentation examination simply do not permit us to distinguish changes made
in the course of organizing the data over the months from changes made much
later.

38. Dr. Imanishi-Kari explained this in a 1990 interview at NIH as
follows: "My books were not books. My books were -- that one spiral notebook
and something [data] inside of here [manila folders] books." Ex. H103, at 47-
48 (brackets to transcript citations indicate handwritten changes by Dr.
Imanishi-Kari). She also stated that these materials were not stored in any
one location but "went all over the place, these data" and that "[w]e took all
over. We took to the lab. We took to Moema's lab. We went to the counter.
It shifted. It went sometimes -- [with Weaver]." Id. at 48. "So, they were
kept in different folders in different things and they were kept all over the
place." Id. at 50.

It appears that sometimes she forgot to record the necessary information too,
since she testified that if she had not labelled a counter tape, she would
throw it away because she would not later know what experiment it was. Tr.
5256. Such mishaps resulting from careless practices may account for some of
the missing or problematic data.

40. The counter tapes are generated from continuous paper rolls of about
10 inches in width. The actual number readings are printed in a narrow column
(single or double spaced, depending on the counter) running down the tape. At
intervals the tape may be divided into a "page" by spacing and a broken line
and (depending on the counter) a register number (discussed further below in
relation to the findings about register numbers).

Dr. Imanishi-Kari did all of these at different times, and all of these
practices are reflected in the notebooks of other researchers in the record.

Her cavalier attitude toward dates contributed to the enormous difficulty in
sorting this matter out over the years, but is not unique among scientists at
the time even in this lab. Our review of the Maplethorpe and Weaver notebooks
showed considerable unreliability in dating. For example, Dr. Maplethorpe's
notebooks have some consecutive pages dated with different years but the same
month. Dr. Weaver has written that he was "not in the habit of dating every
page," so that "a significant number of the data sheets and autoradiographs
were not dated." Ex. H254, at 2. Consequently, he submitted his data with
approximate dates (such as "about May 1985") and stated that "undated data
should be regarded as having been recorded" in the period from March 1984 to
July 1985. Id.

43. The present numbering of the pages in the I-1 notebook was not put on
by Dr. Imanishi-Kari but was apparently added by investigators at some point
in the process, possibly at the Subcommittee. Tr. 3471-74 (Hargett), 5149

44. ORI also claimed that Dr. Imanishi-Kari's brief was inconsistent in
describing how she organized data because it stated that she might organize
tapes from very different dates on a page to compare data but also stated that
she might not have had any reason to organize the counter tapes immediately
where no comparison was involved. ORI Reply Br. 4, citing R. Br. 4-5. These
statements in Dr. Imanishi-Kari's brief are not inconsistent--she might
organize counter tapes from an earlier experiment when she got results on a
later one that required comparison or she might simply get around to
organizing an old experiment when it was needed for a paper or some other
reason. It is plain from a review of the notebooks that data from different
assays were in fact compiled onto the same pages and that these assays were of
a magnitude unlikely to have been performed simultaneously. Dr. Imanishi-
Kari's statements more likely mean simply that whenever an assay was performed
that was to be compared with other earlier work, there was reason to organize
the tapes together because she could not fully evaluate the second set without
putting them side by side.

45. Thereafter, the Secret Service obtained additional notebooks,
including a box of additional materials from Dr. Maplethorpe in late 1990, for
an eventual total of about 62 notebooks of one kind or another according to
the examiners. Tr. 3501, 3515 (Hargett). The present record includes more
than 70 "notebooks."

46. As noted above, dating irregularities were not unique to
Dr. Imanishi-Kari. ORI did not establish through testimony that the
researchers who produced the notebooks used as comparisons viewed accuracy in
dating as more critical than Dr. Imanishi-Kari did or that they habitually
compiled their data into notebook form (for those whose materials could even
now properly be considered to be in a "notebook" form) when they did the
experiments.

47. Of course, proceeding in this way risked a self-fulfilling prophecy.
Removing from the definition of a "normal" notebook one that contains the
characteristics encountered in the questioned notebook defines those
characteristics as abnormal. We do not share Dr. Imanishi-Kari's view that it
was inappropriate for the Secret Service to abandon the R-1 notebook as a
comparative norm, since the Secret Service could perhaps reasonably decide not
to treat any of the questioned notebooks initially presented to them for
evaluation as a legitimate source of standards. Cf. R. Br. 19. However, it
is somewhat more troubling that the Secret Service then proceeded to treat the
I-3 notebook as a comparative norm when its provenance was the same, because
the only distinction made between them was that the R-1 notebook did have many
of the aberrations noted in I-1 and the I-3 notebook did not. Tr. 3613-15,
3860-61 (Stewart), 3231-33 (Hargett). Also, Mr. Stewart testified that his
choice of norm would not be affected by the scientific content or organization
of the notebook because it would not affect the "forensic evidence," but there
was evidence that the I-3 notebook might differ from I-1 precisely because it
contained mouse screening records for which chronology was much more
important, which could affect the way it was compiled and organized. See Tr.
3860-62 (Stewart).

48. As we noted, the Secret Service raw data records contain alterations
and erasures as well, so in themselves such flaws are not significant. Mr.
Hargett testified that when anomalies were found in other notebooks, the
examiners would assume them "to be innocent changes, perhaps or normal
changes, as were some of the changes in the I-1 notebook," unless some special
significance were found. Tr. 3553-55. However, it does not appear that the
other notebooks were scrutinized as was I-1 to seek matching inks or take
impressions of each page to see what writing had been on top of it. Tr. 3552
(Hargett) (impression analysis done of only six notebooks, including I-1).
The Secret Service criterion for distinguishing "honest date changes" from
meaningful alterations was whether the examiners got results on other tests,
i.e., whether "they were multi-faceted type things." Tr. 3546 (Hargett).
However, while it may be more reasonable to conclude, for example, that a
change was made at a different time than other writing on a page when only
part of the writing shows up on an impression analysis or if you also know
that the ink used on that portion of the writing is different from the ink
used on the rest of the page, it is not necessarily more reasonable to
conclude from those facts that it is more likely that the change was made for
a dishonest reason. Evaluating whether a change or deviation in sequence
could result from innocent reasons is heavily related to the content of the
change and the existence of some potential motive or other indicia of intent.
It is difficult to see how the Secret Service ruled out any special
significance to the anomalies in other notebooks without knowing anything
about the substance of the work in them or the nature of any publications or
grants relying on them. It is true that a change of date by a day or two is
more easily recognized on its face as a possible immediate correction of an
error than a change from October of one year to January of the next (as is
alleged in relation to certain pages in I-1), but whether the latter type of
change is logically explained depends on the researcher's practices. Cf. Tr.
3662-71 (Stewart). In this case, compiling data after-the-fact and relying on
protocols to match up experimental results with probable dates could
potentially lead to "innocent" dating errors of more than a day or two.

49. As discussed in detail below, this question of whether the universe of
notebooks was obtained is particularly important in relation to whether
counter tapes in the I-1 notebook matched those being produced by other
researchers around the same time.

50. Direct requests were also made to Dr. Maplethorpe and Dr. O'Toole for
notebooks. Tr. 3515, 3551 (Hargett), Tr. 6127 (O'Toole). It is somewhat
troubling that such requests were directed to the person who challenged the
paper and to a researcher whose hostility to Dr. Imanishi-Kari was
longstanding and well-known (especially since green tapes from his notebooks
formed much of the basis for finding that Dr. Imanishi-Kari's green tapes were
from much earlier dates). Cf. Tr. 5760 (Maplethorpe), 6147 (Igras) (heard Dr.
Maplethorpe say about Dr. Imanishi-Kari that he would "get her somehow"). In
addition, although much emphasis was placed on the absence of green tapes in
Dr. Maplethorpe's notebooks after January 1984 (since his green tapes were the
most recently dated in the record except for Dr. Imanishi-Kari's), we have no
assurance that Dr. Maplethorpe turned over all of the green tapes which he
generated during the relevant period.

51. It is somewhat ironic that such efforts were made, when the
objectivity of the examiners was under a more direct threat from the contacts
made by certain Subcommittee staff. Mr. Hargett testified that Walter
Stewart, for example, in interacting with the Secret Service, was a strong
advocate of the view that misconduct had occurred. Tr. 3505-08, 3547-48,
3570-71. Although the examiners testified they did not consider such
representations in deciding how to proceed, resource limitations made them
gladly accept the guidance of the Subcommittee to focus on the I-1 notebook
and to rely on the Subcommittee's efforts to obtain the comparison notebooks,
which they accepted without question. Tr. 3874-77 (Stewart).

52. We deal with the specific green tapes at issue in relation to
particular charges in later sections. Here, we discuss the evidence about
green tapes generally and the weight properly given to evidence about
"matches" of tapes.

53. ORI complained that Dr. Imanishi-Kari was arguing that "valid forensic
conclusions about the green tapes" are "impossible absent perfect
information." ORI Reply Br. 13. Nothing prevents valid forensic conclusions
from being reached based on testing the green tapes in existence or requires
that unattainable perfection be achieved. Logic demands, however, that
conclusions based on the present absence of comparable tapes from a specific
time period rest on reasonably certain information about what other tapes were
generated at that time. The undisputed testimony that the counter machines
were almost constantly in operation and that they generated a steady rate of
tapes indicates that we may have no more than a small fraction of the tapes.

54. Unfortunately, no inventory of the notebooks and their contents
appears to have been made, nor any log book kept to record access to them
during the period (which may have been a number of years in some cases) when
they were not in the possession of the Secret Service. Tr. 3509-12 (Hargett);
see also Affidavit of Barbara R. Williams, Ph.D., August 11, 1995. This
omission raises the small but disturbing possibility that additional green
tapes were present but were lost or removed.

55. Dr. Imanishi-Kari pointed out that green tapes appear to have been a
small share of the total tapes produced. R. Br. 26. For example, only five
of the researchers' notebooks examined had any green tape in the period from
1981-85. Ex. H503, at 3. ORI responded that numerous green tapes appear in
some of the older (1981-82) notebooks, e.g., over 100 green tapes in the
Pasternak notebooks. ORI Reply Br. 13. However, the relevant point is that,
assuming that the proportion of green tapes available in the later period
(1983-1985) was small compared to yellow tapes, it would be particularly
important to have a larger share of the total tape production to try to
pinpoint exactly when the green tapes finally ceased to be used.

56. Furthermore, it was not disputed that the paper used in the counter
machines was likely to be available from many sources. Apparently, the
companies that marketed the counter machines resold tape rolls rather than
manufacturing them, since Ms. Igras testified that she purchased the tape for
MIT only from Packard. See Tr.
3738-39 (Stewart); Tr. 4575 (Igras). The investigation did not rule out MIT
obtaining green counter tape at a later date than it would have last been seen
at Western Union (even assuming the Western Union employee's general memory of
a 1983-1984 phase-out was intended to be exact).

57. An expert witness for Dr. Imanishi-Kari (Mr. Richards) also presented
a theory concerning the green tapes. He stated that he received information
that each counter came with a small amount of beige counter tape, and
contended that, in the absence of evidence to the contrary, the beige tapes
under certain conditions of light and heat could eventually turn
green. He also noted that the color of counter tape paper under cellophane
tape was different (and greener) than uncovered tape paper, and suggested that
a further "shift to green" might result from solvents in the cellophane tape.
Tr. 4259-63, 4372-74, 4381, 6254-55. Although the Secret Service examiners
experimented with heat and ultraviolet light and were not able to alter the
tint of yellow counter tape to get a green shade (although aging and browning
did occur), Mr. Richards argued that the possibility of sunlight or chemicals
causing such a color change in beige paper was not ruled out. Tr. 4262-64,
6184-85, 6221-22, 6232-33 (Richards), 3621-22, 3626, 5537 (Stewart). Mr.
Stewart testified that the cellophane tape probably could also cause paper to
retain color while uncovered paper faded, but stated his opinion that the
greenish tint was not the result of covering with cellophane tape. Tr. 5538.
ORI questioned whether any beige paper was in fact provided, and if so,
whether it was likely to still be in use so many years after the purchase of
any new counters. See ORI FFCL IV.H.v.9. In any case, whether the green
tapes were a color variant of yellow or an effect of cellophane taping or a
color change of a beige shade, we are confronted with interpreting the pattern
of these green tapes appearing in decreasing frequency from 1981-1985.

58. ORI theorized that green tapes petered out in June 1983 with a few
later instances in December 1983 and January 1984 found in Dr. White-Scharf's
and Dr. Maplethorpe's notebooks. ORI Reply Br. 13. This theory posits an
innocent gap of more than six months between appearances of green tape in the
record. If green tape were phased out as stocks were consumed, it is
reasonable that green tapes would become less common and more sporadic in the
notebooks. It is possible green tape was used in 1984 and 1985 only in the
form of old partial rolls used when tape ran out or on a printer used
occasionally as a back-up or substitute. See, e.g., Tr. 5105 (Imanishi-Kari).
ORI acknowledged that the January 1984 green tape in Dr. Maplethorpe's
notebook was something of an "outlier" with no surrounding examples. The
absence of "surrounding" green tapes makes clear that the absence of a
continuous run of tapes of the same color sufficient to account for a complete
tape roll is not an occurrence unique to Dr. Imanishi-Kari's notebooks.

59. This ambiguity highlights the unresolved question of whether other
researchers' dating and organizing practices were any more reliable than
Dr. Imanishi-Kari's. We have no testimony or affidavits from most of the
other researchers whose notebooks were reviewed about what the dates in their
books meant in relation to when the counter tapes were generated and whether
they were organized chronologically so that undated material could be
approximated based on neighboring pages.

60. The examiners determined that the printers were not manufactured by
the companies providing the counters. Tr. 5541-42, 5579-88 (Stewart), 5936-37
(Dahlgren), 6164 (Fitzgerald). It is likely the teletype printers sold with
the Beckman gamma counters were originally manufactured by Teletype
Corporation and may have been obtained through the Western Union Telegraph
company. See generally ORI FFCLs IV.E.1-11.

61. The information about the format of the Packard gamma counter and the
font of its associated printer was presented incorrectly in ORI's FFCLs but
correctly in ORI's brief and in the sources cited in the FFCLs, so we conclude
that it is not in dispute. Cf. ORI Br. 69; ORI FFCL IV.D.21.

62. Mr. Stewart was aware of the danger that he might "have multiple
printers that are appearing as one to me." Tr. 3737. While he sought to
address that in his interviews with printer manufacturers, nothing in the
record indicates that the examiners obtained information sufficient to rule
out the possibility of multiple printers.

63. Ms. Igras testified that in 1982 a lot of new equipment was purchased
to equip the Whitehead Institute when Dr. Baltimore moved his laboratory there
and that "those pieces of equipment that were left behind, Ward Deharo put
them in the basement of the Cancer Center." Tr. 4588. The old equipment
"included a Teletype." Tr. 4574 (Igras).

64. "I certainly know that they [the counters] had printers with them
built in or beside them and that when they broke . . . I went and found
another and switched them." Tr. 4448 (DeHaro). Such a change using either an
excess printer from the basement or alternating a working printer between two
Beckman counters was a matter of a "couple of minutes, three, four, two
minutes. You just picked -- unplugged the -- one, the data line, the power
line, move out of the way, move another one into it, into place. . . . They
weren't light, but you could do it, and just plug it on the back, plug the
power in and away you went, if it worked well." Tr. 4455 (DeHaro).

65. ORI made this claim in the context of claiming "full matches" of I-1
green tapes to certain earlier Maplethorpe tapes even though the format makes
clear that they came from different counters made by different manufacturers.
These matches would not have any meaning unless the Maplethorpe tapes were
produced on the same printer as Dr. Imanishi-Kari's tapes. Mr. DeHaro and Ms.
Igras testified to their belief at the time (well-founded or not) that they
could not switch printers between Beckman and Packard counters. Tr. 4455
(DeHaro), 4572 (Igras).

66. In light of how little certainty existed about the availability and
use of specific printers and counters much closer in time and space to the
relevant period (i.e., by MIT in 1989), it is not surprising that we are
unable to reach conclusions with any more certainty almost seven years later.

67. The document examiners employed thin-layer chromatography (TLC) to
analyze ink by separating and examining dye components. Tr. 3763-64

68. In fact, ink found to "match" must in some instances have come from
entirely different ribbons. The examiners claimed a match between certain
Dr. Imanishi-Kari green tapes and a whole set of Maplethorpe tapes from a 21-
month period (April 20, 1982-January 15, 1984), and these tapes could not
possibly all be from a single ribbon.

69. A further complication was that Mr. Stewart's testing may not have
ruled out the possibility that test results might vary on one ribbon if the
ink formula changed or fluctuated in producing the ribbon. Cf. Tr. 3759-61
(Stewart). ORI considered this implausible, since ribbons were produced in
large batches and since Mr. Stewart (being unquestionably expert in ink
analysis) did not mention it. ORI Reply Br. 18. However, Mr. Stewart was not
asked one way or the other about the production of typewriter ribbons and the
application of ink (unlike the production of writing pads, about which
considerable evidence was adduced), so we can draw no conclusion about whether
ink formula would be expected to be uniform across a ribbon. ORI also pointed
out that the Secret Service took samples from a number of different points
along the tested counter tapes, so that arguably any variation in ink
formulation would have been discovered. ORI Reply Br. 18. This argument is
flawed, however, because (1) the Secret Service took samples of ink only from
relatively small segments of counter tape, not along the length of the ribbon
itself, so the samples are likely to represent only a small area on the
printer ribbon (a single ribbon lasted about 90 days), (2) samples of ink from
counter tape segments were apparently pooled for testing, so variation within
the ink on a segment might not be noted, and (3) there is no evidence that any
tests for this purpose were ever conducted.

70. The ink on a tape dated March 29, 1982, generated by another researcher
on the same counter was different from that on Maplethorpe tapes with
surrounding dates, including March 23, March 30, and April 16. Ex. H532 (compare
Reilly and Maplethorpe tape results). The Secret Service suggested that this
might have resulted from a change of ribbon and from misdating by the second
researcher so that the change actually occurred after the last Maplethorpe
tape (or two changes were made within three weeks). Tr. 4092-95 (Stewart).
Similarly, the White-Scharf tape dated December 1, 1983 did not have ink
matching those of tapes near in time, but the Secret Service suggested that a
substitute printer might have been used. Ex. H503, at 3. As Dr. Imanishi-
Kari argued, the same kind of eventualities could account for differences in
her tapes. R. Br. 33-34, n.41.

71. We have pointed out elsewhere that the dates on pages with counter
tapes in the I-1 notebook do not necessarily purport to be the dates of the
experiments themselves and that the dates in comparison notebooks have not
been shown to be any more precisely related to the dates on which tapes in
those books were generated.

72. Even the inference that pages with different defects did not come from
a single pad is not certain, since Mr. Stewart testified that the defects he
used to distinguish two different sources could have occurred on pages
interleaved in alternating fashion into the same pad or into multiple pads,
and that different thicknesses of paper did not necessarily imply different
pads. Tr. 3967-69, 3972-73.

73. Actually, the Secret Service expressly found that both inks used on
the June subcloning pages, for example, were available on the "purported
dates" of those pages. Ex. H503, at 1.

74. Dr. Imanishi-Kari pointed out in her brief that at least one instance
occurred in which stacking is the only possible explanation for material on
one page appearing in registration on another, because the pages are from
different sources. R. Br. 79, n.103; I-1:86-87; Ex. H539. Furthermore, Mr.
Hargett acknowledged that such small portions of the writing of several
critical pages appeared in registration that stacking might explain those
pages as well. Tr. 3453-54 (I-1:121 and I-1:5), 3464-67 (I-119 and I-1:7-9;
I-1:107 and I-1:31).

75. In some instances, ESDAs (not necessarily in registration) were also
used as evidence that portions of some pages were written at a different time
than some other material appearing on the same page. This inference was drawn
when only one portion of the writing on a page appeared on an ESDA of a page
which must have been underneath at the time when that part was written (since
it received an impression). However, the examiners acknowledged that the
difference in time could be a matter of minutes if the pages happened to be
moved in relation to each other. Tr. 3529 (Hargett), 4014-15 (Stewart). In
fact, even if they were not moved, an ESDA does not necessarily pick up the
impression of every piece of writing placed on a page while in position above
another. Some impressions simply do not happen to come through or at least
are not successfully captured by the ESDA process. The idea that such ESDA
results imply that some writing was added years later in an attempt to deceive
is hence not well-founded.

76. We discuss Dr. O'Toole's allegations that these pages were brand new,
and that Dr. Imanishi-Kari said at the meeting that she had just produced
them, in relation to the charges concerning Figure 1, and conclude that Dr.
O'Toole's interpretation of what Dr. Imanishi-Kari said about these pages was
improbable and uncorroborated. Cf. Tr. 984-87 (O'Toole). Had
Dr. Imanishi-Kari just finished fabricating these pages with an intent to
deceive, she would hardly be likely to announce that fact or express
resentment at the trouble involved.

77. "Ig" is shorthand for immunoglobulin. IgM antibodies are those with
the μ isotype, but are produced in two allotypic variants, μa and μb.

78. Dr. Imanishi-Kari stated in an interview with NIH that the vials with
iodinated material were dated and unusable material disposed of. Ex. H103, at

79. In a 1990 interview with NIH, Dr. Imanishi-Kari stated that she had
problems with BET-1, not in terms of specificity, but in iodination, depending
on protein concentration. She stated that "in the beginning when I purified
BET-1, I couldn't get a lot of Bet-1 purified from the supernatants and the
protein concentration at that time was not very good," and that iodination was
problematic with the low concentration of protein with which she was working
at times. Ex. H102, at 144, 146-47.

80. ORI made no charges of scientific misconduct in relation to the loss of the
raw data on the fourth dilution data points, admitting that it had
"insufficient evidence to hold Dr. Imanishi-Kari directly and specifically
responsible," while still asserting that the fourth data points must be
fabricated. ORI Report 61. It is hard to see why ORI accepted that
Dr. Imanishi-Kari could have relied on Dr. Reis in regard to these data but
that Dr. Imanishi-Kari should nevertheless be held directly and specifically
responsible for the other shortcomings charged in relation to Dr. Reis's work
in Figure 1.

81. There was unrebutted expert testimony that doing this was not scientifically
difficult using the equipment then available at MIT. Tr. 1914-15 (Eisen); R.
Br. 44.

82. For purposes of this section, we consider all the data in the experimental
record before us as authentic to determine if failing to report the discrepant
results with BET-1 constituted deliberate misrepresentation. The following
section addresses ORI's allegations that some of the experiments in which BET-
1 performed well were fabricated. Since we conclude there that ORI did not
prove that those pages were fabricated, we need not consider here how our
conclusions would be affected if portions of the experimental data were
disregarded as fabricated. One difficulty with the assessment by ORI's
scientific experts about whether the representations about BET-1 were
misleading is that they discounted those pages which were forensically
challenged, assuming their invalidity as data. ORI considered Dr. Eisen's
evaluation of the experimental evidence to be "limited, at best" because he
did not review the statistical and other forensic evidence. ORI Br. 39.
Since we do not share ORI's assumption that the questioned pages were not
legitimate data, we find more useful for our purposes here (in evaluating
whether the reporting regarding BET-1 in the Cell paper was deliberately
misleading) the testimony of scientists who considered the entire experimental

83. The authors published the required correction in Cell in 1989. Ex. H3
(57 Cell 515). The correction stated that iodinated BET-1 was sometimes not
specific but that data from those occasions were not used in the paper. The
correction further stated that, when working, BET-1 "could discriminate with
1,000- to 10,000-fold specificity" in favor of μa. Id. In addition, the
correction published further data on BET-1 specificity drawn from I-1:121.
ORI challenged the correction too as deliberately misleading because:
(1) Dr. Imanishi-Kari knew that iodination was not the cause of nonspecific
binding as the correction implied; (2) BET-1 did not generally discriminate
with the level of specificity claimed in the correction; and (3) I-1:121 was
fabricated. See ORI FFCLs XIII.1-14. We discuss the charges relating to I-
1:121 in the next section. The degree of specificity is discussed in the text
and is not out-of-line with the assessments of the NIH Scientific Panel and
witnesses before us. The possibility that iodination might affect specificity
was described in the NIH Scientific Panel Report as "reasonable," though
unproven. The correction does not claim that the authors knew that iodination
problems caused the non-specific binding, and ORI did not prove that
Dr. Imanishi-Kari knew of another cause for the inconsistent performance of
BET-1. Dr. Dahlberg asserted that damage to an antibody during iodination
would affect its ability to bind at all rather than its ability to
discriminate, but ORI did not prove that effects on specificity could not
occur. ORI pointed to I-1:110 as evidence that BET-1 still showed high cross-
reactivity, even when not iodinated. ORI Report 68, 89, n.53. But we found
that the degree of cross-reactivity shown at I-1:110 properly considered is
much lower, so that page supports the opposite conclusion. Scientists
testified that the denaturing caused by iodination could cause non-specific
binding. Tr. 1921-26 (Eisen), 1293 (Davie). The important point is that,
whatever the source of the occasional problems with BET-1, Dr. Imanishi-Kari
and Dr. Reis, and others using it, obtained good specificity results on other
occasions, were able to tell when it was working well, and relied on those
results.

84. These results conflict with Dr. O'Toole's assertion that BET-1 "never,
ever, ever, ever worked" for her as a μa-specific reagent in an RIA. Tr.
1064; Ex. H109 at 23. Instead, they support Dr. Imanishi-Kari's testimony
that Dr. O'Toole's data showed BET-1 specificity on her assays, too. Tr.
4954-55. ORI apparently did not credit Dr. O'Toole's position that BET-1
lacked all specificity, stating that "neither Dr. Dahlberg nor any of ORI's
other experts ever contended that BET-1 could not discriminate between μa and
μb." ORI Br. 39. Rather, ORI limited its position in its brief to the claim
that BET-1 did not discriminate elsewhere as well as depicted in Figure 1.

85. ORI also pointed to the fact that the correction letter refers to
BET-1 as a "key reagent" as further evidence that its specificity was
important to the Cell paper. ORI Br. 35; Ex. H3. The fact that BET-1 was
undoubtedly important in identifying the transgene does not mean that its
role in the paper was undermined by its allotype specificity being relative
rather than absolute (and thus potentially exaggerating the presence of
transgene).

86. However, Dr. Davie, who testified for ORI, stated that he did not find
it surprising that the values obtained with sera exceeded the highest value
with the positive control protein, since it was comparing "apples and
oranges." Tr. 1300.

87. The corresponding percentages of cross-reaction for AF-6 in these
experiments were calculated by ORI as ranging from 3% to 14%, except for
R-1:34, which was 41%.

88. In fact, Dr. Dahlberg testified that four of the assays in his analysis in the
ORI Report at 65 are "consistent with" the specificity in Figure 1, if he were
not questioning their authenticity. Tr. 529.

89. An example of the impact of the different approaches to assessing cross-
reactivity is Dr. Eisen's discussion of ORI's comparison of BET-1 data at I-
1:75-78, which were not published, to R-1:37-40. ORI plotted the average
values for BET-1 bound to transgenic and normal sera in each assay and
emphasized that the unpublished assay had a 30% average level of cross-
reactivity. ORI Report 62, Ex. C. Dr. Eisen testified, however, that the
unpublished assay showed a 100-fold discrimination in favor of μa, which is
"perfectly satisfactory specificity." Tr. 1918-19. Dr. Capra agreed that the
30% average cross-reactivity for the first dilution calculated by Dr. Dahlberg
in that graph is not an accurate demonstration of non-specificity and that
looking at all the dilutions shows that BET-1's reactivity to the transgenic
and normal sera differed substantially. Tr. 2822-23. He concluded that both
the published and unpublished data sets would support the same conclusion,
i.e., that BET-1 unambiguously and clearly discriminates by allotype.
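
The contrast drawn here between the two measures can be illustrated with a
short arithmetic sketch. The counts below are entirely hypothetical, invented
for illustration and not taken from the record; the sketch only shows how an
average cross-reactivity computed at the most concentrated dilution can read
as 30% even where comparing titers across all dilutions shows roughly 100-fold
discrimination in favor of μa.

```python
# Hypothetical RIA counts (cpm) at serial dilutions -- illustrative only,
# not the actual data discussed in the record.
dilutions  = [10, 100, 1000, 10000, 100000]   # reciprocal dilution factors
transgenic = [10000, 9500, 8000, 4000, 400]   # cpm, transgenic serum
normal     = [3000, 700, 250, 120, 100]       # cpm, normal serum

# Approach 1: percent cross-reactivity at the first (least dilute) point,
# where both sera may be near saturation.
first_dilution_cross = 100 * normal[0] / transgenic[0]

threshold = 500  # cpm above background (hypothetical cutoff)

def titer(counts):
    """Greatest reciprocal dilution at which counts still exceed threshold."""
    return max(d for d, c in zip(dilutions, counts) if c > threshold)

# Approach 2: fold discrimination by titer -- the ratio of the greatest
# dilutions at which each serum still binds above the threshold.
fold_discrimination = titer(transgenic) / titer(normal)

print(f"first-dilution cross-reactivity: {first_dilution_cross:.0f}%")
print(f"fold discrimination by titer:    {fold_discrimination:.0f}x")
```

On these invented numbers the first measure yields 30% while the second yields
100-fold, which is the kind of divergence Dr. Eisen and Dr. Capra described.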

90. Dr. Dahlberg correctly pointed out that in assays using an anti-
idiotype coat very few idiotype-positive antibodies of any kind would be
captured in normal mouse sera so that the BET-1 probe would have little target
to which to attach. Hence, it would be difficult in such assays to determine
if the BET-1 count on normal sera was low because the probe was specific for
only μa or because there were simply no idiotype-positive antibodies present.
The control proteins, by contrast, are known to be idiotype-positive with μ
heavy chains of the allotypes of the specific mouse strain targeted, so they
would be expected to attach to the anti-idiotype coat and provide a target for
the anti-allotype probes. On this basis, ORI disregarded some of the assays
to which Dr. Eisen pointed in his testimony as further examples of good
discrimination by BET-1. ORI Br. 39; ORI FFCL IX.F.ii.9-20. However, this
point did not account for all the overlooked assays.

91. ORI's position on this assay overlooked the fact that it was run on an
anti-kappa coat which would capture almost all antibodies in normal mouse sera
and therefore should provide ample targets to test BET-1's specificity even if
the controls were performed on sera rather than control proteins.

92. This testimony is corroborated by her similar use of the names of the
mouse strains on R-2:6, where she pointed out that the listing of
concentrations instead of dilutions was more consistent with purified proteins
than sera. Tr. 2658-59. Where Dr. Reis did use sera, the specific mice from
which the sera came are identified. R-2:47; R. Br. 42, n.52.

93. Their initial reaction of anger and dismay belies the claim that
Dr. O'Toole's memory of early events in this dispute differs from that of
virtually every other participant because Dr. Imanishi-Kari's friends
immediately closed ranks to protect her regardless of culpability. Dr.
O'Toole relied as evidence of this protectiveness on her allegation that Dr.
Wortis announced in her presence that he would have to "rehabilitate"
Dr. Imanishi-Kari if Dr. Imanishi-Kari could not "explain away" the problems
with the paper. Tr. 953-54 (O'Toole). Dr. Wortis denied that he ever said or
thought that Dr. Imanishi-Kari needed rehabilitation. Tr. 2811. Dr. O'Toole
apparently used the word "rehabilitate" as implying a whitewash of
Dr. Imanishi-Kari's reputation, but, in fact, the phrase seems as likely to
imply a need for some retraining or corrective action against Dr. Imanishi-
Kari had she not explained adequately. It is implausible that, were Dr.
Wortis to have intended to somehow protect Dr. Imanishi-Kari if he found some
wrongdoing (which we do not think he ever intended), he would begin by
announcing that intention to the person who raised the concerns.

94. Dr. O'Toole asserted that Dr. Imanishi-Kari told her when she arrived
in the laboratory that Dr. Reis was recloning BET-1 because it had been
contaminated so that it was not specific and did not work well. Tr. 887-88.
Dr. O'Toole claimed no direct knowledge of whether such contamination occurred
or why Dr. Reis was recloning. Dr. Reis testified that while some plates (not
cells) were contaminated, she had recloned BET-1 only to get more reagent.
Tr. 2508-10, 2605 (Reis), 4903-04 (Imanishi-Kari).

95. ORI disregarded Dr. Marrack's testimony because she did not review all
of I-1 and the ORI Report in reaching her conclusions about the
appropriateness of Figure 1. ORI Br. 36. The scope of her review is
irrelevant to her testimony about standards of scientists in reporting
problems with reagents.

96. We note that ORI also used this assay (with 34% cross-reactivity) as
proof of BET-1's lack of specificity in its first charge on Figure 1.

97. One problem in sorting out why BET-1 sometimes failed to discriminate
well is that, as mentioned above, no records exist to trace individual batches
of BET-1 to determine if certain preparations caused the difficulty. The fact
that recloning occurred at a given date does not imply that all experiments
after that date used BET-1 iodinated at the same time or that all conditions
in later assays were otherwise ideal.

98. The data on I-1:111-12 do not seem to have been needed for any other
purpose and so retaining them is particularly inconsistent with the idea that
Dr. Imanishi-Kari was intentionally doctoring the record to support BET-1's
specificity. I-1:113 was among the pages shown to Dr. O'Toole early on as an
example of endogenous idiotype-positive clones, since it included an assay on
anti-idiotype coat with an AF6 probe. The controls show that the separate
BET-1 assay was not working well, but the AF6 results were not affected.
Dr. Imanishi-Kari might have had reason to retain the AF6 results, but, if she
were intentionally tampering with the record, she could have removed the BET-1
results or moved the AF6 to a different page.

99. As discussed above, however, we do not share ORI's belief that
Dr. Imanishi-Kari "knew" that iodination was not the cause of BET-1's
unreliable specificity results. All that is established on this record is
that she knew BET-1 did not always work well and that she thought it
sufficient to disregard the results of assays where BET-1 did not demonstrate
adequate specificity.

100. The conclusion depends on the assumption that both tapes were
generated by the identical gamma counter, and the evidence on possible
"matches" assumes that the comparison tapes were also generated by the same
machine. The testimony indicated that they matched the format for a Beckman
7000 counter, which was one of the two that Dr. Imanishi-Kari said she
generally used for experiments in the I-1 notebook. Tr. 648-49 (Mosimann).
However, while the format could be used to distinguish between the Packard and
the Beckman gamma counters used in Dr. Imanishi-Kari's laboratory, there were
gamma counters on other floors in the building, some of which may also have
been Beckman 7000s. Although Dr. Imanishi-Kari testified that she did not
believe she used those for any experiments at issue in I-1, she also said that
she might have. Tr. 5098. Dr. Mosimann testified that the comparison tapes
which he used to create a "time line" of the register numbers (to show that
the ones on these pages could not fit in during March 1985 and to support the
conclusion that the numbers increased at a slow and steady rate) were from the
same Beckman (at least one other Beckman was in the building at the time).
Tr. 653-669; Exs. H422, H438, and H278, at 6. While the progression was
persuasive, it is not certain that some of these tapes could not have come
from other Beckman 7000s. The MIT fact sheet on gamma counters stated that
they were located in common areas "available to all scientists in the
building" and that it was "not uncommon for researchers to move between floors
to use shared equipment as needed. Therefore, all the gamma counters in the
building could in theory be used by any individual at any one time." Ex.
H278, at 6-7. No records were kept that would connect any particular counter
tape to a specific gamma counter. Id.
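
The "time line" reasoning described in this note can be sketched in a few
lines. The dates and register numbers below are invented for illustration
and are not Dr. Mosimann's actual values; the sketch only shows the logic of
interpolating, from dated comparison tapes whose register numbers increase at
a slow and steady rate, the register value expected on a claimed date.

```python
from datetime import date

# Hypothetical (date, register-number) pairs from dated comparison tapes --
# illustrative only, not the values from the exhibits.
timeline = [
    (date(1984, 6, 1),  1200),
    (date(1984, 12, 1), 2400),
    (date(1985, 6, 1),  3600),
]

def expected_register(d, timeline):
    """Linearly interpolate the register number expected on date d,
    assuming the counter's register advanced at a steady rate between
    the dated comparison tapes."""
    for (d0, r0), (d1, r1) in zip(timeline, timeline[1:]):
        if d0 <= d <= d1:
            frac = (d - d0).days / (d1 - d0).days
            return r0 + frac * (r1 - r0)
    raise ValueError("date outside the span of the comparison tapes")

# A questioned tape whose register number lies far from the interpolated
# value for its claimed date does not fit the timeline.
claimed = date(1985, 3, 1)
print(expected_register(claimed, timeline))
```

Of course, as noted above, the inference is only as strong as the assumption
that every tape in the timeline came from the same counter.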

101. Dr. Imanishi-Kari testified that the fact that these tapes were
pasted on as whole pages, and that the data in these sheets were not used in
the paper or for any speech or grant, suggested that she did not get around to
organizing them until relatively late, "probably . . . when I was organizing
the last piece of, pieces laying around." Tr. 5254. This scenario is
consistent with the forensic evidence, discussed below, that intervening pages
were organized quite late (perhaps in early 1986) and with the increased
likelihood of error in retrospectively dating them well after the experiments
were completed. Dr. Imanishi-Kari noted that she may have estimated the dates
based on the dates on the BET-1 purification protocol on I-1:110, which was
one of the spiral notebook pages, since she "may have just guessed" that these
BET-1 tests "may be related" to that series of experiments. Tr. 5256-57.

102. ORI admitted that it had no "forensic or other evidence specifically
indicting I-1:114," other than its association with the surrounding pages.
See ORI Br. 47. Thus, it is unnecessary for us to address that page, in light
of our overall conclusion that it is likely that the dates on all of these
pages are not reliable in relation to the dates of the assays, but that there
is no evidence of intentional deception.

103. In his testimony, Mr. Stewart repeatedly stated that I-1:113
purported to be from 1984. Apparently, he based this on grouping it with
I-1:30 and 41, which bear 1984 dates. Tr. 3703-09; see also, Tr. 3272-73, 3313
(Hargett). However, it is obvious that I-1:113 did not purport to be from
1984 since Dr. Imanishi-Kari has maintained consistently that the laboratory
did not have BET-1 until the end of 1984 or early 1985. In its brief, ORI
claimed that I-1:113 purported to be from March 1985. ORI Br. 43-46; see
also, Tr. 3711-12 (Stewart) (late 1984 or 1985).

104. It was not disputed that the I-3 notebook was from 1986 and was
compiled from the beginning as a mouse screening notebook, for which purpose
specific dates were much more important. Tr. 5151-55 (Imanishi-Kari).

105. Mr. Stewart indicated that they attempted to track "movement" of
certain defects in a pad to estimate if pages were taken from near each other
in the same pad. He considered that the I-1 pages discussed here "clustered" with
I-3 pages. Tr. 3700-03. However, since the same defect can occur on pages
interleaved into many different pads, the precise location of the defect does
not necessarily correlate with the placement of the page within a particular
pad. Tr. 3967-69 (Stewart).

106. In addition, I-1:113 has writing in blue ballpoint pen ink that
matches the ink formula of the blue pen used on I-1:30 (also from the pages
grouped together by the Secret Service). Tr. 3706 (Stewart).

107. The assertion that they must all be from after February 26, 1986, was
not based on any unavailability of a particular type of pad or ink formula
before that date. Rather, the precise date was assigned simply because of the
link among the pages, and because no other instances of pad paper with the
precise matching defect were located in any other researchers' notebooks until
after February 1986. Tr. 3707-12 (Stewart). Of course, other researchers in
the building would not necessarily have shared a single pad or even pads from
a single package. In fact, the occurrence of matching pad paper in other
researchers' notebooks at any time reinforces the likelihood that paper defect
and thickness matches do not demonstrate a single pad source. Since we do not
have all the researchers' notebooks (and since pads may well have been
consumed for purposes other than preserving laboratory data), the absence of
matching pad paper before February 1986 is relatively meaningless and, in any
event, would not prove that the I-1 pages were compiled after May 1986, when
questions were first raised about BET-1.

108. ORI mocked the idea that the coincidences could have occurred
innocently, saying that it would require that a pad be used to generate
I-1:30, 41, 43, and 113, along with a black ballpoint pen and a blue ballpoint
pen; then all three items would have to be put away for two years; then all
three items would have to be pulled out and used to generate I-3:25, which
would have to be "stacked" with I-1:30 in the process of being generated.
ORI Br. 46. None of this is necessary. It would suffice for I-1:30, 41, 43,
(with late 1984 data) and 113 (with spring 1985 data) to have been organized
around the same time in 1985 using pens and pad paper which remained in the
laboratory for a few more months to be used in early 1986 to put data on
I-3:25. Other possibilities are that similar pads or pens from the same
batches were around, or that all of these data (none of them critical data that
needed to be in order before the paper was submitted) were compiled around the
same time in 1986 using counter tapes from when the experiments were
performed. Registration could have occurred either because the completed
pages remained on the pad and fresh pages were folded back over them or
because of stacking. See Tr. 3261 (Hargett).

109. The Secret Service report even concluded that I-1:41 was "produced
with the same pad of paper directly above" I-1:113. Ex. H500, at 2; Tr. 3266
(Hargett). As discussed above, however, this assumption is unwarranted, since
other methods could produce registration, and it is impossible to tell if the
page bearing the ESDA impression already had writing on it when the impression
was produced. Tr. 3261, 3565-66 (Hargett). Since no evidence established
that ESDA impressions could not be picked up by pages other than the one
directly underneath, the conclusion that writing was done "directly" above is
not supported by the ESDA. Mr. Hargett also considered the location of
defects in the pad paper in reaching this conclusion, but as discussed
earlier, this defect analysis did not suffice to identify paper as coming from
a single pad. Cf. Tr. 3540-42 (Hargett).

110. It would have been obvious that Dr. Imanishi-Kari was not claiming to
have actually done the experiments just before the meeting to satisfy Dr.
O'Toole, since these are assays that would require substantial periods of time
to undertake and complete.

111. The improbable and ominous construction which Dr. O'Toole put on
Dr. Imanishi-Kari's complaint is typical of the escalating pattern of
miscommunication running through the long history of this conflict.

112. Dr. Imanishi-Kari testified that she did not recall putting together
pages of data specifically for this meeting. Tr. 6477. However, that does
not mean that these pages purported to have been compiled in March 1985. She
has long maintained that she organized data from counter tapes onto pages as
much as several months later, so that these pages could have been produced in
late 1985 or early 1986 apart from the meeting. R. Br. 21-22; Ex. H120, at

113. It is in recognition of this problem that ORI developed the bizarre
"Bad BET-Good BET story" to explain why Dr. Imanishi-Kari would fabricate

114. ORI did not claim that I-1:119 might have been compiled before August
1984 (which would have been highly unlikely since BET-1 was not in the
laboratory then). As to I-1:7-9, there was little conceivable reason to
misrepresent the dates on these pages, which contain unpublished preliminary
assays testing idiotype reagents. Of course, compiling pages in 1985
containing data collected in 1984 was not shown to be incompatible with
Dr. Imanishi-Kari's normal practices.

115. Another error in the Cell paper which the authors acknowledged in a
correction letter was a statement in the text related to Table 2 about the
non-μ antibodies: "The remaining 119 clones produced other Ig heavy chain
isotypes, the majority being γ2b (data not shown)." Cell paper 250 (emphasis
added). The correction indicated that the data on isotype were collected in
different experiments, not those relating to Table 2, and that it showed only
that most were γ, not necessarily γ2b. Ex. H2. The NIH Scientific Panel
considered the error in referring to isotyping done on a different set of
fusions and felt that it should be clarified, which was done in the
correction. Tr. 1335-36 (Davie). Although ORI proposed FFCLs asserting that
this isotyping claim is "fabricated," no such charge was contained in the
charge letter. See ORI FFCLs X.C.; Charge letter passim. For that reason, we
do not decide the issue. However, we note that substantial evidence in the
record suggests that the error resulted from miscommunication, apparently
confusing data on the most common isotype of Table 3 hybridomas and applying
those results to the number of non-μ hybridomas from the Table 2 fusion. Cf.
Tr. 2808 (Wortis). Dr. Weaver indicated that he may have added the sentence
without seeing the data based on his understanding of what Dr. Imanishi-Kari
told him. Tr. 2408-09.

116. As noted above, one of the obvious careless errors in the paper
occurs in Table 2. Although a total of only 1 idiotype-positive is reported
for the normals, the results for idiotype positive NIP-binders show one kappa
and two lambda positive, which would make a total of 3. A footnote indicates
that the kappa-NIP binder is not idiotype positive, which still leaves unclear
whether one or two normal hybridomas were idiotype positive.

117. The cut-off was not specified in Table 2 but was clarified in the
correction as being set at 1000, using negative controls which "showed fewer
than 600 cpm" so that a cut-off "at about 2-fold over background for safety"
was used. Ex. H3.

118. Without the counts for the negative results, no complete graph can be
made to determine the distribution. It should be observed, however, that 55%
of the results were omitted and presumably below the 1000 cpm cut-off. Cf.
Tr. 296 (Dahlberg).

119. More specifically, her testimony was that her general practice was to
use two to two and one-half times the background as her cut-off, but not lower
than 1000 cpm. Tr. 4869, 5490.

120. ORI asserted that the cut-off level was so egregiously unacceptable
that Dr. O'Toole "noticed it immediately and it was one of the aspects of the
17 pages that caused her the greatest concern." ORI Br. 55; Tr. 920-23.
However, her early memorandum to Dr. Eisen does not assert that the 1000 cpm
cut-off was inappropriate, but rather that the relative sensitivity of the
assays might differ and cause some transgene to be missed. See Ex. H231.

121. The NIH Scientific Panel noted this difficulty in its report, finding
that other assays of Dr. Imanishi-Kari's which it reviewed were consistent
with her representation that 1000 cpm was chosen to significantly exceed the
usual background values of 200-500, but that, without the negative or
background counts, this experiment could not be evaluated. Ex. H262 (NIH
Scientific Panel Report at 4).

122. Several witnesses also testified that 1000 cpm appeared to be a
reasonable choice of cut-off for this assay, apart from the specific relation
of the cut-off to the background. Tr. 3117 (Marrack), 2825-26 (Capra).

123. The fact that more than half of the counts in the normal and
transgenic assays were apparently below 1000 cpm (judging by the sample
numbers that were skipped because the negative results were not recorded) also
suggests that using a 1000 cpm cut-off did not result in all background counts
being recorded as false positives.

124. ORI says this demonstration that the 1000 cpm cut-off was used in
good faith and was effective is irrelevant because the normal portion of the
assay was ultimately discarded. ORI Reply Br. 27-28. However, this response
ignores the fact that the normal results were discarded precisely because the
cut-off showed so high a rate of positives that the researchers undertook
definitive molecular testing that established the mix-up. Cf. I-1:133.

125. We do not agree with ORI's assertion that Dr. Reis's testimony was
that she did not know to which experiment the controls related. ORI Reply Br.
28, citing Tr. 2650. Language difficulties caused some confusion at the
hearing, but Dr. Reis stated clearly that she did not agree with the idea that
these controls were for the main fusion assay. Tr. 2530-32.

126. In addition, the ESDAs showed that a circle and arrow on the pages
which indicated that the controls went to the June 6th retest were added at a
different time than the June 6th retest data. Tr. 252-53 (Dahlberg); Ex.
H540; ORI Br. 58. This finding is not surprising since Dr. Imanishi-Kari
testified that she added the circle and arrow in the process of explaining to
Drs. Wortis, Woodland and Huber at the first meeting that these controls were
for the retest. Tr. 4858. At the least, this indicates that
Dr. Imanishi-Kari took the same position about these controls from the
beginning and not as a belated recollection long into the process.

127. The contrary results showing the presence of control data from R-1:34
on the ESDA of R-1:35 are visible to the naked eye despite the fact that Dr.
Dahlberg claimed not to have seen any control data information on the ESDA
after having "examined the rest of the original ESDA with a magnifying glass
and everything else that could be used to bring up faint images." Tr. 262.

128. This experiment is discussed in detail in a later section because the
normal data in Table 2 used to replace the normal mouse that was mistyped came
in part from these data.

129. The normal fusion was the other source of normal data for Table 2 in
lieu of the mistyped mouse. The transgenic fusions were not published.

130. In any case, there was no evidence presented that Dr. Imanishi-Kari
was directly involved in setting the cut-off for this particular experiment.

131. The first correction stated that the "data represent primary
hybridoma wells, not isolated clones." Ex. H2. The second correction stated
that Table 2 "summarizes a vast amount of serologic data on primary hybridoma
supernatants." Ex. H3.

132. The first correction stated that, while the number of clones per well
was not directly addressed in Table 2, it was possible to calculate from the
number of negative wells that "it was likely to be only one or a few." Ex.
H2. The second correction letter specified that the wells were
"pauci-clonal," with 20-40% with no growth. Ex. H3.

133. ORI pointed to no scientific evidence that any significant number of
normal hybridomas would be expected to produce idiotype-positive antibodies,
and our review of the data for many fusions in the record supports an
expectation of the very low frequency reported in Table 2.

134. Dr. O'Toole argued that it would have been possible for such a well
to contain a clone that was idiotype positive and transgene positive because
she believed that the anti-light chain reagent used to identify idiotype-
positive antibodies might be more sensitive than the BET-1 reagent, thus
picking up evidence of idiotype-positivity at a concentration of antibodies
insufficient for BET-1 to recognize the presence of the transgene. See,
e.g., Ex. H231, at 2. While this was an interesting scientific question which
was a focus of much discussion at earlier points in this matter (and an
example of the sort of challenge that the scientists first approached by Dr.
O'Toole considered a reasoned and serious scientific dialogue), it is
essentially irrelevant to any charge of fraud before us. No persuasive
evidence was presented that this theoretical difference in sensitivity existed
and, if it did, that Dr. Imanishi-Kari had any evidence of it at the relevant
time.

135. That this widespread ambiguity in terminology extended to Dr. O'Toole
casts doubt on her recollection that she was sensitive to this distinction
from the beginning. Certainly, she did not note any question about this on
the draft of the paper when she reviewed it, since no notation was made to
Table 2 or the discussion although the same terminology was used as in the
final paper. Ex. H227. At the hearing, Dr. O'Toole suggested that she was
sensitized to this point early in her time at the laboratory because
Dr. Imanishi-Kari told her to "represent the parent wells as clones in her
[Dr. O'Toole's] PC binder experiment, because `if you don't say they're
clones, that's [the readers'] problem if they assume they're clones.'" ORI
Br. 61, quoting Tr. 874 (O'Toole). We do not find it credible that this
particular incident occurred as described. Certainly, Dr. Imanishi-Kari
clearly sought to press Dr. O'Toole to move forward with her experiments and
to publish. It is undisputed that Dr. Imanishi-Kari (who had herself been
under publication pressure) was concerned that Dr. O'Toole's career would
never progress if Dr. O'Toole, having been a post-doctoral fellow then for
eight years, did not produce successful publications. See, e.g., Tr. 872-73,
885 (O'Toole). Dr. O'Toole had previously been unable to obtain grants
because reviewers felt she needed to show more progress before funding. Tr.
858-62 (O'Toole). Dr. Wortis, who had been Dr. O'Toole's advisor, was
troubled when she came to him with concerns about the "tenor of life" in
Dr. Imanishi-Kari's laboratory, testifying that this was the third time she
had made such complaints within a short time of beginning a post-doctoral
position and this was the fourth such position she had held without publishing
any work. Tr. 2789-90. Also, Dr. O'Toole vigorously insisted in her contacts
with all the administrators and scientists at Tufts and MIT that she was not
alleging research fraud. See, e.g., Tr. 963, 1003 (O'Toole), 1716-17
(Woodland), 2794, 2487, 2856 (Wortis). Since the question about the "wells v.
clones" was raised by the time she met with Drs. Wortis and Huber, it is not
plausible that Dr. O'Toole would not have believed Dr. Imanishi-Kari guilty of
misconduct if Dr. O'Toole previously had been instructed by Dr. Imanishi-Kari
to lie about the very same point. At any rate, she would surely have
recounted this episode immediately to convince the others of the seriousness of
the problem she perceived with the Table 2 data, and there is no evidence that
she did so. Instead, she portrayed herself from the beginning as shocked by
the apparent inconsistency of the 17 pages with reported data, which indicates
she expected until then that the data were consistent with the paper as she
read it. Cf. Tr. 929 (O'Toole). We find it more likely in light of this
history, that, in retrospect, Dr. O'Toole interpreted encouragement and
pressure to publish from Dr. Imanishi-Kari in the most ominous light, i.e., as
meaning she should publish at any cost including dishonesty.

136. The testimony of Dr. Wortis supported this conclusion, since he
described the distinction between "clone" and "subclone" as a chronological
question, depending on whether one had previously cloned the cells. Asked if
the terms were sometimes used "interchangeably," he stated that since
"operationally you're doing pretty much the same thing, in laboratory jargon,
you might go back and forth." Tr. 2750.

137. ORI argued that the data were probably fabricated in 1988 because
they were presented to the NIH Scientific Panel then "on the second day of a
two-day interview, after they raised questions on the first day about the
frequency data . . . ." ORI Br. 62. In addition, ORI argued that the data
must not have existed in 1986 or they would have been shown to Dr. O'Toole
when she raised questions about the June 6th retest of the Table 2 primary
hybridomas. ORI Br. 23-24. However, ORI later conceded that I-1:124-128A
"could have been fabricated in 1986" to show to Drs. Wortis, Woodland and
Huber in relation to the June 6th retest but argued that would only show that
Dr. Imanishi-Kari used the fabricated data on another occasion to deceive
others. ORI Br. 62, n.43. However, the concession also means that ORI
recognized that Drs. Wortis, Woodland, and Huber said that they reviewed the
data (as discussed elsewhere) and that they found the data adequately
supported the paper, whether or not Dr. O'Toole saw or accepted the data then.

138. It is interesting to observe that ORI here fell into the same use of
the terminology "clones" to refer to the products of the primary fusion for
which it charged Dr. Imanishi-Kari with misconduct, thus illustrating that
these terms are easily misused even after attention is drawn to the potential
for confusion.

139. Dr. Imanishi-Kari confirmed in an earlier interview that these data
were on paper from an MIT spiral notebook, and that at some point the pages
were removed. Ex. H103, at 49-50.

140. In light of such changes and the passage of time, it is not
particularly surprising that the participants at the meeting might have
difficulty describing exactly the subcloning data they saw. Dr. Huber was
mainly worried about the normal mouse data which appeared to contradict the
paper and relieved when that proved to be a mistyping, but she also testified
that she "understood the argument about the subcloning and that this was
important, that we had seen data for subcloning." Tr. 1805. ORI argued that
Dr. Huber mistakenly referred to the subcloning data as related to identifying
idiotype-positive γ antibodies. ORI FFCL III.E.10, citing Tr. 1781. Even if
her testimony is read to misstate this detail about the Table 2 subcloning
data, at the time she felt plainly that what she was shown was
"radioimmunoassay data [that] was sufficient to show that these hybridomas had
been subcloned." Tr. 1782. We found credible the testimony that all three
left satisfied that the issues had been resolved. See Tr. 1782, 1805 (Huber).
They were unlikely to have been satisfied on this issue if the data which they
saw had not related to the Table 2 issue which was before them.

141. Our conclusion that the participants in the meeting are accurate in
their consistent recollection of having seen the June subcloning data and that
Dr. O'Toole is mistaken in denying that this occurred is partly based on the
fact that Dr. O'Toole was not present the first time that Drs. Wortis, Huber,
and Woodland met with Dr. Imanishi-Kari. Dr. O'Toole's repeated assertions
that not only were the data not shown to anyone else but instead
Dr. Imanishi-Kari specifically announced to Dr. O'Toole that no such
subcloning was done are also undercut by a review of Dr. O'Toole's prior
memoranda on this subject. Dr. O'Toole wrote that, if one is not persuaded of
the inauthenticity of these data by the forensics or the peculiar results
(which, as discussed elsewhere, we are not), then "one is left with
contradictory assertions by Dr. Imanishi-Kari and myself." Ex. H282, at 2.
Previously, Dr. O'Toole declares, written statements of the other meeting
participants that they had seen the disputed data were accepted as accurate,
but "as I have told Drs. Kimes and Hadley [both then working with OSI, ORI's
predecessor], there is a tape recording of [the meeting participants']
assertions to the contrary." Id. Since no such tape recording was ever
produced to us and its absence was never explained, we can reasonably infer
that it either did not exist or did not support ORI's allegations. We
therefore give more credence to the mutually-corroborative testimony of the
other participants than to Dr. O'Toole's contradictory assertions.

142. Dr. Imanishi-Kari testified that at the time she considered this to
be a possible artifact of the gene insertion process but that since then the
phenomenon has been studied. Tr. 4876. In 1993, a scientific explanation
emerged, i.e., that sometimes hybridomas that were transgene negative
initially could later produce transgene-positive subclones because the
endogenous chromosomes were lost, permitting the expression of the previously
"silent" transgene. Tr. 4876-77; R. Br. 55, n.67. ORI quarreled with this
theory because ORI considered the idea incompatible with the observations of
many immunologists that chromosomes are lost, not gained, in subcloning. ORI
Br. 63. The new work does not suggest that new chromosomes are acquired but
rather that the loss of chromosomes may trigger expression of remaining

143. Dr. Imanishi-Kari also pointed to another aspect of the June
subcloning data, i.e., that S444 (subclone 8) tested as strongly BET-1
positive (7300 cpm) but completely negative on the anti-μ assay (310 cpm),
which seems impossible since BET-1 tests for a subset of μ antibodies. Again,
this kind of peculiarity or error seems strange in data that is allegedly
fabricated to bolster the Table 2 claims after-the-fact (when
Dr. Imanishi-Kari should certainly have known what she was trying to prove).

144. For example, Dr. O'Toole's notebooks contain numerous examples of
shorthand entries for cpms (for example, using 5.0, 4.6, or 0.3), without any
underlying data being retained. See Exs. H18 and H19 passim.

145. Alternatively, the dashes which appear for the highlighted subclones
in the columns for the anti-μ probes could be negatives, indicating that
Dr. Imanishi-Kari simply was not interested even in relative values for these
subclones because there was no antibody production. This possibility is less
probable than the likelihood that the dashes indicate that these subclones
simply were not tested on the anti-μ probes, but if true, would mean that the
handwritten numbers are not a complete set of counts from a gamma counter--a
scenario which would also undercut ORI's premises.

146. A "Poisson distribution" refers generally to a distribution of values
about a mean in the form of a bell-shaped curve, in which the population mean
equals the population variance. Tr. 628-29 (Mosimann), 1481-82 (Gilbert),
1618-19 (McClure), 6087 (Speed); see also ORI Report 123, n.84. ORI
argued that a series of low background readings, such as the background
readings from a microtiter plate in a fusion experiment, will be distributed
around multiple means and thus follow "mixtures" of Poisson distributions,
essentially a combination of a varying number of individual Poisson curves.
ORI Br. 72-73; ORI Report, App. B, at B-7; Tr. 738-740 (Mosimann), 837
(Barron), 1482-83, 1539-42 (Gilbert).
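The mixture idea described in this footnote can be illustrated with a short simulation (a sketch only; the component means of 270, 330, and 410 cpm echo the background range mentioned in footnote 149 and are otherwise assumptions). A single Poisson source has variance approximately equal to its mean, while counts pooled from several sources with different means are more dispersed than any single Poisson curve would predict:

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

def poisson_sample(lam):
    """Draw one Poisson(lam) value via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# A single Poisson source: mean and variance should be nearly equal.
single = [poisson_sample(300) for _ in range(5000)]

# A "mixture": counts pooled from sources with several different means,
# as ORI posited for background readings distributed around multiple means.
means = [270, 330, 410]  # hypothetical component backgrounds
mixture = [poisson_sample(random.choice(means)) for _ in range(5000)]

# For the single source, variance tracks the mean; for the mixture, the
# pooled variance exceeds the pooled mean (overdispersion).
single_mean = statistics.mean(single)
single_var = statistics.variance(single)
mixture_mean = statistics.mean(mixture)
mixture_var = statistics.variance(mixture)
```

The overdispersion of the pooled counts is why a combination of several Poisson curves, rather than a single one, would be needed to model backgrounds arising from multiple sources.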

147. ORI did not find that any of the analyzed low counts fit a single
Poisson distribution.

148. "Standard deviation" (σ) measures the degree of dispersion of the
values in a sample from the mean of those values. Standard deviation is
calculated by dividing (a) the sum of the squares of the difference of each
value in the sample from the mean by (b) the sample size minus 1. This yields
the "sample variance"; the standard deviation is the square root of the sample
variance. In a Poisson distribution, the population mean equals the
population variance, so the standard deviation is the square root of the mean
of the values. It is thus possible to estimate what the dispersion of
multiple readings of a radioactive source around the mean of those readings
would be, where only one reading is obtained, by treating that reading as the
mean and taking its square root. Tr. 622-30 (Mosimann).
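The arithmetic described in this footnote can be sketched in a few lines (the readings are hypothetical values chosen only to illustrate the calculation, not data from the record):

```python
import math
import statistics

# Hypothetical repeated one-minute readings of the same source.
readings = [1012, 980, 1041, 995, 1023]

mean = statistics.mean(readings)
# Sample variance: sum of squared deviations from the mean,
# divided by the sample size minus 1.
sample_var = sum((x - mean) ** 2 for x in readings) / (len(readings) - 1)
# Standard deviation is the square root of the sample variance.
sample_sd = math.sqrt(sample_var)

# Poisson shortcut from the footnote: with only one reading available,
# treat that reading as the mean and estimate the spread of repeated
# readings as its square root.
single_reading = 1000
poisson_sd_estimate = math.sqrt(single_reading)  # about 31.6
```

The hand-rolled computation matches the library routines (`statistics.variance` and `statistics.stdev` use the same n − 1 denominator).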

149. Perhaps because ORI had not originally offered Dr. Gilbert as a
witness who would address the Poisson distribution, the testimony ORI elicited
from Dr. Gilbert was general. He made four major points: (1) the decay of
radioactive atoms is an unpredictable, random event; (2) such randomness is
why statistical analyses have been used to detect fraud (because a fabricator
cannot reproduce random numbers accurately); (3) one would "expect" a certain
"spread" of data among background counts in an assay since background data are
"in principle the same empty tube . . . being counted over and over again;"
and (4) one might expect the final data to look like the sum of "several"
Poisson distributions if the experiment had several backgrounds. Tr. 1479-83,
1520-24, 1537-44. These points did not directly address our concerns here.
First, he described the randomness of radioactive decay as meaning that, if
one gets a count of 100, for example, in one minute, then counts again in the
next minute, one will not get the same number. Tr. 1480. Similarly, the
Beckman 300 manual states that, because radioactive decay is a random process,
it is unlikely that two successive measurements of the same sample would give
identical readings. Ex. H526, at 31. Such randomness, however, would affect
the order in which various counts are generated; ORI did not find
non-randomness in the order of the counts, but focused on the frequency of the
counts. The Poisson distribution may indicate some "spread" of data, as Dr.
Gilbert suggested, but also indicates that the mean and other values around
the mean will be generated more frequently than other values. Second, he did
not state how he knew that the randomness in radioactive decay has been used
to detect fraud elsewhere, nor specify how particular statistical methods were
applied there. Third, while Dr. Gilbert's testimony indicated that it did not
matter that the counts here were from multiple sources because background is
in principle the same empty tube being counted over and over again, he did not
explain how that principle would apply to counts under 600, which may or may
not be background. In fact, the backgrounds listed for the three assays at
issue here (two for each) were as low as 270, with the highest being 410, so
Dr. Mosimann's "low counts" could encompass values that are not background.
Moreover, the counts here are presented as counts of tubes which each may have
contributing radioactive components not only from the tube, but from the well
from a microtiter plate put into the tube, anything sticking to the well, and
any contamination in the tube. While Dr. Gilbert recognized that there might
be several backgrounds due to different backgrounds in different tubes, he did
not address possible different contributing factors within a tube and how this
might affect the outcome. We also note that he first indicated one might
expect "several" mixtures, then amended this to "four or five" mixtures, then
amended this to "five or six mixtures." Tr. 1483, 1540, 1546. Yet, some of
the control sets fit to mixtures of seven or nine. Thus, we cannot accept his
testimony as establishing a scientific basis for how many mixtures should be
expected in the particular assays at issue here, even assuming such mixtures
should apply.

150. In order to determine whether given data fit Poisson mixture models,
ORI employed the chi-square goodness-of-fit test, a widely accepted technique
for measuring the difference between observed values (the sets of data) and
expected values (the Poisson models). The purpose of tests such as the chi-
square test is to show how likely it is that differences between observed data
and a model for such data can be explained on the basis of chance variation,
if the model is true. Tr. 754 (Mosimann), 6004-06 (Speed). The test involves
deriving a chi-square statistic based on the differences between the observed
and expected values and the degrees of freedom, a factor of the size of the
sample. The chi-square statistic is then compared with standard tables to
determine the chi-square probability value for a given data set at given
degrees of freedom. Tr. 754-58 (Mosimann); Exs. H413, H414, H415. To
determine whether or not a set of data fit a Poisson mixture model, ORI
employed a probability cut-off of .05, or 5%. In general, this means that if
the chi-square probability value obtained from comparing the data and the
model is less than .05, then, assuming the model is true, there is less than a
5% probability that differences as large as those observed between the data
and the model would arise from chance variation alone. Tr. 754-58 (Mosimann),
839-42 (Barron), 2311 (Speed). In
those cases, ORI concluded that the particular data set did not fit the
Poisson mixture model to which it was compared. Exs. H413, H414, H415.
Conversely, where the chi-square probability value for the data is more than
.05, or 5%, ORI concluded that the data fit the Poisson model. Exs. H413,
H414, H415.
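The mechanics described in this footnote can be sketched as follows (the observed and expected counts are hypothetical, and the .05 critical values are the standard tabled ones; this is an illustration of the test, not a reconstruction of ORI's analysis):

```python
# Chi-square goodness-of-fit: compare observed counts against the counts
# a candidate model would predict.
observed = [18, 25, 32, 15, 10]          # hypothetical binned counts
expected = [20.0, 24.0, 30.0, 16.0, 10.0]  # hypothetical model predictions

# The chi-square statistic sums the squared differences between observed
# and expected values, each scaled by the expected value.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom: bins minus 1 here (minus one more for each model
# parameter estimated from the data, if the model were fitted).
df = len(observed) - 1

# Standard tabled critical values at the .05 level for df = 1..5.
critical_05 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}

# If chi_sq exceeds the critical value, the probability value falls below
# .05 and, under the approach described in the footnote, the data would be
# held not to fit the model.
fits = chi_sq <= critical_05[df]
```

Comparing the statistic against a tabled critical value at a chosen degrees of freedom is equivalent to the table lookup of probability values that the footnote describes.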

151. Dr. Speed also disagreed that the statistical authority he cited for
the rule of 5 supported ORI's use of minimum expected values of less than 5 in
this case. He stated that this authority supported applying the "80%" variant
only to goodness-of-fit tests involving unimodal distributions, as opposed to
the multi-modal Poisson mixture distributions employed here. Tr. 2941-43.

152. ORI attempted to submit evidence that it had used statistical
analyses in scientific misconduct cases where a researcher admitted fraud
based on ORI's findings. We ruled that these cases are irrelevant to the
question of whether the particular analyses here are reliable to establish
fabrication where it is contested. Nothing in our decision suggests that ORI
may not continue to use such analyses in its investigations, however.

153. One of the control data sets was the counts from the tape in the left
column of the June subcloning data (the kappa/lambda probe). This set did not
fit the model, but Dr. Mosimann said that this was "probably due to the fact
that in the rounding protocol here, counts less than 1000 are rounded to the
nearest 10's." Tr. 791. He did not explain what this meant, but we note that
the handwritten results from the June subcloning include a higher percentage
of counts less than 1000 than this tape, and most of them were rounded to the

154. The Secret Service made an affirmative finding that both the blue and
the black pen ink used in writing on these pages were available on the
purported dates. Ex. H503, at 1. Nevertheless, both ORI and the Secret
Service witnesses concluded that the pages were "not produced at the time
purported." ORI Br. 64, citing Tr. 3236 (Hargett) and 4016 (Stewart).

155. The pages referenced (I-2:29, 34-35, 37-38, and 42-44) are dated June
3-28, 1985. Mr. Stewart testified that the ink from these pages matched among
themselves but did not match the ink from I-1:124-128. Tr. 3779; Exs. H503,
H533, H551.

156. For example, if the counter on which they were produced was acquired
only after 1985, or was of a kind available only at Tufts, to which
Dr. Imanishi-Kari had moved in July 1986, that would have strongly indicated
intentional fabrication after publication of the paper.

157. By 1988, she would have already moved to Tufts. On the one hand, had
she been fabricating these experiments, she might have recognized that she
needed to use counter tapes from her time at MIT, since those at Tufts might
noticeably differ. On the other hand, it seems less likely that tapes
supposedly dating from 1981-82 would be "lying about" after she had
reorganized her data for the move. Cf. ORI Reply Br. 20; Tr. 5126.

158. Nor was there any evidence presented establishing that fusions were
performed in her laboratory as early as 1981-82 (the time frame for which the
Secret Service claimed matches for some of her green tapes) that would have
yielded counter tapes of this kind. Dr. Imanishi-Kari indicated that in her
years at MIT before 1984 she was mostly engaged in molecular work that did not
require use of the gamma counters. Ex. H103, at 33.

159. As discussed later, some Maplethorpe tapes were alleged to match over
a period of almost 21 months. Exs. H502 and H503. As Dr. Imanishi-Kari noted
in her brief, these tapes could not have been produced with a single ribbon,
and yet they all match in TLC ink analysis. R. Br. 25 and n.28.

160. While ORI questioned the idea that substitute printers were brought
in fairly often when printers broke down, ORI devoted a great deal of hearing
time to defending the possibility that printers could be switched between
entirely different types of counters: i.e., between a Packard beta and a
Beckman gamma. This claim became important because ORI relied on the match
with Maplethorpe tapes to suggest that the June subcloning tapes not only were
not produced in the month purported (i.e., June 1985), but came from a period
too distant to have been simply accidentally misdated, and in fact one before
the transgenic mice existed. ORI Br. 69. However, the
Maplethorpe tapes were produced on a Packard beta counter while the I-1 tapes
were produced on a Beckman gamma counter. Yet if the tapes were merely
produced by printers with similar fonts hooked up to different counters, there
would be no reason to attribute any significance to whether their ribbons had
matching ink formulas. Hence, it was important that the same printer
generated all the tapes. The evidence in the record on the compatibility of
the printers in switching between different makes of counter machines (as
opposed to between the two Packards in the laboratory) was conflicting. A
Beckman representative suggested that the Beckman 7000T was probably
compatible in the mid-1980's, but did not know if other machines later were.
Tr. 5936, 5940-41 (Dahlgren). However, the persons at MIT who were
responsible for maintaining the counters seem to have believed (rightly or
wrongly) that such inter-make switching was problematic (unlike switching
printers between counters of the same make), so it is not likely that they were
interchanged even if they could have been. See Tr. 4604, 4607-08 (Igras).

161. The reliance on the idea that only green tapes were relevant was
based on Mr. Stewart's assumption that once the color of tape was changed one
would expect the new color to remain in place for a substantial portion of the
succeeding pages. Tr. 3647. However, this assumption is inconsistent with
testimony that partial rolls and left-over ends would sometimes be used, which
might result in short runs of a different color. The brevity of the run might
be exaggerated in the existing record by the fact that most of the tapes were
not retained in any case. Eliminating from consideration yellow tapes from a
wider time frame may have resulted in overlooking a match in ink formula to a
date nearer in time to dates consistent with Dr. Imanishi-Kari's explanation
of her experiments.

162. Printer 1 would have been previously used to generate one set of
Maplethorpe, Pasternak and Reilly tapes from 1982-1984 (which match one of the
green tapes from the January fusion tapes); Printer 2 was previously used to
generate the Maplethorpe tapes found to be from 1981-82 (which match the June
subcloning tapes and the other January fusion green tapes).

163. The earlier green tapes could have been produced on either the same
printer or one with the same font, using, over the years, various ribbons each
with one of the two common ink formulas that recurred throughout the time
period. As noted, it is highly unlikely that the matches among the various
earlier tapes represent single ribbons over such long time frames, even
assuming that the substitute printer was used only sporadically.

164. We use the term "hybridoma" without further qualification in this
section for simplicity and to parallel the usage in the Cell paper.

165. The data at R-1:76-80 contained results of tests on supernatants of
108 normal spleen hybridomas and six normal lymph node hybridomas, the data at
I-1:106 contained results on 36 normal spleen hybridomas, and the data at I-
1:107 contained results on 94 normal lymph node hybridomas, for a total of 144
normal spleen and 100 normal lymph node hybridomas.

166. Another possibility is that the original entry was simply erroneous,
since Dr. Reis in other instances confused various Greek letters.

167. ORI pointed out that the control proteins were also changed (to
P8.56.7 from P8.86.9 and to P5.40.3(5) from an undiscernible notation). ORI
Br. 77. Beyond vague speculation that perhaps the original proteins were not
"appropriate" for guinea pig anti-17.2.25, ORI offered no explanation of why
it would benefit Dr. Imanishi-Kari to make such changes in the control protein
nomenclature other than to correct mistakes in the original writing. Tr.
364-66 (Dahlberg). No change was made to the control protein 17.2.25, which
clearly elicited the highest reaction. Dr. Imanishi-Kari's main point was
that the reaction to the 17.2.25 itself was most likely to be so
disproportionately strong if the coating reagent was anti-17.2.25.

168. Dr. Imanishi-Kari testified that for some purposes she used the
rabbit anti-124/40 intentionally to determine if the specific reactivity
occurring in endogenous idiotype-positive antibodies would resemble more
closely the pattern of 124/40 (which was encoded originally by a gene derived
from C57/BL6 normal mice) or the 17.2.25 (which was encoded by a gene from the
mice from which the transgene was derived). Dr. Imanishi-Kari's position that
the two reagents, while not identical, were very similar and could be
interchanged for at least some uses was supported by the testimony of several
witnesses. Tr. 2879-81, 2892-94 (Wortis), 3122-25, 3142-43 (Marrack). ORI
pointed to the work of Dr. Boersch-Supan, who testified that she explored and
published research (while in Dr. Imanishi-Kari's laboratory) showing that the
two reagents had molecular and serological differences. ORI Br. 79. However,
ORI ignored Dr. Boersch-Supan's testimony that they were very similar in the
idiotypic response and belonged to a highly homologous group of antibodies.
Tr. 1833-36. It is important to realize that the experiment at issue served
only to demonstrate the absence of idiotype-positivity in normal hybridomas
rather than to study particular patterns of idiotypic determinants. ORI did
not demonstrate that rabbit anti-124/40 would have been significantly
different for that purpose than the guinea pig anti-17.2.25. Absent such a
demonstration, no reason has been provided why Dr. Imanishi-Kari would attempt
to disguise the coating reagent.