DEPARTMENTAL APPEALS BOARD
Appellate Division
SUBJECT: Ohio Department of Human Services
Docket No. 89-45
Decision No. 1202
DATE: November 1, 1990
DECISION
The Ohio Department of Human Services (State) appealed a penalty disallowance imposed by the Office of Child Support Enforcement (OCSE) under section 403(h) of the Social Security Act (Act). Based on an audit of the State's child support enforcement program for fiscal year 1984 and a follow-up review for calendar year 1988, OCSE found that the State did not substantially comply with the requirements of the Act. OCSE proposed a one percent reduction of the amount otherwise payable to the State for Aid to Families with Dependent Children (AFDC) during the period January 1, 1988 through December 31, 1988 (a reduction estimated at $5,106,748).
For the reasons stated below, we uphold OCSE's decision to reduce the State's AFDC funding for calendar year 1988 by one percent. Specifically, we conclude that--
o OCSE properly applied its interpretation of the statutory term "substantial compliance" to the time period here;
o OCSE reasonably interpreted the statutory requirement for "substantial compliance" to mean that a state must be taking action to provide basic child support services (required under the Act) in at least 75 percent of the cases requiring those services;
o the statistical sampling evidence submitted here reliably shows that the State failed to achieve "substantial compliance;" and
o OCSE otherwise properly interpreted the statute and OCSE's implementing regulations.
Statutory and regulatory provisions
Each state that operates an AFDC program under Title IV-A of the Act is required to have a child support enforcement and paternity establishment program under Title IV-D of the Act. Section 402(a)(27) of the Act. The Title IV-D program has been in existence since July 1975. OCSE has the responsibility for auditing state Title IV-D programs, pursuant to section 452(a)(4) of the Act, and evaluating whether the actual operation of such programs complies with statutory and regulatory requirements. Under section 403(h) of the Act as enacted in 1975, a state was subject to a five percent reduction of its AFDC funds if the audit found that the state was not in full compliance.

OCSE began performing compliance audits after December 31, 1976. Congress, however, enacted a moratorium on imposing the five percent penalty and extended this moratorium several times. Thus, during the first eight years of the program's operation, no state actually had its AFDC funding reduced (although OCSE did perform annual audits).
On August 16, 1984, Congress adopted the Child Support Enforcement Amendments of 1984, section 9 of Public Law 98-378 (the 1984 Amendments). As amended, section 403(h)(1) of the Act provides that--

    if a State's program operated under Part D is found as a result of a review conducted under section 452(a)(4) not to have complied substantially with the requirements of such part for any quarter beginning after September 30, 1983, and the Secretary determines that the State's program is not complying substantially with such requirements . . ., the amounts otherwise payable to the State under this part [A] for such quarter and each subsequent quarter, prior to the first quarter throughout which the State program is found to be in substantial compliance with such requirements, shall be reduced . . . .

(Emphasis added)
The amended section then provides for graduated reductions, starting with a reduction of "not less than one nor more than two percent" and increasing to a maximum of five percent with each consecutive finding that a state is not complying substantially with Title IV-D requirements.
The 1984 Amendments provided for the continuation of compliance audits, which could in appropriate cases be scheduled as infrequently as once every three years. The Amendments further provided that a reduction could be suspended while the state was given an opportunity to bring itself into compliance through a corrective action plan approved by OCSE. Section 403(h)(2) of the Act, as amended. If a follow-up review of a state's performance during a corrective action period showed that the state had failed to achieve substantial compliance, a penalty would be applied for quarters ending after expiration of the corrective action period.
Section 9(c) of the 1984 Amendments provides that they "shall be effective on and after October 1, 1983."
OCSE proposed regulations implementing the Amendments on October 5, 1984, 49 Fed. Reg. 39488, and issued final regulations on October 1, 1985, 50 Fed. Reg. 40120. (We refer to these regulations as the "1985 regulations.") The 1985 regulations amended parts, but not all, of the audit regulations at 45 C.F.R. Part 305. Section 305.20(a), as amended by the 1985 regulations, provided that, for the fiscal year (FY) 1984 audit period, certain listed audit criteria (related primarily to administrative or fiscal matters) "must be met." This section also provided that the procedures required by nine audit criteria (related to basic services provided under a IV-D state plan) "must be used in 75 percent of the cases reviewed for each criterion . . . ." All these audit criteria are based on sections of 45 C.F.R. Part 305 which (with minor exceptions not relevant here) were originally published in 1976, with minor amendments in 1982. (We refer to these provisions, as amended in 1982, as the "existing regulations" since they were in effect during FY 1984.)
Thus, under the 1985 regulations, substantial compliance for FY 1984 audits was measured by audit criteria from the existing regulations, but a state had to be providing the required services in 75 percent of the cases requiring them. In follow-up reviews after a corrective action period, OCSE would examine only the audit criteria that the state had previously failed or had complied with only marginally (that is, in 75 to 80 percent of the cases reviewed for that criterion). 45 C.F.R. 305.10(b) and 305.99, as amended. 1/
Factual background
OCSE's audit of Ohio's IV-D program for FY 1984 (October 1, 1983 through September 30, 1984) resulted in a March 2, 1987 notice to the State that it had failed to comply substantially with the requirements of Title IV-D of the Act. State Exhibit (Ex.) C. OCSE found that the State had failed to meet the 75 percent standard for three audit criteria: "establishing paternity," "support obligations," and "state parent locator service." See 45 C.F.R. 305.24(c); 305.25(a); and 305.33(g). OCSE further found that the State had only marginally met two additional criteria: "enforcement of support obligations" and "individuals not otherwise eligible." See 45 C.F.R. 305.26(a), (b), and (e); 305.31(a), (b), and (c).
The State proposed a corrective action plan, which OCSE approved. After the one-year corrective action period, OCSE conducted a follow-up review of the State's performance for the period July 1, 1987 through June 30, 1988. OCSE found that the State had failed to achieve substantial compliance with any of the previously unmet audit criteria. This finding led to the reduction notice that is the subject of this appeal.
OCSE used statistical sampling techniques in conducting both the initial audit for FY 1984 (which OCSE referred to as a "program results audit" to distinguish it from a financial audit) and the follow-up review. The OCSE auditors examined only whether the State took some action to provide required child support services in the sample cases. The success of the State's efforts, while noted for statistical purposes, was not determinative as to whether the State was found to be in substantial compliance; the State received credit for a case so long as the State had taken some action consistent with the State's written procedures.
The issues
The State's major arguments presented here were in two general areas: (1) arguments related to the fact that OCSE applied the 1985 regulations to the 1984 audit; and (2) arguments related to the statistical validity of the sampling methods OCSE used to determine whether the State had met the 75 percent standard.
The State also argued in the alternative that the 1985 regulations did not provide for a reduction if a state maintained its compliance with marginally met criteria, as the State did here, and that OCSE had failed to conduct a "complete" audit, as required by the Act.
Analysis
I. The State's challenges to the 1985 regulations are without merit.
The State challenged the 1985 regulations that OCSE used in concluding that the State was not in substantial compliance. Specifically, the State argued that--
o the regulations are impermissibly retroactive under Bowen v. Georgetown University Hospital, 488 U.S. 204 (1988) (hereafter Georgetown), since OCSE lacked express statutory authorization to apply these regulations retroactively;
o the regulations have retroactive effect in violation of the Administrative Procedure Act (APA), which defines a "rule" as having "future effect" (see 5 U.S.C. 551(4) and Georgetown (Scalia, J., concurring));
o the 75 percent standard in the regulations had no empirical basis and therefore was established in an arbitrary and capricious manner under Maryland v. Mathews, 415 F. Supp. 1206 (D.D.C. 1976); and
o the regulations were invalid because they did not include a definition of "violations of a technical nature," based on section 403(h)(3), as amended.
OCSE disputed the State's position, but also pointed out that the Board is bound by applicable laws and regulations under 45 C.F.R. 16.14. The regulations at issue were "effective" on the date of final publication (October 1, 1985). However, section 305.20(a), which sets out the 75 percent standard for service-related audit criteria, states that it is to be applied "[f]or the fiscal year 1984 audit period." The preamble to the regulations confirmed that OCSE intended to apply this section to FY 1984 audits, based on the October 1, 1983 effective date of the 1984 Amendments. 50 Fed. Reg. at 40126, 40131-2, and 40138.
We are, of course, bound by the Department's regulations, even if invalid under a constitutional analysis, if those regulations are applicable. While some of the issues here clearly would be controlled by 45 C.F.R. 16.14, the State's arguments also raise interrelated questions of applicability. We do not need to sort out these issues precisely, however, since we conclude that all of the State's arguments concerning the regulations are completely without merit. Our reasons are:
o Section 403(h)(1) of the Act, as amended, requires reductions for states not found to be in substantial compliance in audits "for any quarter beginning after September 30, 1983," and Congress explicitly made the 1984 Amendments effective on October 1, 1983. The circumstances here are therefore distinguishable from those in Georgetown, where the agency published cost-limit rules for Medicare providers in 1984 and attempted to apply the rules to 1981 costs, in the absence of any statutory authority to do so. Here, the statute expressly made the change in the standard retroactive. 2/
o The effect of the 1985 regulations here is also significantly different from the effect of the cost-limit rules considered in Georgetown. There, Medicare providers were entitled to a specific level of reimbursement under the regulations in effect in 1981, and the 1984 rules would have retroactively reduced that level. Here, the AFDC funding reduction applies to periods after the 1985 regulations were published.
o The audit criteria at issue here were in the existing regulations, had been in effect without substantial change since 1976, and were based on IV-D state plan requirements. The 75 percent standard is more lenient than the standard in the existing regulations, which provided that the State must "meet" the criteria. Even if the State is correct that OCSE could not reasonably implement this by requiring action in 100 percent of the cases, the existing regulations clearly contemplated a compliance level greater than 75 percent. 3/
o More important, the 1985 regulations afforded the State a corrective action period. The State had notice of the 75 percent standard prior to this period, and over two years to adjust its administrative practices before the follow-up review period.
o The regulations here merely interpret the statutory term "substantial compliance." Obviously, the range of compliance levels OCSE could adopt is limited by this term, particularly when it is read together with section 403(h)(3) of the Act (which permits a finding of substantial compliance only when any noncompliance is of a technical nature). A level lower than 75 percent would have been subject to challenge as inconsistent with statutory intent.
o Even in the absence of the 1985 regulations, we would reject the State's position that it should be found to meet the substantial compliance standard. The record here supports a finding that the State did not achieve substantial compliance under any reasonable reading of that term. This Department clearly may retroactively adjudicate a state's entitlement to AFDC funds under the applicable statutory standard, without violating the APA (even as interpreted in the concurring opinion in Georgetown).
o Since the 75 percent standard reasonably interprets the statutory term "substantial compliance," the circumstances here are distinguishable from those considered in Maryland, where the court found that regulations setting "tolerance levels" for AFDC eligibility determination errors were not reasonably related to the purposes of the statute. Moreover, unlike the "tolerance levels" in Maryland, the 75 percent standard here had an empirical basis in past performance levels measured through OCSE's audits. While audit results from FYs 1980 and 1981 showed that some states were not yet achieving 75 percent levels, other states were achieving 100 percent levels at that time, and OCSE could reasonably expect all states to be achieving 75 percent levels by FY 1984. State Ex. M. 4/
o Finally, we reject the State's arguments based on section 403(h)(3) of the Act. That section permits OCSE to find substantial compliance only where any noncompliance is "of a technical nature not adversely affecting the performance of the child support program." OCSE implemented this provision through its regulations, determining that failure to meet the critical audit criteria in its regulations is not simply technical since the required activities are essential to an effective program. 50 Fed. Reg. at 40130. We find that interpretation to be reasonable as applied here since the State's failures would adversely affect program performance; the State took no action whatsoever to provide basic child support services in a significant number of cases.
Thus, we conclude that application of the 1985 regulations here was clearly proper, and that those regulations are consistent with the 1984 Amendments.
II. The record shows that the State did not achieve the 75 percent standard for all of the audit criteria.
In both the program results audit for FY 1984 and the follow-up review, OCSE used statistical sampling techniques to determine whether the State met the 75 percent standard for those audit criteria to which that standard applied. OCSE drew a random sample of the State's Title IV-D cases for each relevant time period. Each sample was "stratified" -- samples were drawn from case groups (strata) consisting of cases from one or more political subdivisions (counties) of the State. OCSE first examined each sample case to determine what action, if any, was required in the case (in other words, what audit criteria applied). For example, if the whereabouts of an absent parent who owed support was unknown, the case would be classified as a "locate" case, requiring review to see if the State took any action to locate the absent parent, as required by 45 C.F.R. 305.33(g). OCSE then examined the case files and other records to determine whether the State had, in fact, taken any required action during the relevant time period, finding either "action" or "no action" for each sample case reviewed. For example, in the follow-up review, OCSE found that the State took action in only 48 of the 100 cases which required action to locate the absent parent.
OCSE then used the sample findings to calculate an "efficiency rate" and an "efficiency range" for each criterion. The "efficiency rate" is the single most likely estimate of the percentage of cases requiring review under an audit criterion which were "action" cases. The "efficiency range" was to be equivalent to what is called the "95 percent confidence interval." A confidence interval is a statistician's calculation of the range of values within which the statistician can say with a specified degree of certainty (here, 95 percent) the true value occurs. See, e.g., State Ex. B, pp. 3-4; 29.
Under OCSE's audit procedures, a criterion was considered "unmet" if the "high range" of the "efficiency range" (also called the "upper limit" of the confidence interval) was less than 75 percent, and only "marginally met" if the "high range" was 75 to 80 percent. It is undisputed that, to determine the high range (upper limit) of the 95 percent confidence interval, the auditor first calculates the "standard error" associated with a particular sample, then multiplies that amount by 1.96, and then adds the product to the efficiency rate. By using the "high range" figure, OCSE was essentially assuming the risk associated with potential sampling error. See OCSE SAF, pp. 113, 438, 510-513; State Ex. B, pp. 28-29. In other words, not only could OCSE say with at least 95 percent certainty whether a state was meeting each criterion, it could also say that its approach erred on the side of passing a state where a complete review might well have identified a failure.
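The upper-limit arithmetic described above can be illustrated with a short sketch. The standard error formula below is the one for a simple random sample of a proportion, used here only for illustration; OCSE's actual calculations for its stratified, two-stage design were more complex.

```python
import math

def upper_limit_95(actions: int, reviewed: int) -> float:
    """Upper limit ("high range") of the 95 percent confidence
    interval: efficiency rate plus 1.96 times the standard error.
    Assumes simple random sampling, for illustration only."""
    rate = actions / reviewed                     # efficiency rate
    se = math.sqrt(rate * (1 - rate) / reviewed)  # standard error
    return rate + 1.96 * se                       # high range

# The follow-up review's "locate" figures: action in 48 of 100
# cases requiring it.  The high range is about 57.8 percent,
# well below the 75 percent standard.
print(round(upper_limit_95(48, 100) * 100, 1))
```

Because the criterion fails only when even this generous upper limit falls short of 75 percent, the sampling error is resolved in the state's favor.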
In the program results audit, OCSE examined all audit criteria listed in section 305.20(a) of the 1985 regulations. In the follow-up review, OCSE examined only those audit criteria which were either "unmet" or only "marginally met" in the program results audit.
The State did not attack the use of sampling in general as a basis for determining whether the State had met the audit criteria, nor did the State challenge OCSE's findings in specific sample cases. See State's brief, pp. 9-18. The State noted, however, that in past decisions on the use of sampling as a basis for disallowance, this Board has recognized that sampling can produce a valid result only if done "in accordance with the general rules and conventions statisticians have developed . . . ." State's brief, p. 9, quoting California Dept. of Social Services, DAB No. 816 (1986), pp. 4-5. Based on an affidavit and report from a statistical sampling expert, the State raised a number of issues regarding the sampling methods initially used by OCSE. These issues related to three areas: (A) the method OCSE used to calculate the efficiency rates and efficiency ranges for the audit criteria; (B) the sample size; and (C) OCSE's alleged failures to follow its own audit guides.
We discuss each of these issues below, and explain why we conclude that OCSE has established here with the requisite statistical validity and reliability that the State failed to achieve substantial compliance.
A. Recalculation of the efficiency rates and ranges is permitted and shows that the State did not meet the 75 percent standard.
The issues regarding how to validly determine the efficiency rates and ranges have evolved during Board proceedings. Thus, we provide a history of how the issues developed and what the remaining issues are, before explaining our analysis.
1. How the issues developed
With its initial brief, the State submitted an affidavit and report from its expert, attesting that OCSE auditors had erred by using a method for determining the efficiency rates and ranges which is appropriate for simple random sampling, but not appropriate for the more complicated sampling used here. State Ex. O. The State's expert described the sampling method OCSE used for both the program results audit and the follow-up review as two-stage sampling with stratification. He cited to a treatise on statistical sampling regarding this type of sampling method.
OCSE did not defend its original method for calculating the efficiency rates and ranges. Instead, OCSE recalculated these figures using two formulas (OCSE Methodology #1 and OCSE Methodology #2), which OCSE's statistical sampling expert said were based on the treatise cited by the State. OCSE Ex. 7, OCSE Supplemental Appeal File (SAF), pp. 108-110.
In reply, the State submitted a second affidavit and report from its expert. He attested that OCSE had not, in fact, properly applied the correct formulas. He stated his opinion that OCSE Methodology #1 is inappropriate because: "It makes no sense to include cases that were not applicable for review for a given criterion in the estimated efficiency rate and in the subsequent standard error calculation." State SAF, p. 333. He based this opinion on the underlying principle that "the universe, or population, of interest is the total number of cases applicable for review for a given criterion" and that, if cases are not in the target population, they should not be included in the sample on which the efficiency rate and the corresponding standard error are based. State SAF, p. 333 (emphasis in original). He further asserted that, although OCSE Methodology #2 partially recognizes this principle, it uses certain values which are inconsistent with this principle. Finally, he explained how these values should be replaced, offered alternative formulas for calculating the efficiency rates for the State, and provided the results of some of his calculations.
The Board then issued a "Preliminary Analysis and Order To Develop the Record." The Board noted that the State's expert appeared to be correct in describing the appropriate universe, since 45 C.F.R. 305.20(a) refers to a state using the procedures required by the audit criteria "in 75 percent of the cases reviewed for each criterion." The Board asked OCSE to comment on whether it disputed the State's expert opinion and, if so, on what basis.
In its subsequent submission, OCSE presented another affidavit and report from its expert. OCSE's expert did not deny that OCSE Methodology #1 was inappropriate, nor did he disagree with the principle that the appropriate universe for each criterion consists of the total number of cases requiring review for that criterion. He did, however, defend OCSE Methodology #2 on the ground that the formula used sufficiently recognizes this principle. He stated:
    The fact that OCSE, upon my advice, weighted the political subdivisions and strata according to the total number of cases rather than the number of cases representing the various criteria is not inconsistent with that principle because this decision rests on the reasonable assumption that there is a high positive correlation between the criteria weights and total weights. In other words, it is logical to assume -- as confirmed by the experience of the OCSE auditors -- that the larger the total number of cases, the larger the number of cases for the criteria covered, and vice versa.

OCSE SAF, p. 429.
OCSE's expert acknowledged that the alternative suggested by the State's expert -- estimating the number of cases applicable for review for each audit criterion per subdivision and stratum -- is also an appropriate statistical approach. OCSE's expert expressed the opinion that recalculations were not necessary since the State's expert had not presented any evidence that OCSE's assumption was unreasonable, nor had he done a complete analysis of his recommended approach. OCSE's expert said this was especially true since, as the Board had noted, the raw sample data here indicates so few required actions taken by the State that the question naturally arises whether adjustments to the calculations to correct alleged deficiencies would make a difference in the ultimate finding that the State was not in substantial compliance. OCSE SAF, pp. 430-431.
In spite of this position (and even though the Board's order did not require it), OCSE nonetheless again recalculated the relevant efficiency rates and efficiency ranges. OCSE's expert explained that the procedure used for this was called a "ratio estimation" procedure. At his suggestion, OCSE applied this procedure in three ways: (1) substituting the estimated criteria weights in Methodology #2 to demonstrate the reasonableness of the assumption upon which OCSE relied (OCSE Methodology #2A); (2) preparing scatter diagrams showing the relationship between the estimated criteria weights and the corresponding total-case weights (correlation analysis); and (3) using the ratio estimation technique as an independent methodology for calculating the efficiency rates and standard errors (OCSE Methodology #3). OCSE SAF, pp. 431-432.
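The general idea of a ratio estimation procedure can be sketched as follows. The stratum figures are hypothetical, and the formula shown is the textbook combined ratio estimator for a stratified sample, not OCSE's actual worksheet, which is in the record but not reproduced here.

```python
def ratio_estimate(strata):
    """Combined ratio estimator of the efficiency rate.  Each
    stratum is (N, n, x, y): total caseload N, sample size n,
    sampled cases applicable to the criterion x, and applicable
    cases where action was taken y.  The estimate is the ratio
    of the estimated statewide "action" cases to the estimated
    statewide applicable cases.  Hypothetical illustration only."""
    num = sum(N * y / n for N, n, x, y in strata)  # est. action cases
    den = sum(N * x / n for N, n, x, y in strata)  # est. applicable cases
    return num / den

# Two hypothetical strata: (N, n, x, y).
strata = [(5000, 100, 40, 22), (2000, 50, 30, 12)]
rate = ratio_estimate(strata)  # about 0.49
```

The point of the technique is that the number of cases applicable to a criterion in each stratum is itself estimated from the sample, so the denominator, like the numerator, is expanded by the stratum weights N/n.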
The high range figures produced by both OCSE Methodology #2A and OCSE Methodology #3 are substantially less than 75 percent for two criteria ("establishing paternity" and "state parent locator service") out of the three criteria which OCSE had previously found the State failed to meet in both the program results audit and the follow-up review. For the third criterion ("support obligations"), these methodologies result in a finding that the State marginally met the "support obligations" criterion in the program results audit, but failed to meet this criterion in the follow-up review. OCSE SAF, pp. 510-513. Under the regulations, the State was required to maintain (during a corrective action period) its compliance with any criterion which was only marginally met in a program results audit. Thus, under any of OCSE's methodologies, OCSE would find that the State failed to meet all three criteria.
The State asked for and received an opportunity to respond. The State then submitted a third affidavit from its expert, together with his more detailed report. State Ex. T. This affidavit acknowledged that OCSE's new methodologies partially corrected for the previous errors alleged by the State's expert. The State's expert asserted, however, that there were still problems which had not been corrected. His basic points were that--
o OCSE Methodology #2 is based on certain assumptions that are discredited by OCSE's own correlation analysis; 5/
o OCSE Methodology #2A correctly calculates the efficiency rate, but is incorrect because the formula used to calculate the efficiency ranges fails to take into account the fact that the actual total number of cases requiring review for each criterion is unknown and that a random estimate is being used instead;
o While OCSE Methodology #3 employs a ratio estimator found in most books on sampling, this ratio estimator is not directly applicable to the situation here;
o Existing ratio estimators could be modified to account for the particular problem here (the randomness of the estimates); and
o Taking into account the randomness of the estimates of the number of cases requiring review for a particular criterion would increase the standard error of the efficiency rate and therefore widen the efficiency range (confidence interval).
The State's expert explained that it was not an easy problem to determine the proper ratio estimators and that his efforts were "stymied due to lack of time." State SAF, p. 394. Thus, he presented no calculations to show that use of ratio estimators consistent with his analysis would result in the State being found to have met the 75 percent standard for any particular criterion.
The State asserted that OCSE had failed to meet its burden to establish, at the 95 percent confidence level, that the State had failed to achieve substantial compliance with the child support regulations. The State based this assertion on its expert's opinion that--
    Until the correct analysis is put forward, . . . no one can conclude that Ohio is not in compliance. One may perform 1,000 incorrect analyses that arrive at a common conclusion, but that conclusion is not valid until a correct analysis is performed.

State SAF, p. 396.
The State also renewed its objections to the Board permitting OCSE to recalculate the efficiency rates and ranges. According to the State, OCSE is essentially attempting to issue new and different audit findings in violation of fundamental fairness to the State. The State argued that OCSE should be bound by the findings of the original audit reports, including the calculations that those findings were based on. "Since those calculations are mutually acknowledged to be statistically inappropriate and incorrect," the State argued, "the penalty disallowance should be reversed." State's reply brief, p. 5.
2. Analysis
At the outset, we reject the State's position that we must reverse the disallowance because of defects in how OCSE originally calculated the efficiency rates and efficiency ranges. The issue here is properly viewed as an evidentiary question: whether the sample findings are reliable evidence that the State did not meet the 75 percent standard for any of the three criteria. The State did not dispute OCSE's findings about the sample cases. 6/ To evaluate this evidence, we must determine what inferences can validly be drawn from the sample case findings, in accordance with principles of statistical sampling. Given the sample case findings, however, this Board would be abdicating our responsibility under the statute if we awarded the State unreduced AFDC funding solely because OCSE's original calculations were defective, without proceeding to address the question of what inferences can validly be drawn from the sample results. The State has had ample opportunity to address this question. 7/
The issue here focused on the method used for calculating the 95 percent confidence interval, since OCSE had chosen to adopt that degree of certainty for its findings. 8/ Based on our examination of the record as a whole, we conclude that OCSE has shown, with the requisite degree of certainty, that the State did not meet the audit criteria at issue here.
First, we find that OCSE Methodologies #2 and #3 are valid methods, of a type which would ordinarily be relied on by statisticians. More important, as we discuss in detail later, we find that the limited modifications ultimately proposed by the State's expert would not result in a finding of substantial compliance.
OCSE's expert was well-qualified and persuasively attested to the validity of the methods OCSE used, providing supporting analyses. While the State's expert was also well-qualified, we find his third affidavit to be inadequate to rebut OCSE's expert's opinion for the following reasons:
o The State's expert described the assumptions underlying OCSE Methodology #2 as requiring a constant relationship between total number of cases requiring review for each criterion and the total caseload (which he called a "deterministic" relationship). See note 5 above. OCSE's expert, however, had said the underlying assumption which rendered the method valid was merely that there was a high positive correlation between the two numbers. The State's expert did not even acknowledge this difference of opinion; thus, he provided no supporting analysis, or citation to a statistical treatise, which would give greater credence to his opinion about what was required.
o Contrary to what the State's expert implied, the particular assumption described by OCSE's expert is not discredited by the correlation analysis. While the correlation is not constant, OCSE's expert attested that only a high positive correlation is required. The analysis shows that the correlation coefficient for the criteria at issue here was from +.9275 to +.9869 (where +1 would represent an exact positive correlation). OCSE SAF, pp. 433, 504-509. In other words, the correlation does show that it is logical to assume that the larger the total number of cases, the larger the number of cases requiring review for each criterion. 9/
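The "high positive correlation" assumption discussed above can be illustrated with a short calculation. The subdivision figures below are hypothetical (they are not from the record); the sketch only shows the kind of Pearson correlation check that underlies coefficients such as the +.9275 to +.9869 range cited:

```python
# Illustrative sketch with HYPOTHETICAL subdivision figures: the kind
# of Pearson correlation check relating total caseload per political
# subdivision to the number of cases requiring review there.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: total caseload vs. cases requiring review.
caseload = [1200, 850, 430, 2100, 640]
requiring_review = [310, 200, 115, 560, 150]
r = pearson_r(caseload, requiring_review)
# r approaches +1 when larger caseloads go with more cases requiring
# review -- the correlation need not be constant (deterministic) for
# that relationship to hold.
```

A coefficient near +1, as here, supports the inference the Board draws: larger total caseloads go with larger numbers of cases requiring review, even without an exact linear relationship.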
o The State's expert did not deny the assertion by OCSE's expert that the ratio estimation technique used in OCSE Methodology #3 is substantially the same as the estimation procedure proposed by the State's expert in his second affidavit. OCSE SAF, p. 431; State SAF, pp. 333-34. This undercuts the statement in the State's expert's third affidavit that OCSE should have used a more sophisticated formula, even though that formula cannot readily be found in statistical sampling texts.
Even if we were to give more weight to the opinion of the State's expert than to the opinion of OCSE's expert, however, we would not accept his opinion stating the legal conclusion that "[u]ntil the correct analysis is put forward, . . . no one can conclude that Ohio is not in compliance." State SAF, p. 396. We find that the further modifications he proposed would not show that the State was in substantial compliance.
The State's expert did not himself perform any calculations using what he called the "correct analysis." Thus, his affidavit at most establishes that such an analysis would "widen" the confidence interval for each criterion; he did not express the opinion that any such widening would be substantial or would have a reasonable possibility of resulting in findings that the 75 percent standard was met for any criterion, nor can we infer from his affidavit that he held this opinion. 10/
Moreover, other evidence in the record establishes to the contrary that such widening would not be sufficient to result in a finding that the State met the audit criteria in the follow-up review--
o The State's expert acknowledged that the efficiency rates were correctly calculated under OCSE Methodology #3; thus, we can focus on the upper ends of the efficiency ranges (the upper limits of the 95 percent confidence intervals) calculated using that method. OCSE's calculations showed the following as the upper limits based on sample results of the follow-up review: 45.6 percent of the cases requiring review for "establishing paternity;" 56.1 percent for "support obligations;" and 59.0 percent for "state parent locator service." OCSE SAF, p. 512. This means that, for the State to achieve the 75 percent standard for each criterion, the confidence intervals would have to increase from the amounts calculated using OCSE Methodology #3 by 29.4 percentage points for "establishing paternity," 18.9 for "support obligations," and 16.0 for "state parent locator service."
o The increases required to widen the confidence intervals sufficiently would be many times the standard errors OCSE calculated using its Methodology #3 (admittedly based on a commonly used ratio estimation technique). For example, the standard error for "establishing paternity" using Methodology #3 is .0221 and would have to increase by .15 to widen the confidence interval enough to make a difference here. (As explained above, to determine the upper limit you add to the efficiency rate -- here 41.3 percent -- 1.96 times the standard error.) OCSE SAF, p. 512. Since the State's expert acknowledged that OCSE had moved in the right direction in its Methodology #3, it is logical to conclude that the further modifications to the ratio estimator the State's expert said were necessary would not result in increasing a standard error by over six times the amount of the standard error calculated using Methodology #3. 11/
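The upper-limit arithmetic described above can be sketched numerically. The figures (41.3 percent efficiency rate, .0221 standard error) are those stated in the record for "establishing paternity"; the code is only an illustration of the calculation, not OCSE's actual computation:

```python
# Sketch of the upper-limit arithmetic: the upper end of the 95 percent
# confidence interval is the efficiency rate plus 1.96 times the
# standard error. Figures are those in the record for "establishing
# paternity" under OCSE Methodology #3.
def upper_limit(efficiency_rate, standard_error, z=1.96):
    """Upper bound of the 95 percent confidence interval."""
    return efficiency_rate + z * standard_error

rate = 0.413   # 41.3 percent efficiency rate
se = 0.0221    # standard error under Methodology #3

upper = upper_limit(rate, se)   # about .456, i.e., the 45.6 percent cited
shortfall = 0.75 - upper        # about .294: the widening needed to reach 75%

# To close that gap, the standard error itself would have to grow by
# shortfall / 1.96 -- roughly .15, over six times the .0221 actually
# calculated.
required_se_increase = shortfall / 1.96
```

This reproduces the Board's point: the confidence interval would have to widen by roughly 29.4 percentage points, an increase in the standard error of more than six times the one calculated.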
Based on our analysis, we conclude that the record supports a finding, with the 95 percent degree of confidence, that the State did not meet the 75 percent standard, for the three criteria at issue here, in the follow-up review. 12/
B. OCSE's failure to justify the sample size does not provide a basis for reversing the disallowance.
Based on its expert's affidavit, the State alleged that justification of sample size is routinely required prior to any auditing procedure, but that the OCSE auditors had provided no such justification for either the program results audit or the follow-up review. The State asserted:
This lack of justification for the sample size is problematic for two reasons: first, it contravenes accepted auditing procedure when a sampling is used, and second, it makes it difficult if not impossible for the State to evaluate this aspect of OCSE's methodology.
State's brief, p. 11.
OCSE's expert responded to this as follows:
Provision of a justification for the sample size is frequently provided in audit situations, but it is by no means required or essential. By determining sample size judgmentally, particularly on the basis of previous audit experience and time and cost constraints, a smaller sample will result in larger standard errors, which . . . tends to favor the State. As long as the estimates and their corresponding standard errors are properly computed, this criticism forms no basis for invalidating the results.
OCSE SAF, p. 111.
The reply by the State's expert was merely that the fact that OCSE did not have a justifiable sample size correctly in mind at the outset showed that the audit was "ill-planned." State SAF, p. 336.
Thus, the State's evidence shows at most that OCSE's lack of a statement justifying the sample size was inconsistent with accepted practice and not proper planning -- points disputed by OCSE's expert, who had more experience with government auditing. In any event, the State did not dispute OCSE's expert's assertion that any lack of proper planning with respect to sample size benefitted the State since it would increase the standard errors.
While the State said it was hampered in evaluating OCSE's methodology by not knowing OCSE's justification for the sample size, the State did not explain why this was so. The State clearly knew what sample size was actually drawn and knew the size of its own caseload. Moreover, in response to an inquiry from the State, OCSE explained that it had developed the sample size by referring to the tables in an appendix to a particular statistical sampling treatise. State Ex. L, p. 2. We fail to see why the State needed to have a written statement of precisely why OCSE used a particular sample size in order to evaluate the sample results.
We conclude that the State was provided ample information about the sample and ample opportunity to evaluate that information.
C. Any failure by OCSE to follow its audit guides does not provide a basis for reversing the disallowance.
The State's expert found that OCSE auditors did not follow their own audit guides for the period in question in every respect. The State took the position that "this failure to follow standardized audit procedures has irrevocably tainted the audit process because of the arbitrary manner in which these Audit Guides were disregarded." State's brief, p. 18. The State made separate allegations regarding the program results audit and the follow-up review, and we discuss these allegations separately below.
1. The Program Results Audit
According to the State, the Program Results Audit Guide called for the auditors to use an initial "probe sample" but to expand the probe sample if necessary to ensure that 100 cases were reviewed for each audit criterion (unless a passing score could not be achieved by raising the sampled number of cases to 100). Only 55 cases were reviewed for the "support obligations" criterion. The State's expert found:
If all additional 45 cases reviewed for the "Support" criterion would have been classified as "action" then the efficiency rate for that criterion would have been .76 and the efficiency range would have been (.68, .84). Thus it seems that additional samples should have been drawn for the "Support" criterion.
State Ex. O, Report, p. 2. 13/
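The expert's hypothetical can be reproduced with simple arithmetic. The counts (31 action cases of 55 reviewed, 45 cases added, all assumed to show action) are from the record; the efficiency range here is computed with an ordinary normal-approximation interval, which is an assumption about the expert's method, not a quotation of it:

```python
# Sketch of the State's expert's best-case hypothetical: expand the
# "support obligations" probe sample from 55 to 100 cases and assume
# action in ALL 45 added cases -- (31 + 45) / 100 = .76.
def efficiency_range(actions, n, z=1.96):
    """Rate and normal-approximation 95 percent interval (assumed method)."""
    rate = actions / n
    se = (rate * (1 - rate) / n) ** 0.5  # binomial standard error
    return rate, (rate - z * se, rate + z * se)

rate, (low, high) = efficiency_range(31 + 45, 100)
# rate = .76 and the interval rounds to about (.68, .84): even this
# best case only barely clears the 75 percent standard.
```

The calculation shows why the point mattered to the State, and also why the Board found expansion pointless: reaching .76 requires action in every one of the 45 added cases, which the auditors' actual findings (31 of 55) made highly improbable.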
OCSE responded that neither the statute nor the regulations requires that OCSE employ any particular methodology when conducting audits, and that OCSE's audit guidelines are just that--guidelines--not rigid rules of law. OCSE also pointed out that both the Comptroller General's standards for governmental audits and OCSE's guidelines emphasize the importance of exercising professional judgment appropriate to the circumstances under which the audits are being conducted. OCSE argued that the auditors' judgment not to expand the probe sample (made after consultation with OCSE audit division supervisory staff) was both consistent with the guide and reasonable.
We agree with OCSE that its auditors' judgment not to expand the probe sample was neither inconsistent with the guide nor arbitrary. As OCSE pointed out, its guide did not require expansion of the probe sample, but only advised an auditor that this is one alternative which should be "considered" under certain circumstances. OCSE's brief, pp. 31-32; State Ex. A, App. I, p. 11. The circumstances here indicated that the State would pass the "support obligations" criterion only if the probe sample was expanded to 100 cases to be reviewed for this criterion and the State took action in all 45 cases to be added. A finding of action in all 45 added cases would be highly improbable, however, especially given the auditors' findings that the State had taken action in only 31 of the 55 cases already reviewed. OCSE SAF, p. 112; State Ex. B, p. 21.
Moreover, OCSE's expert pointed out, and the State did not deny, that the use of a smaller sample size does not by itself invalidate the sample results and, indeed, tends to favor the State by increasing the standard error. OCSE SAF, p. 111.
Finally, the failure to expand the probe sample to 100 cases would not provide a basis for reversing the disallowance, even if we were to find that the auditors improperly disregarded OCSE's audit guidelines. Expanding the probe sample to add 45 cases to be reviewed for the "support obligations" criterion would not have changed the negative sample results already obtained for the other two criteria. The auditors had already found that the State failed to meet these two criteria, based on relevant samples of 121 cases ("establishing paternity") and 138 cases ("state parent locator service"). In order to be found in substantial compliance, the State must meet all audit criteria.
2. The Follow-Up Review
The "Follow-Up Review Guide for Evaluating Deficiencies Cited in the Notice of Substantial Noncompliance for Fiscal Years 1984 and 1985 Program Results Audits" describes a single-stage sampling process. State Ex. E. The State's expert found that this process was not followed in the Ohio follow-up review, but that the auditors had instead used a sampling strategy which was similar to that used in the program results audit, but differed in two respects: (1) the stratification was different; and (2) the way sample numbers of cases from each stratum were selected was different. State Ex. O, pp. 4-5. The State acknowledged that the reason OCSE could not follow the process in the guide was that the State was unable to provide a complete statewide listing of child support cases. 14/ The State did not allege that the sampling strategy OCSE employed in the follow-up review was inappropriate to circumstances where a statewide listing could not be used. Instead, the State said that OCSE should have anticipated that not all states could produce a statewide list and therefore should have had an alternative method available (such as that from the Program Results Audit Guide).
In response, OCSE provided the following opinion of its sampling expert:
[The] criticism that the stratification employed in conducting the [program results] audit differed from that used in conducting the [follow-up review] does not serve to invalidate the results. The construction of strata is frequently based on judgment or homogeneity of case characteristics. The validity of the results is not put into question because the stratification system for the [program results] audit differs from that for the [follow-up review]. . . . Similarly, the fact that a sample was allocated differently to the strata in the Follow-up Review than in the Program Results Audit does not invalidate the result, as long as the allocated size is treated properly in deriving the estimates.
Clearly, it was impossible for the auditors to use the process described in the follow-up review guide, and the auditors chose another reasonable process. The mere fact that the process chosen differed from that in the Program Results Audit Guide does not invalidate the sampling method. The opinion given by the State's expert merely notes the differences, without providing any basis for disagreeing with OCSE's expert's opinion that these differences were not material. While it might have been preferable for OCSE to have anticipated in its follow-up review guide the possibility that a state could not produce a statewide list, we do not see any prejudice to the State here from OCSE's failure to do so.
Accordingly, we conclude that the State's arguments concerning OCSE's alleged failure to follow its own guides are without merit.
III. OCSE did not violate the statutory requirement that it conduct a "complete" audit.
Section 452(a)(4) of the Act, as amended, requires OCSE to--
evaluate the implementation of State programs established pursuant to such plan, conduct such audits of State programs established under the plan approved under this part as may be necessary to assure their conformity with the requirements of this part, and, not less often than once every three years (or not less often than annually in the case of any State to which a reduction is being applied under section 403(h)(1), or which is operating under a corrective action plan in accordance with section 403(h)(2)), conduct a complete audit of the programs established under such plan in each State and determine for the purposes of the penalty provision of section 403(h) whether the actual operation of such programs in each State conforms to the requirements of this part; . . . .
(Emphasis added) The State argued that OCSE had not complied with the statutory requirement of a "complete" audit since OCSE's follow-up review examined only those audit criteria which were either unmet or only marginally met in the program results audit. According to the State, the disallowance should be reversed on this basis.
OCSE pointed out in response that its regulation at 45 C.F.R. 305.10 provides for a "review at the end of the corrective action period [which] will cover only the criteria specified in the notice of non-compliance . . . ." OCSE explained that its interpretation, reflected in this regulation, is that the "complete" audit requirement of section 452(a)(4) applies only to a program results audit since this audit is the basis for a penalty determination. According to OCSE, the follow-up review "merely determines whether OCSE will proceed with application of the penalty or will rescind the penalty in the event of a state's having achieved compliance during a corrective action period." OCSE's brief, p. 25. OCSE also argued that it is reasonable to conclude that Congress intended only that any audit be as comprehensive as is necessary to make the required determination to impose or rescind a penalty.
We conclude that OCSE's interpretation is consistent with the statute and reasonably focuses audit resources on audit criteria a state is having problems meeting. Moreover, as OCSE pointed out, OCSE's interpretation was to the State's advantage: if OCSE had examined all of the audit criteria in the follow-up review, it might have found additional criteria that the State did not meet.
Accordingly, we reject the State's argument that the disallowance should be reversed based on the statutory requirement for a "complete" audit.
IV. OCSE properly imposed a disallowance here even though the State maintained compliance with criteria that were only marginally met in the first audit.
The State argued that no disallowance should have been imposed here because the follow-up review showed that the State had maintained substantial compliance with the two criteria which OCSE found the State had only marginally met in the program results audit. The State based this argument on the following wording of 45 C.F.R. 305.100(d) in the 1985 regulations:
In the case of a State whose penalty suspension ends because the State is implementing its corrective action plan but has failed to achieve substantial compliance with the unmet criteria identified in the notice and maintain substantial compliance with any marginally-met criteria identified in the notice within the corrective action period . . . the penalty is effective for any quarter that ends after expiration of the suspension period . . . .
(Emphasis added) The State would have us interpret this language (and the same language in section 305.99(d)(3)) to mean that, for a penalty to be imposed after a corrective action period, OCSE must find both that the State failed to achieve substantial compliance with the unmet criteria and that the State failed to maintain substantial compliance with any marginally-met criteria.
The State's interpretation not only does not make sense, but it is inconsistent with other parts of the regulation. Section 305.20 provides that a state must meet all of the listed criteria in order to be found to be in substantial compliance. Section 305.99(d)(1) provides that the suspension of a penalty will continue until OCSE determines: "The State has achieved substantial compliance with the unmet criteria cited in the notice and maintain[ed] substantial compliance with any marginally-met criteria cited in the notice; . . . ." While the wording of the sections relied on by the State may be somewhat ambiguous, the meaning is perfectly clear in context: it is not sufficient for the State to maintain substantial compliance with the marginally-met criteria; the State must also meet in the follow-up review the criteria which were unmet in the program results audit.
Moreover, the State had clear notice through the preamble to the 1985 regulations that it would be subject to a penalty "if the State fails to achieve substantial compliance with the unmet criteria cited in the notice, or fails to maintain substantial compliance with any marginally-met criteria cited in the notice." 50 Fed. Reg. at 40125 (emphasis added). The preamble further states that a penalty will be imposed if a state "failed to achieve substantial compliance with [the IV-D] requirements within the corrective action period . . . ." Id.
Thus, we conclude that, in spite of the inartful wording of the sections cited by the State, those sections do not provide a basis for reversing the disallowance here.
Conclusion
For the reasons stated above, we uphold OCSE's decision to reduce the State's AFDC funding by one percent for calendar year 1988.
_____________________________
Donald F. Garrett

_____________________________
Alexander G. Teitz

_____________________________
Judith A. Ballard
Presiding Board Member
1. The 1985 regulations also provided an expanded list of audit criteria relating to state plan requirements for subsequent audit periods, and added new performance-related indicators for use beginning with the FY 1988 audit period.
2. In spite of the statutory language, the State argued that the legislative history of the 1984 Amendments shows that Congress intended that OCSE's implementing regulations would have prospective effect only. The legislative history on which the State relied, however, does not refer to OCSE's implementation of the substantial compliance standard; instead, it refers to the expectation by Congress that OCSE would issue new regulations focusing on whether states were effectively attaining program objectives (in addition to meeting the existing state plan requirements). S. REP. No. 378, 98th Cong., 2d Sess. 32-33 (1984).
3. The existing regulations required the states to have and be utilizing written procedures detailing step-by-step actions to be taken. 45 C.F.R. 305.1, 305.24(a), 305.25(a), 305.33; 45 C.F.R. Part 303 (1983). Although no reduction had actually been imposed based on the existing audit criteria, this was due to the moratoria. The states had no guarantee that Congress would continue to delay imposition of the reductions.
4. We note that the percentages given in State Exhibit M (a draft analysis by OCSE of 1980 and 1981 audit results) are derived simply by dividing the number of complying sample cases by the total number reviewed. If OCSE had instead used the same method for estimating compliance levels it used in the 1984 and 1985 audits for all states (see our discussion below), the compliance percentages shown on Exhibit M for the earlier years would have been higher. Moreover, contrary to what the State argued, the report on audits for FYs 1984 and 1985 (State's Exhibit H) does not show that the 75 percent standard was not attainable. That report shows that 21 states or territories met all the criteria initially and at least 15 others met them after a corrective action period. (Some states had not yet had a follow-up review, and some have appealed the results of their follow-up reviews to this Board.) See also OCSE brief, pp. 23-24. In Maryland, the Secretary had acknowledged that some errors in making eligibility determinations were unavoidable due to the complex nature of the requirements. Here, the State did not argue that the requirements were complex or that there was any barrier to meeting the requirements which could not be overcome.
5. The State's expert described the following two assumptions (which he called "deterministic linear assumptions"):
(1) the proportion of cases requiring action in a particular subdivision relative to the total number of cases in that subdivision is constant for all political subdivisions within strata (homogeneity of ratios of cases requiring action relative to total caseload for all political subdivisions within strata); and
(2) the proportion of cases requiring action in a stratum relative to the total number of cases in that stratum is constant between strata (homogeneity of ratios of cases requiring action relative to the total caseload between strata).
State SAF, p. 393. Stated differently, this means that the total number of cases requiring review for a particular criterion could always be determined from a multiple of the total caseload, so that the ratio between the two numbers would show up on a graph as a straight line. State SAF, p. 392.
6. Specifically, the State did not dispute the findings from the follow-up review that--
o The State took action in only 18 of 47 sample cases requiring action for "establishing paternity;"
o The State took action in only 16 of 41 sample cases requiring action to establish "support obligations;" and
o The State took action in only 48 of 100 cases requiring action using the "state parent locator service."
See State Ex. G, p. 2.
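The undisputed counts above translate into raw sample proportions, which can be tabulated with simple division (the counts are from the record; the percentage formatting is only illustrative):

```python
# Undisputed follow-up review counts: (action cases, cases requiring action).
findings = {
    "establishing paternity": (18, 47),
    "support obligations": (16, 41),
    "state parent locator service": (48, 100),
}

for criterion, (actions, required) in findings.items():
    proportion = actions / required
    # Each raw proportion (about 38.3%, 39.0%, and 48.0% respectively)
    # falls well short of the 75 percent standard.
    print(f"{criterion}: {proportion:.1%}")
```

Even before any confidence-interval adjustment, none of the three raw rates approaches 75 percent, which is why the Board treats these undisputed counts as strong support for the sampling results.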
7. The State argued that it faced inherent difficulties in conducting an analysis of OCSE's recalculations in the middle of a briefing schedule, "including time constraints and lack of a mechanism to compel production of documents." State's reply brief, p. 5. While we recognize that the State's presentation of its case was made more difficult by OCSE's recalculations, the Board provided the State ample opportunity to respond, allowing extra time for the State to consult with its expert. Moreover, the State's reference to the Board's lack of direct subpoena power is disingenuous: the Board exercised its authority to order OCSE to produce relevant information by making clear to OCSE that it had to provide complete information regarding the statistical methods used. OCSE was unable to document every step of its calculations since they were apparently done using computer programs, but this did not prejudice the State. Presentation of the State's case was not dependent on knowing precisely each step of the calculations OCSE made, since the State could make its own calculations using the raw data and the proper formulas.
8. OCSE presented evidence from its statistical sampling expert that a finding of statistical validity does not depend on use of the 95 percent confidence interval, since lesser confidence levels still indicate a very high probability of the assertions being correct. OCSE SAF, pp. 112-113. Once having adopted the 95 percent confidence level, however, it would appear to be impermissibly arbitrary for OCSE to use it for some states and not for others.
9. Since OCSE's expert said that Methodology #2A was advanced solely in support of Methodology #2, we do not discuss Methodology #2A separately.
10. The State's expert attempted to justify this by asserting that he lacked sufficient time. However, the original problem had been presented to him at least nine months before his last affidavit. Moreover, the State did not ask for any extension of time for its last submission although the Board had liberally granted earlier extensions based on the State's need to consult its expert. Although the State had originally requested a hearing, the State withdrew this request and did not renew it.
Finally, we note that the Board had previously pointed out that the burden of performing recalculations could be minimized by focusing on the criterion ("establishing paternity") for which the raw sample data was least favorable to the State. Yet, the State's expert did not even go as far as expressing what the likelihood would be that a correct analysis would make any difference with respect to this criterion. OCSE's expert had expressed his professional opinion that a statistician has an obligation, when he raises a question about a technique, to address the question of whether any alleged incorrect calculation is material to the decision to be based on it. OCSE SAF, p. 430. The State's expert did not dispute this professional opinion, but the State responded by arguing that OCSE had the burden to do any required recalculations because of the earlier mistakes it had made and because it had the statutory duty to determine whether a state was substantially complying. In our view, the State at the very least had the burden to show that recalculations were required and might make a difference in the ultimate conclusion, especially given the raw data here.
11. The State's expert had in his second affidavit described the effect of OCSE's first recalculations as "substantially widening the confidence intervals" where the largest increase in a standard error was less than 2.5 times the original standard error. State SAF, p. 333. In contrast, he did not describe the widening he anticipated from use of the "correct analysis" as substantial.
12. Assuming some widening of the confidence interval if the "correct analysis" were done, the only criterion with respect to which the State's presentation raises a reasonable doubt is the "support obligations" criterion. OCSE's Methodology #3 showed the State achieving 77.0 percent (at the upper limit of the 95 percent confidence interval) in the program results audit for this criterion. OCSE SAF, p. 510. If the confidence interval for this criterion had increased by only 3 percentage points, the State's compliance with this criterion would have been more than marginal, and OCSE would not have reaudited for this criterion in the follow-up review. We do not consider this possibility significant, however, since the State still failed two other criteria.
13. The State also argued that the Program Results Audit Guide calls for use of the 95 percent confidence level and that, because of how OCSE improperly calculated the efficiency rates and efficiency ranges, OCSE's results did not have this required degree of confidence. We first note that the Program Results Audit Guide does not specify any particular technique for calculating the 95 percent confidence level. Even if we read the Guide as referring to a precise technique and figure, however, we would reject the State's argument that any inconsistency arises here. As discussed above, the record provides a basis for concluding with the 95 percent degree of confidence that the State would not meet all of the audit criteria even if the State's alleged "correct analysis" were used.
14. The State blamed its inability to produce such a list on OCSE's delay in approving the State's proposal to develop a computerized case tracking system. The record indicates that much of this delay was attributable to flaws in the State's proposal, and to changes the State voluntarily made. See State Ex. N. OCSE also pointed out that it had informed the states in an action transmittal (OCSE-AT-87-7) that they should be prepared to provide statewide listings (although OCSE did not deny that it later rescinded this directive). The reason why the State was unable to produce such a list is, in any event, irrelevant. The important point here is that, given the lack of such a list, OCSE was unable to follow the process in its follow-up guide and had to use a different