
Department of Health and Human Services
DEPARTMENTAL APPEALS BOARD
Appellate Division
IN THE CASE OF  


SUBJECT: Alabama Department of Human Resources, Delaware Department of Health and Social Services, District of Columbia Office of the Corporation Counsel, Hawaii Department of the Attorney General, Kansas Department of Social and Rehabilitation Services, Louisiana Department of Social Services, New Hampshire Department of Health and Human Services, New Mexico Human Services Department, and Rhode Island Department of Human Services

DATE: July 28, 2005
   


 

Docket Nos. A-04-45, A-04-46, A-04-47, A-04-48, A-04-49, A-04-50, A-04-51, A-04-52, and A-04-53
Decision No. 1989

DECISION

The nine States listed above jointly appeal determinations by the Administration for Children and Families (ACF), dated November 14, 2003, that they were subject to penalties on the grounds that they failed to demonstrate that their child support enforcement programs, operated pursuant to title IV-D of the Social Security Act (Act), met required performance standards during fiscal years (FYs) 2001 and 2002. The penalties consist of one-percent reductions in the amount of funding that each State received for FY 2001 under the Temporary Assistance for Needy Families program (TANF) established by title IV-A of the Act. (1)

The States raise several issues common to all or several States, as well as issues affecting individual States. We address the common issues first, then issues pertaining to individual States. As discussed below, we find in favor of ACF and sustain ACF's determinations that the States are subject to penalties.

Four other states have individually appealed determinations that they are subject to penalties, also announced in letters from ACF dated November 14, 2003. Those appeals, which were assigned Docket Nos. A-04-40, A-04-43, A-04-56 and A-04-61, present issues similar to the common issues in these appeals, as well as issues particular to the four individual States. These appeals will be the subject of separately issued decisions.

Six states, including four of the joint States here, have also appealed determinations that they are subject to penalties for violations during FY 2002 continuing in FY 2003, which ACF announced in letters dated in February 2005. Those appeals have been assigned Docket Nos. A-05-43, A-05-44, A-05-46, A-05-47, A-05-50, and A-05-51. They have been stayed pending our decision in the appeals of penalties for FYs 2001 and 2002. As we explain in this decision, we find that Rhode Island, one of the states appealing a determination that it is subject to a penalty continuing in FY 2003, may in that appeal address particular issues raised here that were not dispositive to our decision in this appeal.

Summary of the IV-D performance penalty system

Before proceeding to the facts and issues raised by the appeals, we describe the statute and regulations for assessing penalties based on the performance of a state's child support enforcement program. This summary omits some specific provisions that we later provide when relevant to our analysis. Additionally, this summary necessarily reflects our conclusions on some contested legal issues.

Title IV-D of the Act provides federal funding for child support enforcement programs that seek child support on behalf of minor children receiving public assistance. Titles IV-A and IV-D and regulations at 45 C.F.R. Part 305 create a system of incentives and penalties under which federal TANF funds are awarded to or withheld from states based on the performances of their state IV-D programs. The Act and regulations establish five performance measures that are used to award incentives, of which three are also used to assess penalties. This appeal concerns the penalty performance measures. The performance measure at issue in most of these appeals is called the "paternity establishment percentage" (PEP). It measures a state's performance at establishing the paternity of children born out of wedlock. There are two types of PEPs, one based on children in a state's IV-D caseload, the other based on all children in the state. States may select either measure and may change their selection from year to year. The two other penalty performance measures assess a state's performance at establishing orders of support for minor children in IV-D cases, and at collecting support in IV-D cases. States are assessed on their performances for each federal fiscal year (FY or FFY), which runs from October 1 through September 30.

ACF assesses a state's performance based on data that the state submits on a form prescribed by ACF. States must submit complete and reliable data on their performances during each fiscal year by December 31 following the end of the fiscal year. ACF audits each state's annual data submission to determine if the data reported are complete and reliable. ACF is authorized to accept a state's unreliable data if it determines that the unreliability is of a technical nature that does not affect calculation of the state's performance measures. (For convenience, in this decision we refer to the complete and reliable data that states must submit simply as reliable data.)

As we conclude below, a state is penalized if, for two consecutive years, with respect to the same performance measure, it fails to demonstrate with reliable data that it met the required level of performance. Thus, a state is penalized if for two consecutive years it fails the same penalty performance measure or submits unreliable data on the same performance measure, or if it fails the performance measure in one year and submits unreliable data on that performance measure in the other year. A state that has failed to achieve the required level of performance may still pass the performance measure if its level of performance increased over the previous year's level by a specified amount. A state that has failed a performance measure or failed to submit reliable data in one year is not penalized if it corrects the failure with respect to the following year, which the regulations refer to as the corrective action year. (A third basis for a penalty, in addition to failing the IV-D penalty performance measures and failing to submit reliable data, is failure to substantially comply with the requirements of the IV-D program. That basis is not at issue in these appeals, where the penalties were all based on failures to meet performance measures or to submit reliable data, or some combination of the two.)

The penalties consist of reductions in the annual TANF funding that a state receives under title IV-A of the Act, called the State Family Assistance Grant (SFAG). The penalties range from one to five percent, depending on the number of failures and on how many years a state continues to have failures. A state may appeal a decision imposing penalties to the Board. Title IV-A of the Act also imposes TANF funding reductions as penalties for various failures to satisfy requirements related to a state's administration of its TANF program which, coupled with the IV-D penalties, could increase the amount of the reduction in a state's TANF funding. Those other penalty provisions are not at issue here.

ACF notified the States of its determinations that they are subject to penalties in letters to each State from the Assistant Secretary for Children and Families dated November 14, 2003. The letters informed each State that it had failed one or more penalty performance measures and/or submitted unreliable data for FYs 2001 and 2002, and that it was thus subject to a reduction in funding equal to one percent of its adjusted SFAG for the TANF program for FY 2001, "imposed quarterly, beginning with first quarter of FFY 2004, for the four quarters of FFY 2003 . . ." States' Notice of Appeal, Exhibits (Exs.) 1-9 (November 14, 2003 letters).

All nine States make one joint argument questioning whether ACF's process for notifying the States of their performance failures and penalties complied with the applicable regulations. Seven of the States jointly challenge ACF's method for determining the reliability of a state's paternity establishment data, and eight jointly argue that ACF should have disregarded data errors that tended to make a state's performance appear worse than it actually was. Seven States also make arguments that apply only to their own appeals. Below, we first describe the general legal framework of the penalty process, followed by an analysis that addresses the joint arguments first and then each State's appeal. As the applicable legal provisions are lengthy and extensively cross-referenced, we present them when necessary for a particular portion of our decision.

Legal Background

Title IV-A of the Act (sections 401-419; 42 U.S.C. §§ 601-619), "Block Grants to States for Temporary Assistance for Needy Families" (the TANF program), provides grants to eligible states that have approved programs for providing assistance to needy families with children, and for providing their parents with job preparation, work and support services to enable them to leave the program and become self-sufficient. Sections 401, 402 of the Act. To receive TANF funds, a state must operate a child support enforcement program consistent with title IV-D of the Act. Section 402(a)(2) of the Act. Title IV-D (sections 451-469B; 42 U.S.C. §§ 651-669b) is a cooperative federal-state program that aims at increasing the effectiveness of child support collection by such measures as locating absent parents, establishing paternity, obtaining child and spousal support, and assuring that assistance in obtaining support be available to all children for whom such assistance is requested. Maryland Dept. of Human Resources, DAB No. 1875 (2003), citing section 451 of the Act. States operate their child support enforcement programs subject to oversight by ACF's Office of Child Support Enforcement (OCSE). We refer in this decision to ACF as the respondent federal agency; the IV-D regulations refer to OCSE.

I. IV-D Penalties -- Section 409(a)(8) of the Act and 45 C.F.R. Part 305

Titles IV-A and IV-D of the Act impose various requirements on states, including performance standards or measures that states must achieve in the operation of their TANF and Child Support Enforcement programs. Section 409(a) of the Act, "Penalties," provides for financial penalties against states, in the form of reductions in a state's federal TANF grant or SFAG, for some 14 categories of noncompliance with various requirements imposed by title IV. At issue here is section 409(a)(8) of the Act, which provides for penalties related to the performance of a state's child support enforcement program under title IV-D. As noted above, these penalties, as relevant here, address a state's failure, for two consecutive years, to meet the same IV-D performance measure and/or submit reliable data needed to calculate performance. Section 409(a)(8) provides in relevant part as follows:

(8) NONCOMPLIANCE OF STATE CHILD SUPPORT ENFORCEMENT PROGRAM WITH REQUIREMENTS OF PART D.--

(A) IN GENERAL.--If the Secretary finds, with respect to a State's program under part D, in a fiscal year beginning on or after October 1, 1997--

(i)(I) on the basis of data submitted by a State pursuant to section 454(15)(B), or on the basis of the results of a review conducted under section 452(a)(4), that the State program failed to achieve the paternity establishment percentages (as defined in section 452(g)(2)), or to meet other performance measures that may be established by the Secretary;
(II) on the basis of the results of an audit or audits conducted under section 452(a)(4)(C)(i) that the State data submitted pursuant to section 454(15)(B) is incomplete or unreliable; or
(III) on the basis of the results of an audit or audits conducted under section 452(a)(4)(C) that a State failed to substantially comply with 1 or more of the requirements of part D (other than paragraph (24) or subparagraph (A) or (B)(i) of paragraph (27), of section 454); and
(ii) that, with respect to the succeeding fiscal year--

(I) the State failed to take sufficient corrective action to achieve the appropriate performance levels or compliance as described in subparagraph (A)(i); or
(II) the data submitted by the State pursuant to section 454(15)(B) is incomplete or unreliable; the amounts otherwise payable to the State under this part for quarters following the end of such succeeding fiscal year, prior to quarters following the end of the first quarter throughout which the State program has achieved the paternity establishment percentages or other performance measures as described in subparagraph (A)(i)(I), or is in substantial compliance with 1 or more of the requirements of part D as described in subparagraph (A)(i)(III), as appropriate, shall be reduced by the percentage specified in subparagraph (B).

The Act's IV-D penalty and incentive provisions are implemented by regulations at 45 C.F.R. Part 305. Section 305.61 of 45 C.F.R. reflects the penalty provisions of section 409(a)(8) of the Act, and also refers to the two other penalty performance measures, which assess a state's performance at establishing orders of support and at collecting support in IV-D cases:

§ 305.61 Penalty for failure to meet IV-D requirements.

(a) A State will be subject to a financial penalty and the amounts otherwise payable to the State under title IV-A of the Act will be reduced in accordance with § 305.66:
(1) If on the basis of:

(i) Data submitted by the State or the results of an audit conducted under § 305.60 of this part, the State's program failed to achieve the paternity establishment percentages, as defined in section 452(g)(2) of the Act and § 305.40 of this part, or to meet the support order establishment and current collections performance measures as set forth in § 305.40 of this part; or
(ii) The results of an audit under § 305.60 of this part, the State did not submit complete and reliable data, as defined in § 305.1 of this part; or
(iii) The results of an audit under § 305.60 of this part, the State failed to substantially comply with one or more of the requirements of the IV-D program, as defined in § 305.63; and

(2) With respect to the immediately succeeding fiscal year, the State failed to take sufficient corrective action to achieve the appropriate performance levels or compliance or the data submitted by the State are still incomplete and unreliable.
(b) The reductions under paragraph (c) of this section will be made for quarters following the end of the corrective action year and will continue until the end of the first quarter throughout which the State, as appropriate:
(1) Has achieved the paternity establishment percentages, the order establishment or the current collections performance measures set forth in § 305.40 of this part;
(2) Is in substantial compliance with IV-D requirements as defined in § 305.63 of this part; or
(3) Has submitted data that are determined to be complete and reliable.

In the preamble to the Part 305 regulations, which was forwarded to the states as part of Action Transmittal AT-01-01, ACF referred to the first of the two consecutive years as the performance year. 65 Fed. Reg. 82,178, 82,186, 82,189 (Dec. 27, 2000). In these appeals the performance year was FY 2001, and the corrective action year was FY 2002.

The funding reduction penalties range from one to two percent of a state's SFAG for the first finding of two consecutive years of violations, from two to three percent for the second consecutive finding, and from three to five percent for each subsequent consecutive finding. Section 409(a)(8)(B) of the Act; 45 C.F.R. § 305.61(c). A state must expend additional state funds to replace any reduction in the SFAG resulting from penalties. (2) Section 409(a)(12) of the Act; 45 C.F.R. § 262.1(e).
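By way of a purely arithmetical illustration (the dollar figure is hypothetical and not drawn from the record), a first-level penalty of one percent applied to an adjusted SFAG of $100 million would amount to:

\[
0.01 \times \$100{,}000{,}000 = \$1{,}000{,}000
\]

withheld from the state's TANF funding, an amount the state would then have to replace with its own funds.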

As noted above, section 409(a) of the Act also imposes penalties for violations of various other requirements, principally related to the TANF program at title IV-A. For some of those other violations, not at issue here, the Secretary may not impose a penalty if he finds that there was reasonable cause for the violation, and must afford a state the opportunity to enter into a corrective compliance plan prior to imposing a penalty. Notably, however, section 409 withholds those ameliorative measures from the title IV-D penalties at issue here. Section 409(b),(c) of the Act.

II. The "Paternity Establishment Percentage" (PEP) -- section 452(g)(2) of the Act

The performance measure at issue in most of these appeals, the paternity establishment percentage, is essentially the percentage of children born out of wedlock for whom paternity has been established or acknowledged; it is "commonly known as the PEP." 45 C.F.R. § 305.2(a)(1).

Section 452(g)(2) defines two versions of the PEP, one based on children in a state's IV-D caseload, the other based on all children in the state:

(A) the term "IV-D paternity establishment percentage" means, with respect to a State for a fiscal year, the ratio (expressed as a percentage) that the total number of children--

(i) who have been born out of wedlock,
(ii)(I) except as provided in the last sentence of this paragraph, with respect to whom assistance is being provided under the State program funded under part A in the fiscal year or, at the option of the State, as of the end of such year, or (II) with respect to whom services are being provided under the State's plan approved under this part in the fiscal year or, at the option of the State, as of the end of such year pursuant to an application submitted under section 454(4)(A)(ii), and
(iii) the paternity of whom has been established or acknowledged,

bears to the total number of children born out of wedlock and (except as provided in such last sentence) with respect to whom assistance was being provided under the State program funded under part A as of the end of the preceding fiscal year or with respect to whom services were being provided under the State's plan approved under this part as of the end of the preceding fiscal year pursuant to an application submitted under section 454(4)(A)(ii);
(B) the term "statewide paternity establishment percentage" means, with respect to a State for a fiscal year, the ratio (expressed as a percentage) that the total number of minor children--

(i) who have been born out of wedlock, and
(ii) the paternity of whom has been established or acknowledged during the fiscal year, bears to the total number of children born out of wedlock during the preceding fiscal year; . . .

We present the regulatory definitions of the two PEP measures below when relevant to our analysis.
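For illustration only, the two statutory ratios may be restated in simplified form (the phrasing below is our shorthand, not the statutory language, and omits the special rules referenced in the last sentence of section 452(g)(2)):

\[
\text{IV-D PEP} = \frac{\text{children born out of wedlock in the IV-D/IV-A caseload whose paternity has been established or acknowledged}}{\text{children born out of wedlock in the caseload as of the end of the preceding fiscal year}} \times 100
\]

\[
\text{Statewide PEP} = \frac{\text{minor children born out of wedlock whose paternity was established or acknowledged during the fiscal year}}{\text{children born out of wedlock during the preceding fiscal year}} \times 100
\]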

III. PEP performance levels required to avoid a penalty

To avoid a penalty, a state must maintain a PEP of 90% or more. A PEP lower than 90% may lead to a penalty unless the state has increased its PEP over the previous year by the percentages specified in the following table from the regulation:

PEP               Increase required over      Penalty for first failure
                  previous year's PEP         if increase not met
----------------------------------------------------------------------
90% or more       None                        No penalty
75% to 89%        2%                          1-2% TANF funds
50% to 74%        3%                          1-2% TANF funds
45% to 49%        4%                          1-2% TANF funds
40% to 44%        5%                          1-2% TANF funds
39% or less       6%                          1-2% TANF funds
45 C.F.R. § 305.40(a)(1), Table 4.
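As a hypothetical illustration of how the table operates (the figures are ours and are not drawn from the record): a state whose PEP for the performance year is 60% falls in the 50% to 74% band, and so avoids a penalty on that measure only if that figure exceeds the prior year's PEP by at least the required 3%:

\[
\text{No penalty if } \text{PEP}_{\text{FY 2001}} \ge \text{PEP}_{\text{FY 2000}} + 3\%, \quad \text{e.g., } 60\% \ge 57\% + 3\%.
\]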

IV. Other performance measures established by regulation

The Act authorizes the Secretary to establish other IV-D penalty performance measures, in addition to the PEP. The Part 305 regulations establish two additional IV-D penalty performance measures, the Support Order Establishment Performance Level, essentially the percentage of the total number of IV-D cases during the fiscal year that have support orders, and the Current Collections Performance Level, essentially the percentage of current child support owed in IV-D cases that has been collected.

(In all, there are five IV-D performance measures for determining incentives, three of which may also subject a state to penalties and are thus referred to here as the penalty performance measures. The two performance measures that are used to determine performance only for the purpose of awarding incentives are the Arrearage Collections Performance Level and the Cost-Effectiveness Performance Level. They are relevant here in that the penalty against one of the States, the District of Columbia, was based partially on two consecutive years of failure to submit reliable data needed to calculate one of the incentive-only measures, the Arrearage Collections Performance Level.)

Sections 262.1(b) through (e) of 45 C.F.R. outline the process for assessing penalties against states, including the assessment of multiple penalties, and the process for assessing penalties that exceed a 25% limit on the total reduction in the subject state's quarterly SFAG. (The SFAG is the amount of the basic block grant allocated to each eligible state under the formula at section 403(a)(1) of the Act. 45 C.F.R. § 260.30.) Section 262.7 provides for five-day notice and for a right to appeal an adverse action reducing funding to the Board.

The Act and the Part 305 regulations also contain requirements for the submission of reliable data, authorize the Secretary to accept unreliable data in certain circumstances, and provide a process for notifying states of data audit findings and determinations that they are subject to penalties. We include these provisions in detail at relevant points in our analysis.

ANALYSIS

I. Common issues

A. ACF complied with applicable notice requirements.

The States' principal overarching argument is that ACF failed to comply with the notice requirements of Part 305. The applicable regulation provides:

45 C.F.R. § 305.66 Notice, corrective action year, and imposition of penalty.

(a) If a State is found by the Secretary to be subject to a penalty as described in § 305.61 of this part, the OCSE will notify the State in writing of such finding.
(b) The notice will:
(1) Explain the deficiency or deficiencies which result in the State being subject to a penalty, indicate the amount of the potential penalty, and give reasons for the finding; and
(2) Specify that the penalty will be assessed in accordance with the provisions of 45 C.F.R. § 262.1(b) through (e) and 262.7 if the State is found to have failed to correct the deficiency or deficiencies cited in the notice during the automatic corrective action year (i.e., the succeeding fiscal year following the year with respect to which the deficiency occurred.)
(c) The penalty under § 305.61 of this part will be assessed if the Secretary determines that the State has not corrected the deficiency or deficiencies cited in the notice by the end of the corrective action year.
(d) Only one corrective action period is provided to a State with respect to a given deficiency where consecutive findings of noncompliance are made with respect to that deficiency. In the case of a State against which the penalty is assessed and which failed to correct the deficiency or deficiencies cited in the notice by the end of the corrective action year, the penalty will be effective for any quarter after the end of the corrective action year and ends for the first full quarter throughout which the State IV-D program is determined to have corrected the deficiency or deficiencies cited in the notice.
(e) A consecutive finding occurs only when the State does not meet the same criterion or criteria cited in the notice in paragraph (a) of this section.

The States argue that ACF did not properly notify them that they had failing data or performances and would be penalized if they did not correct those failures. They argue that the regulation required ACF to notify them either before or at some point during FY 2002, the corrective action year, that they had failed the performance or data standards in FY 2001, the performance year, and that they faced penalties (including the penalty amounts) unless they corrected their deficiencies during FY 2002. The only notices that ACF provided of those failures and of the specific penalties the States faced for failure to correct those problems in FY 2002, they argue, were ACF's November 14, 2003 letters announcing the determinations that the States are subject to penalties, which arrived too late for them to have avoided the penalties by taking corrective action during FY 2002.

For the reasons explained below, we conclude that the regulation did not require ACF to notify the States of their FY 2001 failures prior to or during FY 2002, the corrective action year. Rather, we conclude that in the penalty process established by the regulations, penalties follow two consecutive years of failing performance or unreliable data relating to the same performance measure, and the start of the corrective action year is not conditioned upon notice from ACF. Neither the statute nor the regulation provides a specific time frame or deadline for ACF to notify a state of the possibility of penalties raised by its performance-year scores. Moreover, the structure of the system established by section 409 of the Act and the Part 305 regulations necessarily precludes giving states a full year, following notice of their performance-year failures, before penalties are imposed at the end of the corrective action year.

We find that these provisions place on a state the ultimate responsibility for monitoring its own performance and that, in any event, the States here were aware of their own performances and moreover were informed of their data reliability problems during the corrective action year.

1. The regulation did not require notice prior to the end of FY 2002, the corrective action year.

The States argue that 45 C.F.R. § 305.66 refers to the assessment of penalties as a conditional, future event, meaning that ACF must notify a state when it is at risk for a penalty, and not after the penalty has been calculated and is being assessed, as in the November 14, 2003 letters. They cite language in the regulation that ACF's notice will "indicate the amount of the potential penalty" and state that a penalty "will be assessed . . . if the State is found to have failed to correct the deficiency or deficiencies cited in the notice during the automatic corrective action year." 45 C.F.R. § 305.66. The States argue that "potential penalty" means a penalty that has yet to be determined or assessed. The States cite language from the preamble to the notice regulation stating that the penalty "will be assessed if the Secretary determines that the State has not corrected the deficiency or deficiencies cited in the notice by the end of the corrective action year." 65 Fed. Reg. 82,192.

These statements, the States argue, demonstrate that the purpose of the notice required by section 305.66 is to afford a state the opportunity to make use of the corrective action period to avoid a penalty, to warn a state that it must correct the deficiencies cited in the notice or face penalties, and not to announce penalties as a fait accompli. The November 14, 2003 letters, they argue, failed to fulfill that purpose because they assessed specific penalties whose amounts had been definitively determined.

The States misconstrue ACF's obligation to provide notice under the regulations. The notice requirements of section 305.66 do not apply until a state "is found by the Secretary to be subject to a penalty as described in § 305.61 . . ." Section 305.61 in turn provides that a state "will be subject to a financial penalty," and the amounts otherwise payable to it under title IV-A will be reduced, upon the completion of two consecutive years of noncompliance or unreliable data relating to the same performance measure. A state is thus not subject to a penalty, and the notice requirements of section 305.66 do not apply, until the completion of two consecutive years of noncompliance, after which a penalty must be assessed.

The conditional nature of some of the language in section 305.66, such as the reference to potential penalties, is likely attributable to its incorporation of selected regulations at 45 C.F.R. Part 262, specifically sections 262.1(b) through (e) and 262.7. Part 262 outlines the procedures for assessing penalties for some of the several violations listed in section 409(a) of the Act, other than the noncompliance with IV-D performance measures addressed at section 409(a)(8) that is at issue here. Section 262.1(b) through (e) includes procedures for collecting multiple penalties from a state found to have committed more than one of the violations listed in section 409(a), the maximum reduction that may be made in a state's SFAG regardless of the number of penalties, and the requirement that penalties will be taken by reducing the SFAG payable for the quarter that immediately follows ACF's final decision. Section 262.7 affords states the right to appeal penalty determinations to the Board. A penalty to which a state is subject under section 409(a)(8) and Part 305 is thus a potential penalty in the sense that it is still subject to these other provisions before the actual amount of the reduction to be taken in a state's SFAG is determined. The November 14, 2003 letters noted these other provisions, stating that the announced penalty "will be imposed in accordance with the provisions of 45 C.F.R. § 262.1(b) through (e) . . ." States' Notice of Appeal, Exs. 1-9. However, there is no indication here that any penalties at issue are to be combined with penalties under portions of section 409(a) other than section 409(a)(8), or exceed the maximum reduction that may be taken in a state's SFAG. Since there were not multiple penalties here, ACF could inform each State of the amount of its SFAG reduction in the notice letters that ACF issued pursuant to section 305.66 on November 14, 2003.

Part 262 also contains procedures for states found subject to penalties for the other types of TANF violations that do not apply to the IV-D penalties at issue here, including the opportunity to present arguments, demonstrate reasonable cause, and submit a corrective compliance plan, prior to ACF's decision to impose a penalty that may be appealed to the Board. 45 C.F.R. § 262.4. These procedural differences are another reason why ACF was reasonable here in combining the notice in section 305.66 and the notice in section 262.7. (3)

2. A state's obligation to take corrective action is self-implementing and not conditioned on the receipt of notice from ACF.

Section 409(a)(8) of the Act imposes penalties on a state that fails to attain the required level of performance in the same penalty performance measure, or to submit reliable data needed to calculate performance, for two consecutive years. The two years follow one upon another automatically, and the statute does not require or make any reference to notice being required to commence the second of the two consecutive years. This is apparent from the relevant language in section 409(a)(8)(A) (emphasis added):

--If the Secretary finds . . .

(i)(I) . . . that the State program failed to achieve the paternity establishment percentages . . . or to meet other performance measures that may be established by the Secretary;
(II) . . . the State data submitted pursuant to section 454(15)(B) is incomplete or unreliable; or
(III) on the basis of the results of an audit or audits conducted under section 452(a)(4)(C) that a State failed to substantially comply with 1 or more of the requirements of part D . . . and
(ii) that, with respect to the succeeding fiscal year--

(I) the State failed to take sufficient corrective action to achieve the appropriate performance levels or compliance as described in subparagraph (A)(i); or
(II) the data submitted by the State pursuant to section 454(15)(B) is incomplete or unreliable; the amounts otherwise payable to the State under this part . . . shall be reduced . . .

The regulation mirrors the statute, providing that a state is subject to a penalty if it failed to achieve the paternity establishment percentages, or to meet the support order establishment and current collections performance measures; or if it did not submit reliable data, and, "[w]ith respect to the immediately succeeding fiscal year, the State failed to take sufficient corrective action to achieve the appropriate performance levels or compliance or the data submitted by the State are still incomplete and unreliable." 45 C.F.R. § 305.61(a) (emphasis added).

The circumstances requiring penalties are thus, as relevant here, failing performance or unreliable data in one year, and failure to correct those shortcomings with respect to the succeeding year.

Significantly, the structure of the system of penalties that Part 305 implements necessarily does not permit the start of the corrective action year to be conditioned on notice from ACF of a state's performance or data reliability with respect to the performance year, and further does not afford a state a full year following notice to correct its deficiencies before being subject to penalties. A state has until December 31st to submit its data relating to performance in a given fiscal year, which ends on the previous September 30. 45 C.F.R. § 305.32(f). As this deadline is approximately 90 days into the immediately succeeding corrective action year, it does not permit review of the data and notification to the state of data or performance findings in time for the state to have a full year to take corrective action. Language from the preamble to the notice regulation that the states cite as describing the assessment of penalties as an event that follows notice to the states confirms that the decision that a state is liable for penalties is a determination that "will be made as soon as possible after the end of the corrective action year." 65 Fed. Reg. 82,192. This aspect of the penalty system also demonstrates the untenability of the States' reading of 45 C.F.R. § 305.66 as requiring notice of a state's deficiencies prior to the time that a state has failed to attain required performance levels, or has submitted unreliable data, with respect to the same performance measure, for two consecutive years. As we discussed above, the notice regulation is implicated only when a state is subject to a penalty, which, according to 45 C.F.R. § 305.61, does not occur until the passage of the two consecutive years. (4)

The States also cite language from the preamble to the notice of proposed rulemaking for Part 305 describing the phase-in of performance penalties, providing that states would be subject to the performance penalties based on data reported for FY 2001, that data reported for FY 2000 would be used as a base year to determine improvements in performance during FY 2001, and that penalties would be assessed and then suspended during the statutory one-year corrective action period. 64 Fed. Reg. 55,074, 55,089 (Oct. 8, 1999). This language is not inconsistent with the penalties here, which were not imposed until the end of the corrective action period. This language does not alter the self-implementing nature of the penalty system in which initiation of the penalty process is not conditioned on notice and states are "subject to" penalties only after two consecutive years of failing performance and/or data.

The States effectively acknowledge that the penalty system does not permit a full year for corrective action following notice from ACF, but argue that the regulations require notice "at some point" during the corrective action year, and they clarify that they are not arguing that notice was required to "trigger" the corrective action year. States Joint Br. at 19; Joint Reply Br. at 6, n.1. As we concluded above, such notice is nowhere required by statute or regulation. Nevertheless, even if we were to accept the States' argument that 45 C.F.R. § 305.66 requires that a state be given notice during the corrective action year of its performance or data reliability failures in the performance year (which it does not), the States here either received such notice or were in fact aware of their performances during the corrective action year.

As a state must report its performance data to ACF, it is aware, at the time it prepares the data, of whether it has met the required performance levels for the penalty performance measures. Section 454(15)(B) of the Act; 45 C.F.R. § 305.32. (5) Language from the preamble to Part 305 that ACF cites in its briefs emphasizes that states are responsible for monitoring their own performances and the reliability of their data. The preamble cautions that a state "should be continuously monitoring its own performance and taking action to improve performance which its own data shows may fail to achieve the performance measures. The State is also responsible for maintaining proper procedures and controls to ensure data reliability and completeness." 65 Fed. Reg. 82,192. The preamble goes on to note that "the State should not wait or rely upon the Secretary's determination of a data or a performance deficiency in order to begin corrective action. Two consecutive years of failure (either poor data or poor performance) in the same performance measure criterion will trigger a penalty imposition." Id. Elsewhere, the preamble describes the automatic corrective action period as a "delay which allows States to identify and to correct either reporting or performance problems prior to being assessed a financial penalty," and again warns states that they "should be diligent in continuously monitoring their own performance and data reliability." 65 Fed. Reg. 82,205. In light of the States' responsibility for monitoring their own performances on the penalty performance measures, and the fact that it was they who provided the performance data to ACF, they cannot assert that they were not aware of their own performances and whether they were at risk for penalties if deficient performance continued for a second consecutive year.

The record also shows that ACF informed each of the States of problems with the reliability of their FY 2001 data in time for them to take corrective action before the deadline for submitting their data for FY 2002, the corrective action year. Between April 15 and August 1, 2002, ACF sent reports of its audits of the data that states submitted for FY 2001 (called data reliability audits, or DRAs) to each of the States informing them of whether their FY 2001 data met the standard of reliability. ACF sent earlier draft DRA reports to eight of the nine appellant States between April 29 and June 20, 2002. (6) States Exs. AL-9, AL-10; DE-10, DE-12; DC-9, DC-11; HI-10, HI-12; KS-8, KS-10; LA-11, LA-13; NH-12; NM-11, NM-13; RI-9, RI-11. As performance data submissions for FY 2002 were not due until December 31, 2002, the States had time periods ranging from four to seven months following release of the FY 2001 DRA reports, and six to eight months after release of the draft reports, to correct any problems that may have caused their FY 2001 data to be found unreliable and thus avoid penalties for two years of unreliable data.

The States argue, however, that the DRA reports did not provide adequate notice of their data reliability problems because the reports, provided by the OCSE Office of Audit, each stated that the final determination as to the reliability of the State's reported performance data would be made by the appropriate ACF official. See, e.g., Ex. AL-9, at 2. That argument is not a basis for finding that any further notice was required from ACF to trigger the imposition of a penalty. The record shows that eight of the nine appellant States received both draft and final FY 2001 DRA reports during FY 2002, the corrective action year. (The one State for which the record does not contain a draft DRA report was found to have submitted reliable data.) States are permitted to submit comments on the draft reports, and the record shows that at least some of the States submitted comments and/or additional or revised data to ACF in response to the draft DRA report findings, and that ACF altered its findings as to the FY 2001 data for three of the States in response to comments or data they submitted. 45 C.F.R. § 305.64. The States thus gave consideration to and utilized the information provided in the draft and final DRA reports. The statute, the regulations, and ACF's correspondence informed the States of the self-implementing nature of the penalty process and the States' obligations for monitoring their own data and performance. The statute moreover places great emphasis on the importance of reliable data submitted on a timely basis, as it penalizes not merely inadequate performance but also unreliable data, even where accurate data would have shown that the state had passed the applicable performance measure. In light of these requirements, the cautionary language in the Federal Register, and the information about data reliability timely provided to the States, it cannot be credibly asserted that the States had no knowledge of their data reliability because of the single cited sentence in the DRA reports. (7)

We also find it significant that section 409 of the Act withholds from a state some of the opportunities to avoid a IV-D penalty that it affords with respect to the penalties for some of the other, mostly title IV-A violations listed in section 409(a). For those other violations, not at issue here, section 409(b) forbids the Secretary from imposing a penalty if he finds that there was reasonable cause for the violation, and section 409(c) grants the state the opportunity to enter into a corrective compliance plan prior to suffering a penalty. However, section 409 states that those ameliorative measures do not apply to any penalty under section 409(a)(8). Sections 409(b)(2), (c)(4) of the Act. (8) That they do not apply here underscores the self-implementing nature of the IV-D penalty process. Unlike the penalties for which a corrective compliance plan may be submitted, the operation of the IV-D penalty process is not dependent on any intermediate action by ACF, such as approving a corrective compliance plan, or providing a state notice of its performance failures, before the commencement of the automatic corrective action year, after which a state must be penalized for its failure to correct any performance or data failures that occurred during the previous year.

The States cite a statement by the Assistant Secretary in the November 14, 2003 letters, recognizing that states did not have a full year to correct deficiencies following the release of the final DRA reports, as acknowledging that the States had been denied the full period for corrective action to which they were entitled. In that letter, the Assistant Secretary stated that-

I recognize that states do not have a full year to correct deficiencies following the release of final DRA reports. I have been discussing a possible solution with those working on the TANF reauthorization bill. In September, the Senate Finance Committee approved a bill which contains a technical fix to the Federal statute to ensure that states receive a full year for corrective action following notice of a deficiency. However, until Congress enacts the bill that includes this technical change, I have no recourse but to comply with the current statute and to proceed with the penalty process.

The referenced bill was apparently the Personal Responsibility, Work, and Family Promotion Act of 2003, H.R. 4, as reported in the Senate on October 3, 2003. Section 321 of that bill, "Timing of corrective action year for State noncompliance with Child Support Enforcement program requirements," according to the Senate Report, would have "change[d] the corrective action year to the fiscal year following the fiscal year in which the Secretary makes a finding of noncompliance and recommends a corrective action plan." The reason for this change was that "[c]urrent language does not recognize the time necessary to conduct federal audits and that those audits now occur during what is, under current law, a state's corrective action year. This technical correction will give states a full year to correct identified deficiencies." S. Rep. No. 108-162, at 53-54 (Oct. 3, 2003). The bill was apparently never brought to a vote. 150 Cong. Rec. S3520, S3529, S3538 (Apr. 1, 2004).

Rather than supporting the States, this statement in the legislative history acknowledges, consistent with our decision here, that under the law as currently written the corrective action year commences automatically after the performance year, without any action by ACF in the form of audit reports or notices.

The States also argue that specific notice that they were at risk for penalties based on their FY 2001 failures was needed to alert their State TANF agencies that TANF funding was jeopardized by actions of their IV-D agencies. States Joint Br. at 17. They assert that it is highly unusual to penalize one state agency for the actions of another, and that the November 14, 2003 letters came as a surprise to their TANF agencies. Id. at 5, 13. This argument is unavailing, as the Board has long held that a state as a whole must be viewed as a single unit responsible for the administration of federal grant programs and funds. See, e.g., Colorado Dept. of Personnel & Administration, DAB No. 1872 (2003); Alabama Dept. of Finance, DAB No. 1635 (1997). (9) Section 409 of the Act notified states that failure to correct the IV-D violations listed therein would result in the reduction of the funding for their entire TANF programs. The preamble to the regulations that we cited above emphasizes that states are responsible for monitoring their own performances and the reliability of their data. 65 Fed. Reg. 82,192, 82,205. Thus, the assignment of TANF and IV-D functions to different state agencies provides no basis to ignore the self-implementing nature of the IV-D penalty system.

3. Statements in federal auditing standards do not provide a basis to reverse the penalties.

The States cite language from a publication by the U.S. General Accounting Office (GAO, now the Government Accountability Office), "Government Auditing Standards," in support of their argument that they should have been notified of their FY 2001 performance and/or data reliability failures before the start of FY 2002, the corrective action year, or at least soon enough after the start of the corrective action year to give them sufficient time to take corrective action. The publication in question states that one of the reporting standards for performance audits is timeliness, and contains language emphasizing the importance of delivering timely audit reports. The States note that the Single Audit Act, 31 U.S.C. § 7501 et seq., requires federal agencies to conduct all audits of state governments receiving federal funds under standards published by the GAO.

The Part 305 regulations do provide that ACF will audit state IV-D programs "in accordance with standards promulgated by the Comptroller General of the United States in 'Government Auditing Standards.'" 45 C.F.R. § 305.60(d). However, general timeliness provisions in the auditing standards do not override the self-implementing nature of the penalty process in which the corrective action year begins upon the end of the performance year and is not conditioned on notice from ACF, as established by the statute and regulations and confirmed by the subsequent legislative history quoted above. While it is incumbent on ACF to complete the audits as quickly as possible, that goal is constrained by the requirements of the audit process.

The audit provisions of Part 305 require ACF to conduct an entrance conference prior to each audit and an exit conference at its conclusion, at which ACF must discuss preliminary audit findings and a state "may present any additional matter it believes should be considered in the audit findings." 45 C.F.R. § 305.64. ACF must then provide an interim audit report and receive written state comments which must be noted and incorporated in the final report. Id. The audit may entail review of state records or other supporting documentation as well as any contact with state personnel needed to conduct or complete the audit. 45 C.F.R. § 305.65. As discussed earlier, the States utilized their ability to comment on the draft audit reports and ACF made changes to its audit findings in response to those comments. ACF's ability to issue audit reports promptly is thus dependent on state circumstances beyond ACF's control. And as noted earlier, this process cannot begin until after a state submits its data, which is not due until 90 days after the start of the corrective action year. The States have not shown that anything in the GAO audit standards would provide a basis to disregard the self-implementing structure of the IV-D performance penalty system or require reversal of the penalties here.

4. The States' claim that ACF failed to comply with the law requiring 5-day notice of an adverse action does not provide a basis to reverse the penalties.

The Act's only requirement of notice regarding the penalties imposed by section 409(a) is at section 410(a):

Within 5 days after the date the Secretary takes any adverse action under this part with respect to a State, the Secretary shall notify the chief executive officer of the State of the adverse action, including any action with respect to the State plan submitted under section 402 or the imposition of a penalty under section 409.

Sections 410(b) and (c) provide for Board and judicial review of adverse actions taken under section 410(a). (10)

The States argue that the November 14, 2003 letters did not comply with section 410 because ACF took "adverse actions" more than five days prior to those letters, at some earlier, unspecified time, when ACF accepted the audit findings supporting the penalties, notably the findings in the DRA reports that State data for a given FY were not complete and reliable. The States thus argue that the adverse action addressed in section 410 of the Act occurs prior to an appealable determination that a state is subject to a penalty, and that the notice required by section 410 is a critical component of a state's ability to make full use of the corrective action year. ACF argues that the Secretary supplied the notice required by section 410, in the form of the November 14, 2003 letters.

The States' argument is not supported by section 410 or its implementing regulations. Section 410(b) provides that a state may appeal an adverse action to the Board, and seek review of the Board's decision in federal district court. The appeal rights that section 410 provides for adverse actions do not apply to data reliability findings or other intermediate determinations made prior to the determination that a state is subject to a penalty. Instead, the regulations implementing section 410 in the TANF program permit states to appeal determinations that a state is subject to a penalty but do not provide any right to appeal intermediate determinations short of that determination, such as a determination that a state's data for a given year are unreliable. 45 C.F.R. §§ 262.1(a), 305.66(b)(2). The TANF penalty regulations of Part 262 that are referenced in the IV-D regulations at Part 305 interpret section 410 of the Act, stating that ACF will formally notify the governor and the state agency of an adverse action within five days after ACF determines that "a State is subject to a penalty under parts 261 through 265 of this chapter." 45 C.F.R. § 262.7(a)(1). As discussed above, a state is not subject to a penalty, triggering the notice requirement, until after two consecutive years of failing performance and/or unreliable data with respect to the same performance measure. While the statute requires the Secretary to impose penalties on a state following two consecutive years of failing performance and/or unreliable data with respect to the same performance measure, neither it nor the regulations specify a time frame within which ACF must take adverse action following completion of its review of a state's data for a corrective action year.

Additionally, Part 305 affords states the opportunity to comment on draft audit reports, and requires that their comments be included in the final audit report along with the auditors' responses. This suggests that those comments and responses will be reviewed by ACF prior to a final decision on data reliability or performance being made, which supports a finding that the DRAs were not adverse actions triggering the notice requirement of section 410 of the Act.

Thus, we conclude that ACF complied with the applicable notice requirements in determining that the States were subject to penalties.

B. ACF in determining data reliability was not required to disregard errors that understated PEP performance.

Eight of the nine States argue that ACF's findings that they submitted unreliable data for at least one of the two fiscal years should be reversed because ACF, in determining whether data were reliable, counted data errors that lowered their PEP scores. The States argue that such errors increase the likelihood that a state will be assessed a penalty or denied incentives and thus hurt only the state itself, and that ACF should have disregarded those errors pursuant to the Secretary's authority to disregard data unreliability of a technical nature that does not adversely affect the determination of the state's performance. The States argue that ACF's previous use of this authority, to accept the unreliable FY 2000 data submitted by all 23 states whose FY 2000 data were found unreliable, created a precedent for accepting unreliable data. In addition, some States argue that ACF should have accepted unreliable data based on circumstances particular to their appeals, arguments that we address in the portions of the analysis addressing individual States.

We first set out the applicable legal provisions requiring states to submit reliable IV-D performance data, and authorizing the Secretary to accept unreliable data.

1. Reliable data: applicable legal provisions and background

As noted earlier, states must submit data used to determine incentives and penalties, following instructions and formats as required by HHS, by December 31st after the end of the fiscal year; only data submitted as of December 31st will be used to determine the state's performance for the prior fiscal year. (11) 45 C.F.R. § 305.32. States report data to ACF using form OCSE-157, the Child Support Enforcement Annual Data Report; ACF has issued this form and instructions for completing it through a series of Action Transmittals. See, e.g., AT-98-20 (July 10, 1998); AT-99-15 (Dec. 22, 1999). ACF conducts data reliability audits (DRAs) to determine if data are complete and reliable for incentive and penalty purposes. 45 C.F.R. § 305.60; 65 Fed. Reg. 82,181.

The regulations at 45 C.F.R. § 305.1 define the terms complete and reliable data:

(i) The term reliable data means the most recent data available which are found by the Secretary to be reliable. Data reliability is a state that exists when data are sufficiently complete and error free to be convincing for their purpose and context. State data must meet a 95 percent standard of reliability effective beginning in fiscal year 2001. This is with the recognition that data may contain errors as long as they are not of a magnitude that would cause a reasonable person, aware of the errors, to doubt a finding or conclusion based on the data.
(j) The term complete data means all reporting elements from OCSE reporting forms, necessary to compute a State's performance levels, incentive base amount, and maximum incentive base amount, have been provided within timeframes established in instructions to these forms and § 305.32(f) of this part.

As noted earlier in the decision, for the sake of simplicity we refer to complete and reliable data as reliable data.

Section 409(a)(8) authorizes the Secretary to accept a state's otherwise unreliable data, and to find a noncompliant state to be in substantial compliance with IV-D requirements:

(C) DISREGARD OF NONCOMPLIANCE WHICH IS OF A TECHNICAL NATURE.--For purposes of this section and section 452(a)(4), a State determined as a result of an audit--

(i) to have failed to have substantially complied with 1 or more of the requirements of part D shall be determined to have achieved substantial compliance only if the Secretary determines that the extent of the noncompliance is of a technical nature which does not adversely affect the performance of the State's program under part D; or
(ii) to have submitted incomplete or unreliable data pursuant to section 454(15)(B) shall be determined to have submitted adequate data only if the Secretary determines that the extent of the incompleteness or unreliability of the data is of a technical nature which does not adversely affect the determination of the level of the State's paternity establishment percentages (as defined under section 452(g)(2)) or other performance measures that may be established by the Secretary.

Section 409(a)(8)(C) of the Act. The implementing regulation provides:

45 C.F.R. § 305.62 Disregard of a failure which is of a technical nature.

A State subject to a penalty under § 305.61(a)(1)(ii) or (iii) of this part may be determined, as appropriate, to have submitted adequate data or to have achieved substantial compliance with one or more IV-D requirements, as defined in § 305.63 of this part, if the Secretary determines that the incompleteness or unreliability of the data, or the noncompliance with one or more of the IV-D requirements, is of a technical nature which does not adversely affect the performance of the State's IV-D program or does not adversely affect the determination of the level of the State's paternity establishment or other performance measures percentages.

The authority to accept unreliable IV-D data was applied to all of the 23 states that submitted unreliable data for FY 2000. ACF announced the determination in substantively identical letters to all states from the OCSE Commissioner dated December 19 or 27, 2001. (12) The letters stated that the basis for that determination was that the new incentive system was being phased in during FY 2000, and that penalties based on state performance would not apply prior to state performance in FY 2001. See, e.g., Ex. NH-14.

2. Arguments and analysis

The States argue that the authority to disregard data unreliability should have been applied in the case of those States whose data were found to be unreliable due to errors that understated PEP performance. In support, the States point to the definition of the PEP, essentially the percentage of children born out of wedlock for whom paternity has been established. See 45 C.F.R. § 305.2(a)(1). Any error that lowers the number of children born out of wedlock whose paternity has been established, such as omitting children whose paternity has actually been established, or that increases the number of children born out of wedlock, such as mistakenly reporting children that were actually born in wedlock, will lower the state's PEP, making it more likely that the state will be subject to a penalty, and less likely that the state will earn an incentive. States Joint Br. at 27, citing Ex. RI-9, at 4-5 (Rhode Island's FY 2001 DRA report). The States note that ACF's definition of reliable data says that data must be "sufficiently complete and error free to be convincing for their purpose and context," and argue that the purpose and context here is to determine a state's performance on the various performance measures that determine incentives and penalties. 45 C.F.R. § 305.1(i). They maintain that ACF should concentrate on errors that overstate performance and might result in a state avoiding penalties it deserves, or earning incentives it does not. The parties agree that data inaccuracies are often due to a state converting its IV-D data to a new computerized record-keeping system that contains data elements that did not exist in a readily available format in its previous manual or automated system, and to clerks or caseworkers in a state's IV-D office incorrectly entering information from source documentation into the computer system. ACF Br. at 40-41; States Joint Reply Br. at 15.

The States' arguments provide no basis to ignore or reverse ACF's findings that data failed to meet the required standards for reliability specified in the regulations. At no place do the statute or regulations distinguish among unreliable data based on whether data errors improperly increase or decrease a state's score on the penalty and incentive performance measures. The statute's exception for unreliability of a technical nature encompasses only those errors that "do not adversely affect the determination of" the performance measure levels. The errors here resulted in inaccurate PEP scores and thus failed to satisfy that standard. Moreover, the States have not shown that the errors were so de minimis that they resulted in misstating the States' performance percentages by only a few hundredths or thousandths of a percent.

Congress signaled the importance of a state's responsibility for providing reliable data when it enacted the current IV-D penalty system. Former section 403(h) of the Act imposed penalties for a state's failure to comply substantially with requirements of title IV-D, which was then, as now, defined at section 452(g) as encompassing a state's failure to attain specified PEP levels. Section 403(h), however, did not separately impose penalties for failure to submit reliable data needed to calculate the performance levels. The current penalty system at section 409(a)(8) regards a state's failure to submit reliable data needed to assess penalties and reward incentives as equivalent to failing to achieve the required performance levels. A high incidence of data errors, regardless of their type or effect on a state's score, increases the likelihood that there may be other errors that go undetected and calls into question the reliability of the entire body of state data. Data containing enough errors to render them unreliable are not convincing for the purpose of assuring that the performance calculation may be relied upon as a reasonably accurate measure of a state's performance, regardless of whether the errors might overstate or understate performance. The States' argument that ACF must consider the effect of the errors could result in a state with unreliable data being treated the same as a state that submits error-free data.

Additionally, the fact that errors resulted from clerical or data-entry mistakes, or from converting data to newer automated systems, provides no basis to reverse ACF's findings that the States submitted unreliable data. In explaining the selection of the 95% standard for data reliability, ACF in the final rule stated:

This standard is consistent with the recognition that "data may contain errors as long as they are not of a magnitude that would cause a reasonable person, aware of the errors, to doubt a finding or conclusion made based on the data." Part of this definition is lifted verbatim from Chapter 1, Introduction of the U.S. General Accounting Office, Office of Policy Booklet (Standards) entitled, Assessing the Reliability of Computer-Processed Data, dated September 1990.

65 Fed. Reg. 82,181. Thus, the emphasis of the requirement that data be sufficiently error-free is on the magnitude of the errors, as opposed to the type of errors. Where clerical and conversion errors evidence serious discrepancies between the written documentation and the computer records, they call into question the reliability of the data for calculating the state's performance. Excusing "understating" errors would also enable states to pass the PEP performance measure in succeeding years based on reported increases in performance greater than the improvement actually achieved.

Finally, the States argue that the Secretary could have used the "technical nature" exception to waive their penalties, "as he did for all States in 2000." States Joint Br. at 18 (emphasis in original). They cite the language from the November 14, 2003 letters about the possible "technical fix" to the statute, discussed above, that would have delayed the corrective action year until after determinations of performance-year data reliability and performance, and argue that the letter's language meant that their deficiencies were technical in nature. However, the determination for FY 2000 and the language regarding a possible Act amendment do not support excusing the States' failures for FYs 2001 and 2002, for the following reasons.

That ACF might have considered a statutory change delaying the start of the corrective action year does not mean that waiver under the "technical nature" exception in the statute is appropriate for all states that have data reliability or performance failures in the corrective action year under the current statute. The "technical nature" exception in the statute applies only when the Secretary determines that a state has submitted adequate data because "the incompleteness or unreliability of the data is of a technical nature which does not adversely affect the determination of the level of the State's paternity establishment percentages . . . or other performance measures." Section 409(a)(8)(C)(ii) of the Act. Thus, the exception is limited to data reliability failures that have no substantive effect on whether penalties are imposed (or incentives are awarded) based on state performance in meeting IV-D goals.

Contrary to what the States assert, moreover, ACF did not use the "technical nature" exception to waive all penalties for FY 2000. ACF invoked the provision when determining to disregard the unreliability of data that 23 states had submitted for FY 2000. Since, under the regulations, penalties for IV-D performance would not be imposed for years prior to FY 2001, ACF had determined that the identified data reliability problems did not affect performance measures, for purposes of any penalty that would have been based on unreliable data in FY 2000. That is far different from the determination that the States seek here -- that all failures after corrective action periods (whether poor data or poor performance) be waived because of a perceived unfairness in the existing statutory scheme. Such a blanket determination would surely be contrary to congressional intent.

We thus conclude that the States were not entitled to have data unreliability disregarded as technical in nature merely because the data errors understated their performance.

C. ACF's use of different methods to select samples for audits of "Statewide PEP" and "IV-D PEP" data was not arbitrary and capricious.

Seven of the nine States argue that they were prejudiced by their decisions to use the IV-D PEP measure to calculate their PEPs, because ACF, without notice, employed a more stringent standard for determining the reliability of IV-D PEP data than it did for Statewide PEP data.

1. Background

As noted above, the PEP is essentially the percentage of children born out of wedlock for whom paternity has been established or acknowledged. The IV-D PEP is based solely on births occurring in the caseload of a state's IV-D program, and the Statewide PEP is based on all births in a state. The regulations define the two PEP measures as the following ratios:

IV-D PEP:

Numerator: Total # of Children in IV-D Caseload in the Fiscal Year or, at the option of the State, as of the end of the Fiscal Year who were Born Out-of-Wedlock with Paternity Established or Acknowledged

Denominator: Total # of Children in IV-D Caseload as of the end of the preceding Fiscal Year who were Born Out-of-Wedlock

Statewide PEP:

Numerator: Total # of Minor Children who have been Born Out-of-Wedlock and for Whom Paternity has been Established or Acknowledged During the Fiscal Year

Denominator: Total # of Children Born Out-of-Wedlock During the Preceding Fiscal Year

45 C.F.R. § 305.2(a)(1). States may select either one as the basis for calculating their PEPs and determining the reliability of their PEP data. (13) Id.; section 452(g)(1) of the Act.

States report data for the components of the two PEP ratios to ACF at specific lines on the OCSE-157. Calculation of the PEP for the current year requires data from the data reports for the current and preceding fiscal years, because the denominator of a given year's PEP ratio uses information on out-of-wedlock children from the preceding fiscal year.
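
For illustration only, the following Python sketch restates that arithmetic; the function name and the figures are hypothetical and are not drawn from any State's actual OCSE-157 report.

    # Illustrative only: hypothetical figures, not any State's actual data.
    # The PEP is the ratio of the two reported components, expressed as a percent.
    # For the IV-D PEP, the numerator comes from the current fiscal year's report
    # and the denominator from the preceding fiscal year's report.

    def pep_percent(numerator: int, denominator: int) -> float:
        """Return the paternity establishment percentage for one fiscal year."""
        return 100.0 * numerator / denominator

    # Hypothetical IV-D PEP: children born out of wedlock with paternity
    # established or acknowledged (current FY) over children born out of
    # wedlock in the IV-D caseload (end of preceding FY).
    print(pep_percent(numerator=11_400, denominator=20_000))  # 57.0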

State data must meet a 95% standard of reliability. 45 C.F.R. § 305.1(i). In conducting DRAs, ACF examines a sample of state records and data sources to determine if the state has reported the data accurately. ACF calculates reliability based on the findings for the sample using a 95% confidence interval, and data are deemed reliable if the upper bound of the confidence interval equals or exceeds 95%, the reliability standard.
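
As a rough illustration of that pass/fail test, the Python sketch below estimates an accuracy rate from an audit sample and compares the upper bound of a 95% confidence interval to the 95% standard. It assumes a simple normal-approximation interval for a binomial proportion; the regulations and audit guide quoted in this decision do not prescribe a particular formula, so this should not be taken as a description of ACF's actual DRA computation.

    import math

    # Illustrative only: assumes a normal-approximation interval for a binomial
    # proportion. ACF's actual DRA statistics may be computed differently.
    def passes_reliability(sample_size: int, correct_cases: int,
                           standard: float = 0.95, z: float = 1.96) -> bool:
        """Data pass if the upper bound of the 95% confidence interval for the
        sample accuracy rate equals or exceeds the 95% reliability standard."""
        p = correct_cases / sample_size
        margin = z * math.sqrt(p * (1.0 - p) / sample_size)
        upper_bound = min(1.0, p + margin)
        return upper_bound >= standard

    # Hypothetical audit sample of 200 cases with 184 reported correctly.
    print(passes_reliability(sample_size=200, correct_cases=184))  # True (upper bound ~0.958)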

To determine the reliability of PEP data submitted by a state that has selected the IV-D PEP measure, ACF examines sample cases to determine (1) whether the state reported data (i.e., children) that should not have been included (for instance, if they were not born out of wedlock, or if their paternity was not established), so-called "inclusion errors," and (2) whether the state failed to include children who should have been included, so-called "exclusion errors." To determine the reliability of PEP data submitted by a state using the Statewide PEP measure, however, ACF does not use a sampling technique that would enable it to determine if the state failed to include a child who should have been included (for instance, a child born out of wedlock). Instead, ACF's sampling method enables it to determine only if the state included data (i.e., children) that should not have been included.
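
The distinction between the two error types can be pictured as a comparison between the set of cases a state reported and the set its source records show should have been reported. The Python sketch below is only a schematic of that distinction, using hypothetical case identifiers; it does not reproduce ACF's sampling or review procedure.

    # Illustrative only: hypothetical case IDs, not an actual audit procedure.
    reported = {"case-01", "case-02", "case-03", "case-05"}            # children the state reported
    should_be_reported = {"case-01", "case-02", "case-04", "case-05"}  # per source records

    # Inclusion errors: reported but should not have been (e.g., born in wedlock,
    # or paternity not actually established).
    inclusion_errors = reported - should_be_reported                    # {"case-03"}

    # Exclusion errors: should have been reported but were omitted. Under the
    # 2002 audit guide, this check is made for IV-D PEP data but not for
    # Statewide PEP data.
    exclusion_errors = should_be_reported - reported                    # {"case-04"}

    print(sorted(inclusion_errors), sorted(exclusion_errors))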

The difference in the methods for auditing IV-D PEP and Statewide PEP data was noted in an ACF issuance, "Guide for Auditing Data Reliability," March 2002, that ACF sent to all state IV-D directors as an attachment to "Dear Colleague" letter DC-02-07, dated April 1, 2002. Under the heading "Selection of Sample Cases," the guide states, "One sample will be selected from the states' child support universe to evaluate performance indicator data reported on lines 1, 2, 5, 6, 24, 25, 28 and 29 of the OCSE-157 Report. . . . If a state chooses to use lines 8 and 9 for the paternity indicator [for reporting by states that selected the statewide PEP measure], these lines will be sampled separately." A footnote to that last sentence states:

Information on lines 8 and 9 may come from a combination of sources including, Vital Statistics, IV-D, and other state agencies. Should this option be selected by the state, auditors will select a random sample of 50 cases from each source to determine if cases from each source appear to be correctly included.

Guide for Auditing Data Reliability, 2002, at 13, part of ACF Ex. 2. (14)

The footnote's statement that the auditors will determine if cases "appear to be correctly included" is different from the general instructions, on the subsequent page of the guide, for reviewing data from other lines on the OCSE-157. These other lines, unlike lines 8 and 9 used to calculate the Statewide PEP, relate solely to the state's IV-D caseload. Under the heading, "Review Process," the guide states:

1. Match the selected sample cases with each audit trail provided by the state.

2. Review the matching cases for each line to see if they should have been reported on that line. If a case was included on a line but should not have been included, count it as an error.

3. Also review the cases to determine if they were excluded from any line being reviewed but should have been included. If incorrectly excluded, count that as an error.

Id. at 14.

A copy of an earlier version of the Guide for Auditing Data Reliability that ACF submitted, issued February 14, 2000, its pages all stamped "draft," appears to contain no specific instructions for reviewing data reported on lines 8 and 9. 2000 Guide, also part of ACF Ex. 2. The 2000 Guide also contains no statement that data reported on lines 8 and 9 (Statewide PEP data) would be treated differently than data reported on lines 5 and 6. ACF does not argue that the 2000 Guide provided notice of the different methods that would be used to assess the reliability of Statewide and IV-D PEP data. (15)

2. Arguments and analysis

Seven States that chose the IV-D PEP measure and were found to have submitted unreliable data in at least one of the two years at issue argue that consideration of exclusion errors in auditing IV-D data meant that they were more likely to fail the data reliability test than states that chose the Statewide PEP measure. (16) They argue that use of the different auditing methods was arbitrary and capricious because ACF provided no rationale for the difference, and no notice prior to the 2002 Data Reliability Guide. They note that 10 out of 26 states that used the IV-D PEP in FY 2002 failed to meet the data reliability standard, compared to only one out of the 28 states using the Statewide PEP measure. (17) See ACF Ex. 8. They argue that some of them would have passed DRAs in at least one of the two years if exclusion errors had not been counted in auditing IV-D PEP data.

The States cite a Board decision rejecting the use of different methods to allocate hospital survey costs to Medicare and Medicaid, where there was no rationale offered and the allocation methodology was internally inconsistent and contrary to CMS policy. Michigan Dept. of Social Services, DAB No. 872 (1987). They cite a court decision for the propositions that an agency acts arbitrarily and capriciously when it relies on factors Congress did not intend it to consider, and that an agency may not make distinctions among similarly situated groups that are not supported by statute. Motor Vehicle Mfrs. Ass'n of the United States, Inc. v. State Farm Mut. Auto Ins. Co., 463 U.S. 29, 43 (1983). Asserting that the Act and the legislative history of section 452(g)(2) make no distinction between the IV-D and the Statewide measures, the States argue that there is no indication that Congress intended that states choosing the IV-D PEP measure would be held to what the States call a higher standard. Sections 409(a)(8)(A)(i)(I), 452(a)(5) of the Act; H.R. Conf. Rep. 104-725 (July 30, 1996). (18) The States also cite preamble language from the proposed Part 305 to the effect that the assessment of data reliability underpins the fairness of the incentive and penalty system and argue that using different auditing methods undercuts the data audits' purpose of treating states fairly. 64 Fed. Reg. 55,074, 55,076. They argue that ACF should have provided a timely explanation of the different methods so that the States could, if necessary, have opted to use the Statewide PEP formula.

ACF argues that differences between the two measures, such as the size and number of their data sources, justified different auditing methods. ACF argues that the IV-D PEP measure uses data limited to the IV-D caseload and under control of the state IV-D agency, whereas Statewide PEP data reflects all births in a state and is taken from many sources, such as public and private hospitals and birth facilities, making it impractical for federal auditors to determine if there were out-of-wedlock births that the state failed to report. ACF also argues that the IV-D universe is properly subject to greater scrutiny because it is within the control of the IV-D agency, and thus more potentially subject to manipulation (although ACF disclaims any suggestion that the States manipulated their IV-D data). Because Congress created two different types of PEP measurement and did not treat them as if they were the same, ACF argues, it was justified in developing different methodologies to assess the reliability of each type of PEP data. ACF also disputes that states were adversely affected by selection of the IV-D PEP measure, noting that when both DRA and performance failures were considered, the failure rates in each year were comparable for both measures (although ACF appears to acknowledge that states that used the IV-D measures were more likely to have had some kind of failure in both years, requiring a penalty). ACF Br. at 33.

The States dispute ACF's asserted data source differences on the grounds that the Statewide PEP measure by definition considers only children born out of wedlock during a single fiscal year, while the IV-D PEP measure encompasses all out-of-wedlock children in the IV-D caseload, including children born over an 18-year period, and on the grounds that states account for all registered births.

We conclude that differences in the auditing methods do not require reversing the penalties or reassessing the reliability of the States' PEP data.

In several decisions addressing whether a federal agency permissibly employed statistical sampling methods to recover federal funds (e.g., to calculate a disallowance or determine whether a state provided services in the required proportion of cases), the Board considered whether the findings resulting from the disputed techniques were reliable evidence of what they purported to show. Commonwealth of Puerto Rico Dept. of Social Services, DAB No. 1253 (1991); New Mexico Human Services Dept., DAB No. 1224 (1991); Ohio Dept. of Human Services, DAB No. 1202 (1990); Tennessee Dept. of Health and Environment, DAB No. 898 (1987). That analysis is appropriate here, where the Act and regulations require that state IV-D performance be measured using reliable data, that states maintain and report reliable data, and that ACF audit state data for reliability. Sections 458(b)(5)(B), 452(a)(4)(C), 454A(c)(2) of the Act; 45 C.F.R. §§ 305.31(e), 305.34(c), 305.60, 305.61, 305.65(b).

We first note that ACF's sampling methods for evaluating data reliability were very conservative. Rather than applying the single most likely estimate to determine whether the data reliability standard was met, ACF's methods for both the IV-D PEP states and the Statewide PEP states gave the states the benefit of the doubt by using the upper bound of the 95% confidence interval. Second, the choice of sampling method may need to vary according to the nature of what is being audited, since the purpose of using sampling is to get a reliable result while keeping the burden on the auditors and the entity being audited to a minimum.

ACF presented a reasonable rationale for choosing a methodology for auditing Statewide PEP data different from its methodology for auditing IV-D PEP data, even though the methodologies resulted in differences in the types of errors that could be identified. The IV-D PEP data are drawn from the caseload of a state's IV-D agency, so the underlying records are readily available to that agency, permitting ACF auditors to examine them more easily. This enables the auditors to determine if the state has failed to report any data that should have been reported, in addition to determining if the state has reported data that did not meet the criteria for being reported. By contrast, the data reported by the IV-D agency of a state that has selected the Statewide PEP measure come from a multiplicity of sources, rendering it difficult to audit underlying records at the sources. For ACF to identify exclusion errors, it would have to examine a sample of the underlying records from each source not only for the data that were reported on the OCSE-157, but for similar data that were not reported. This would have entailed examination of a substantially larger data universe and either the selection of a correspondingly larger sample or the use of a larger confidence interval to account for the relatively larger universe. Moreover, this would have been more burdensome, not only for the auditors, but also for states, since they must produce the documentation needed for an audit. See 45 C.F.R. § 305.65. ACF also determined that data for Statewide PEPs coming from sources other than a state IV-D agency may be less likely to contain deliberate exclusion errors than data for IV-D PEPs. ACF Br. at 31-32.

The States' point that the IV-D PEP equation considers births of minor children over a period of years does not alter the fact that the equation is limited to only those data that come to the state through the portal of its IV-D caseload, which necessarily is more limited in scope than all children born in a state in a year who may have no connection to a state's social services or public assistance programs.

While IV-D PEP states nationwide appear more likely to have submitted data found to be unreliable than the Statewide PEP states, the States have not demonstrated that the difference is necessarily attributable to the choice of auditing methods. The difference could be attributable to other factors.

ACF was thus reasonable in its choice of audit methodology for assessing the reliability of Statewide PEP data. Although a consequence of that choice was an inability to identify certain exclusion errors, this is not a situation where an agency applied different interpretations of what constitutes a data error to different states for the same data universe.

Nothing that the States cite requires that an agency employ identical methods to audit data from obviously different universes. Moreover, even if the data universes were sufficiently similar as to require the use of identical auditing methods, the remedy would be to expand the audits of Statewide PEP data, rather than to ignore a category of errors with respect to the IV-D PEP data, contrary to the congressional intent that PEPs be calculated using reliable data and that data errors be disregarded only when of a technical nature.

The States have also not shown that they were entitled to prior notice of the different sample selection methodologies. The States cite no authority showing a right to prior notice of different sample selection methodologies, and did not present any expert evidence that employing the different methodologies was not reasonable in light of the differences between the two universes from which the samples were drawn. Nor did the States establish that they were prejudiced by the different techniques. Although the States argue that some of them might have opted for the Statewide PEP measure, they do not argue and have not shown that their PEP data would have met the data reliability standard if they had selected the Statewide PEP measure. Instead, they argue that some States would have passed DRAs if exclusion errors had not been counted in assessing the reliability of IV-D PEP data. (19)

The court decision that the States cite, for the notions that an agency acts arbitrarily and capriciously when it relies on factors that Congress did not intend it to consider and that an agency may not make distinctions among similarly situated groups that are not supported by statute, does not support their position here. The statute creates the two different types of PEP measure, and it is clear that Congress intended the agency to consider whether the data that states submit are reliable. Nor does the Board's Michigan decision support the States, since ACF here provided a reasonable explanation for the differences in auditing methods, and the consideration of exclusion errors was consistent with the methodology for auditing other IV-D performance data.

Accordingly, we conclude that the States that selected the IV-D PEP are not entitled to have their exclusion errors disregarded merely because the audit samples of Statewide PEP data did not permit identification of exclusion errors.

We thus conclude that common issues raised by the States regarding notice and data reliability provide no basis to reverse ACF's determination that the States were subject to penalties. Next, we address the issues raised in the appeals of individual States.

II. Individual State appeals

Alabama Department of Human Resources, Docket No. A-04-45

The Alabama Department of Human Resources (Alabama) appeals a penalty of $532,692. ACF determined that Alabama is subject to a penalty on the grounds that Alabama's PEP for FY 2001 was 58%, meeting neither the minimum 90% level nor the required increase of 3% over Alabama's FY 2000 PEP of 57%, and that Alabama's PEP data for FY 2002 did not meet the data reliability standard. Ex. AL-4.

Alabama argues that it did not suffer from the same deficiency in two consecutive years and is thus not subject to a penalty, because the first deficiency was a performance failure, and the second a data failure. Alabama cites Part 305 and its preamble as imposing a penalty only for failure to meet the same "criterion" in two consecutive years, and not for non-identical deficiencies, or identical deficiencies in non-consecutive years. 45 C.F.R. § 305.66(e); 65 Fed. Reg. 82,190, 82,192. Alabama argues that its deficiencies were not identical.

Alabama argues further that ACF blurred the distinction between the two types of deficiency by asserting in the November 14, 2003 adverse action letter that Alabama "failed for a second consecutive year to demonstrate with reliable data the specified minimum level of performance in PEP or to improve over the previous year's performance." Alabama Supp. Br. at 5 (emphasis added by Alabama). This language, Alabama argues, means that a single failure, unreliable data, could be viewed as all three types of failure that can subject a state to a penalty: failure to submit reliable data, failure to meet the required level of performance, and failure to substantially comply with federal IV-D program requirements, since paternity establishment is a IV-D requirement. If each data deficiency were intended to trigger "concomitant unreliability, performance level, and substantial compliance penalties," Alabama argues, there would have been no need for three separate penalty categories, in violation of the principle against interpreting a statute in a manner that renders any of its parts mere surplusage. Alabama Supp. Br. at 7.

Alabama's position is contrary to the design of the system of incentives and penalties as explained in the Part 305 preamble and confirmed by the statute and regulations.

States must meet specified levels of performance on the IV-D performance measures to earn incentives, and failure to meet minimum required performance levels for two consecutive years on the same measure subjects a state to a penalty. A state must demonstrate that it met performance levels with reliable data. A penalty is appropriate for a state's failure in two consecutive years to demonstrate that it met the required performance level for a performance measure, regardless of whether that failure is because the data for the performance measure are unreliable or because reliable data show that the state did not achieve the required performance level for that performance measure.
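
Reduced to its logic, that design can be sketched as follows. The Python sketch below is an illustrative simplification using names of our own choosing; it omits the corrective-action timing and penalty-amount details discussed elsewhere in this decision.

    # Illustrative simplification of the penalty trigger described above.
    def demonstrated_performance(data_reliable: bool, met_performance_level: bool) -> bool:
        """A state demonstrates compliance on a measure only if its data are
        reliable AND those reliable data show the required performance level."""
        return data_reliable and met_performance_level

    def penalty_triggered(year1_ok: bool, year2_ok: bool) -> bool:
        """Failure on the same measure in two consecutive years triggers a penalty,
        whether either failure is due to poor data or poor performance."""
        return (not year1_ok) and (not year2_ok)

    # Example: a performance failure in the first year followed by a data failure
    # in the second year (unreliable data means the level cannot be demonstrated).
    fy1 = demonstrated_performance(data_reliable=True, met_performance_level=False)
    fy2 = demonstrated_performance(data_reliable=False, met_performance_level=True)
    print(penalty_triggered(fy1, fy2))  # True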

This design of the penalty system, under which a state is subject to a penalty for consecutive failures of either data reliability or performance with respect to the same performance measure, is apparent from the following two statements in the preamble to Part 305:

Two consecutive years of failure (either poor data or poor performance) in the same performance measure criterion will trigger a penalty imposition.

65 Fed. Reg. 82,192.

When data is incomplete or unreliable, it will be impossible to accurately determine the State's level of performance to either pay incentives or to assess performance. In such cases, a State's data must be complete and reliable by the end of the succeeding fiscal year and must demonstrate that the submitted data meets the performance measures in order to avoid the imposition of a penalty. Correcting incomplete or unreliable data within the automatic one-year corrective action period is not enough; the data must also show that the State performed at a high enough level during the corrective action year to avoid a financial penalty. For example, say a State is determined to have unreliable current collection performance data for FY 2001 and the State corrects the unreliable data for FY 2001 during FY 2002. The State must still have reliable FY 2002 data and meet the current collection performance standard for FY 2002 or incur a penalty in FY 2003.

65 Fed. Reg. 82,190.

ACF's position that a state is subject to a penalty for consecutive failures to demonstrate with reliable data that it met the same performance measure is consistent with the statute and the regulations, both of which impose penalties for consecutive failures to either submit reliable data or meet the required performance level. The statute in relevant part imposes penalties if the Secretary determines--

(i)(I) . . . that the State program failed to achieve the paternity establishment percentages . . . or to meet other performance measures that may be established by the Secretary;
(II) . . . the State data submitted pursuant to section 454(15)(B) is incomplete or unreliable; or
(III) on the basis of the results of an audit or audits conducted under section 452(a)(4)(C) that a State failed to substantially comply with 1 or more of the requirements of part D . . . and
(ii) that, with respect to the succeeding fiscal year--

(I) the State failed to take sufficient corrective action to achieve the appropriate performance levels or compliance as described in subparagraph (A)(i);
(II) the data submitted by the State pursuant to section 454(15)(B) is incomplete or unreliable . . .

Section 409(a)(8)(A) of the Act (emphasis added). This language is consistent with ACF's determination that Alabama is subject to a penalty for failing to achieve the required PEP performance level in FY 2001, followed by failure to submit reliable PEP data in FY 2002. As discussed, Alabama's submission of unreliable FY 2002 data meant that it failed to demonstrate that it attained the required PEP performance level for FY 2002. ACF's determination is also consistent with the regulation, under which a state is subject to a penalty if it failed to achieve the required PEP, or to meet the Support Order Establishment and Current Collections performance measures; or if it did not submit reliable data, and, "[w]ith respect to the immediately succeeding fiscal year, the State failed to take sufficient corrective action to achieve the appropriate performance levels or compliance or the data submitted by the State are still incomplete and unreliable." 45 C.F.R. § 305.61(a) (emphasis added).

ACF's interpretation does not render any of this statutory language superfluous since a penalty may be based on failure to submit reliable data in two successive years, as well as on failure to demonstrate with reliable data that the performance requirements were met.

On the other hand, Alabama's reading of the statute and regulation is unreasonable. As ACF notes, Alabama's position would enable a state with one year of failing performance to avoid a penalty by declining to submit any data for the second year. This would undercut the whole statutory scheme, which recognizes that requiring states to meet performance standards is not sufficient unless states can demonstrate with reliable data that they have done so.

Alabama cites an example in the preamble language quoted above, in which a finding of unreliability occurs first and the state is given an opportunity to submit reliable data in the succeeding year, and argues that Alabama should be given the same opportunity to correct its unreliable FY 2002 data. Alabama Supp. Br. at 8, citing 65 Fed. Reg. 82,190. Alabama argues that under section 409(a)(8)(A)(ii), a state may incur a penalty only if it has failed to take adequate corrective action in the immediately succeeding fiscal year. Nothing in the cited language supports delaying the penalty here. The full preamble statement quoted above and its example make clear that a state is obliged to both submit reliable data on a performance measure and meet the required level of performance, and that consecutive failures of either obligation result in a penalty. In Alabama's case, the corrective action year for the purposes of the penalty being appealed was FY 2002, not FY 2003. Alabama may avoid continuation of the penalty in FY 2003 by demonstrating with reliable data that it met the PEP performance level in that year.

Alabama also argues that its position is supported by the first of the two preamble statements quoted above, which provides that "[t]wo consecutive years of failure (either poor data or poor performance) in the same performance measure criterion will trigger a penalty imposition." 65 Fed. Reg. 82,192. Alabama argues that "criterion" refers not to the particular performance measure (i.e., PEP or Support Order Establishment), but instead to the numerical standards against which state performance or data are judged, the 95% data reliability standard and the 90% PEP. As these criteria are different for data and performance, Alabama argues, a year of data failure followed by a year of performance failure are not two consecutive years of failure in the "same performance measure criterion" for purposes of the preamble language.

While we agree that the term "performance measure criterion" may refer to a numerical standard rather than to the performance measure itself, this does not mean that the 95% data reliability standard should be considered a performance measure criterion. The 95% data reliability standard is not exclusive to any single IV-D performance measure. Moreover, the parenthetical referring to "either poor data or poor performance" modifies the term "failure," not the term "performance measure criterion." When the provisions are read as a whole, the only reasonable reading is that the "failure . . . in the same performance measure criterion" refers to a state's failure to demonstrate that it has attained the required level of performance on any specific measure, such as the PEP, regardless of whether that failure is due to poor performance or to poor data needed to establish that the required performance level was met. In context, the preamble language that Alabama cites means that consecutive failures with respect to the same criterion can be failures with respect to either data or performance. Alabama's argument ignores the fact that a state must submit reliable data in order to establish that it has met the required performance measure criterion. Moreover, these arguments fail to counter the plain meaning of section 409(a)(8)(A) of the Act and 45 C.F.R. § 305.61(a) and are strained interpretations of the cited provisions. Alabama's argument is also inconsistent with other preamble statements warning that fixing data reliability in the second or corrective action year is not sufficient to avoid a penalty if the data do not show that the state has achieved the required level of performance.

Accordingly, we sustain the determination that Alabama is subject to a penalty.

Delaware Department of Health and Social Services, Docket No. A-04-46

The Delaware Department of Health and Social Services (Delaware) appeals a penalty of $293,489. ACF determined that Delaware is subject to a penalty on the grounds that the PEP and Current Collections data that Delaware submitted for FYs 2001 and 2002 did not meet the data reliability standard. Ex. DE-2.

Delaware disputes ACF's determination that Delaware's PEP data were unreliable. Delaware's argument concerns children in the IV-D caseload who were born in wedlock but whom Delaware included in its report of the total number of children in the IV-D caseload who were born out of wedlock. Delaware argues that because Delaware law prior to 2004 raised only a rebuttable presumption of paternity of children born in wedlock, a defendant male in a support action could deny paternity of children born during his marriage to the mother, and Delaware would thus have had to establish or verify the paternity of children in IV-D cases, regardless of whether they had been born in wedlock. Delaware argues that any data unreliability resulting from the inclusion of children born in wedlock should have been disregarded as technical in nature because the State had to devote the same level of effort to those cases as to cases involving children born out of wedlock. ACF does not dispute Delaware's assertion that if Delaware's reporting of these born-in-wedlock children had not been deemed erroneous, Delaware's PEP data would have passed the reliability standard for at least FY 2001 and possibly FY 2002. (20)

This argument provides no basis to reverse the penalty or to disregard the data errors. The relevant materials that define or describe the PEP do so in terms of children born out of wedlock. These include the statute, the Part 305 regulations, the December 1999 instructions for completing form OCSE-157 (e.g., "[r]eport the number of children in the IV-D caseload in cases open at the end of the fiscal year who were born out of wedlock"), and the OCSE-157 forms that Delaware completed and submitted for FYs 2001 and 2002. Section 452(g)(2) of the Act; 45 C.F.R. § 305.2(a)(1); AT-99-15 (Dec. 22, 1999); Exs. DE-8, DE-13. States that select the IV-D PEP measure, such as Delaware, are further instructed to report in-wedlock births on a different line of the form 157, not applicable here. The preamble to Part 305 indicates that a child born in a state where the law "presume[s] that a child born within a marriage is legitimate" could be counted as having been born out of wedlock. However, the preamble limited this exception by allowing the child to be counted "only if allowable under State law and then only if a court determined the presumed father could not have been the child's biological parent." 65 Fed. Reg. 82,200. Delaware did not argue or show that these conditions were met for the children that it seeks to include in its data.

Indeed, Delaware does not deny ACF's report that the case files of the particular in-wedlock children at issue contained no indication that paternity for any child had been challenged by the mother's spouse or former spouse. Delaware also does not dispute ACF's assertion that Delaware was not required to undertake any particular effort to establish the paternity of those children. This appears to undermine Delaware's rationale for including them among its IV-D PEP data.

Additionally, as ACF notes, these errors in Delaware's data affected calculation of Delaware's PEP performance and were thus not proper subjects to be disregarded as technical in nature, where that action depends on a finding that the unreliability "does not adversely affect the determination of the level of the State's paternity establishment or other performance measures percentages." 45 C.F.R. § 305.62.

Finally, Delaware also does not dispute that its Current Collections performance data did not meet the reliability standard for FYs 2001 and 2002, which alone rendered Delaware subject to a penalty.

Accordingly, we sustain the determination that Delaware is subject to a penalty.

District of Columbia Office of the Corporation Counsel, Docket No. A-04-47

The District of Columbia Office of the Corporation Counsel (DC) appeals a penalty of $701,495. ACF determined that DC is subject to a penalty on the grounds that DC's data for the Support Order Establishment (SOE) and Arrearage Collections performance measures for FY 2001 did not meet the data reliability standard, and that for FY 2002, DC failed to meet the required level of SOE performance or the required increase over the previous year's level, and its Arrearage Collections data did not meet the data reliability standard. Ex. DC-2.

DC argues that ACF should have accepted DC's SOE data for FY 2001 because the unreliability was due solely to DC having submitted data applicable to FY 2000 and DC later submitted the correct data, and that ACF could not impose a penalty based on DC's SOE performance because its SOE score for FY 2001, as established by the correct data, increased sufficiently over its FY 2000 SOE score to constitute a passing score. As to the latter argument, DC argues that the required increase, five percent, means five percent of the prior year's score, and not five absolute percentage points. Under the latter reading, the increase in DC's SOE performance was not sufficient to avoid a penalty. DC also argues that ACF could not impose a penalty based on unreliable Arrearage Collections data because that performance measure is used only to award incentives and not to impose penalties.

We address these arguments below.

A. ACF was not required to accept DC's corrected SOE data for FY 2001.

DC reports that ACF determined that DC's FY 2001 SOE data were not reliable because DC had provided ACF with data applicable to FY 2000. Ex. DC-10, at 4. DC says that it discovered this error during the ACF audit ending in March 2002 but that the auditors denied DC's request to submit data from the correct time period, and completed the audit using the incorrect data. DC later submitted a corrected OCSE-157 for FY 2001 to ACF in December 2002, but reported that ACF audited only those lines related to DC's PEP for FY 2001, which it found to be reliable. Exs. DC-3, DC-6.

DC argues that ACF should disregard any noncompliance with IV-D requirements resulting from the initially unreliable SOE data on the grounds that it was technical in nature and did not adversely impact DC's IV-D program, or that ACF should hold the penalty in abeyance until it completes an audit of the corrected FY 2001 SOE data. 45 C.F.R. § 305.62. ACF's finding that DC's FY 2002 SOE data and its resubmitted FY 2001 PEP data were reliable suggests that its resubmitted FY 2001 SOE data were also reliable, DC argues, noting that no major changes in SOE data collection and reporting took place between FY 2001 and 2002.

We conclude that ACF was justified in finding that DC's failure to submit reliable FY 2001 data by the applicable deadline was not merely technical noncompliance. States must submit performance data for a fiscal year no later than December 31st following the end of the fiscal year, and only data submitted by that time may be considered in determining a state's IV-D performance measures. 45 C.F.R. § 305.32(f). The regulations contain no provisions for extending that deadline. DC's FY 2001 data unreliability resulted essentially from DC's failure to submit data actually attributable to FY 2001 by the deadline for submitting FY 2001 data; DC instead submitted data applicable to FY 2000. The FY 2001 SOE data that DC later submitted were untimely and could not be used to determine whether DC met required performance levels for FY 2001. As ACF points out, enforcement of the deadline is necessary to assure timely determination of incentive payments to states. State eligibility for incentives is based on a state's performance relative to all other states, determinations which could not be made if state performances were subject to ongoing recalculations based on revised data submitted after the applicable December 31st deadline.

In the Part 305 preamble, ACF explained that states may submit reliable data beyond the deadline for a given fiscal year for two limited purposes, not including passing the performance measure in the given year: to determine whether the following year's performance increased sufficiently to pass the performance measure in the following year, and to establish the number of children born out of wedlock during or as of the end of the given year, information that is needed to calculate the following year's PEP. 65 Fed. Reg. 82,184, 82,190. However, DC did not seek to have its late FY 2001 data accepted for the limited purpose of determining FY 2002 performance, but sought instead to establish its SOE performance level for FY 2001, and to show that it attained the required performance level based on improvement over FY 2000. The data were untimely for that purpose. (21)

Given that the regulations establish a deadline for data submission but do not provide for extensions, and that the permissible uses of late data do not include permitting a state to meet performance measures in the year with respect to which the data were late or were originally submitted in unreliable form, ACF could reasonably decline to conclude that DC's untimely submission of data was mere noncompliance of a technical nature. It is also not clear that the "technical nature" determination would apply here in any event. The regulations require that performance determinations be made using data submitted by the applicable deadline. Those data here were from the wrong year, and their use would clearly have an adverse effect on the calculation of performance for the correct year, rendering a technical nature determination inapplicable. Granting a technical nature exception for late submission of correct data would lessen states' incentives to submit correct data in a timely manner. Additionally, as discussed below, even the late SOE data for FY 2001 did not show that DC had met the SOE performance measure.

B. DC's SOE performance did not increase enough to avoid a penalty.

To avoid a penalty, a state must attain an SOE performance level that is at least 40% or five percent more than its SOE level for the immediately preceding year. DC asserts that it is not subject to a penalty for SOE performance because its SOE performance level for FY 2001, as established by its late data, was 29.9%, an increase of more than five percent of its 26.2% SOE performance level for FY 2000. ACF argues, and we agree, that the required increase is five absolute percentage points, which was not satisfied by DC's increase of 3.7 percentage points (29.9% minus 26.2%).
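
For illustration, the following Python sketch restates the arithmetic under each reading, using only the figures already quoted in this section.

    # DC's reported SOE levels, in percent.
    soe_fy2000 = 26.2
    soe_fy2001 = 29.9

    # Reading adopted in this decision: an increase of five absolute percentage points.
    point_increase = soe_fy2001 - soe_fy2000            # about 3.7 percentage points
    print(point_increase >= 5.0)                        # False: not enough to avoid a penalty

    # DC's reading: five percent of the prior year's score.
    relative_threshold = 0.05 * soe_fy2000              # about 1.31 percentage points
    print((soe_fy2001 - soe_fy2000) >= relative_threshold)  # True under DC's reading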

The SOE is the ratio of the number of IV-D cases with support orders during the fiscal year to the total number of IV-D cases during the fiscal year, expressed as a percent. 45 C.F.R. § 305.2(a)(2). The regulations express the minimum SOE scores a state must achieve to avoid a penalty and qualify for an incentive:

For purposes of the penalty with respect to this measure, there is a threshold of 40 percent, below which a State will be penalized unless an increase of 5 percent over the previous year is achieved--which will qualify it for an incentive. Performance in the 40 percent to 49 percent range with no significant increase will not be penalized but neither will it qualify for an incentive payment. Table 5 shows at which level of performance a State will incur a penalty under the child support order establishment measure.

TABLE 5--PERFORMANCE STANDARDS FOR ORDER ESTABLISHMENT (Use this table to determine the level of performance for the order establishment measure that will incur a penalty.)

Performance level    Increase over previous year                 Incentive/Penalty

50% or more          No increase over previous year required     Incentive
40% to 49%           With 5% increase over previous year         Incentive
40% to 49%           Without 5% increase                         No Incentive/No Penalty
Less than 40%        With 5% increase over previous year         Incentive
Less than 40%        Without 5% increase                         Penalty equal to 1-2% of TANF funds for the first failure, 2-3% for second failure, and so forth, up to a maximum of 5% of TANF funds

45 C.F.R. § 305.40(a)(2).
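
For illustration, Table 5's penalty logic can be paraphrased in the following Python sketch. The function name is ours, the sketch treats the required 5% increase as five percentage points (the reading we adopt below), and it omits the escalating penalty amounts; it is not an official computation.

    # Illustrative paraphrase of Table 5 (45 C.F.R. § 305.40(a)(2)).
    # "increase" is the change from the prior year's SOE level, in percentage points.
    def soe_outcome(level: float, increase: float) -> str:
        if level >= 50.0:
            return "incentive"
        if increase >= 5.0:
            return "incentive"
        if level >= 40.0:
            return "no incentive / no penalty"
        return "penalty (1-2% of TANF funds for a first failure, escalating up to 5%)"

    print(soe_outcome(level=29.9, increase=3.7))   # penalty
    print(soe_outcome(level=45.0, increase=2.0))   # no incentive / no penalty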

DC argues that it needed to increase its SOE score by only five percent of its previous year's score, rather than by the larger amount of five absolute percentage points, because the required amount of increase needed to avoid a penalty is expressed in percentages, and not in percentage points. DC argues that this is a reasonable reading of the regulation, because parts of the statute and Part 305 addressing performance increases needed to earn incentives refer to increases in percentage points rather than percentage increases. DC cites statutory language regarding the determination of a state's PEP for incentive purposes that uses percentage points. The cited provision is used to convert the PEP ratio into a figure called the "applicable percentage," which is used to calculate incentives:

Notwithstanding the preceding sentence [for determining the applicable percentage assigned to particular PEPs], if the paternity establishment performance level of a State for a fiscal year is less than 50 percent but exceeds by at least 10 percentage points the paternity establishment performance level of the State for the immediately preceding fiscal year, then the applicable percentage with respect to the State's paternity establishment performance level is 50 percent.

Section 458(b)(6)(A)(ii) (emphasis added). (The statute contains similar language regarding the determination of applicable percentages relative to all five performance measures.)

DC argues that the explicit use of "percentage points" in the statute regarding the payment of incentives shows that the use of simply "percent" in the section of the regulations addressing the imposition of penalties requires only relative increases, not increases of absolute percentage points. This use of the term "percentage points" at other places, DC argues, shows that Congress and HHS are capable of articulating the distinction between a percentage increase and a percentage point increase and have done so when that distinction was intended.

DC's reliance on the use of different terms at places in the statute and regulation is misplaced. Here, the context and the weight of the evidence indicate that ACF was using "percent" as shorthand for "percentage points." Viewing the required increases as relative amounts or fractions of the previous year's performance level conflicts with the statute and moreover is contrary to common sense.

First, DC's reading is inconsistent with the statute. Determining a state's eligibility for incentives based on its SOE performance requires converting its SOE ratio into an "applicable percentage." Section 458 contains a table for assigning applicable percentages to different SOE ratios. An SOE ratio of at least 50% but less than 51% is assigned an applicable percentage of 60%. Section 458(b)(6)(B)(ii) of the Act. An SOE of at least 0% but less than 50% is assigned an applicable percentage of zero (0). The text following the table states:

Notwithstanding the preceding sentence [the table for determining applicable percentages], if the support order performance level of a State for a fiscal year is less than 50 percent but exceeds by at least 5 percentage points the support order performance level of the State for the immediately preceding fiscal year, then the applicable percentage with respect to the State's support order performance level is 50 percent.

Id. (emphasis added).

This language is significant because it and the table for determining applicable percentages mirror the regulation, quoted above, listing the SOE scores that will earn an incentive or warrant a penalty. The regulation and its "Table 5" provide that an SOE performance level less than 50% receives no incentive unless it increased over the prior year by 5%, in which case it earns an incentive. 45 C.F.R. § 305.40(a)(2). The statute provides that an SOE performance level less than 50% is assigned an applicable percentage of zero, meaning that it earns no incentive, unless it exceeds "by at least 5 percentage points" the SOE performance level for the immediately preceding fiscal year, in which case the applicable percentage is 50%, qualifying the state for an incentive payment. Section 458(b)(6)(B)(ii) of the Act. Reading the "percent" increases required by Table 5 of the regulation as relative increases would contradict the statute because it would enable a state with an SOE ratio of less than 50% to qualify for an incentive based on an increase of less than 2.5 percentage points (5% of 50%), whereas the statute clearly requires an increase of five percentage points.

Next, the statute consistently requires increases in absolute percentage points for determining eligibility for incentives and, more significantly, for assessing whether a state is in substantial compliance with the requirements of title IV-D for the purpose of imposing a penalty under section 409(a). Sections 452(g)(1), 458(b)(6) of the Act. DC provides no explanation of why that practice would be abandoned when it came to prescribing the increases needed to avoid a penalty based on failure to meet IV-D performance measures.

DC's reading is also inconsistent with the logic of the regulation. One apparent purpose of the provision permitting a state with a performance level below the required minimum to avoid a penalty if its performance has increased sufficiently is to reward significant improvement in performance by a state that would otherwise be subject to a penalty. ("State and Federal partners and Congress recognized that perfect performance was not possible and decided to focus on effective or significantly improved performance." 65 Fed. Reg. 82,199-82,200.) Under DC's reading, the worse a state's score, the smaller the improvement that would be needed to avoid a penalty. A state that barely missed attaining the minimum performance level would have to achieve a much larger absolute increase over its previous year's performance than a state with a far lower performance level, an illogical result. DC also points to nothing in the statute or regulations stating that the required increase is a percentage of the prior year's score.

Thus, we conclude that the regulation expresses the required performance increases in absolute percentage points, not as relative fractions of the previous year's SOE ratio. DC would therefore have had to increase its FY 2001 SOE performance level by at least five percentage points over its FY 2000 level. The 29.9% SOE performance level that DC asserts it attained for FY 2001 exceeds the 26.2% level that it reports for FY 2000 by only 3.7 percentage points, too small an increase for DC to avoid a penalty.
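To make the arithmetic underlying this conclusion explicit, the following sketch (ours, offered only as an illustration; the variable names are not drawn from the statute or regulation) applies both readings of Table 5 to the SOE levels DC reported:

    # Illustration only, using the SOE levels DC reported for FY 2000 and FY 2001.
    fy2000_soe = 26.2
    fy2001_soe = 29.9

    point_increase = fy2001_soe - fy2000_soe         # roughly 3.7 percentage points
    relative_increase = point_increase / fy2000_soe  # roughly 0.141, i.e., 14.1%

    # Absolute (percentage-point) reading, which we adopt:
    print(point_increase >= 5.0)       # False: the increase falls short of five points
    # Relative reading urged by DC:
    print(relative_increase >= 0.05)   # True: DC's reading would excuse the shortfall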

C. DC is subject to a penalty for failure to submit reliable data on the Arrearages Collections performance measure.

DC argues that it is not subject to a penalty for unreliable Arrearages Collections data because the Arrearages Collections performance measure is not one of the three performance measures that states must meet to avoid a penalty. Those three penalty performance measures are the PEP, the SOE, and the Current Collections Performance Level. (22) DC does not dispute that states are subject to penalties for failing performance on those three measures. 45 C.F.R. § 305.61(a)(1)(i). Two other measures are used to determine a state's eligibility for incentives, along with the three penalty performance measures, but not to assess penalties: the Arrearages Collections performance measure at issue here, which measures the state's success at collecting past-due child support, and the Cost-Effectiveness performance measure. (23)

DC argues that because states that fail to submit reliable data on the two incentive-only performance measures already suffer the loss of possible incentive payments, it was not reasonable to subject them to the further loss of penalties for unreliable data on measures that are not used for imposing penalties. DC argues that under the principle of statutory construction that similar provisions ("in pari materia") should be construed in a similar manner, section 305.61(a)(1)(ii) should not be read as imposing penalties for unreliable data related to those two measures, because the regulations do not impose penalties for performance on the two incentive-only performance measures.

We concur with ACF that the plain language of the statute and regulation subjects states to penalties for the failure to submit reliable data, with no distinction between data used both for penalty and incentive performance measures and data used for incentive measures only.

Section 409(a)(8)(A)(i) of the Act holds a state liable for a penalty if, on the basis of an audit under 452(a)(4), the state's data submitted pursuant to 454(15)(B) are unreliable. Section 454(15)(B) requires that a state be able to provide data and calculations "concerning the levels of accomplishment (and rates of improvement) with respect to applicable performance indicators (including paternity establishment percentages) to the extent necessary for purposes of sections 452(g) and 458;. . ." Section 458 in turn defines the five performance measures, authorizes incentives based on the states' performances on those measures, and requires that incentive determinations be made using reliable performance measure data. The language in section 409(a)(8)(A) imposing a penalty for unreliable data makes no distinction between data for incentive and penalty purposes. ACF's reading of section 409(a)(8)(A) as imposing penalties for the unreliability of any data that a state is required to submit, with no distinction between data for the performance measures used for penalties and incentives, and the performance measures used for incentives only, is fully supported by the plain language of those provisions. Consistent with the statutory language, section 305.61(a)(1)(ii) provides penalties for data reliability failures and does not limit those failures to only the three penalty performance measures.

The statutory provisions make the need for reliable data an important feature of the incentive process, and the threat of penalties gives states a further reason, beyond the possible loss of incentive payments, to submit reliable data on IV-D program performance. Additionally, as ACF noted, the potential loss of incentives may be slim motivation for a state whose performance is poor enough that it is likely to earn little or nothing in incentives. The motivation may also not be as great as DC suggests, because a state that submits unreliable data for one performance measure may still be eligible for incentives for its performance on the other measures. In the Part 305 preamble, ACF explained that "a State which has incomplete or unreliable data with respect to one (or more) performance measures may still qualify for incentive payments based on its performance levels for the remaining measures." 65 Fed. Reg. 82,201.

DC suggests that it could not have been the intent of Congress to impose penalties for unreliable data for the Arrearages Collections and the Cost-Effectiveness performance measures, because such penalties would have no purpose where there are no penalties for failing performance and no standards by which to impose penalties based on those measures. DC, however, cites no legislative history and nothing in the statutory language to support its argument, and certainly nothing to override the clear language of the statute requiring reliable data for both incentive and penalty purposes and assessing penalties for unreliable data. By referencing the requirement for reliable incentive data at section 454(15)(B) in the penalty statute at section 409(a)(8), Congress clearly intended to include data submitted for all of the five incentive performance measures within the scope of data audited for reliability for penalty purposes.

Accordingly, we sustain the determination that DC is subject to a penalty.

Hawaii Department of the Attorney General, Docket No. A-04-48

The Hawaii Department of the Attorney General (Hawaii) appeals a penalty of $921,048. ACF determined that Hawaii is subject to a penalty on the grounds that PEP data that Hawaii submitted for FYs 2001 and 2002 did not meet the data reliability standard. Ex. HI-2.

Hawaii presented no arguments in addition to the States' general arguments regarding notice and data reliability. We addressed these arguments in sections I.A through I.C of our analysis above.

Accordingly, we sustain the determination that Hawaii is subject to a penalty.

Kansas Department of Social and Rehabilitation Services, Docket No. A-04-49

The Kansas Department of Social and Rehabilitation Services (Kansas) appeals a penalty of $807,487. ACF determined that Kansas is subject to a penalty on the grounds that Kansas failed to achieve the required PEP, or the required increase over the previous year's PEP, for both FYs 2001 and 2002. Ex. KS-3.

Kansas argues that correction of errors in its IV-D case records and computerized data system caused its PEP to be lower than expected, and that it lacked resources to undertake other corrections that would have raised its PEP. Kansas argues that ACF should not assess a penalty, pursuant to the Secretary's authority to disregard failure to substantially comply with the requirements of title IV-D and to modify the standards for determining substantial compliance based on the PEP.

The portion of the statute authorizing the Secretary to disregard noncompliance with IV-D requirements, which we quoted earlier, provides that a state determined-

to have failed to have substantially complied with 1 or more of the requirements of part D shall be determined to have achieved substantial compliance only if the Secretary determines that the extent of the noncompliance is of a technical nature which does not adversely affect the performance of the State's program under part D . . .

Section 409(a)(8)(C)(i) of the Act. The regulation implementing this provision, 45 C.F.R. § 305.62, also quoted earlier, combines it with the provision permitting the disregard of unreliable data.

The section of the statute authorizing modification of the PEP standards used to determine whether a state has failed to comply substantially with title IV-D requirements states:

The Secretary may modify the requirements of this subsection [452(g)] to take into account such additional variables as the Secretary identifies (including the percentage of children in a State who are born out of wedlock or for whom support has not been established) that affect the ability of a State to meet the requirements of this subsection.

Section 452(g)(3)(A) of the Act.

Kansas argues that its PEP failures should be disregarded based on these provisions. Kansas argues that its FY 2001 data, while meeting the standard of reliability, still had errors due to the double counting of children born out of wedlock whose paternity had been established, and that correcting this problem lowered its PEP for FY 2002. The double counting, which was noted in the FY 2001 DRA report, resulted from Kansas's IV-D agency and its Office of Vital Statistics each reporting the same children for whom paternity had been established. (24) Ex. KS-8.

Kansas also argues that in the course of making programming changes to its automated IV-D system that were necessary to prevent the double counting, it discovered that its staff had incorrectly entered data for a significant number of IV-D cases, which caused paternity establishments to be underreported, decreasing Kansas's PEP. An affidavit of the Kansas Director of Child Support Enforcement indicates that because this data problem was discovered after the deadline for submitting FY 2002 data, Kansas was unable to take credit for all its paternity establishments. Ex. KS-1, at ¶ 3. Kansas reported that over 4,276 cases were identified for correction, with priority given to those to be included in the data report for FY 2003, but that budget restrictions affected Kansas's ability to train staff and limited the correction process to the entry of correct codes for out-of-wedlock births and paternity establishments as they occur. Id. at ¶¶ 4-6.

Kansas argues that its request to have its PEP failures disregarded as technical in nature is supported by ACF's statements in the Part 305 preamble that it is impossible to foresee all the circumstances under which a penalty might be assessed for technical noncompliance, and that technical non-compliance is defined in a broad way allowing it to be applied to unknown situations that may occur. 65 Fed. Reg. 82,207. Such relief, Kansas argues, should apply to a state that discovers that it was omitting paternity establishments from its data. "Given the agency's failure to provide proper notice and the availability of the technical noncompliance safeguards to the present situation," Kansas argues, the penalty should be set aside as an abuse of discretion. Kansas Supp. Br. at 4. By lack of notice, Kansas apparently refers to the States' argument that ACF failed to afford states the notice of their performances and data failures required by the regulation. In section I.A of the analysis above we discussed why we do not agree with the States on this point.

In effect, Kansas argues that its FY 2001 data, though found reliable, were not actually reliable, and that if it had had the resources to make systemic corrections, revised data reflecting those corrections would have elevated its PEP for FY 2002.

The statute does not afford the relief Kansas seeks. Section 409(a)(8)(A) of the Act provides three bases to assess a penalty: (I) failure to achieve the PEP or other penalty performance measures, (II) failure to submit reliable data, and (III) failure to substantially comply with one or more of the requirements of title IV-D of the Act. Section 409(a)(8)(C) permits the Secretary to disregard as technical in nature a state's failure to substantially comply with the requirements of title IV-D of the Act, and its submission of unreliable data, but does not mention a state's failure to attain the required performance levels, the basis of the penalty against Kansas. The regulation tracks the statute, authorizing the disregard of data unreliability or noncompliance with IV-D requirements but not failure to attain required performance levels. 45 C.F.R. § 305.62.

Even assuming that the disregard authority could be applied to a state's failure to attain the required PEP, it is not clear that Kansas would qualify for its application.

The affidavit from the Kansas Director of Child Support Enforcement explains that the programming changes to Kansas's IV-D automated data system made to prevent double counting of paternity establishments caused Kansas's PEP for FY 2003 to be much lower than expected. The affidavit further explains that data entry errors discovered during the programming changes, which were too costly to correct in a timely manner, caused paternity establishments to be underreported and lowered Kansas's PEP for FYs 2002 and 2003. Ex. KS-1.

Regarding the double counting, Kansas's explanation alleges only that correction of the double counting lowered its PEP. It does not address why the actual PEP, as established through correction of the double counting problem, failed to meet the required level. It does not establish the actual cause for the PEP performance failure, or why that failure would not adversely affect the performance of the paternity establishment aspects of Kansas's IV-D program. Additionally, the affidavit states that prior to discovery of the double counting Kansas had believed that its PEP would meet the required 90% level for FY 2003, a year not at issue here.

As to the data entry errors that caused the underreporting of paternity establishments, the affidavit does not allege that Kansas would have attained the required PEP level in the absence of those errors. Instead, the affidavit presents Kansas's PEP figures to illustrate the difficulty of achieving annual PEP improvements in light of increases in out-of-wedlock births. These figures seem to indicate that the actual PEP, even with the corrected data, would meet neither the required minimum standard to avoid a penalty nor the required level of improvement. Id. Thus, Kansas has not established that it would have qualified for the "technical nature" provision in any event.

Kansas also seeks relief from the penalty under section 452(g)(3)(A) of the Act, which authorizes the Secretary to modify the requirements of section 452(g) "to take into account such additional variables as the Secretary identifies (including the percentage of children in a state who are born out of wedlock or for whom support has not been established) that affect the ability of a state to meet the requirements [of section 452(g)]." The additional variables that Kansas cites were an increase in the number and percentage of children born out of wedlock, in conjunction with state budget deficits, both nationally and in Kansas, which it describes as unforeseeable and uncontrollable circumstances that have made compliance with the federal PEP requirements increasingly difficult. Kansas states that it would have petitioned the Secretary for relief under section 452(g)(3)(A) if ACF had notified Kansas of the possibility of a penalty at the time that the States argue that ACF was required to provide notice.

First, as we discussed with respect to the technical nature exception for failure to comply substantially with the requirements of title IV-D, the Secretary's authority is limited to such noncompliance and does not by its terms extend to failure to meet the performance measure levels. Second, the statute authorizes but does not require the Secretary to modify the title IV-D requirements; the Secretary, through ACF, did not implement this provision in the Part 305 regulations and has not identified additional variables that would support waiver of the PEP standards. Finally, and most significantly, Kansas does not demonstrate or allege that budget deficits or increases in the number and percentage of out-of-wedlock births were greater in Kansas than in other states or were of such magnitude as to warrant relaxing the requirements by which all states were bound. Indeed, Kansas implies that its experience is not unique, referring to these trends as nationwide occurrences.

Accordingly, we sustain the determination that Kansas is subject to a penalty.

Louisiana Department of Social Services, Docket No. A-04-50

The Louisiana Department of Social Services (Louisiana) appeals a penalty of $1,096,723. ACF determined that Louisiana is subject to a penalty on the grounds that PEP data that Louisiana submitted for FYs 2001 and 2002 did not meet the data reliability standard. Ex. LA-2.

Louisiana presented no arguments in addition to the States' general arguments regarding notice and data reliability. We addressed these arguments in sections I.A through I.C of our analysis above.

Accordingly, we sustain the determination that Louisiana is subject to a penalty.

New Hampshire Department of Health and Human Services, Docket No. A-04-51

The New Hampshire Department of Health and Human Services (New Hampshire) appeals a penalty of $385,213. ACF determined that New Hampshire is subject to a penalty on the grounds that New Hampshire failed to provide information needed to establish its PEP for FY 2001, and that the PEP data that New Hampshire submitted for FY 2002 did not meet the data reliability standard. Ex. NH-4.

ACF determined that New Hampshire failed to provide information needed to establish a PEP for FY 2001 because calculation of New Hampshire's PEP for FY 2001 required data from FY 2000, those FY 2000 data had failed to meet the data reliability standard, and New Hampshire did not resubmit corrected, reliable FY 2000 data by the deadline for submitting data for FY 2001. The data in question are the total number of children in the IV-D caseload as of the end of FY 2000 who were born out of wedlock, reported at line 5 of the OCSE-157 for FY 2000; the total reported at line 5 is the denominator of the PEP ratio for FY 2001.

New Hampshire argues that its FY 2000 data could be used to calculate its PEP for FY 2001 because ACF determined to disregard the unreliability of FY 2000 data, which meant that the unreliability was of a technical nature that did not adversely affect the calculation of New Hampshire's PEP. New Hampshire also argues that ACF's communications in response to New Hampshire's inquiries led New Hampshire to believe that it needed to resubmit corrected, reliable FY 2000 data only to earn incentives but not to avoid penalties.

We conclude that ACF's decision to disregard the unreliability of FY 2000 data for the purpose of state performances during FY 2000 did not render those data reliable for FY 2001 nor relieve New Hampshire of its responsibility to submit reliable data to the extent needed to calculate performance for FY 2001. We also find that New Hampshire could not reasonably have concluded, based on ACF's communications, that it did not need to submit reliable FY 2000 PEP data to avoid a penalty. In our analysis below, we provide some background on the use of a year's data in calculating performance in a subsequent year, and the decision to disregard the unreliability of data submitted for FY 2000.

A. The decision to disregard the unreliability of FY 2000 data did not render those data reliable for the calculation of performance for FY 2001.

1. Background

The denominator of the PEP for a given fiscal year is the number of children in the IV-D caseload as of the end of the preceding fiscal year who were born out of wedlock (or, for the Statewide PEP, the number of children born out of wedlock during the preceding fiscal year). 45 C.F.R. § 305.2(a). Calculation of a state's PEP thus requires data from two consecutive fiscal years. Since the OCSE-157 that a state submits for a given fiscal year reports numbers of paternity establishments and out-of-wedlock children for that year only, calculation of a state's PEP for a given year requires data for that year and the immediately preceding year. Similarly, the OCSE-157 that a state submits for a given fiscal year is generally used to calculate the PEP for that year and for the next year. (25) If a state submits unreliable data on the number of out-of-wedlock children, data that are used to calculate the following year's PEP, then its PEP for the following year cannot be determined unless, as discussed below, it resubmits the data in reliable form by the deadline for submitting the following year's data.
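As a simplified illustration of this two-year calculation (the counts and names below are hypothetical and ours; the regulatory definition at 45 C.F.R. § 305.2(a) contains options not captured here), the IV-D PEP for a fiscal year divides the current year's count of out-of-wedlock children with paternity established by the count of out-of-wedlock children in the caseload at the end of the preceding year:

    # Hypothetical counts; a simplified sketch of the IV-D PEP calculation.
    def iv_d_pep(established_current_year, out_of_wedlock_prior_year_end):
        # Numerator: children in the IV-D caseload born out of wedlock with
        # paternity established, reported for the current year (line 6).
        # Denominator: children in the IV-D caseload born out of wedlock as of
        # the end of the preceding year (line 5 of the preceding year's form).
        return established_current_year / out_of_wedlock_prior_year_end

    line5_fy2000 = 10000   # hypothetical FY 2000 line 5 total
    line6_fy2001 = 8200    # hypothetical FY 2001 line 6 total
    print(iv_d_pep(line6_fy2001, line5_fy2000))   # 0.82, an 82% PEP for FY 2001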

As noted earlier, a state must submit data for a fiscal year no later than December 31st following the end of the fiscal year, and only data submitted by that time may be considered in determining a state's IV-D performance for that year. 45 C.F.R. § 305.32(f). However, in the preamble to Part 305, ACF stated that reliable data from a prior year may be submitted after the prior year's deadline when needed to calculate the current year's PEP, or when needed to establish prior year performance for the purpose of demonstrating an improvement in the current year. 65 Fed. Reg. 82,184.

At issue are data that New Hampshire submitted on its OCSE-157 for FY 2000 that are needed to calculate its IV-D PEP for FY 2001. ACF determined that data New Hampshire submitted at line 5 (and line 6) of its OCSE-157 for FY 2000 (IV-D PEP data) did not meet the data reliability standard. Ex. NH-19. The unreliability was due to a coding error in some of New Hampshire's IV-D case records that caused some children to be wrongly excluded from the totals of children born out of wedlock that New Hampshire reported at those lines of the OCSE-157. Id.; Exs. NH-24, NH-25.

However, as noted in section I.B of our analysis above, ACF determined to disregard the unreliability of FY 2000 data submitted by all 23 states that submitted unreliable data, including New Hampshire. ACF announced this decision in substantively identical letters to the states from the OCSE Commissioner dated December 19 and 27, 2001. The letter informed states that this determination was made because incentives were being phased in during FY 2000, and because the system of penalties for IV-D performance was not effective until FY 2001. The letter stated:

However, we note that the Congress provided a phase-in period for the new incentive formula that covers FFY 2000. In addition, the performance standards established by the Secretary that are used to impose penalties are not effective until FFY 2001. In recognition of this phase-in for both incentive and penalty performance standards, we believe that a determination was warranted that the incompleteness or unreliability of these 23 states' data for FFY 2000 was of a technical nature that did not adversely affect the determination of the performance measures.

Therefore, in accordance with the authority granted the Secretary, no state will be subject to a penalty for unreliable or incomplete FFY 2000 data. This determination applies only to unreliable or incomplete FFY 2000 data and does not affect potential data or performance penalties that may result from states' FFY 2001 or subsequent years' data or performance.

Ex. NH-14, at 2. The letter cautioned states that (among other things) they would be subject to penalties for poor performance as of FY 2001:

If the Secretary finds for FFY 2001 that a state either failed to achieve the level of performance required on any of the three penalty performance standards for paternity establishment, support order establishment, and current collections in 45 CFR 305.40, or that the state's FFY 2001 data were unreliable or incomplete, the state would have to correct any data deficiency and meet the applicable performance standard(s) during the succeeding year, FFY 2002. If the state has either unreliable or incomplete data or fails to meet the same performance standard(s) for the corrective action year FFY 2002, a penalty of 1 percent of its Federal TANF funds for FFY 2003 will be assessed.

Id. at 2-3.

2. Arguments and analysis

New Hampshire challenges ACF's determination that New Hampshire's submission of unreliable FY 2000 PEP data prevented calculation of its PEP for FY 2001, on the basis that ACF determined to disregard the unreliability of those data. New Hampshire argues that under the terms of the statute, that determination meant that the unreliability was of a technical nature that did not adversely affect the calculation of the performance measures, and that this determination applied to the data relating to out-of-wedlock births that New Hampshire reported at line 5 of its OCSE-157 for FY 2000 and which were used only to calculate New Hampshire's IV-D PEP for FY 2001.

New Hampshire argues that the December 19, 2001 letter explained that the determination to disregard the data unreliability meant that the State "will be determined to have submitted adequate data for penalty purposes," that "no state will be subject to a penalty for unreliable or incomplete FFY 2000 data," and that states "will be subject to penalties for poor performance as of FFY 2001, based on data reported for FFY 2001." New Hampshire Supp. Br. at 7, citing Ex. NH-14 (New Hampshire's emphasis). New Hampshire argues that it reported the data at issue as part of its data submission for FY 2000 and not FY 2001, and is thus being penalized for the unreliability of its FY 2000 data and not for its performance or data during FY 2001, contrary to the letter's assurances.

ACF argues that the decision to disregard all FY 2000 data unreliability was made so that states would not be subject to penalties for unreliable data reported for FY 2000, one year before states were first subject to penalties for failure to meet IV-D performance measures. ACF Response to New Hampshire Supp. Br. at 15-16, 28; 45 C.F.R. § 305.42. (26) ACF argues that the December 19 letter informed states that the determination to disregard the unreliability of FY 2000 data did not apply to the determination of performance for FY 2001. ACF argues that the regulations and preamble establish a penalty system in which determinations of reliability and performance are made discretely for each year and apply to that year only. ACF notes that any unreliable data from one year that are needed to determine performance in the next year must be corrected and resubmitted in reliable form by the deadline for submitting that next year's data. These aspects of the penalty and incentive system, ACF argues, establish that a determination to disregard the unreliability of data submitted in a particular year does not render those data reliable for use in determining performance in a following year.

The issue is thus whether the determination to disregard any unreliability of data submitted for FY 2000 rendered those data reliable for calculating the performance measures for FY 2001. For the reasons explained below, we agree with ACF that this determination applied only to the assessment of performance for FY 2000 and did not relieve states of their responsibility to submit reliable data needed to determine performance for FY 2001, including any data from FY 2000 that were needed to calculate performance for FY 2001 but had been found to be unreliable when submitted for FY 2000.

First, we agree with ACF that data submissions, reliability determinations, and performance calculations are made yearly and apply to each year individually. The provision permitting ACF to disregard data unreliability begins by referring to data that have been found to be unreliable pursuant to the statutorily-required audits of data that states submit annually. Section 409(a)(8)(C) of the Act. States submit their data on OCSE-157 forms for discrete, individual years, and performance is calculated yearly. The regulation applies the disregard provision to a state that is "subject to a penalty," and states are subject to a penalty for each successive year of unreliable data or failing performance. 45 C.F.R. § 305.62. Thus, a determination to disregard the unreliability of data submitted on one year's OCSE-157 applies only to that year and to the calculation of that year's performance, and effectively means that the unreliability does not adversely affect the determination of the performance measures for that year only.

A determination that the unreliability of data is merely technical and does not adversely affect the calculation of performance cannot apply to a subsequent year, as state data and performances for the subsequent year are not yet known. That determination is thus necessarily linked to calculating the performance for the year of the OCSE-157 on which the data were first submitted. Here, the determination to disregard the unreliability of FY 2000 data was made shortly before the deadline for submitting FY 2001 data, before ACF was aware of states' data or performances for FY 2001. ACF could not determine at that time whether or not data unreliability, including the unreliability of data for FY 2000, would adversely affect determination of state performances for FY 2001.

Moreover, the provision permitting the disregard of data unreliability says only that the data may be considered "adequate." Application of the disregard provision does not render the subject data reliable when they are not. Language in the regulation stating that a penalty will be imposed if data in a succeeding year "are still incomplete and unreliable" does not except data that are unreliable or incomplete but have been found to be adequate. 45 C.F.R. § 305.61(a)(2).

The definition of the PEP in the statute and regulations and the content of information reported on the OCSE-157 establish that calculation of the PEP for FY 2001 required data from the preceding fiscal year. New Hampshire was aware that line 5 data from its OCSE-157 for FY 2000 were not reliable, through an exit conference following the FY 2000 DRA on February 1, 2001, and through ACF's draft and final FY 2000 DRA reports, dated July 16 and August 2, 2001, respectively. Exs. NH-19, NH-20, NH-21. The Part 305 preamble established that unreliable data from one year that are needed to determine performance for the following year, such as the data at issue here, had to be corrected and resubmitted as reliable data by the deadline for submitting the following year's data:

Section 305.32(f) specifies that States are required to submit data used to determine incentives following instructions and formats required by HHS and on Office of Management and Budget (OMB) approved reporting instruments, and sets December 31st of each calendar year as the final deadline for the submittal of State data for a fiscal year. It includes any necessary data from the previous fiscal year needed to calculate the paternity establishment percentage or any improvements over that fiscal year's performance necessary to earn incentives or avoid penalties for the current fiscal year.

65 Fed. Reg. 82,184.

Thus, where calculation of New Hampshire's PEP for FY 2001 required data from FY 2000, and the data submitted on the OCSE-157 for FY 2000 had been found to be unreliable, New Hampshire was aware that it had to resubmit corrected, reliable data by the deadline for submitting FY 2001 data, December 31, 2001. Because a determination to disregard the unreliability of data submitted for FY 2000 would apply to the determination of performance for FY 2000 only, the determination announced in the December 19, 2001 letter did not relieve New Hampshire of its obligation to submit reliable data regarding the number of children born out of wedlock in the preceding fiscal year by the deadline for submitting data for FY 2001.

Additionally, the basis for the determination to disregard the unreliability of FY 2000 data clearly applied only to the calculation of performance for FY 2000, and not for FY 2001. As the December 19, 2001 letter states, that determination, which applied to all 23 states that submitted unreliable FY 2000 data, was made so that states would not be subject to penalties for unreliable data beginning with data for FY 2000, prior to the start of the penalty process for IV-D performance in FY 2001. The letter cautioned that the determination did not apply to performance penalties that could result from performance during FY 2001. It thus did not apply to performance during FY 2001, nor to data for FY 2000 which were needed to calculate performance for FY 2001.

Thus, we conclude that the only reasonable interpretation of the regulations is ACF's: the determination to disregard the unreliability of FY 2000 data applied only to the use of those data in calculating FY 2000 performance and did not relieve states of their obligation to submit reliable data needed to calculate performance for FY 2001. Even if New Hampshire's interpretation were a reasonable one, New Hampshire did not argue that the reason it did not submit revised, reliable data was that it had determined that it did not need to. Instead, the record indicates that New Hampshire understood the need to take action to correct the data but had questions and concerns about the effort entailed that led it to delay corrective action. Moreover, New Hampshire received the December 19, 2001 letter on December 24, 2001, too close to the deadline for submitting FY 2001 data for New Hampshire to have relied on its reading of the letter in deciding not to submit reliable data on children born out of wedlock in its IV-D caseload for FY 2000 that were needed to calculate its PEP for FY 2001. Ex. NH-14. Nor has New Hampshire argued that it read the letter to mean that it did not need to submit corrected data. New Hampshire does argue that other communications with ACF led it to believe that it did not need to resubmit corrected, reliable data. We address these arguments next.

Accordingly, we conclude that ACF's decision to disregard the unreliability of FY 2000 data did not render those data reliable for the calculation of performance for FY 2001 nor relieve New Hampshire of its obligation to submit reliable data needed to calculate its PEP for FY 2001.

B. New Hampshire could not reasonably have concluded, based on ACF's communications, that it did not need to submit reliable FY 2000 PEP data to avoid a penalty.

New Hampshire argues that ACF led New Hampshire to believe that it did not need to resubmit corrected, reliable FY 2000 data. New Hampshire argues that it repeatedly sought ACF's guidance about whether to correct and resubmit its unreliable FY 2000 data, beginning shortly after it learned of ACF's findings in a FY 2000 DRA exit conference in February 2001. New Hampshire argues that ACF at first failed to respond to New Hampshire's inquiries and later replied only that New Hampshire would have to resubmit corrected data in order to qualify for incentives based on increased PEP performance in FY 2001, and failed to inform New Hampshire that it needed to resubmit corrected FY 2000 data in order to avoid a penalty. New Hampshire argues that it reasonably concluded, based on ACF's responses and failures to respond to its inquiries, that it did not need to correct the coding errors in its IV-D case records that caused its FY 2000 data to be unreliable. New Hampshire further requests that, if the Board does not reverse the penalty, then New Hampshire should be permitted to submit a revised OCSE-157 for FY 2000 containing the results of its review of IV-D cases, which New Hampshire says it would have submitted if ACF had instructed New Hampshire that it needed to do so.

However, the record does not support New Hampshire's arguments. New Hampshire's inquiries to ACF are unclear and do not establish that New Hampshire asked ACF if it had to resubmit reliable FY 2000 data in order for those data to be used to calculate its PEP performance for FY 2001. Instead, New Hampshire appears to have been asking that ACF permanently excuse the coding errors in New Hampshire's open IV-D cases that caused the data reported for FY 2000 to be unreliable, and to not count those errors in future DRAs, so that New Hampshire would not have to undertake the considerable effort that reviewing all of its open IV-D cases would entail.

New Hampshire's communications indicate that the unreliability of the data that it reported at lines 5 and 6 of its OCSE-157 for FY 2000 was due to a coding error in some of its IV-D case records that caused some children to be wrongly excluded from the totals of children born out of wedlock that New Hampshire reported at those lines. As New Hampshire explains, these children were assigned a code meaning "paternity not an issue" (which New Hampshire called "code S"), for use in the data field that its computerized IV-D system uses to record whether paternity has been established. New Hampshire's correspondence indicates this code was meant to be used for children who were born in wedlock, but had been mistakenly assigned to some children who had been born out of wedlock and thus should have been included among the data that New Hampshire reported at lines 5 and 6 of the OCSE-157. Exs. NH-20, at 2; NH-24; NH-25.

New Hampshire's inquiries to ACF about whether New Hampshire should correct the coding errors are not clear as to precisely what action New Hampshire sought from ACF. These communications proceeded as follows:

  • New Hampshire's initial inquiry, a letter to the ACF Boston area audit office dated February 21, 2001 (and re-sent on May 30, 2001), shows that New Hampshire was concerned that correcting the coding errors would cause "a temporary, invalid, and therefore, unreliable inflation of the number of paternities established on the next OCSE 157 report" and that "[a]ttempting to correct the errors for lines 5 and 6 will result in a faulty and unreliable performance measure." Exs. NH-24, NH-25. New Hampshire requested "reconsideration of data associated with those children added to NECSES [New Hampshire's computerized system] prior to 10/01/00. . . . so that if they were coded S, and therefore do not appear on lines 5 or 6, and should have been on both lines, that we be permitted to leave them coded S. We will delay implementation of any corrective action pending your decision regarding this request." Id. New Hampshire also opined that the errors did not affect New Hampshire's PEP ratio because they caused cases to be excluded from both lines 5 and 6. Id.


  • In its next communication, a July 24, 2001 e-mail to an ACF official, New Hampshire asked "whether an error in a case is restricted to [the] given fiscal year or for the life of a case." Ex. NH-23, at 1. New Hampshire quoted ACF's response, in the Part 305 preamble, to a comment that "[t]he rule is unclear whether an error in a case applies to the 'life of the case' or is restricted to a given fiscal year. We recommend that the error be restricted to a given fiscal year." ACF replied in the preamble that "[a]n error in a case is restricted to a given fiscal year." 65 Fed. Reg. 82,207. The e-mail from New Hampshire asked if "failing the DRA on paternity coding in 2000 is restricted to those cases (and only year 2000 cases)? Does it mean that if we submit an acceptable corrective action plan and implement it going forward that we cannot be penalized or suffer loss of incentives for cases in year 2000 and before?" Ex. NH-23, at 1; New Hampshire Supp. Br. at 4. ACF did not respond to New Hampshire's e-mail inquiry.


  • Finally, in its July 26, 2001 comments on the draft FY 2000 DRA report, New Hampshire repeated its earlier explanation of the problems that caused children born out of wedlock to be excluded from the data that New Hampshire reported on its FY 2000 OCSE-157, and again opined that these errors did not affect the calculation of New Hampshire's PEP. New Hampshire then asked that it not have to correct the coding errors in its open IV-D cases:

    We specifically ask that you reconsider the treatment of cases coded "S" which do not, but should, appear on lines 5 or 6, by permitting New Hampshire to leave them coded "S" with no negative consequence for future DRAs. All field staff have been provided a training guide on paternity codes to reduce the occurrence for data entry errors in the future. We have also established plans to re-train our workers on the use of paternity codes, and we are re-writing policy for release to field staff. We have already addressed the problem on over 1,300 cases in our automated system where the "S" code was used inappropriately, and the change could be made centrally without a need to review the individual case files. A complete fix would involve the manual review of New Hampshire's entire caseload, less the 1,300 cases fixed on the system. Absent responses to our previous letters and now receipt of the draft DRA findings, we have made plans for such a review and will begin within days, despite the cost in dollars and ongoing service levels provided to clients. It is questionable whether a total case review can be completed before the conclusion of the current fiscal year.

    Ex. NH-20, at 3. New Hampshire later reported in an e-mail to ACF dated December 31, 2001 that its staff spent approximately six weeks at the end of FY 2001 reviewing and where necessary correcting the paternity coding on all of the open cases, over 37,000 in number. Ex. NH-13, at 1.

  • ACF finally responded to New Hampshire's inquiries in a letter from the ACF Office of Audit dated November 26, 2001, advising that any request for reconsideration "of your situation" would have to be made directly to the OCSE Commissioner. Ex. NH-15, at 1. In response to New Hampshire's concern that correcting the coding problem in its case records would cause an inflation in the number of paternities established and result in faulty and unreliable performance measures for three years, the letter states:

    However, in accordance with the preamble of the "Incentives, Penalty and Audit Final Rule" in AT-01-01 [Part 305], you may need to correct and resubmit your FY 2000 data if you need to demonstrate improvement which would qualify for incentives. However, if your State will otherwise achieve the minimum performance level without showing an increase over the prior year, the correction to the FY 2000 data would be unnecessary. By resubmitting the prior year's data, this will not result in an inflation of the data for this year.

Id. at 2.

While it is regrettable that ACF did not respond earlier to New Hampshire's inquiries, the correspondence from New Hampshire indicates that New Hampshire understood that the coding errors from periods prior to October 1, 2000 (that is, through the end of FY 2000) would have to be corrected unless it obtained an assurance from ACF that those errors would not be treated as errors in any future audit or would otherwise be disregarded. That New Hampshire chose to delay corrective action in hopes of obtaining such an assurance did not change its recognition that, absent such an assurance, it was obligated to correct those errors.

New Hampshire argues that ACF's advice in the November 26, 2001 letter from the ACF Office of Audit misled New Hampshire into believing that it did not have to submit corrected FY 2000 data to avoid a penalty based on its FY 2001 PEP, because the letter addressed only incentives, and not penalties, as if the only reason New Hampshire would have to resubmit reliable FY 2000 data would be to qualify for incentives based on improvements in FY 2001. However, New Hampshire provided no evidence that it abandoned its corrective action efforts or determined not to resubmit the FY 2000 data based on what was said in the letter, rather than for some other reason. New Hampshire's own correspondence suggests that it did not expect to be able to make the corrections in a timely manner.

In any event, even if New Hampshire did show that its failure to resubmit the data was attributable to its reliance on the November 26, 2001 letter, we would conclude that that reliance was unreasonable. First, New Hampshire did not show that the Office of Audit has the authority to set ACF policy on whether data needed to be resubmitted (or on any of the other issues raised). Second, to the extent that the Office of Audit letter reflects a view that correcting the FY 2000 data would be important only if New Hampshire wanted to qualify for an incentive payment, that view is in conflict with the statute and regulations, which contemplate that the IV-D PEP will be calculated using data on children born out-of-wedlock from the preceding year and that all of the data used for the performance measures in a particular year are subject to being audited for reliability. Sections 452(a)(4),(g)(2), 454(15)(B) of the Act; 45 C.F.R. §§ 305.1(i),(j), 305.2(a)(1), 305.32(f), 305.60, 305.65. New Hampshire knew from the definition of the IV-D PEP measure that calculation of its IV-D PEP for FY 2001 required data on children born out-of-wedlock for FY 2000 that had been found to be unreliable, and also that unreliable line 5 data had to be resubmitted to permit calculation of the next year's PEP based on reliable data.

New Hampshire's inquiries broadly sought a permanent disregard of the coding errors in any of its open IV-D cases. This is apparent from New Hampshire's requests that it not have to review all of its open IV-D case records to uncover and correct the coding errors and that it instead be permitted to leave them coded as they were "with no negative consequence for future DRAs," and its request for "reconsideration of data associated with those children added . . . prior to 10/01/00. . . . so that if they were coded S, and therefore do not appear on lines 5 or 6, and should have been on both lines, that we be permitted to leave them coded S." Exs. NH-20, at 3; NH-24, NH-25. Such a permanent disregard of errors is nowhere authorized in the statute or regulations.

To the extent that New Hampshire was requesting that ACF determine that New Hampshire's unreliable FY 2000 data were adequate for FY 2000 purposes, that request was effectively granted when ACF made that determination for unreliable FY 2000 data submitted by 23 states. The communications concerning New Hampshire's coding error, which took place before that determination, do not address the effect of that determination on New Hampshire's use of the unreliable data for FY 2000 to calculate its PEP for FY 2001, and do not otherwise provide a basis for New Hampshire to have concluded that it did not need to resubmit reliable data for FY 2000 for that data to be used to calculate its PEP for FY 2001.

We also note that New Hampshire's inquiries were to a certain extent based on erroneous premises. First, the fact that New Hampshire's data entry errors affected both lines 5 and 6 does not mean, as New Hampshire argued in its correspondence, that they would not affect the PEP calculation. (27) Second, the preamble language regarding the effect of "case errors" in a future fiscal year does not, in context, refer to the type of data entry errors at issue here. Third, New Hampshire did not explain its assertion that correcting the coding error in cases other than the 1,300 that had already been corrected electronically required a manual review of all 37,000 cases in its IV-D caseload, rather than only those coded "S" (which could be identified through its computerized system).
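A simple arithmetic illustration of the first point, using hypothetical counts of our own, shows why excluding the same cases from both lines need not leave the ratio unchanged:

    # Hypothetical counts; excluding the same cases from both totals still
    # changes the ratio unless the ratio is already 1 or the excluded cases
    # happen to have the same ratio themselves.
    line6_total = 8000    # out-of-wedlock children with paternity established
    line5_total = 10000   # out-of-wedlock children in the IV-D caseload
    excluded = 500        # cases wrongly coded "S" and dropped from both totals

    pep_as_reported = (line6_total - excluded) / (line5_total - excluded)   # roughly 0.789
    pep_corrected = line6_total / line5_total                               # 0.800
    print(pep_as_reported, pep_corrected)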

Thus, we conclude that New Hampshire's argument based on its communications with ACF has no merit.

Accordingly, we sustain the determination that New Hampshire is subject to a penalty.

New Mexico Human Services Department, Docket No. A-04-52

The New Mexico Human Services Department (New Mexico) appeals a penalty of $946,877. ACF determined that New Mexico is subject to a penalty on the grounds that PEP data that New Mexico submitted for FYs 2001 and 2002 did not meet the data reliability standard. Ex. NM-3.

New Mexico argues that it did not submit unreliable data with respect to the same performance measure for two consecutive years because it switched from using the Statewide PEP measure for FY 2001 to the IV-D PEP performance measure for FY 2002. Thus, New Mexico argues, the data reliability errors that ACF identified for FY 2002, at lines 5 and 6 of New Mexico's FY 2002 OCSE-157, the lines used by IV-D PEP states, were not the second year of failure required for a penalty, but rather the first time that New Mexico had submitted unreliable data for those lines. New Mexico argues that before being subject to a penalty it should be given a corrective action year during FY 2003 with respect to its unreliable IV-D PEP data for FY 2002.

New Mexico argues that a state is subject to a penalty only for consecutive identical deficiencies, citing 45 C.F.R. § 305.66 ("[a] consecutive finding occurs only when the State does not meet the same criterion or criteria"), and the Part 305 preamble at 65 Fed. Reg. 82,192 ("[a] new corrective action year will be triggered by a data deficiency or performance failure under a different criterion than was cited in the prior penalty notice"). New Mexico argues that although the IV-D PEP and the Statewide PEP both measure paternity establishment, they are not identical or interchangeable. In this regard, New Mexico notes that ACF considers the IV-D PEP and Statewide PEP data universes sufficiently different to justify different auditing methods, counting exclusion errors against IV-D PEP data but not against Statewide PEP data. New Mexico argues that deficiencies in its Statewide PEP data for FY 2001 had no bearing on the errors in its IV-D PEP data for FY 2002.

ACF characterizes the distinction advanced by New Mexico as semantic rather than substantive, and argues that the actual distinction made by the statute and regulations for the purpose of imposing penalties based on consecutive year failures is among the three performance measures established for imposing penalties (and, for data purposes, among the five performance measures established for awarding incentives). ACF also argues that there is nothing in the statute supporting the notion that consecutive data failures warranting a penalty must be with respect to the same lines in the OCSE-157 form, and that the statute imposes penalties for two consecutive years of unreliable data, which ACF says covers all data submitted by a state.

We concur with ACF that a state's penalty liability for two consecutive years of unreliable PEP data is not limited to data specific to only one type of PEP measure, and is not abrogated or delayed by a state's decision to switch from one to the other. In either case, unreliable data will render impossible the calculation of the state's PEP. Additionally, as ACF argues, New Mexico's reading could permit a state to avoid liability for unreliable data by switching between the two PEP measures from year to year.

The statute and regulations provide for a penalty if a state submits unreliable data in one year, and, with respect to the succeeding fiscal year, the data submitted by the state are unreliable. That was the case here. (28) The deficiency that needs to be corrected to avoid a penalty is the failure to submit reliable data, and not the presence of any particular type of error.

Earlier, in addressing arguments raised by Alabama, we concluded that under the statute, regulations and preamble, a state is subject to a penalty for two consecutive years of unreliable data or deficient performance with respect to the same performance measure. In either case, the state has failed to demonstrate with reliable data that it met the required level of performance. While we recognize that both the IV-D PEP and the Statewide PEP are referred to as measures in section 305.2(a)(1), elsewhere the term performance measure is used to refer to the PEP as one of the five performance measures established by the statute for awarding incentives, of which three are also used to assess penalties. The preamble consistently refers to the PEP as one of the five incentive and three penalty performance measures, with no distinction between the IV-D and the Statewide PEPs as different incentive or penalty performance measures. 65 Fed. Reg. 82,183, 82,184, 82,187, 82,201, 82,210.

Accordingly, we sustain the determination that New Mexico is subject to a penalty.

Rhode Island Department of Human Services, A-04-53

The Rhode Island Department of Human Services (Rhode Island) appeals a penalty of $945,007. ACF determined that Rhode Island is subject to a penalty on the grounds that PEP data that Rhode Island submitted for FYs 2001 and 2002 did not meet the data reliability standard. Ex. RI-2.

Rhode Island argues that the unreliability of its data should be disregarded as technical in nature because it resulted from computer programming errors and because the ACF auditors relied on information from Rhode Island's IV-A agency that was not available to its IV-D agency when it submitted its data. Rhode Island also disputes two of ACF's data error determinations for FY 2002.

We find that the data errors were substantive and that Rhode Island has not demonstrated that those errors should be disregarded as merely technical in nature, and we sustain the penalty. As explained below, we do not rule on Rhode Island's dispute of two data error determinations for FY 2002.

A. Unreliability of FY 2001 data resulting from computer programming errors was not merely technical in nature.

Rhode Island argues that exclusion errors for FY 2001 resulted because the State's IV-D computer system was not programmed to include, among the data that Rhode Island reported on the OCSE-157, children who turned 18 during the fiscal year, or children for whom the IV-D agency was seeking only medical support (health insurance) but not child support. Rhode Island argues that it was not aware of these programming flaws prior to the FY 2001 DRA. Rhode Island does not dispute ACF's determination that these cases should have been included in its data.

Rhode Island argues that its programming errors were technical in nature and did not affect, or had at most a minimal impact on, the calculation of its PEP, because the wrongly excluded cases were only a small fraction of the IV-D caseload, as evidenced by the absence of such cases from the sample that ACF selected in auditing Rhode Island's data for FY 2000 and by the fact that Rhode Island was able to fix the computer problems during the audit of FY 2001 data. Rhode Island also asserts that the exclusion of these cases from its data did not affect the performance of its IV-D program. Rhode Island Supp. Br. at 4-5. Rhode Island asserts that if ACF had not counted these exclusions from its data as errors, its FY 2001 PEP data would have met the data reliability standard.

We sustain ACF's findings for FY 2001 because Rhode Island has not shown that the exclusion of these cases did not adversely affect the determination of its PEP. Rhode Island asserts that their effect on the PEP was minimal, but provides no evidence or analysis supporting that assertion. Moreover, Rhode Island does not explain what it means by a minimal effect, and the mere fact that such cases were not included in ACF's sample for FY 2000 does not by itself establish that their effect was minimal. Rhode Island also does not allege that it would have met the required PEP performance level for FY 2001 if these excluded cases had not been counted as errors.

Even if Rhode Island had shown, which it did not, that the exclusion errors did not alter its PEP enough to affect whether Rhode Island met the required performance level, any alteration of its PEP, however minimal, would, as ACF argues, affect Rhode Island's performance relative to other states and thus compromise the accuracy of the system for determining incentives, which is based on relative state performances.

Rhode Island's inability to submit reliable data was not rendered technical in nature simply because the computer programming errors might not have been symptomatic of a failure to deliver IV-D services, as Rhode Island alleges. It appears that these cases were wrongly excluded due to systemic errors in Rhode Island's computer programming that compromised its ability to report accurate IV-D performance data and were not uncovered prior to ACF's audit of the data that the computer system produced. The Act requires each state to operate an automated data processing and information retrieval system capable of providing data needed to calculate the IV-D performance measures and makes failure to submit reliable data a distinct basis for a penalty apart from any consideration of whether a state is delivering IV-D services or otherwise complying with IV-D requirements. Sections 409(a)(8)(A), 454(15)(B), 454(16), 454A, 458(b)(5)(B) of the Act. Audits look not only at a state's data, but also at a state's compliance with the requirements to maintain automated systems. 45 C.F.R. § 305.60(c)(1). These provisions signal the importance of proper programming to generate reliable data, which are necessary for ACF to make incentive and penalty determinations based on data submitted by a state without having to conduct physical reviews of the underlying case records. Finally, as ACF points out, the 95% data reliability standard already tolerates truly minimal errors, whereas the computer programming flaws at issue here undermined the reliability of Rhode Island's data and thus any determination of performance based on those data.

In arguing that the data errors did not affect its performance of IV-D functions, Rhode Island also cites 45 C.F.R. § 305.63. Rhode Island Supp. Br. at 5. That section outlines the standards for determining whether a state is in substantial compliance with IV-D requirements. That is not the basis of the penalty here, however; this penalty is based on the failure to fulfill the requirement to submit reliable data, an integral part of the IV-D performance incentive and penalty system.

Thus, we conclude that Rhode Island has shown no basis to reverse the finding that it submitted unreliable FY 2001 PEP data or to disregard the unreliability of those data.

B. ACF's use of information from Rhode Island's IV-A agency in auditing Rhode Island's FY 2002 data was not improper.

Next, regarding its FY 2002 data, Rhode Island argues that ACF should not have assessed data errors based on information that ACF obtained from Rhode Island's IV-A agency, in April 2003, that Rhode Island's IV-D agency did not have when it submitted its data. See Ex. RI-5, at 2-3, comments on draft FY 2002 DRA report. According to the FY 2002 DRA report, Rhode Island failed to include, in its report of children in the IV-D caseload born out of wedlock whose paternity had been established (line 6 of the OCSE-157), children whose fathers were identified on their birth certificates. Ex. RI-4, at 5. Birth records containing paternity data for these children were not in Rhode Island's automated IV-D system or IV-D case files and were located after Rhode Island's IV-D agency referred the ACF auditors to the IV-A agency, which had some birth records that had not been transferred to the IV-D agency. Birth records for other excluded children were found through "the IV-D/Vital records interface." Id.

Rhode Island does not dispute that paternity of these children had been established, based on the birth certificates. Instead, Rhode Island argues that data errors should not be assessed because its IV-D agency did not have the birth certificates and was thus unaware that the children's paternity had been established when it submitted Rhode Island's FY 2002 data. Rhode Island argues that ACF was wrong to utilize those records because "reliable data" is limited to "the most recent data available" that are reliable. 45 C.F.R. § 305.1(i). Rhode Island asserts that its PEP data for FY 2002 would have met the data reliability standard if the auditors had limited their review to information that was available to the IV-D agency when it submitted the FY 2002 data. Rhode Island also argues that the regulations do not provide for ACF to use information held by third parties in assessing the reliability of data. Rhode Island further argues that ACF would have been unable to assess errors if Rhode Island's IV-D agency had refused to retrieve the birth certificates and that Rhode Island is thus being penalized for cooperating with the audit. Rhode Island Supp. Br. at 7. In its comments on ACF's draft FY 2002 DRA report, Rhode Island described the ongoing and "frustrating" difficulties its IV-D agency encountered in attempting to obtain information from its IV-A agency, efforts which at that time had not come to fruition. Ex. RI-5, at 2-3.

Rhode Island does not argue that the materials ACF consulted did not exist when Rhode Island submitted its data for FY 2002, and it has not established that the information was unavailable when it submitted its OCSE-157. That some of these materials (birth certificates) were in the possession of Rhode Island's IV-A agency but had not been obtained by its IV-D agency did not mean that they were not available. The record indicates that Rhode Island was aware that its IV-A agency likely possessed information that it needed to assure that its IV-D data were reliable. Personnel of Rhode Island's IV-D agency told the ACF auditors where they could obtain this information. Ex. RI-4, at 5.

We do not agree with Rhode Island that its IV-A agency was a "third party" from which ACF was not authorized to obtain information during the DRA (Part 305 does not forbid ACF from utilizing information from third-party sources). In other contexts, as noted earlier, the Board has held that a state as a whole must be viewed as a single unit responsible for the administration of grant programs. That principle is applicable here, where it is not disputed that the needed records were within Rhode Island's possession.

A state's difficulty in sharing information among components of its executive branch does not permit it to avoid its responsibility to submit reliable data on a timely basis. The regulations outlining the requirements for state computerized support enforcement systems require those systems to be able to share information with other state agencies, including those administering programs under title IV-A. 45 C.F.R. § 307.11(b), (f). Additionally, Rhode Island's arguments about the difficulty it had obtaining birth certificates do not explain its failure to utilize the information that ACF reported obtaining from the IV-D/Vital records interface. Ex. RI-4, at 5. Clearly that information was available not only to the state as a whole, but to the IV-D agency.

Thus, we conclude that it was proper for ACF to obtain from Rhode Island's IV-A agency the information that it needed to assess the reliability of Rhode Island's IV-D data.

C. We do not rule on Rhode Island's dispute of ACF's FY 2002 error determinations for two children.

Rhode Island disputes ACF's determination that Rhode Island improperly included, among the total of children in IV-D cases who were born out of wedlock (line 5 of the OCSE-157 for FY 2002), two children who were born in wedlock. Rhode Island asserts that it was justified in classifying these two children, who are twins, as having been born out of wedlock because a court had determined that the father who was listed on the birth certificates and had been married to the mother at the time of birth was not their biological father, and because the mother had identified a different man as their biological father when she applied for IV-A benefits. ACF asserts that the court ruling that Rhode Island cites was insufficient under Rhode Island law to rebut the presumption of paternity that applied to the father listed on the birth certificates.

ACF acknowledges that a determination that the two children were born out of wedlock would cause the data that Rhode Island reported at line 5 of the OCSE-157 for FY 2002 to meet the data reliability standard, but asserts that Rhode Island would still be subject to a penalty for FYs 2001 and 2002 based on two years of unreliable PEP data, because its data at line 6 of the OCSE-157 for FY 2002 still failed to meet the data reliability standard. ACF Response to Rhode Island Supp. Br. at 16. Rhode Island does not dispute that conclusion. Accordingly, it is not necessary for us to rule on ACF's determination that these two children should have been reported as having been born in wedlock.

However, reversal of the unreliability finding for line 5 could affect a subsequent penalty that ACF has assessed for FY 2003, on the basis that the unreliability of line 5 data from FY 2002 prevented ACF from calculating Rhode Island's PEP for FY 2003. ACF assessed that penalty in a letter dated February 17, 2005, and Rhode Island's appeal of that penalty has been assigned Board Docket No. A-05-47 and stayed pending this decision. The parties may in that appeal fully address ACF's determination that the two children should not have been included in the data that Rhode Island reported at line 5 of its OCSE-157 for FY 2002 and whether reversal of that determination would affect the penalty assessed on Rhode Island for FY 2003.

Accordingly, we sustain the determination that Rhode Island is subject to a penalty.

Conclusion

After fully considering the common issues and the issues raised by individual States' appeals, we sustain the penalties in full for all nine States.

JUDGE

Judith A. Ballard

Cecilia Sparks Ford

Donald F. Garrett
Presiding Board Member

FOOTNOTES

1. The amounts of the penalties appealed are:

Alabama Department of Human Resources: $532,692
Delaware Department of Health and Social Services: $293,489
District of Columbia Office of the Corporation Counsel: $701,495
Hawaii Department of the Attorney General: $921,048
Kansas Department of Social and Rehabilitation Services: $807,487
Louisiana Department of Social Services: $1,096,723
New Hampshire Department of Health and Human Services: $385,213
New Mexico Human Services Department: $946,877
Rhode Island Department of Human Services: $945,007

2. The States each argue that the requirement to replace federal IV-A penalty funding reductions with state funds doubles the fiscal impact of the penalties. See, e.g., DC Supplemental Brief (Supp. Br.) at 2. This may be true, but is required by the statute and regulation, by which we are bound. The States provided no basis to ignore this requirement, and do not challenge ACF's determination of the amount of penalties, which are the minimum one percent provided in the statute.

3. Part 305 does afford a state the opportunity to comment on draft audit reports, giving the state some notice of problems that may lead to a penalty and an opportunity to present its views before a determination is made that the state is subject to a penalty. 45 C.F.R. § 305.64.

4. Because we find that the States' interpretation of the regulations is not supportable, a federal court decision that the States cite addressing the consequences of a federal agency's failure to follow its own regulations is not applicable here. States Joint Br. at 20. The States also cite Supreme Court decisions to the effect that federal agencies must clearly state conditions for the receipt of federal funding. States Joint Reply Br. at 2, 13. Those principles are not implicated here, where the Act and regulations clearly presented the conditions that states had to meet to avoid penalties and earn incentives.

5. Section 454(15)(B) requires that a state's IV-D plan have "a process of extracting from the automated data processing system . . . and transmitting to the Secretary data and calculations concerning the levels of accomplishment (and rates of improvement) with respect to applicable performance indicators (including paternity establishment percentages) . . ." The regulation requires states to submit data used to determine incentives and penalties, following instructions and formats as required by HHS, by December 31st after the end of the fiscal year, and provides that only data submitted as of December 31st will be used to determine the state's performance for that fiscal year. 45 C.F.R. § 305.32. The States do not argue that they were not aware of these requirements.

6. The record does not contain a draft FY 2001 DRA report for New Hampshire, and its final FY 2001 DRA report, dated April 15, 2002, does not refer to a draft report for FY 2001.

7. The States characterize the November 14, 2003 letters as suggesting that the DRA reports and other correspondence from ACF fulfilled the notice requirements of 45 C.F.R. § 305.66. The States argue that the DRA reports and other communications were insufficient for that purpose. Since we reject the States' argument that the regulation required notice of performance or data failures prior to the time that a state is subject to a penalty, we need not address this argument. Nevertheless, as we explained above, the draft and final DRA reports did provide the States notice of their data reliability problems during the corrective action year.

8. States formerly had the opportunity to submit a corrective action plan with respect to IV-D penalties under former section 403(h)(1) of the Act. As amended by the Child Support Enforcement Amendments of 1984, Public Law No. 98-378, that section mandated reductions in federal funding for a state's AFDC program if the state was found not to have complied substantially with IV-D requirements. Rather than directing the immediate imposition of a penalty on a state that failed an audit for the first time, the statute provided that the penalty could be suspended while the state was given an opportunity to bring itself into compliance through a corrective action plan approved by OCSE. Former section 403(h)(2)(A)-(C) of the Act. If a follow-up review of a state's performance during the corrective action period showed that the state still had not achieved substantial compliance, a penalty of one to two percent would be imposed. Former section 403(h)(1)(A) of the Act; see former 45 C.F.R. § 305.99 (1998).

9. The Board held that this principle was apparent from the definition of "grantee" then at former 45 C.F.R. § 74.3. That principle is still applicable in the current Part 74, which defines the comparable term "recipient" as--

the government to which an HHS awarding agency awards funds and which is accountable for the use of the funds provided. The recipient in this case is the entire legal entity even if only a particular component of the entity is designated in the award document.

45 C.F.R. § 74.2. This definition applies to entitlement programs listed at 45 C.F.R. §§ 92.4(a)(3), (a)(7), and (a)(8), including the Child Support Enforcement Program. Id.; see also 45 C.F.R. § 301.15(e), applying provisions of Part 74 to title IV-D programs.

10. Section 410(b) of the Act provides that the Board will decide an appeal "not less than 60 days after the date the appeal is filed." The States acknowledge that provision, but note that the legislative history of section 410 describes it as requiring a decision within 60 days. The States request a "prompt determination," based on their belief that the penalties were not assessed in accordance with law. States Joint Br. at 3. While the Board seeks to act as promptly as its resources permit, the legislative history the States rely on does not bind us to any time frame since it conflicts with the plain language of the statute. Additionally, the regulation governing these appeals makes issuance of a decision within 60 days unfeasible, as it provides 45 days for ACF to submit its brief after receipt of an appeal, and 21 days for the appellant state to submit its reply after receipt of ACF's brief (the States here submitted a joint reply). 45 C.F.R. § 262.7(c),(d). The regulations provide no time frame for issuance of the Board's decision, but do provide that the Board may seek additional information (including at a hearing) and must conduct a thorough review of the record. 45 C.F.R. § 262.7(f). This decision resolves multiple appeals which raise many significant issues of first impression.

11. Although data for a year are due by December 31st following the end of the fiscal year, the preamble to Part 305 provides that a state may submit reliable data after that deadline when those data are needed in reliable form to calculate performance in a subsequent year -- for example, when a state that submitted unreliable data for the first of two consecutive years must show an improvement in its PEP in the second year to avoid a penalty, or when the PEP must be calculated, since that measure requires data from two consecutive years. 65 Fed. Reg. 82,184.

12. The Assistant Secretary for Children and Families (and thus ACF) has been delegated the Secretary's authority to administer the TANF and child support enforcement programs at titles IV-A and IV-D of the Act. 62 Fed. Reg. 52,133 (Oct. 6, 1997); 56 Fed. Reg. 42,332, 42,350 (Aug. 27, 1991). Thus, we refer to ACF as sharing the Secretary's authority to determine that data unreliability is of a technical nature.

13. The ratios in the regulations reflect the textual definitions at section 452(g)(2) of the Act, quoted earlier.

14. Lines 5 and 6 are used for reporting data for the calculation of the IV-D PEP ratio. Lines 1, 2, 24, 25, 28 and 29 report other data related to a state's IV-D caseload, such as cases open at the end of the FY, cases open without support orders established, total amount of current support due for the FY, total amount of support distributed as current support, and cases paying towards arrearages during the fiscal year. ACF Ex. 9.

15. A IV-D "case" is "a parent (mother, father, or putative father) who is now or eventually may be obligated under law for the support of a child or children receiving services under the title IV-D program." 45 C.F.R. § 305.1(a).

16. The seven States are Alabama, Delaware, Hawaii, Louisiana, New Hampshire, New Mexico and Rhode Island. New Mexico selected the Statewide PEP measure for FY 2001 but switched to the IV-D PEP measure for FY 2002. All seven States were found to have submitted unreliable PEP data for FY 2002. ACF Ex. 8.

17. Under title IV-D, "state" means the several states, the District of Columbia, the Commonwealth of Puerto Rico, the Virgin Islands, Guam and American Samoa. 45 C.F.R. § 301.1. The cited ACF exhibit did not list any findings for American Samoa.

18. These Act provisions hold a state noncompliant if it fails to achieve the PEP percentages and require the Secretary to establish uniform definitions; the legislative history describes the two versions of the PEP as similar.

19. Alabama, Hawaii, Louisiana, New Hampshire and Rhode Island assert that their IV-D PEP data would have met the data reliability standard in at least one of the two years at issue if exclusion errors had not been counted. ACF does not dispute these assertions. See, e.g., ACF Response to Louisiana Supp. Br. at 2. New Mexico asserts that it would have had far fewer data errors in FY 2002 had exclusion errors not been counted, but could not tell whether some other errors were exclusion errors because they were not clearly described in the DRA report. ACF asserts that it reviewed those other errors and that disregarding exclusion errors would not have made a difference. Delaware initially asserted that disregarding exclusion errors would have caused its IV-D PEP data to be reliable, but later conceded that disregarding exclusion errors would not have made a difference.

Alabama and New Hampshire assert that they would have passed the PEP performance measure if exclusion errors had not been counted and that there would have been no basis for a penalty. Hawaii, Louisiana, and Rhode Island assert that they would have passed the DRA and would not have been subject to a penalty based on unreliable data for both years, but do not allege that they would also have met the required PEP performance levels for both years. ACF does not dispute these assertions.

20. ACF's final DRA report of the data that Delaware reported on its OCSE-157 for FY 2001 shows that ACF examined a sample of records relating to 152 children, and determined that data regarding 27 sample children were in error, because Delaware wrongly reported these children in, or omitted them from, the number of children that Delaware reported on line 5, "Children in IV-D Cases Open at the End of the Fiscal Year Who Were Born Out of Wedlock." Ex. DE-10, at 4. This resulted in an efficiency rate of 82%, with a 95% confidence interval of 75-88%, below the required 95% data reliability standard. Of those 27 children, 24 were categorized as "Children Born in Wedlock in a State with a Presumptive Paternity Law." Id. Not counting those 24 children as errors would have meant that there were only 3 errors out of 152 sample children, for a passing data efficiency rate of approximately 98%. The final FY 2002 DRA report shows that, for line 5, the review of 129 sample children identified 14 errors (an efficiency rate of 89%), 9 of which were "Children Born in Wedlock." Not counting those 9 children as errors would have resulted in an efficiency rate of approximately 96%. Ex. DE-5, at 4.
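As an illustration only (our arithmetic, not a formula quoted from the DRA reports), these figures are consistent with computing the efficiency rate as the share of sample children whose reported data were not in error:

\[ \frac{152-27}{152} \approx 82\%, \qquad \frac{152-3}{152} \approx 98\%, \qquad \frac{129-14}{129} \approx 89\%, \qquad \frac{129-5}{129} \approx 96\%. \]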

21. ACF accepted DC's late, resubmitted FY 2001 PEP data for the purpose of showing improvement in the PEP in FY 2002, and did not find DC subject to a penalty based on its PEP for FYs 2001 and 2002. Exs. DC-2, DC-3. While ACF could have accepted late FY 2001 SOE data for the purpose of determining whether there was sufficient improvement in FY 2002 for DC to have met the performance level requirement, DC's unaudited data do not appear to show sufficient improvement for DC to avoid a penalty. DC asserts that its late FY 2001 SOE data established an SOE performance level for FY 2001 of 29.9%, and its FY 2002 SOE performance level was 30%, less than the five percent increase over FY 2001 required to avoid a penalty. Ex. DC-2; 45 C.F.R. § 305.40(a)(2).
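For illustration (our arithmetic; this footnote does not specify whether the required increase is relative or in percentage points), DC's reported FY 2002 level falls short on either reading:

\[ 29.9\% \times 1.05 \approx 31.4\% \quad\text{and}\quad 29.9\% + 5 \text{ percentage points} = 34.9\%, \]

both of which exceed the reported FY 2002 level of 30%.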

22. We have already addressed the PEP and the SOE. The Current Collections Performance Level is the ratio of the number of dollars collected for current support in IV-D cases to the total dollars owed for current support in IV-D cases, expressed as a percent. 45 C.F.R. § 305.2(a)(3).

23. The Arrearages Collections Performance Level is the ratio of the total number of eligible IV-D cases paying toward arrears, to the total number of IV-D cases with arrears due, and the Cost-Effectiveness Performance Level is the ratio of total IV-D dollars collected, to total IV-D dollars expended; each ratio is expressed as a percent. 45 C.F.R. §§ 305.2(a)(4),(5).
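Stated as formulas (our restatement of the definitions in this footnote and the preceding one, not additional regulatory text):

\[ \text{Current Collections} = \frac{\text{IV-D dollars collected for current support}}{\text{IV-D dollars owed for current support}} \times 100, \]

\[ \text{Arrearages Collections} = \frac{\text{eligible IV-D cases paying toward arrears}}{\text{IV-D cases with arrears due}} \times 100, \qquad \text{Cost-Effectiveness} = \frac{\text{total IV-D dollars collected}}{\text{total IV-D dollars expended}} \times 100. \]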

24. ACF auditors found that data for line 9 (the number of children born out of wedlock for whom paternity has been established) were not reported correctly. In the sample of 50 cases that ACF auditors selected in reviewing the line 9 data, comprising 45 cases from the Kansas Office of Vital Statistics and five cases from the Kansas IV-D agency, all five children that the IV-D office reported had already been reported by the Office of Vital Statistics. Ex. KS-8, at 4-5. These errors resulted in a data efficiency rate of 90% for line 9, which was below 95% and considered a "major deficiency." Id. at 1, 4. However, the upper end of the 95% confidence interval that ACF used to calculate the efficiency rate for that line was 97%, above the 95% standard for data reliability and thus a passing score. Id.
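The reported rate follows from the sample counts (our arithmetic), assuming the five duplicative IV-D cases were the only errors in the 50-case sample:

\[ \frac{50-5}{50} = 90\%. \]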

25. A state that has selected the IV-D PEP uses line 6 of the OCSE-157 to report the number of children in IV-D cases open during or at the end of the fiscal year with paternity established or acknowledged (the numerator of the PEP for the current year), and line 5 to report the number of out-of-wedlock children in IV-D cases open at the end of the fiscal year (the denominator of the next year's PEP). A state that has selected the Statewide PEP uses line 9 of the OCSE-157 to report the number of children in the state with paternity established or acknowledged during the fiscal year (the numerator of the PEP for the current year), and line 8 to report the number of children in the state born out of wedlock during the fiscal year (the denominator of the next year's PEP).
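Put as ratios (our restatement of the line assignments described in this footnote), for a given fiscal year n:

\[ \text{IV-D PEP}_n = \frac{\text{line 6 reported for FY } n}{\text{line 5 reported for FY } n-1}, \qquad \text{Statewide PEP}_n = \frac{\text{line 9 reported for FY } n}{\text{line 8 reported for FY } n-1}. \]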

26. That regulation, "Penalty phase-in," provides:

States are subject to the performance penalties described in § 305.40 based on data reported for FY 2001. Data reported for FY 2000 will be used as a base year to determine improvements in performance during FY 2001. There will be an automatic one-year corrective action period before any penalty is assessed. The penalties will be assessed and then suspended during the corrective action period.

The preamble to this section explains that--

States will be subject to penalties for poor performance as of fiscal year 2001 . . . . For example, if the Secretary finds with respect to FY 2001, that the State had either failed to achieve the level of performance required or that the State's FY 2001 data was unreliable or incomplete, then the State would be required to correct the deficiency and meet the performance measure during the succeeding year, i.e., FY 2002. If the State has either unreliable or incomplete data or fails the performance measure for the corrective action year, FY 2002, a penalty will be assessed.

65 Fed. Reg. 82,188-89.

27. Fairly simple mathematical tests show that this is not true. For example, taking 100 from both the numerator and the denominator of the ratio 1,000 to 4,000 results in the ratio 900 to 3,900, and changes the measure, expressed as a percent, from 25% to 23.1%.
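Worked out:

\[ \frac{1{,}000}{4{,}000} = 25\%, \qquad \frac{1{,}000-100}{4{,}000-100} = \frac{900}{3{,}900} \approx 23.1\%. \]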

28. To the extent that ACF's arguments -- that the statute in penalizing unreliable data does not distinguish among the different data lines on the OCSE-157 and that its imposition of a penalty for two years of unreliable data "covers all data submitted by the State" -- could be read as supporting penalties for two years of unreliable data with respect to different performance measures (e.g., unreliable PEP data followed by unreliable SOE data), ACF has rejected such a reading in the preamble, which provides that "[t]wo consecutive years of failure (either poor data or poor performance) in the same performance measure criterion will trigger a penalty imposition." ACF Response to New Mexico Supp. Br. at 6; 65 Fed. Reg. 82,192.
