Increasing Efficiency in Rule Making with Natural Language Processing
The goal of this project was to increase efficiency in the processing of public comments on regulations. Currently, the public submits comments on proposed regulations through the Regulations.gov website; for certain regulations, comments can number in the thousands. After the comment period closes, agency staff and/or contractors process the comments to route them to the appropriate subject matter experts, who then review the pre-sorted comments to determine which apply to their portion of the regulation. The agency then addresses the comments in the final rule.
The current method is in need of reform: it varies from office to office, is costly and inefficient, and burdens staff. For one sample Centers for Medicare & Medicaid Services rule, for example, sorting the public comments alone took over 1,000 hours before any comments were addressed. The process is also duplicative at times: when working under tight deadlines, contractors and agency staff may perform the same sorting tasks in an effort to ensure the categorization is complete and accurate.
This project tested a tool that categorizes comments to decrease the time contractors and staff spend sorting them. The tool tested was the Content Analyst Analytical Technology (CAAT) tool, which sorts comments after agencies pull them from FDMS.gov, the docket management system used to collect public comments. To our knowledge, no such tool is currently in use across the federal government.
The CAAT tool offers two methods of sorting comments. The first is a user-defined function in which the user trains the software (“the brain”) with related sample documents, defines the categories and provides examples, feeds the comments into the tool, and runs the categorization. The second is an auto-categorization function in which the tool creates the categories without user input.
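To illustrate the user-trained workflow described above, the sketch below shows the general idea in Python: define categories, provide labeled example comments, then assign each new comment to the best-matching category. This is not the actual CAAT algorithm; the class name, categories, sample comments, and word-overlap scoring are all hypothetical, chosen only to make the train-then-categorize steps concrete.

```python
# Hypothetical sketch of a user-trained comment categorizer (not CAAT itself).
import re
from collections import Counter

def tokenize(text):
    """Lowercase a comment and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class CommentCategorizer:
    def __init__(self):
        # category name -> Counter of word frequencies from its training examples
        self.profiles = {}

    def train(self, category, examples):
        """Define a category and feed it related sample documents."""
        profile = self.profiles.setdefault(category, Counter())
        for doc in examples:
            profile.update(tokenize(doc))

    def categorize(self, comment):
        """Score the comment against each trained category by word
        overlap with its profile, and return the best-matching category."""
        words = tokenize(comment)
        def score(category):
            profile = self.profiles[category]
            return sum(profile[w] for w in words)
        return max(self.profiles, key=score)

# Hypothetical usage: two categories from an imagined proposed rule.
brain = CommentCategorizer()
brain.train("payment rates", [
    "The proposed payment rate is too low.",
    "Reimbursement rates should reflect costs.",
])
brain.train("reporting burden", [
    "The reporting requirements add paperwork.",
    "Quarterly reports are a burden on small clinics.",
])
print(brain.categorize("These payment rates will not cover our costs."))
# → payment rates
```

An auto-categorization mode, by contrast, would cluster the comments without any user-supplied categories or examples; the tradeoff is less control over how the resulting groups map to sections of the rule.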
The categorization tool project produced successful results in its first testing phase with HHS Ignite support, demonstrating potential savings of millions of dollars for just one pilot agency. The tool showed it could save time and money, increase staff satisfaction, and do so with measured accuracy rates.
A project supported by the HHS Ignite Accelerator
Oliver Potts (Project Lead), OS
Katerina Horska, OS
Sheila Bayne, OS
Emma Di Mantova, OS
Mindy Hangsleben, ONC
Jim Wickliffe, CMS
Martique Jones, CMS
Craig Lafond, OS
Kristin Tensuan, EPA
Bryant Crowe, EPA
July 2013: Selected into the HHS Ignite Accelerator
August 2013: Time in the Accelerator began
February 2014: Time in the Accelerator ended
Jennifer Cannistra, Executive Secretary, Immediate Office of the Secretary, OS