Increasing Efficiency in Rule-Making with Natural Language Processing

The current manual process for sorting the public’s comments on proposed regulations is costly, inefficient, and burdensome. This team used lean startup methodology to map and identify the inefficiencies in the current process, and then tested a tool that uses natural language processing to auto-categorize the public’s comments. The first testing phase validated the approach, and the team projects that expanding the effort would save thousands of employee hours and millions of dollars.


Watch the 5-minute project presentation and pitch.

Product Summary

The goal of this project was to increase efficiency in the processing of public comments on regulations. Currently, the public submits comments on proposed regulations on the Regulations.gov website. For certain regulations, comments can number in the thousands. After the public submits its comments, agency staff and/or contractors process the comments to get them to the subject matter experts. The subject matter experts then review the pre-sorted comments to determine which ones apply to their portion of the regulation. The agency then addresses the comments in the final rule.

The current method is in need of reform, as it varies from office to office, is costly and inefficient, and is burdensome for staff. For example, for a sample Centers for Medicare & Medicaid Services rule, it took over 1,000 hours just to sort the public comments before any of them were addressed. The process is also duplicative at times: When working under tight deadlines, contractors and agency staff may perform the same sorting tasks in an effort to make sure the categorization is complete and accurate.

This project tested a tool that categorizes the comments to decrease the amount of time that contractors and staff spend sorting them. The specific tool was the Content Analyst Analytical Technology (CAAT) tool, which sorts comments after agencies pull them from FDMS.gov, the docket management system used to collect public comments. To our knowledge, no such tool is currently in use across the federal government.

The CAAT tool offers two ways to sort comments. The first is a user-defined mode: the user trains the software (“the brain”) with related sample documents, defines the categories and provides examples, feeds the comments into the tool, and runs the categorization. The second is an auto-categorization mode, in which the tool creates the categories without user input, as sketched below.
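
CAAT itself is a proprietary commercial product, so the sketch below only illustrates the two modes conceptually using open-source analogues: the user-defined mode resembles supervised text classification trained on labeled example comments, while auto-categorization resembles unsupervised clustering. The scikit-learn components, sample comments, and category names here are illustrative assumptions and are not drawn from the pilot.

```python
# Illustrative sketch only: stand-ins for CAAT's two sorting modes.
# All category names and sample comments are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# --- Mode 1: user-defined categories ("training the brain" with labeled examples) ---
training_comments = [
    "The proposed payment rates will hurt rural hospitals.",
    "Please clarify the reporting deadline in section 3.",
    "The quality measures overlap with existing programs.",
]
training_labels = ["payment", "reporting", "quality"]  # categories defined by the user

vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(training_comments)
classifier = LogisticRegression(max_iter=1000).fit(X_train, training_labels)

# New comments pulled from the docket are then assigned to the user-defined categories.
new_comments = [
    "When are the new reports due?",
    "These rates are too low for small clinics.",
]
predicted = classifier.predict(vectorizer.transform(new_comments))
print(list(zip(new_comments, predicted)))

# --- Mode 2: auto-categorization (no user-defined categories) ---
# The tool groups similar comments on its own; k-means clustering is one analogue.
all_comments = training_comments + new_comments
X_all = vectorizer.fit_transform(all_comments)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_all)
print(list(zip(all_comments, clusters)))
```

In practice, the user-defined mode trades upfront effort (building labeled examples) for categories that match how subject matter experts actually divide a rule, while auto-categorization requires no setup but leaves staff to interpret the groupings it produces.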

The categorization tool project produced successful results in its first testing phase with HHS Ignite support, with projected savings of millions of dollars for just one pilot agency. The tool demonstrated the potential to save time and money and increase staff satisfaction, with measured accuracy rates. This project can be replicated and scaled not only across HHS, but also across the whole federal government.

Team Photo

Team Members

Oliver Potts (Project Lead), HHS Office of the Secretary
Katerina Horska, HHS Office of the Secretary
Sheila Bayne, HHS Office of the Secretary
Emma Di Mantova, HHS Office of the Secretary
Mindy Hangsleben, HHS Office of the National Coordinator for Health IT
Jim Wickliffe, Centers for Medicare & Medicaid Services
Martique Jones, Centers for Medicare & Medicaid Services
Craig Lafond, HHS Office of the Secretary
Kristin Tensuan, Environmental Protection Agency
Bryant Crowe, Environmental Protection Agency

Project Lead’s Approving Supervisor:
Jennifer Cannistra, Executive Secretary, Immediate Office of the Secretary, HHS Office of the Secretary


HHS Ignite

HHS Ignite is the IDEA Lab’s incubator for Department staff with ideas on how to modernize government. Selected teams are introduced to startup methodologies for problem identification and project implementation. In the entrepreneurial spirit, Ignite projects are iterative, their impacts measurable, and their solutions scalable. This is one of 13 projects that participated in the beta year of Ignite, which ran from June 2013 to February 2014.
