Since its inception in 2002, the Program Assessment Rating Tool (PART) has received both praise and criticism from stakeholders, researchers, and Congress. Let's explore the pros and cons of the PART:
- The PART increased attention paid to program results.
Since the PART was initiated in 2002, stakeholders have paid increased attention to program performance. Department of Education budget analysts and program staff, along with staff from OMB, have worked together to assess individual programs. As a result, many programs have created short-term and long-term performance measures that keep them continually focused on demonstrating results.
- The PART assessment asks some useful questions that highlight program management and design issues.
Overall, the PART assessment asks some useful and important questions. Examining 93 programs at the Department of Education and over 1,000 programs government-wide through the four PART sections (Purpose & Design, Strategic Planning, Management, and Results) has helped to reveal weaknesses and capitalize on strengths. PART assessments have also provided Congress with direction when reauthorizing legislation and helped the Department of Education improve strategic planning and management.
- PART made performance information more transparent and easily accessible.
OMB has made PART information transparent and accessible to the public. The OMB website provides background and guidance information on the PART. OMB has also created a website, ExpectMore.Gov, which includes PART assessments for every program with a PART rating. All information is stored in a database that is searchable by program, agency, and rating, and it includes annual and long-term performance measures as well as program improvement plans.
- The PART is a blunt instrument with a one-size-fits-all approach.
The Program Assessment Rating Tool assesses different types of programs across all federal agencies. While it allows for a small amount of customization based on program type, it is primarily a one-size-fits-all approach that overlooks the nuances involved in program design and implementation. The questions on the PART don't always work well for Department of Education programs, particularly those that are small or designed with flexibility in mind.
- The PART is dependent, in part, on research that is not available.
The PART rating is based in part on program evaluations that demonstrate effectiveness. Unfortunately, many Department of Education programs have not participated in the rigorous evaluations required for credit on a number of PART questions. Such evaluations are often expensive and time-consuming, and the Department of Education has not provided the funds to support them. This is one of the reasons that more than half of the education programs are rated "Results Not Demonstrated."
- The PART presents capacity challenges.
Assessing program performance takes time. PART assessments require staff from the Department of Education as well as the Office of Management and Budget. This can be a time-consuming process, especially for OMB analysts who oversee assessments for many different types of programs. A GAO report found that the PART substantially increases the workload for agency and OMB staff, even after reviewers become familiar with the process.
- The PART assessment is subjective, inconsistent, and not fully trusted.
Congress and other stakeholders do not necessarily trust the PART. PART reviews are conducted by different reviewers within and across agencies, producing inconsistent results. Because the process is subjective, it is also open to political bias. And because the PART is an executive branch initiative without Congressional buy-in, Congress rarely uses it to inform budget decisions.
Measuring program performance is a difficult and complicated task, especially for federal government programs that perform a wide variety of services. While the PART can be improved in many ways, its creation and implementation are a step in the right direction.
Up next in the Ed Money Watch PART Series: Recommendations for the New Administration on Program Performance and Evaluation