Are the Performance Pieces Finally Falling Into Place?
Back in 1993, reformers thought that if agencies developed strategic plans, operating plans, and measures of progress, decision makers would use the resulting information to manage better. That didn’t work. In 2001, the Bush Administration thought that if it created a scorecard of more discrete performance information at the program level, decision makers would use it to manage better. That didn’t work either. A recent article in Public Administration Review by professors Donald Moynihan and Alexander Kroll concluded: “Performance reform initiatives in the U.S. federal government in the last 20 years fit a general pattern of dashed expectations.” Indeed, “the most damning fact is that the reforms failed to meet their own basic goal of making the use of performance data the norm.”
A 2010 reform to the reform law, the Government Performance and Results Modernization Act, set out to fix these flaws. Did it work? Early evidence suggests “yes,” according to Moynihan and Kroll, a conclusion recently reinforced by a new study released by the Government Accountability Office (GAO). But with some caveats.
The Right Routines Matter. In their article, Moynihan and Kroll reanalyze a 2014 GAO survey of federal managers’ use of performance information and conclude: “our analysis offers reason for optimism. We suggest that the Modernization Act put in place a series of routines that established organizational conditions for greater use of performance data.”
They go on to say: “Performance management reforms typically create routines of data creation and dissemination; prior analyses suggest that such routines do little to increase the desired behavior of performance information use.” Under the Modernization Act, however, federal managers “report higher performance information use” as a result of the newly mandated routines of:
- goal coordination, especially cross-agency priority goals;
- goal clarification, such as agency priority goals; and
- data-driven reviews of quarterly progress towards these goals.
The authors note: “A basic problem for public organizations is that they inherently pursue multiple and possibly conflicting goals.” Examining the newly mandated routine of goal clarification -- under which agencies set no more than five priority goals -- the authors found that when “organizational goals are clarified, employees are more motivated by and attentive to goals.” They also found that this routine fosters greater leadership commitment.
The 2010 law also created routines of quarterly data-driven reviews of progress on priority goals. The authors find: “Well-run data-driven reviews will generate higher performance information use than poorly run reviews.”
In addition to examining routines, Moynihan and Kroll analyze GAO surveys of federal career managers conducted in 2007 and 2012-13. They use pooled regression analyses to examine four types of data use at the level of individual respondents (GAO focused its analysis at the agency level):
- performance measurement,
- program management,
- problem solving, and
- employee management.
Their statistical analysis found that “. . . all three routines are significantly related to purposeful data use.” More important, they did not find similar results in the pre-Modernization Act survey data, which suggests the new routines themselves, rather than preexisting trends, drove the change.
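For readers unfamiliar with the method, a pooled regression simply stacks responses from multiple survey waves into one dataset and estimates a single model that includes an indicator for the wave. The sketch below illustrates that general shape in Python; the variable names (goal_clarity, review_quality, data_use) and the simulated data are invented for illustration and are not the authors’ actual model or the GAO survey microdata.

```python
# Minimal sketch of a pooled OLS regression across two survey waves.
# All variable names and data are hypothetical, not the actual
# Moynihan/Kroll model or GAO survey responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # hypothetical respondents per survey wave

waves = []
for year, effect in [(2007, 0.2), (2013, 0.6)]:  # two pooled waves
    goal_clarity = rng.normal(size=n)        # e.g., clarity of priority goals
    review_quality = rng.normal(size=n)      # e.g., quality of data-driven reviews
    data_use = (effect * goal_clarity + 0.4 * review_quality
                + rng.normal(size=n))        # outcome: performance information use
    waves.append(pd.DataFrame({
        "year": year,
        "goal_clarity": goal_clarity,
        "review_quality": review_quality,
        "data_use": data_use,
    }))

pooled = pd.concat(waves, ignore_index=True)  # stack the waves

# One model over the pooled data, with a categorical wave indicator.
model = smf.ols("data_use ~ goal_clarity + review_quality + C(year)",
                data=pooled).fit()
print(model.summary())
```

The key design point is that pooling lets the analyst test whether the routine-to-use relationship holds across waves, rather than fitting each survey in isolation.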
Progress on Cross-Agency Goals. The 2010 law also required the Office of Management and Budget (OMB) to designate a small handful of priority goals that span multiple agencies. This was the first of the three routines that Moynihan and Kroll assessed. A recent GAO report assesses this new routine and commends OMB’s efforts during the first two years of implementation.
GAO reviewed 7 of the 15 Cross-Agency Priority (CAP) goals, which it describes as “4-year outcome-oriented goals covering a number of crosscutting mission areas – as well as goals to improve management across the federal government. . . . intended to drive progress in important and complex areas, such as improving information technology and customer service interactions.”
It finds that OMB and the cross-agency Performance Improvement Council “incorporated lessons learned from the 2012-2014 interim CAP goal period to improve the governance and implementation of these cross-cutting goals.” For example, OMB changed the governance structure of each CAP goal to include agency leaders, not just White House officials; held regular senior-level reviews; and provided ongoing assistance to CAP goal teams.
GAO says that OMB and the Council implemented a set of strategies to build agency capacity to work more effectively across agency lines. For example, they:
- Assigned an agency-level co-lead for each goal, rather than housing all goal leaders in the White House.
- Provided guidance and assistance to goal teams, such as sharing data collection techniques, leading seminars on how to develop metrics, and helping teams craft milestones and actionable next steps.
- Held senior-level reviews. For example, the OMB deputy director for management holds three meetings a year for the 8 management-related CAP goals.
- Obtained funding to support activities. For example, the Lab-to-Market team will use $1.9 million to develop an interface for the 17 Department of Energy national labs to interact with external stakeholders.
- Launched the White House Leadership Development Program, whose participants provide staff support to CAP goal leaders.
GAO concludes that “efforts to build the capacity of the CAP goal teams to implement the goals has resulted in increased leadership attention and improved interagency collaboration for these goals.” Goal teams themselves credited the CAP designation for that attention: the Smarter IT Delivery team, for example, told GAO that “obtaining the hiring authority [for digital service experts in agencies] was a direct result of the CAP goal.”
GAO also found that the goal teams were better able to leverage expertise across agencies. For example, the Open Data CAP goal created an interagency working group that meets biweekly and draws a diverse range of employees; thanks to the group’s efforts, GAO reports, “every agency now knows how to produce machine-readable documents.”
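To make “machine-readable” concrete: a fact buried in narrative prose is hard for software to parse reliably, while the same fact in a structured format such as JSON or CSV can be consumed by any program. A hypothetical illustration (the agency, program, and figures are invented, not drawn from the Open Data CAP goal):

```python
# Hypothetical example: the sentence "Agency X obligated $1.2 million to
# Program Y in FY2015" is not machine-readable; the same facts as JSON are.
import json

record = {
    "agency": "Agency X",          # hypothetical agency
    "program": "Program Y",        # hypothetical program
    "fiscal_year": 2015,
    "obligations_usd": 1_200_000,
}

# Any consumer -- a dashboard, a spreadsheet, another agency's system --
# can parse this output without human interpretation.
print(json.dumps(record, indent=2))
```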
While assessing CAP goal implementation, GAO found that the most problematic element was the development of performance metrics to gauge progress. CAP goal teams “are using milestones to track and report on progress quarterly. However, they are still working to improve the collection and reporting of performance information for the CAP goals.” And while “the use of milestones is a recognized approach for tracking interim progress toward a goal,” milestones alone are not sufficient.
What Happens Next Matters. The real test will come with the arrival of a new Administration. Will it build on the efforts to date, or will it want to go back to the drawing board? Moynihan and Kroll conclude: “A new president may be tempted to look for another approach or to simply deprioritize the Modernization Act. This would be a mistake . . . . We find that the quality of routines – not just the routines themselves – matter.”