Using Behavioral Science to Improve Federal Outcomes (Part IV)
The Centers for Medicare and Medicaid Services (CMS) recently analyzed its antipsychotic drug prescription data for elderly patients and found that some doctors were overprescribing these potent and costly drugs. It sent letters to high-volume prescribers, telling them that their prescribing rates were extremely high compared with other doctors in their state. Without taking any other action, CMS found that merely sending the comparison letters reduced prescriptions by 11 percent, saving money and improving patient safety.
CMS undertook this initiative with the help of a small office in the General Services Administration (GSA) that specializes in data analytics and behavioral science techniques, such as comparison letters.
Background. As noted in earlier blog posts in this series, behavioral science is a growing field that is still somewhat diffuse in nature. It is international in scope and multidisciplinary in approach, and its use in the public sector has evolved rapidly over the past five to seven years. This growth has come in tandem with the growth of evidence-based government, data and analytics, rapid-cycle testing, and pressure to improve customers' experience with government services.
In the U.S. federal government, these different threads intersect in GSA's Office of Evaluation Sciences (OES). This small office was created in 2015 to provide a cadre of talent to help agencies use these new techniques to get better results from their programs.
Interestingly, this office preceded the adoption of the Evidence Act earlier this year, which will create even greater demand for its specialized talents as agencies are pressed to develop their own evidence and evaluation strategies, which also include the use of behavioral science techniques. For example, the Department of Labor has already developed a guide for its operational bureaus on how best to use behavioral interventions in their programs.
The OES Team. The Office of Evaluation Sciences is a multidisciplinary team drawing on the range of professional disciplines that make up behavioral science, including psychology, economics, political science, ethnography, statistics, and program evaluation. Under the leadership of Kelly Bidwell, the office conducts work that spans behavioral science, evidence, and evaluation. It supports agencies, for example, in carrying out OMB's implementation guidance for the recently passed Foundations for Evidence-Based Policymaking Act of 2018.
The office is located in GSA's Office of Governmentwide Policy and has a staff of about 15-20 specialists, a mix of career civil servants and rotational staff from academia or nonprofits who serve one- to four-year terms. Staff members typically oversee 2-4 projects at a time. Office director Bidwell says the use of rotational staff keeps the career staff connected to cutting-edge intervention design techniques, such as determining appropriate sample sizes, evaluation design, and analytic methods.
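As a rough illustration of one of those design questions, the sketch below estimates how many participants each arm of a randomized trial would need in order to detect a small lift in a binary outcome. The baseline rate, target rate, and power settings are illustrative assumptions, not OES figures.

```python
# A minimal sketch of a two-proportion sample-size calculation, the kind of
# power analysis done when planning a randomized intervention trial.
# All rates and settings below are illustrative assumptions, not OES figures.
from scipy.stats import norm


def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect the difference
    between two proportions with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # quantile matching the desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_treatment - p_control)
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1


# Example: detecting an increase in program take-up from 20% to 23%
print(sample_size_per_arm(0.20, 0.23))  # roughly 2,940 participants per arm
```

The small assumed lift (3 percentage points) is why the required sample runs into the thousands; this is one reason large federal administrative data sets are so valuable for this kind of work.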
She also says that, because they are federal employees, they have greater access to federal administrative data sets for analysis than academics or other non-federal researchers would.
The OES team’s approach is to undertake rapid-cycle projects using low-cost solutions (e.g., redesigning a notification letter). Their core deliverables are actionable results that drive better programs and policies; all projects are posted and summarized on their website.
What They Do. Agencies approach OES for help with projects that require expertise they may not have on their own staffs. OES typically works on 20-30 projects at a time with a wide range of agencies, helping them clarify identified problems (e.g., defining the gap between a program’s goal and reality in order to identify the key trip points), test interventions (often using randomized controlled trials and large existing data sets), and, where successful, determine how to scale the pilot to a larger population.
According to Bidwell, many of the OES team’s solutions are inexpensive to apply and can be implemented relatively quickly, based on 6- to 12-month trials. Their proposed interventions typically don’t require legislation, regulatory changes, or significant funding. Where possible, they like to conduct large-scale testing using federal administrative data, develop rigorous findings and results, and use formal evaluation techniques. Their approach is experimental: typically iterative and trial-and-error. Often their solutions involve changing the way a program is described, the timing, or the sequence of choices being offered.
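To make the flavor of such a test concrete, here is a minimal sketch of how a randomized letter intervention might be evaluated against a binary outcome drawn from administrative records. The data are simulated and the column names are hypothetical; this is not OES's actual analysis pipeline.

```python
# A minimal sketch of evaluating a randomized letter intervention: compare an
# outcome rate between randomly assigned treatment and control groups.
# The data are simulated and the column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
n = 10_000  # simulated records standing in for an administrative data set

df = pd.DataFrame({"got_letter": rng.integers(0, 2, n)})  # 1 = received the letter
# Simulated outcome: letter recipients respond slightly more often (3-point lift)
base_rate, lift = 0.20, 0.03
df["responded"] = rng.random(n) < (base_rate + lift * df["got_letter"])

# Two-sample test of the difference in response rates
counts = df.groupby("got_letter")["responded"].sum().to_numpy()
nobs = df.groupby("got_letter")["responded"].count().to_numpy()
z_stat, p_value = proportions_ztest(counts, nobs)
print(f"control rate = {counts[0] / nobs[0]:.3f}, "
      f"treatment rate = {counts[1] / nobs[1]:.3f}, p = {p_value:.4f}")
```

Because assignment to the letter is random, a simple comparison of rates like this gives an unbiased estimate of the intervention's effect; in practice a team would also pre-register the analysis and check covariate balance.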
Bidwell says her team likes to work in partnership with agencies, with the goal of transitioning ownership of each project to the agency partner. Over the long run, Bidwell says, they hope to create an appetite for using behavioral and analytic techniques and to build agencies' own capacity to use them.
Actions taken by their agency clients range from scaling up a successfully tested intervention to following advice on reorganizing their administrative data so it can be used to answer related questions or to retest a successful intervention on a different population. So far, they have found agencies are more reluctant to change a program's design (such as changing default settings on application forms) than to make small changes (such as fine-tuning the presentation of information). However, they hope to generate evidence on the effects of more substantial changes in the near future.
Examples of the Range of Projects They Undertake. What kinds of projects does OES undertake with different federal agencies? Team members work across the government to provide end-to-end support in designing an evidence-based programmatic change and testing it to measure its impact. Bidwell says that such changes are more likely to be sustained when the OES team collaborates with internal agency champions who drive the process, participate in the design and implementation of an evaluation, assist in the analysis and interpretation of results, and make decisions about scale and program implications.
Recent projects they’ve undertaken span a wide range of policy areas, such as:
- Simplifying applications for school lunch eligibility
- Encouraging vaccination uptake
- Improving participation in programs to reduce student loan defaults
- Increasing retirement savings for active duty servicemembers
Bidwell says that lessons learned in one program are sometimes transferable to programs in other agencies.
Potential Next Steps. OES staff have created informal networks of peers across agencies but have not formalized these contacts. They currently leverage existing and new networks, such as agency performance improvement officers, chief data officers, and chief evaluation officers. As the use of behavioral insights matures across the government, there may be a collective effort to create a network of peers. This has happened in other fields that cut across agencies, such as among risk managers, evaluation experts, and strategic foresight practitioners.
Another potential future step might be the creation of a “playbook” of behavioral insights for use by agencies. Bidwell, however, says that the work done by OES and other federal agencies may not yet have created a sufficient critical mass of evidence to warrant a playbook, especially given the breadth of different behavioral techniques. In the interim, the office has begun to develop a series of more technical “methods guides.” In addition, it holds an annual event with federal agency collaborators each fall to share what they have learned and the results from every project completed in the previous year. Finally, as it completes more projects and inventories efforts from other agencies, it has begun to identify the techniques on which to focus future attention and how to implement them effectively in a federal context. For example, OES has identified some of the clearance barriers to modifying agency forms as it has helped agencies improve their design and wording.
* * * * *
Note: Here are links to related posts on this topic:
Part I: How Can Behavioral Science Improve Program Outcomes?
Part II: What Are Some Basic Behavioral Science Concepts?
Part III: How Is Behavioral Science Influencing Public Administration?
Part IV: Using Behavioral Science to Improve Federal Outcomes
Part V: Using Behavioral Insights to Reduce Miner Injuries
Part VI: Nudge in the City: Behavioral Science in Government
Part VII: Creating a Critical Mass of Talent and Resources in the Use of Behavioral Science in Government
Part VIII: Behavioral Science: A Revolutionary Potential for Government?
Graphic Credit: Courtesy of chatchai_stocker via FreeDigitalPhotos.net