More than an Axe: Use Evidence to Improve Programs and Policy

May 05, 2017

They did it. Congress managed to trade just enough partisan turf for a precious parcel of common ground and narrowly avoided a partial government shutdown. Barely. And though the ink is not yet dry on the FY 2017 appropriations package, lines are already being drawn in the sand for FY 2018, as advocates are girding for battle over the administration’s stated plans to cut spending on various anti-poverty, education, arts, and climate research programs.

In the so-called “skinny budget” proposal released in March, the Trump administration declared that it “will take an evidence-based approach to improving programs and services—using real, hard data to identify poorly performing organizations and programs.” After the release of the skinny budget, Office of Management and Budget Director Mick Mulvaney said, “We're going to spend money, we're going to spend a lot of money, but we're not going to spend it on programs that cannot show that they actually deliver on the promises that we've made to people.”

This use of research to better understand what works in government programs is laudable. But it risks falling prey to the tendency of many policymakers to use research simply to maintain or defund a program by declaring it a success or failure. While in some cases this is appropriate, more often research can and should be used to help policymakers and program administrators improve programs and better serve target populations.

College preparation programs illustrate the point. Research has shown that several federal programs designed to prepare disadvantaged students for college were only modestly successful. In a Brookings report, Ron Haskins (former adviser to President George W. Bush) and Cecilia Rouse (former adviser to President Barack Obama) proposed that—rather than scrapping the programs—policymakers consolidate them into a single grant program. This program would require spending to be backed by rigorous evidence and would use research and demonstration programs to develop and rigorously test several approaches to college preparation. The emphasis is on using research to refine and improve programs, rather than to simply continue or cut them.

The Trump skinny budget proposes cuts to TRIO, a set of federal programs providing educational support services to students from low-income backgrounds, first-generation college students, and students with disabilities. TRIO includes Upward Bound, which is one of the largest and longest-running federal pre-college programs for economically disadvantaged students—and provides a perfect example of how research could be used for program improvement. A national evaluation of Upward Bound, led by Mathematica Policy Research and released in 2009, found no clear evidence that the program enabled enrolled students to perform better in high school or to enter and complete college at higher rates, compared with peers who were not in the program.

Not surprisingly, the lack of positive program impacts generated controversy over the program’s future. The Bush administration proposed to defund Upward Bound, while the youth advocacy community attempted to cast doubt on the findings and worked to prohibit any further rigorous research on Upward Bound. These debates culminated in Congress canceling a subsequent evaluation of Upward Bound and prohibiting any similar studies. This was a lost opportunity, because more research could have made a real difference. Careful observers will see that the study’s comparison group had a high school graduation rate of 90 percent—higher than that for the general population and well above the rates for most disadvantaged populations. So the Upward Bound students who had a high school graduation rate of 89 percent were actually being selected from, and compared to, a high-achieving group. These findings, rather than being cited as justification for keeping or eliminating Upward Bound, could have generated a useful discussion about changing how the program was targeted and the potential effects of such a change. Instead, they led to a pass-or-fail stalemate, without the benefit of further research to evaluate program improvement options.

If rigorous research is used only to eliminate programs, we should not be surprised that policymakers and advocates resist subjecting their favorite programs to such research. The most recent case in point can be found in the FY 2017 omnibus budget package, which reauthorizes the D.C. Opportunity Scholarship Program, despite findings suggesting that the voucher program has a negative impact on student achievement. Rather than run the risk of putting an underperforming program on the chopping block, Congress simply prohibited the use of the rigorous, gold standard design that had been used to evaluate the program since its inception in 2004.

Unfortunately, formally evaluated programs like Upward Bound and the D.C. Opportunity Scholarship Program are still the exception, rather than the rule. Moneyball for Government, a bipartisan group of current and former government insiders, has argued that the vast majority of federal government spending is still not backed by current evidence. While the use of evidence in policymaking is on the rise, the amount of money spent on evaluations is still not much more than a rounding error when it comes to federal spending.

Debates over government spending are like cherry blossoms in spring—you can’t imagine Washington without them. But my hope is that, as evidence becomes a larger part of these debates, more policymakers will see program evaluation as a key tool to achieve program improvement and improve well-being—not simply as an axe to kill programs.

 

About the Author

Paul Decker
President and Chief Executive Officer