The Distinctions Between Policymakers and Researchers, Decision Making and Evidence


Jun 21, 2018
Julie Stone

I have dedicated my professional career to delivering authoritative and objective evidence to policymakers about Medicaid and Medicare. Whether in the federal government, in higher education, or now at Mathematica, each vantage point has given me a window into the differences between how policymakers and researchers employ evidence, and how these differences hamper the potential for evidence-based policymaking. This post outlines some of the challenges policymakers face in using objective evidence produced by researchers. Future posts will offer specific tips for how researchers and policymakers can overcome these challenges to advance policies and programs that work.


During her time at the Congressional Research Service and throughout her career, Julie Stone has been dedicated to delivering objective evidence to policymakers about Medicaid and Medicare.

Policymakers and researchers often have different priorities for the use of evidence.
Despite the good intentions of many policymakers to solve problems with evidence-based solutions, their decisions are often influenced by ideology, political pressure, and partisan information. I have seen many policymakers dismiss evidence-based policy solutions in favor of those not based on evidence. This happens in part because they are under intense pressure to advance the priorities of their party’s political leaders and to address the preferences of their supporters. At the Congressional Research Service, I was occasionally asked to compile evidence for members to support pre-established legislative solutions. These requests often included instructions to exclude evidence that supported alternative solutions. This work was disheartening not necessarily because I agreed or disagreed with the policies in question, but because my foundational training as a researcher emphasized avoiding bias toward particular outcomes. Researchers are taught to objectively gather, analyze, and present evidence with a focus on minimizing bias.

Even when policy is established without adequate evidence, opportunities might remain for evidence to drive the more nuanced specifications of new policies or programs. In 2004, members of Congress imposed new restrictions on the treatment of assets for Medicaid applicants ages 65 and older who need long-term services and supports. At that time, sufficient data were not available to estimate the prevalence of asset sheltering or its cost to federal and state governments. Even without sufficient data supporting the need for new asset sheltering restrictions, the Congressional Research Service was able to deliver factual policy guidance to members that helped shape the operational specifics of the new law. This guidance minimized unintended consequences of the new law and ensured that states could operationalize the changes.

Researchers’ priorities are influenced by different factors. Because research costs money, funders have a lot of influence over which questions researchers explore. In addition, access to information, such as program data, surveys, and publication databases, significantly drives the types of research questions that can be pursued. Throughout my career, I have been frustrated by the number of times I had to let policymakers know that the data or findings they were looking for didn’t exist. For example, while I was at the California Medicaid Research Institute (CAMRI) at the University of California, San Francisco, state policymakers came to us looking for evidence to guide new legislation and regulations to design the expansion of managed care in California’s Medi-Cal program. At that time, the state’s Medi-Cal managed care data were not available to researchers. Instead, CAMRI’s only window into Medi-Cal was fee-for-service data, which represented an increasingly small share of the program. In short, scarce research resources were expended studying the program of the past instead of meeting the information needs of policymakers eager to shape the Medi-Cal of today and tomorrow.

Policymaking is usually messy and fast-paced, while evidence collection is usually linear and slow.
Traditional research methods, such as randomized clinical trials, quasi-experimental studies, and single-case designs, often assess changes in a limited set of variables. These studies tend to have lots of caveats, are difficult to generalize, and work better in controlled situations. However, because policymakers are grappling with highly complex and multifaceted problems in dynamic environments, they need evidence with high clarity and broad applicability.

Further, program or policy expiration dates, the changing political climate, and frequent election cycles all contribute to the need for policymakers to seek quick answers. On the other hand, collecting information, analyzing it, and publishing results takes time. Studies often need to be replicated to validate findings, further delaying the availability of solid evidence. When time limitations and available evidence are misaligned, even decision makers who are inclined to use evidence often have little choice but to move forward without it.

For example, the Centers for Medicare & Medicaid Services and many states support increasing value-based purchasing in Medicaid. Paying for value, however, requires accurate measurement. Although some measures currently exist to assess the value of Medicaid services, we need more research to produce a complete set of validated quality and cost measures. In fact, Mathematica is a leading developer of value-based measures and is currently under contract to develop new measures for home- and community-based services, behavioral health integration, and quality of life, among others. These measures require validation before they can be publicly disseminated to ensure they will produce their intended results. Even while Mathematica is in the depths of this work, pressure is mounting on the Centers for Medicare & Medicaid Services to provide states with guidance.

The different perspectives and needs of policymakers and researchers result in too many missed opportunities for evidence to inform decision making. Mathematica, our research colleagues, and our policymaking partners are committed to minimizing future missed opportunities and overcoming the challenges of connecting evidence to policymaking. If we don’t make the necessary changes, broad and routine evidence-based policymaking will largely remain a promise rather than a norm. Please check out my future posts for tips on how researchers and policymakers can improve their communication and engagement to better prioritize evidence for decision making.
