My colleague Frank Farrow and I, both of the Center for the Study of Social Policy, submitted the following letter to the editor to the New York Times in response to its article on methods used by the CMMS Innovation Center to evaluate innovative health system reforms. The Times didn’t publish any responses to the article, so we are posting our letter here.
To the Editor:
We were surprised by the criticisms of the CMMS Innovation Center reported by Gina Kolata in “Method of Study is Criticized…” on the front page of the New York Times on February 2, 2014. The article contends that the Innovation Center does not rely enough on randomized experiments, overlooking the reality that when the challenge is to improve whole systems, we need many kinds of evidence to guide reforms. As social problems and their solutions become increasingly complex, we need a full range of ways to generate, analyze, and use new knowledge. Reformers, including the Innovation Center, should be encouraged, not disparaged, to apply multiple methods of generating evidence in their quest for improved results.
Randomized trials have contributed greatly to our understanding of what works in well-defined, standardized interventions (think antibiotics or a math curriculum). But randomized experiments cannot be the only tool for understanding innovative policies meant to strengthen complex systems. First, innovations become effective through continuing adaptation, while randomized trials work by holding interventions constant. Second, randomized trials answer whether an intervention worked, but implementers also need data explaining how and why it worked. Third, policy changes occur in exceedingly complex contexts, not in the vacuums within which randomized trials shine.
Frank Farrow, Director
Lisbeth B. Schorr, Senior Fellow
Center for the Study of Social Policy