The International Journal of Drug Policy announced last week that it had retracted a recently published meta-analysis on the health impacts of safer consumption spaces (SCS), citing serious concerns about the study's methodology.
In contrast to dozens of previous SCS studies spanning decades, this analysis had suggested that SCS were associated with little to no reduction in drug-related health harms. Its retraction means we can now strike that false claim from the record and, once again, rely on the body of research showing that SCS are an effective harm reduction intervention.
However, this victory only came after serious damage had already been done.
The original analysis could not have been released at a worse time. It came on the heels of the latest CDC data that indicated we lost over 72,000 people to overdose just last year—and as advocates across the country were making tremendous progress towards opening the first legal SCS here in the US. It also preceded a highly publicized op-ed from Deputy Attorney General Rod Rosenstein that said SCS were “dangerous and would only make the opioid crisis worse.”
Media outlets jumped on the opportunity to highlight the study and its suggestion that SCS would not have the effect on the overdose crisis that harm reduction advocates were promising. While some pieces urged readers to take the findings with a grain of salt, acknowledging that SCS could still be helpful, most failed to examine the study's limitations in any depth or to place it in the context of the other research that contradicted it.
Opponents of SCS also used the study to argue against opening SCS in California and other parts of the country. Just this Sunday, California Governor Jerry Brown vetoed AB 186, a bill that would have allowed San Francisco to open the nation's first SCS. In a letter explaining his decision, Governor Brown said that he did not "believe" that SCS would "reduce drug addiction."
The retraction came, and that matters. But a retraction never receives the same kind of attention as the original study.
And this isn’t the first “gotcha” study on a harm reduction intervention that received a lot of press; it’s just the first to be retracted.
Earlier this year, researchers Jennifer Doleac and Anita Mukherjee released a widely publicized study about the “moral hazard” of naloxone distribution. Naloxone is a medication that can reverse an opioid overdose and has been credited with saving thousands of lives. Their study, which also contradicted earlier research, provided fodder for those who wanted to argue that naloxone distribution is ineffective—or worse, that it increases crime because it keeps more people who use opioids alive.
As we’ve all heard by now, correlation does not equal causation. The Doleac and Mukherjee study also assumed that everyone immediately had access to naloxone in states that had naloxone laws on the books. In fact, there are many barriers to naloxone that mean access is still limited even in these states.
These two recent examples are instructive. We need to be wary of controversial new studies that grab headlines, and be more measured and reasoned in evaluating their claims.
When a new study comes out and purports to add to our understanding of a controversial topic, especially when it contradicts larger trends in the body of research, it is our responsibility to pause to ask ourselves some vital questions:
Why does this study look so different from all the others in that area?
What methods did they use?
What are the inclusion criteria?
What variables and outcomes did they measure?
Is there something about the analysis itself that accounts for the different findings?
Are the authors of the paper experts in this area?
Who funded this research?
Do the authors have any conflicts of interest?
Policymakers, the media, and advocates need to partner with well-trained researchers who have the expertise to carefully evaluate the strengths and weaknesses of such studies before making claims (or policy) based on them.
There is no shortage of stigma, misinformation, fear, or panic surrounding drug use and harm reduction-informed public health policies. Poorly devised studies just make it worse.
If we really want to save lives and prevent more harm, we all have a responsibility to take a step back, look critically at the evidence, and base our efforts on high-quality, rigorous research—not on “belief,” sensationalized headlines, or shoddy studies.
*This article was co-written with Jules Netherland, director of the Office of Academic Engagement for the Drug Policy Alliance.