Some Dumpster Fires for Your Consideration

This is fine

Is it hot in here?

One type of reaction to my post yesterday (here) was surprised disbelief.

“Public money is being wasted? An outrageous claim!”

“Over time science self-corrects.”

“Maybe in some fields, but not in mine.”

“He’s just an iconoclast.”

“This is science denialism, right-wing propaganda.”

These reactions are familiar, and they are born of ignorance. I don’t spend time having discussions on Twitter. Public spaces in which any rando can join the conversation are not good venues for serious exchange. For complex topics in particular, the turns of conversation need to be longer.

Anyway, in the interest of showing some reasons that many successful scientists think that the sciences are in bad shape, here are some raging dumpster fires for your consideration.

Mediation Analysis

Our coefficients are in the stars

In psychology, which is a very big field you might have heard, many people do a thing called “mediation analysis”. The context is a cross-sectional experiment. Sometimes there is randomization of a treatment. The goal of the mediation analysis is to decompose the total causal effect of a treatment into different pathways. However, in a cross-sectional design, this cannot in principle be done in any unique way (many pathway decompositions are compatible with any sample) or even in any logically sound way (unmeasured confounding is real, even in experiments, once you start regressing on downstream variables).
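To make the problem concrete, here is a minimal simulation sketch in Python. The coefficients and variable names are mine, purely illustrative: the treatment X is perfectly randomized, an unmeasured variable U confounds the mediator M and the outcome Y, and the naive estimate of the “direct effect” of X is badly biased even though the true direct effect is exactly zero.

```python
# Minimal sketch: mediation analysis breaking under unmeasured confounding.
# All coefficients are illustrative choices, not from any real study.
import numpy as np

rng = np.random.default_rng(2015)
n = 100_000

U = rng.normal(size=n)                # unmeasured confounder of M and Y
X = rng.binomial(1, 0.5, size=n)      # randomized treatment
M = 0.5 * X + U + rng.normal(size=n)  # mediator, influenced by X and U
Y = 0.6 * M + U + rng.normal(size=n)  # true direct effect of X on Y is zero

def ols(y, *cols):
    """Least-squares coefficients of y on an intercept plus the given columns."""
    A = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

print("total effect of X (fine, X is randomized):", ols(Y, X)[1])  # ~0.30
print("naive 'direct effect' of X given M:", ols(Y, X, M)[1])      # ~-0.25, not 0
```

Conditioning on M opens a backdoor path through U, so the second regression confidently reports a negative direct effect that does not exist. Randomizing X protects the total effect, not the decomposition.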

Usually when I tell a psychologist about these problems, they are shocked. It’s like I am from another planet. Their research life passes before their eyes. Either I am wrong, or much of their field is nonsense. There are other options, for sure, but those two are a fair summary.

To support the case that I am not wrong, go read this paper.

Junkyard Causal Inference

Robust standard errors for dessert

In most of ecology and evolution, the goal is causal inference. There is some pure forecasting work. But usually the goal is to understand causes, either for basic scientific reasons or because some conservation intervention is planned.

However, ecologists don’t seem to know much about causal inference outside experiments, and most of ecology is not experimental. It is in fact radically observational. How can I assert this? Because nearly all papers use either p-values or predictive criteria like AIC (or cross-validation) to choose among models and then interpret every coefficient as a causal effect. There is no school of statistics in which a model with all significant terms will reveal the true causal effect of each variable. And tools like AIC are not designed for causal inference, but for forecasting. The best model for forecasting can, and often will, have a different structure than the model you need to estimate a particular causal effect.
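As a concrete sketch of that last point (again in Python, with made-up coefficients): suppose we want the total causal effect of X on Y, and M is a post-treatment variable caused by X. The model that includes M predicts better and wins on AIC, but its coefficient on X no longer answers the causal question.

```python
# Sketch: the best predictive model is not the right causal model.
# Illustrative coefficients; the true total effect of X on Y is 1.0.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

X = rng.normal(size=n)
M = 2.0 * X + rng.normal(size=n)            # post-treatment variable
Y = 0.4 * X + 0.3 * M + rng.normal(size=n)  # total effect: 0.4 + 0.3*2.0 = 1.0

def fit(y, *cols):
    """OLS coefficients and AIC (up to an additive constant)."""
    A = np.column_stack([np.ones(n), *cols])
    beta, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, n * np.log(rss[0] / n) + 2 * A.shape[1]

b1, aic1 = fit(Y, X)     # right model for the total effect of X
b2, aic2 = fit(Y, X, M)  # better forecaster, wrong causal answer

print(f"Y ~ X    : coef on X = {b1[1]:.2f}, AIC = {aic1:.1f}")  # ~1.0, higher AIC
print(f"Y ~ X + M: coef on X = {b2[1]:.2f}, AIC = {aic2:.1f}")  # ~0.4, lower AIC
```

AIC is doing its job here: the bigger model really does forecast better. It just isn’t the job you needed done.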

The points above can be stated logically and proven, beginning with a scientific model of a phenomenon. This is not rhetoric. There is no room for arguing that using p-values and AIC to decide the structure of an inferential model makes sense. Here’s a gentle paper, aimed at ecologists, that might be a good place to start. I also made a three-hour introduction to causal inference just for you.

This problem is severe. People act as if multiple regression can reveal miraculous things, without any consideration of structural confounding or any calculation of marginal effects. When researchers have been trained at a prestigious university, it can be hard to convince them that standard practice is wrong. They built their careers on these methods. Their advisors did the same. If the standard approach to quantitative data analysis in their field is illogical, that would imply the emperor has no clothes. That prestigious institutions are teaching people how to light garbage on fire. Indeed it would. I’ll bring the marshmallows. I love the smell of burning prestige.

Genomics UMAP WTAF

Genomics is the new physics. Last century, physics charged ahead and made great advances in understanding the functioning of the universe. It became the standard of what a science should be. This century, eh, not so much. Genomics is the new king.

A key problem in processing genomic data is that there is just so goddamn much of it. And biologists don’t get much training in statistics, usually just one or two courses, which teach tools appropriate for simple randomized experiments, not high-dimensional genomic data with complex structure. So dimension reduction is the rule. Okay, when done well, that’s not a bad thing.

But often it is not done well. The field of single-cell genomics is hot right now, on fire one might say. Popular methods like UMAP are, from their very foundations, incapable of producing meaningful scientific summaries. But don’t take my word for it. Here’s a good paper on the methods, their problems, and theory-based alternatives.
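You can get a feel for the issue yourself with a small sketch, assuming you have the third-party umap-learn package installed (this demo is mine, not from the paper): embed pure high-dimensional noise and check how well pairwise distances survive the trip to two dimensions.

```python
# Sketch: 2-D embeddings of high-dimensional data distort pairwise distances.
# Requires the third-party packages numpy, scipy, and umap-learn.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))   # isotropic noise: no real cluster structure

emb = umap.UMAP(random_state=0).fit_transform(X)   # 2-D embedding

rho, _ = spearmanr(pdist(X), pdist(emb))
print(f"rank correlation of pairwise distances: {rho:.2f}")  # typically low
# Plot `emb` and you will likely see apparent "structure" in pure noise.
```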

Bandwagon fields like the microbiome have similar problems. Lots of theory-free data processing and qualitative interpretation. This doesn’t mean the work is all bad. But good papers don’t cancel bad ones, especially when the bad ones vastly outnumber the good ones.

The problem here, like in so much of science, is that people rarely posit a scientific model to start. And even when they do, they don’t justify the data analysis on the basis of it. Take the data, use intuition to design a pipeline, make some pictures, tell a story, convince an editor. It works for careers. It doesn’t work for scientific progress.

A Turn Towards Darkness

My view is not unusual at all. Leadership at many granting agencies, prestigious journals, and scientific societies agrees in substance, if not in tone. But don’t take my word for it. In 2015, the editor-in-chief of The Lancet wrote:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.

The entire editorial is worth a read. He mentions at the start how widespread these concerns are.

Lots of Hope

These problems are serious. They mean that large sums of public money are being wasted on research that is at best incomprehensible. It is easy for the consensus in a literature to be wrong, and a wrong consensus can be dangerous to the public.

I am hopeful, however, because many people can smell the smoke. They are eager to look outside their fields, to learn structural modeling, and to build scientific models that they can analyze and then use to justify both research design and data analysis. But this means people like me have a duty to support them in the process, both through training and by defending them against senior people. Too many times, a junior researcher does an analysis in a principled way, and then some reviewer insists they redo it in an illogical but traditional way. Demography suggests this problem will shrink with the passage of time.

Of course, all of scientific publishing is trembling, about to destabilize. We should not plan for the future with the current publishing and credit model in mind. But we should invest in building basic theory, in skills, and in connecting theories to data. Because whatever set of corrupt institutions comes next, we have to be ready.

Citations

Rohrer, Hünermund, Arslan, and Elson on the horrors of mediation analysis, especially in psychology [PDF]

Arif and MacNeil on structural modeling in observational ecology [PDF]

Chari and Pachter on “The Specious Art of Single-Cell Genomics” [PDF]

Richard Horton’s 2015 editorial in The Lancet [text]

My own three-hour introduction to causal inference [YouTube]

My 2023 course on scientific modeling and causal inference [course outline and free lecture recordings]