13 June 2023 by Richard
Science and the Dumpster Fire
I’ve been in my current job since 2015. (previously) When I auditioned for this job, I said I would do it because it is a chance to work on research infrastructure and meta-science. I said the status quo in the biological and behavioral sciences is terrible. It makes little sense to invest in ambitious research projects when a typical researcher has no ability to define a non-null model of a phenomenon, to explore the implications of such a model, and to evaluate those implications with evidence. Success in the sciences does not, in the present day, depend upon those skills. But we should fix the foundations before wasting more public money.
I think it was a miracle that I could tell successful researchers that I had no intention of pursuing traditional academic goals and that they would applaud and give me a job. What they responded to is the mandate that we have to build better ways of working and training and evaluating. That is research of the most important kind. It is infrastructure development, and results will not be quick or easy or even probably much appreciated for a long time.
That’s okay. I don’t really like attention anyway. Let me work.
Under the lamppost
In my field, human evolution, the empirical and theoretical problems are severe. It is often not clear that alternative models can be tested at all, because the empirical record is so poor. Still we push on. But too often we focus effort on whatever we can measure, and invent explanations of those measurements, regardless of their importance for cumulative science. This is the proverbial looking for your keys under the lamppost, because that’s where the light is.
In my case, my focus has always been the role of cultural and behavioral adaptation in human evolution and the dynamics of human societies. This field has always been a bit too mathematical for anthropology. But that’s because it has to be. Population dynamics are dynamic. We can’t intuit them. We need mathematics. And it has also always been empirically difficult, because observational cross-sectional research in a long-lived primate like humans is going to be both confounded and insufficient to investigate the phenomena of interest, which unfold over developmental and inter-generational timescales.
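As a toy illustration of why intuition fails here, consider conformist-biased cultural transmission (in the spirit of the standard models): whether a trait spreads or vanishes depends on which side of one-half its frequency starts, a threshold no single cross-sectional snapshot reveals. The sketch below is hypothetical — the function names and the parameter D (strength of conformity) are my own labels, not anything from a real project.

```python
def step(p, D=0.2):
    """One generation of conformist-biased transmission.

    The trait's frequency p moves toward 1 if above one-half
    and toward 0 if below; D scales the strength of conformity.
    """
    return p + D * p * (1 - p) * (2 * p - 1)

def simulate(p0, generations=100, D=0.2):
    p = p0
    for _ in range(generations):
        p = step(p, D)
    return p

# starting just above or below one-half leads to opposite fates
print(simulate(0.55))  # near 1: the trait goes to fixation
print(simulate(0.45))  # near 0: the trait is lost
```

Two populations that look nearly identical today end up in opposite places, which is exactly the kind of dynamic you cannot read off a one-time survey.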
So the focus of my department has been building sustained long-term field projects to study people in real communities. Not just or even mostly foragers. Most humans live in cities, and we should study them there as well. These projects are difficult in every respect, from the politics to the hardships of the work itself, to the time it takes for researchers to realize some career benefit from them. This is not under the lamppost. There is darkness here. But it is where we have to look. We know the keys are here.
At the same time, there has been a lot of investment in measurement, data control procedures, and connecting theoretical models to data. This work has benefited many people outside my field as well. Most of my audience these days is probably outside academia: the many people in industry and government who have used my book, course materials, and code to solve problems that matter.
So we’re not looking under the lamppost. But I hope we are making some light of our own, even in other places.
An honest plan for modest research
My experience so far has been that insisting on an explicit workflow, for ourselves and the research we evaluate, does a lot of good. Without going into detail, here is the outline.
ONE. What are we even trying to learn? Too much research nominates some big problem, finds some data that is metaphorically related to it, computes some (adjusted) associations, then tells a story about the significant results. Along the way, it’s not clear what we’ve learned, because there was no clear quantitative goal at the start. We need to define the phenomenon, the alternative explanations, and what estimates would help us distinguish among them or refine them.
TWO. What is the ideal data for achieving the goals from ONE? This must be argued using an explicit logical or computational model of the phenomenon. Prove it. Don’t appeal to intuition. Simulate or deduce. Yes, everything will depend upon assumptions. But a conclusion that doesn’t depend upon assumptions is rarely of value. Data themselves are insufficient.
THREE. What data do we actually have? What data can we acquire? How are these data sources different from the ideal in TWO? What is missing? Which proxies exist? How does error creep in? What are the causes of the missing data and measurement error? Is there selection bias? Nearly always. So model it as well. These are generative modeling assumptions, and we need them for the next part.
FOUR. Is there a statistical way to use TWO and THREE to learn about ONE? Again, this must be demonstrated logically. Ad hoc, non-generative estimators can work if you get lucky, but their track record is very poor. Prove the analysis will work on synthetic data or otherwise do your best to describe the problems that concern you. But the usual tactic of “we cannot exclude confounding but here are the associations and we want to base policy on the notion they are causal” is neither justifiable nor ethical.
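To make FOUR concrete, here is a minimal synthetic-data check of the sort I have in mind. Everything in it is hypothetical and chosen for illustration: a confound U influences both an exposure X and an outcome Y, the true effect of X on Y is zero, and we verify on the simulated data that a naive regression is fooled while adjusting for U (by residualizing both variables on U, the Frisch–Waugh step) recovers the truth.

```python
import random

random.seed(1)

def simulate(n=10_000, b=0.0, c=1.0):
    # synthetic data under an assumed confound:
    # U -> X and U -> Y, with the true effect of X on Y equal to b
    rows = []
    for _ in range(n):
        u = random.gauss(0, 1)
        x = u + random.gauss(0, 1)
        y = b * x + c * u + random.gauss(0, 1)
        rows.append((u, x, y))
    return rows

def slope(xs, ys):
    # ordinary least-squares slope of ys on xs
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

us, xs, ys = zip(*simulate())

naive = slope(xs, ys)  # confounded: close to 0.5, not the true 0

# adjust for U: residualize X and Y on U, then regress the residuals
bu_x = slope(us, xs)
bu_y = slope(us, ys)
rx = [x - bu_x * u for u, x in zip(us, xs)]
ry = [y - bu_y * u for u, y in zip(us, ys)]
adjusted = slope(rx, ry)  # recovers b, which is about zero

print(round(naive, 2), round(adjusted, 2))
```

If the estimator cannot recover a parameter you built into the simulation, it certainly cannot be trusted on real data, where you do not know the answer. That is the whole point of proving the analysis on synthetic data first.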
Explaining why some research design or source of data cannot address an important question is an advance. It helps research. It needs to be done more often. Too many of my colleagues believe that if they can just find enough data and the right statistical model, they can answer any question. (With big enough Ns…) I think this is clearly wrong. There are information constraints, and inference always depends strongly upon assumptions that cannot be tested within sample.
Quality assurance for research is research
Too few of my own projects have met all these criteria in a satisfactory way. And we need to refine ways to organize and teach these skills. Most researchers cannot even begin to simulate the design of a research project. There is real research to be done here, organizing human talent to build theories and connect their implications to evidence. We have to do more than just tell people to do it. We need innovations in how we work, the tools we use, and our norms. And none of these things is obvious, I don’t think. They will take a lot of work. We should get started.
So for example in my department we have invested a lot of money in the Stan math library, which allows us to express theories more directly. We have built transparent data processing pipelines with proper version control. We are building friendlier simulation tools for people who study culture and demography and human gene-culture co-evolution.
Nothing we’ve done so far is perfect. But I like to think we are at least looking in the right place.
I have lots of recorded materials, but maybe start with my 2023 lecture series.