The Years of Covid – Listen to the Science

A common retort, invariably used to stop any argument criticizing Covid-related restrictions dead in its tracks, is "listen to the science."

The statement at first seems ironic because whatever guidelines had been followed before the Covid pandemic were promptly ignored, as if the virus were a beast of a wholly different nature than earlier pandemics. Didn't we have science before? Or did we get everything wrong before the spring of 2020, so that we now have to do the exact opposite?

At the same time, I can see the appeal of the argument for radically changing the handling of pandemics: the science of pandemics is young, so it's possible that new knowledge emerges that makes old knowledge obsolete.

However, the science of epidemics is a significant driver of public policy and is thus highly politicized. We cannot blindly trust it to be the unbiased study of natural phenomena that helps us understand the world and make better decisions.

Not to mention that the very essence of science is the process of observing facts, coming up with a hypothesis, and then trying to refute it.

Matt Ridley puts it as follows:

When ministers make statements about coronavirus policy they invariably say that they are “following the science”. But cutting-edge science is messy and unclear, a contest of ideas arbitrated by facts, a process of conjecture and refutation. This is not new. Almost two centuries ago, Thomas Huxley described the “great tragedy of science – the slaying of a beautiful hypothesis by an ugly fact.”

So the slogan "listen to the science" is nonsensical, especially in a relatively new field. It doesn't have to make scientific sense, though, as it is used for political purposes, where the modus operandi is very different.

When the tail is wagging the dog

In several domains, mainstream, "accepted" science has become a tool wielded by the political class. When faced with a serious problem, most people look to their leaders to tell them what to do about it. The leaders feel pressured to do something, anything, that has even a remote chance of working. Whether that something works or not doesn't matter to the political decision-makers. What's important is that the populace be convinced that the enacted measures helped.

If most people believe that crucial decisions should be made based on empirical evidence – in other words, on science – then the best way is to come up with scientific arguments. And if they don't exist (remember, before March 2020, imposing lockdowns to control an outbreak was not recommended), well, they have to be made up quickly.

Let's see a few examples of this phenomenon in practice.

Doomsday modeling

First of all, the model on which the UK government based its decision to impose stringent lockdowns – and which many other governments copied – was deeply flawed. It was cooked up at Imperial College London (ICL) under the leadership of Neil Ferguson. Unfortunately, Ferguson has a patchy record of modeling pandemics:

In various years in the early 2000s, Ferguson predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu. The final death toll in each case was in the hundreds.

True to form, the ICL model devised by Ferguson et al. predicted a scenario so catastrophic that it made Prime Minister Boris Johnson take a U-turn and lock down the UK. However, the program that implemented the model followed such deplorable software development practices that it would make any programmer cringe.

For starters, it had fundamental bugs (programming jargon for errors). For example, running the model several times with the same inputs produced outputs that differed by 80,000 predicted deaths over 80 days. What's worse, the modelers were aware of the shortcomings of their program but dismissed bug reports as insignificant. How could this shoddy, unscientific work serve as the basis of any decisions, let alone the most important ones of 2020?
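The nondeterminism complaint is worth making concrete. The sketch below is not the ICL code; it is a toy model of my own (all names and numbers are made up) showing how a stochastic simulation that never fixes its random seed returns different outputs for identical inputs, while seeding it restores reproducibility:

```python
import random

def simulate_deaths(days=80, initial_cases=100, seed=None):
    """Toy stochastic outbreak model (illustrative only, not the ICL code).
    Each active case infects 0-2 people per day; roughly 2% of cases die."""
    rng = random.Random(seed)            # seed=None -> fresh entropy each run
    active, deaths = initial_cases, 0
    for _ in range(days):
        active = sum(rng.randint(0, 2) for _ in range(active))
        deaths += active // 50
    return deaths

# Identical inputs, no fixed seed: the outputs typically differ between runs.
print(simulate_deaths(), simulate_deaths())

# Fixing the seed makes runs repeatable -- the kind of basic fix the
# modelers reportedly dismissed as insignificant.
print(simulate_deaths(seed=42) == simulate_deaths(seed=42))  # True
```

Run-to-run variation is legitimate in a stochastic model, but only when it is a documented, controllable feature; an unexplained 80,000-death spread on fixed inputs is a reproducibility failure, not randomness by design.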

SAGE (the UK government's scientific advisory group, which relied on the model) impacted the lives of hundreds of millions of people. Yet it met in private, published no meeting minutes, and only released its members' names after being pressured to do so. I think people should've had more insight into the process that turned their lives upside down.

Get me the numbers I want to see

Lockdowns became the go-to government response to contain the virus even though they come at a colossal cost. Unfortunately, they don't seem to work. For example, Sweden, the only EU country that imposed no lockdowns, fared better than most others. In the US, comparing graphs of cases or hospitalizations in lax vs. strict states, you couldn't tell which was which – or spot when governors introduced specific non-pharmaceutical interventions (NPIs), for that matter.

Given that people feel the negative impact of lockdowns on their lives, it's understandable, then, that governments feel compelled to prove they were necessary to avoid a catastrophe. As most science in that domain is sponsored by governments, they prefer to fund studies that produce the expected results.

Horst Seehofer, an influential German politician, was more direct about it. In the spring of 2020, when the lockdowns began, he instructed scientists to develop models that would support the need for them.

How does that pass for good ol' science? Scientists are not simply given funding that favors certain domains over others; they are explicitly told what conclusions to arrive at.

Garbage in, garbage out

The push to establish scientific support for the new ways of managing a pandemic also affected the quality of accepted scientific studies.

Take the famous case of asymptomatic transmission. If the disease spread almost exclusively the way we had always known – sick people transmitting it to others on contact – we'd all be a lot safer, and we could keep others safer, too. It's easier to discern (and avoid) people who sneeze or cough, and if we don't feel well, we stay home.

However, if asymptomatic transmission is responsible for a non-negligible share of infections, the case is much more complex: everybody becomes a menace, a potential enemy. Healthy-looking people might just be out there to get us, so we all have to wear masks (two of them, just to make doubly sure), keep our social distance, and not wander far from home.

As if on schedule, a study found that "transmission from asymptomatic individuals was estimated to account for more than half of all transmission." Sounds quite scary, right?

The problem is that the paper is exceptionally crappy, as Bret Weinstein and Heather Heying explain in their podcast.

The study takes three empirical studies, completely misunderstands (or manipulates) their findings, and then builds a model assuming that asymptomatic individuals are 75-100% as infectious (likely to transmit the virus) as symptomatic ones. From this completely flawed model, it then concludes that asymptomatic people account for more than half of all transmissions.

In modeling, this is known as "garbage in, garbage out": feed a model lousy input, and it will reliably produce lousy output. You can make a model say anything, but that doesn't make it describe the real world accurately.
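A few lines of arithmetic show how such a conclusion can be baked into the assumptions. The sketch below is hypothetical – the two-compartment split and the 60% symptom-free time share are my inventions, not the study's actual parameters – but it illustrates the mechanism: once you assume symptom-free people are 75-100% as infectious as symptomatic ones, "more than half" drops straight out.

```python
def asymptomatic_share(rel_infectiousness, symptom_free_time_share):
    """Fraction of all transmission attributed to people without symptoms
    in a toy two-compartment model. Both parameters are assumptions fed
    INTO the model; the "finding" is just their direct consequence."""
    asym = symptom_free_time_share * rel_infectiousness
    sym = (1 - symptom_free_time_share) * 1.0   # symptomatic baseline
    return asym / (asym + sym)

# Assume (hypothetically) that 60% of infectious person-time is symptom-free,
# then plug in the study's 75-100% relative-infectiousness assumption:
for r in (0.75, 1.0):
    print(round(asymptomatic_share(r, 0.6), 2))   # 0.53, then 0.6
```

The model contains no empirical measurement at all: the "more than half" headline is a restatement of the inputs, which is exactly the garbage-in, garbage-out pattern described above.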

It turns out all six authors of the study are from the CDC, the public health organization responsible for managing the pandemic and a proponent of the NPIs you've had to live with for the past year. As Ms. Heying notes in the podcast after summarizing the garbage study: in theory, science should inform policy, not the other way around, but it has now become a two-way street.

Not all studies are created equal

Models are great when we can't extract actual data from the world, but by definition, they are hypothetical (and should be refined or discarded once we have empirical data). Unfortunately, that distinction is rarely spelled out, and an increasing number of studies are based on models.

Here's Matt Ridley again:

It has become commonplace among financial forecasters, the Treasury, climate scientists, and epidemiologists to cite the output of mathematical models as if it was “evidence”. The proper use of models is to test theories of complex systems against facts.

Continuing with the example of the asymptomatic spread of SARS-CoV-2, there is a meta-study that aggregated the results of 54 empirical studies, with 77,758 participants, on the likelihood of people living in the same household transmitting the virus.

It found that the chance of an asymptomatic person infecting someone living under the same roof is 0.7%. Now imagine the chances of passing on the disease to someone walking by you in the street or standing next to you for half a minute while picking apples at the supermarket.

Listening with skepticism

Being skeptical about new scientific results is one of the tenets of the scientific method.

In an age where politics influences science, there's ample reason not to take scientific results at face value, especially in areas of intense political debate.

Coronavirus science is an excellent example of the dangerous impact that politics has on science. Feeling the pressure from the public to make the problem go away while at the same time wanting to keep political clout, politicians and scientists cooperate, to the detriment of the scientific ideal.

Further Reading