Dan Andrews, the divisive Premier of Victoria, triggered a collective sigh amongst his state's 6.4 million constituents recently when he announced that Victoria's lockdown restrictions would be extended for yet another two weeks, bringing the total term of the state's strict Stage 4 restrictions to at least 10 weeks.
At the commencement of his 90-minute press conference, Andrews played the science card.
"You can't argue with this sort of data. You can't argue with science."
Whatever you think about Victoria's lockdown laws, playing the "because science" card should not and does not immediately render one absolutely correct, nor does it eliminate room for inquiry and discourse.
Firstly, I should stress that this piece is by no means an attack on science, on the forecasting model Andrews describes, or on his decision to extend the lockdown.
I for one am a massive advocate of science, having built a company around emerging tech startups over the past six years. Science has improved our lives by orders of magnitude over the past 150 years. Clean water, sanitation, automobiles, planes, electricity and lighting, communications, medicine and the doubling of the average lifespan since 1900, and of course, the internet (although it has its drawbacks).
As the saying goes, status and all of its trappings aside, a typical person today lives a better and more comfortable life than kings and queens did just several centuries ago.
But I am also a massive advocate for truth and reason, and for not pulling the wool over unsuspecting people's eyes by pointing to the science and deeming it case closed, especially not when it comes to politicians wielding authority over millions of people, as is currently the case in Victoria.
The fact is that despite Andrews' rhetoric, you can argue with science.
The scientific method, at its core, is about falsifying assumptions. It is about bringing us closer to truth, but it rarely if ever amounts to absolute truth.
As the late Nobel Prize-winning physicist Richard P. Feynman put it, "we can never be sure we're right, we can only ever be sure we're wrong".
And this is because science is subject to error, to known unknowns and unknown unknowns, and it can also be gamed.
As Carl Bergstrom and Jevin D. West, authors of the bestselling Calling Bullshit: The Art of Scepticism in a Data-Driven World, put it, "there's plenty of bullshit in science, some accidental and some deliberate".
They point out that every living scientist acts from the same human motivations as everyone else, motivations that go beyond the quest for understanding: money, reputation, influence, power.
These flawed incentives have contributed to the replication crisis that is currently plaguing the sciences. A significant number of scientific studies are difficult or impossible to replicate or reproduce. And this is particularly true of medicine, psychology, and economics.
Medicine: Of 49 medical studies from 1990–2003 with more than 1,000 citations, only 24% remained largely unchallenged, prompting John Ioannidis, Professor of Medicine at Stanford University, to publish an article on Why Most Clinical Research Is Not Useful. Furthermore, in a 2012 paper, Glenn Begley, a biotech consultant, argued that only 11% of pre-clinical cancer studies could be replicated.
Psychology: A report by the Open Science Collaboration in August 2015 found that replication rates in psychology ranged from just 23% for social psychology up to about 50% for cognitive psychology. Even taking the more encouraging of those numbers, that translates to half of such studies failing to replicate.
Economics: Perhaps this should be the least surprising of the lot. A 2016 study found that one-third of 18 studies from two top-tier economics journals failed to replicate. A subsequent study argued that "the majority of the average effects in the empirical economics literature are exaggerated by a factor of at least 2 and at least one-third are exaggerated by a factor of 4 or more".
A single study doesn't tell you much about the world. Researchers weigh the evidence across multiple studies to form not a correct, but a "more likely to be correct" view of how the world works, and oftentimes this view is revisited and updated over time. Hello, butter and fat being nothing but bad for you.
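To make that weighing of evidence concrete, here is a minimal sketch in Python of the simplest way study results are pooled, inverse-variance weighting. The effect sizes and standard errors are invented purely for illustration.

```python
# A toy illustration of weighing evidence across studies (all numbers invented):
# inverse-variance weighting, the simplest form of pooling used in meta-analysis.
studies = [
    # (effect estimate, standard error) from three hypothetical studies
    (0.42, 0.20),
    (0.10, 0.15),
    (0.55, 0.30),
]

weights = [1 / se ** 2 for _, se in studies]               # precise studies count for more
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")
# No single estimate is treated as "the truth"; the pooled view shifts as new studies arrive.
```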
Science is ultimately susceptible to all of the following failings.
Andrews said that "you can't argue with this sort of data".
But as computer scientists know all too well, the quality of the output is determined by the quality of the input. For data to be useful, it needs to be the right data, it needs to be interpreted correctly, and the conclusions drawn from it need to be free of error and intentional fudging.
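As a toy illustration of that point (the figures are invented, and this is not the model Andrews was describing), the same crude forecast gives very different answers depending on the quality of the data it is fed:

```python
# Garbage in, garbage out: the same crude forecast, fed clean versus mis-entered data.
clean_cases = [120, 110, 98, 90, 84, 76, 70]
dirty_cases = [120, 110, 98, 900, 84, 76, 70]   # a single data-entry error: 90 keyed in as 900

def naive_forecast(cases):
    """Forecast tomorrow's cases as the average of the last seven days (deliberately crude)."""
    return sum(cases[-7:]) / 7

print(naive_forecast(clean_cases))  # ~92.6
print(naive_forecast(dirty_cases))  # ~208.3: one bad input more than doubles the output
```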
Not only that, but things change. The past isn't always a reliable predictor of the future, and it also fails to account for unknown unknowns and black swan events such as, say, a novel coronavirus bringing down the world economy.
This is precisely why there is a growing trend in business away from being data-driven and towards being data-informed. The former puts all of your faith in data, flawed or otherwise, whereas the latter combines data with the professional judgment gained from many years of experience in a particular domain, accounting for the nuances surrounding a particular decision.
We wouldn't think that artificial intelligence would be racist, but it turns out that when you feed an algorithm data based on how the world currently works, it adopts our failings too.
Confirmation bias refers to a tendency to search for or interpret information in a way that confirms one's preconceptions, while ignoring all of the disconfirming evidence. Whatever argument I have, I could probably find supporting evidence for it, depending on how I choose to select and interpret the data.
Its close relative, selection bias, operates at the sampling stage. I might want to argue that people do their best work at night, and to support my case I might sample only night owls, excluding everybody else, including the early birds who do their best work in the morning.
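A rough simulation of that night-owl example, with entirely made-up numbers, shows how the sampling choice alone can manufacture the conclusion:

```python
# A toy simulation of selection bias: the sampling choice, not reality, drives the finding.
import random

random.seed(0)

# An invented population: 30% night owls, 70% early birds.
population = (
    [{"group": "night_owl", "best_at_night": True} for _ in range(300)]
    + [{"group": "early_bird", "best_at_night": False} for _ in range(700)]
)

def share_best_at_night(sample):
    return sum(p["best_at_night"] for p in sample) / len(sample)

# Honest sample: drawn from the whole population.
honest = random.sample(population, 100)

# Biased sample: survey only night owls (say, by recruiting at an 11pm coworking space).
biased = [p for p in population if p["group"] == "night_owl"][:100]

print(f"Whole-population sample: {share_best_at_night(honest):.0%} do their best work at night")
print(f"Night-owls-only sample:  {share_best_at_night(biased):.0%} do their best work at night")
```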
Closely related to selection bias, cherry-picking is all about presenting the results of a study or experiment that best support an argument, instead of reporting all of the findings. If only 10% of your results support your argument, but the other 90% don't, it's tempting and common to report only the 10%, especially if it leads to said reputation, influence, money and power.
The same goes for referencing studies that support your position on something like vaccines, while ignoring all of the studies that don't.
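Here is cherry-picking in miniature, using hypothetical results: out of ten measured outcomes, only the flattering ones make it into the write-up.

```python
# Cherry-picking: report only the findings that flatter your argument.
# The effect sizes below are invented.
results = {
    "memory": 0.02,
    "reaction_time": -0.10,
    "mood": 0.45,          # looks great in a headline
    "sleep_quality": -0.20,
    "focus": -0.05,
    "creativity": 0.30,    # so does this
    "energy": -0.15,
    "stress": -0.08,
    "accuracy": -0.12,
    "stamina": -0.25,
}

reported = {k: v for k, v in results.items() if v > 0.1}
print("What gets reported:", reported)                          # 2 of 10 outcomes
print("What quietly disappears:", len(results) - len(reported), "outcomes")
```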
Just because A and B correlate, it does not mean that A causes B.
My friend, Daniel Cannizzaro, recently raised $1 million for his fintech company, at around the same time that Victoria's coronavirus case numbers started to plummet.
There is an inverse correlation here. I could make the wild claim that the Victorian Government should invest millions into his startup and see case numbers drop to zero, but that would just be stupid. The two clearly have nothing to do with each other. Correlation does not mean causation.
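To illustrate with invented figures, two series that have nothing to do with each other can still be almost perfectly correlated simply because both trended over the same period:

```python
# Two unrelated series that both trend over the same months will correlate strongly.
funding_raised_m = [0.1, 0.2, 0.35, 0.5, 0.8, 1.0]   # hypothetical cumulative $M raised
daily_cases = [650, 480, 300, 180, 90, 40]           # hypothetical falling case numbers

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Correlation: {pearson(funding_raised_m, daily_cases):.2f}")
# Strongly negative, and completely meaningless as evidence that one causes the other.
```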
A p-value is ultimately a measure of statistical significance. For findings to carry weight, they need to be statistically significant, which by convention usually means p < 0.05.
Scientists can engage in the conscious or subconscious manipulation of data in a way that produces the desired p-value, and therefore a "statistically significant" result, in order to get published and build their brand and reputation. This practice is known as p-hacking.
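A minimal sketch of how p-hacking plays out, assuming numpy and scipy are available and using nothing but random noise: test enough outcomes and chance alone will eventually hand you a "significant" result.

```python
# Run enough tests on pure noise and, by chance alone, roughly 1 in 20 will clear p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

significant = []
for outcome in range(20):                # twenty made-up outcome measures
    treatment = rng.normal(size=30)      # 30 "subjects" of random noise: there is no real effect
    control = rng.normal(size=30)
    p = stats.ttest_ind(treatment, control).pvalue
    if p < 0.05:
        significant.append((outcome, round(float(p), 3)))

print("'Significant' findings in pure noise:", significant)
# Report only these, file the rest in a drawer, and you have a publishable 'effect'.
```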
Just as teenagers will do almost anything to rack up likes on Instagram, scientists chase citations, and many will do whatever it takes to get them.
Nowadays, there are numerous predatory journals that effectively operate on a "pay to play" model.
Scientists can get all sorts of junk findings published in these journals, provided they pay up, and they can then point to their published papers to further their influence or careers.
For example, John McCool, a Seinfeld fan, submitted an article to the predatory Urology and Nephrology Open Access Journal. It was entirely based on the Seinfeld episode "The Parking Garage", in which Jerry Seinfeld forgets where he parked his car, ultimately forcing him to urinate in public and get arrested for it.
He later pleaded his case, suggesting that he would die of uromycitisis poisoning if he didn't relieve himself. No such condition exists, but that didn't stop the journal from accepting McCool's mock paper, "Uromycitisis Poisoning Results in Lower Urinary Tract Infection and Acute Renal Failure", provided, of course, he fronted up with the $799 fee. He never did.
Just because something has been peer-reviewed does not make it incontestable.
Usually, peer review amounts to a high-level review of the methodology the researchers followed and a scan of the work for obvious errors or oversights; it does not amount to a complete replication of the entire study.
One systematic review into the practice found that peer review sometimes picks up errors and fraud by chance, but that it is generally not a reliable method for detecting fraud.
As Bergstrom and West put it, "peer reviewers make mistakes and they cannot possibly check every aspect of the work... peer review cannot catch every innocent mistake, let alone uncover well-concealed acts of scientific misconduct".
The list goes on. These are just some of the many ways that science can indeed be wrong.
"The first thing to recognize is that any scientific paper can be wrong", says Bergstrom.
Science can be, and is, gamed by researchers, politicians, entrepreneurs: basically, anybody who has a vested interest in the science telling a specific story to benefit them.
So the next time someone pulls out the "because science" card, be they a politician or your friend, realise that it's not the end of the conversation, but the start of it.
You might want to question where the data came from, how it was interpreted, and what incentives the people presenting it might have.
If you know what to look for, you can call bullshit quite easily and have people scrambling to change the subject. But rather than let them, engage in a conversation instead.
In our efforts to be right and win arguments, finding out what is actually closer to the truth, and improving our world view, often becomes the casualty.
The more informed we are about how the world works, and that includes how science works, the better our decisions will become.
Pick up Calling Bullshit, the most important book of the year as far as I'm concerned, here.
Steve Glaveski is on a mission to unlock your potential to do your best work and live your best life. He is the founder of innovation accelerator Collective Campus, author of several books, including Employee to Entrepreneur and Time Rich, and a productivity contributor for Harvard Business Review. He's a chronic autodidact and is into everything from 80s metal and high-intensity workouts to attempting to surf and hold a warrior three pose.