The Burden Of Proof On Climate Scientists—And Those Wishing For Its “Solutions”
By Matt Briggs
If you say a calamity will befall me, and ask me to pay to protect against it, the burden is on you to prove (a) that the calamity is likely in all its details, (b) that the cost of the protection is worth it, in the sense that the protection is likely to do the job asked of it, and (c) that no cheaper effective form of protection exists.
If you cannot do all three, then I am under no obligation to heed you. Showing only one element is insufficient to compel my action. That is, showing only that the calamity is likely isn’t enough.
For instance, if you convince me, based on some set of evidence, that a moon-sized asteroid will ram into the earth in two years, but then offer to sell me, at a high price, a magic spell book which, when used, might dissuade the asteroid, then I will not buy. Even if I agree the world will end.
Or you might show, given a different set of evidence, that there is a reasonable chance a fire will burn down my house. But if the cost of your insurance is higher than the price of the house, I will not pay. I can buy insurance from another vendor.
Again, you need to prove all three elements, and in detail. A conclusion which is not, and never was, in any way controversial.
The authors lament that “scientists typically demand too much of themselves in terms of evidence, in comparison with the level of evidence required in a legal, regulatory, or public policy context.” This being so, they beg the IPCC to “recommend more prominently the use of the category ‘more likely than not’ as a level of proof in their reports” because certain courts do.
What they mean by “more likely than not” is what anybody does: better than 50-50. I’ll not comment on why courts choose this over other possibilities, but I will say what this or any probability-based criterion means.
First, except for one possibility, there is no one central claim of global cooling—or global warming, or climate change, or sustainability, or whatever. So there is no one claim for scientists to put a measure of uncertainty on. Except for this statement: man influences the climate. Which should be given full assent by any scientist, because it is deducible from simple premises every scientist claims to believe.
But how much man influences the climate is an open question, with many competing claims. As is what is best to be done about it, if anything. The uncertainties here are rife.
There are two crucial things to remember when speaking of any model uncertainty (solutions are also models):
(1) All models only say what they are told to say, because all models are lists of premises put there by scientists;
(2) Those premises determine the probability of the model’s conclusion (or model’s statements).
The authors write that “Climate scientists generally look for a probability of 90–100% before they call a scientific claim…’very likely’” and then complain that “climate scientists have set themselves a higher level of proof in order to make a scientific claim than law courts ask for in civil litigation in the USA”. This is a silly complaint, followed by an odd table trying to map probability words to quantifications, going so far as to say that “likely” sometimes means 100%. Which is false.
It’s silly because (a) no probability proves a model is true, and (b) model statements get their probabilities from the premises scientists choose. They can pick what premises they like, and, because of these choices, make the model’s statements appear as sure or as unsure as they like.
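To see how the choice of premises drives the probability of a model’s conclusion, here is a minimal illustrative sketch (mine, not the authors’; all numbers are hypothetical). Feed the very same evidence through Bayes’ rule under two different premise sets, here represented by two different priors, and the “conclusion probability” changes dramatically:

```python
# Hypothetical illustration: the same evidence yields very different
# conclusion probabilities depending on the premises (here, the prior)
# the modeler chooses. None of these numbers come from any real study.

def posterior(prior, like_if_true, like_if_false):
    """Bayes' rule: P(claim | data), conditional on a chosen prior premise."""
    numerator = prior * like_if_true
    return numerator / (numerator + (1 - prior) * like_if_false)

# Identical evidence in both cases: data 4x more likely if the claim is true.
like_true, like_false = 0.8, 0.2

print(posterior(0.5, like_true, like_false))   # confident premise: ~0.8
print(posterior(0.05, like_true, like_false))  # skeptical premise: ~0.17
```

Same data, different premises, and the conclusion swings from “very likely” to “unlikely.” This is the sense in which a model only says what it is told to say.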
It’s important to grasp that these model criticisms have nothing to do with the particular probabilities asserted. Critiques must focus on the premises themselves, the constructs of the model.
MISSING THE POINT
Oreskes and the others have a ridiculous goal. They assume that once a certain threshold probability is reached a scientific claim has been “proved”, which is why they carp on “scientists [who] strive to make sure that all possible complaints and objections are fully addressed” in their models.
That is not the way probability and decisions work.
If the model’s statements are uncertain, they will remain uncertain even though some activist—or court—has decided the uncertainty was small enough to ignore for their purposes—which is to implement some “solution”. But—those solutions are also models and have their own uncertainties.
Plus this: the model and the solution uncertainties have to be married before any decisions can be made, which is described next.
The authors don’t seem to understand this, and instead build a straw man and argue that waiting for 100% certainty could cause harm. “Proof in the climate change context is particularly urgent for two reasons: one is that there are legal cases that hinge in part on whether anthropogenic climate change is proven, and two because we are running out of time”.
This obviously assumes what it sets out to prove: we can only be running out of time if the worst models’ predictions are certain. Which they are not, as the authors admit. But they want the “solutions” so much that they try to bully scientists into saying “Close enough.”
A global cooling scientist deduces a probability quantification from a model, for a statement like “There will be a global average temperature (GAT) change of X, where GAT is measured by these processes.” Conditional on his assumptions, this probability is, say, P_T. We saw above that this kind of judgement is disputable: change the assumptions/premises and you change the value of P_T. But let him have his P_T.
Next, an activist comes along and offers a “solution” to either cut the size of this temperature change, or to mitigate its supposed effects if it can’t be cut. This solution, as said, is also a model. It has a list of assumptions from which a probability quantification can be deduced (though we don’t always need numbers). The activist will say P_S = 1, because activists never doubt.
But scientists (of the non-activist kind) will look at the proposed solution and come to a different, more sober value of P_S < 1 (usually done by removing the activist’s premise “I am right!”).
We then have a chain of uncertainties: first in the model projections (of the GAT itself, which is measured with error, and of its supposed effects), and then in a solution which assumes the model is true. But the scientist has already admitted the model is not certainly true, because P_T < 1.
Thus the probability for both the model to be true and the proposed solution to work is P_D = P_T * P_S < 1.
This is the number to put into decision calculations, not P_T or P_S alone, as Oreskes asserts. Obviously, P_D is smaller than both P_T and P_S. So any urgency must be less than the authors assumed.
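The arithmetic above can be sketched in a few lines. This is my own illustration with hypothetical values of P_T and P_S, not numbers from any study; the point is only that the joint probability is always smaller than either factor:

```python
# Hypothetical illustration of the chained uncertainty P_D = P_T * P_S.
# P_S is read as the probability the solution works GIVEN the model is true,
# so the product is the probability that the model is right AND the
# solution works. All input values are assumptions for demonstration.

def joint_probability(p_model, p_solution_given_model):
    """P_D = P_T * P_S."""
    return p_model * p_solution_given_model

p_t = 0.9  # scientist's probability the projection is right (assumed)
p_s = 0.6  # sober probability the proposed solution works (assumed)
p_d = joint_probability(p_t, p_s)

print(p_d)  # ~0.54: the number that belongs in the decision calculation
assert p_d < min(p_t, p_s)  # the joint is always below either factor
```

Even with a generous P_T of 0.9, a middling P_S drags the decision-relevant probability down toward a coin flip, which is why urgency claims built on P_T alone overstate the case.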
In order to make real decisions, we recall the first section in which the model statements and the proposed solution are contrasted with other solutions, and especially the “solution” of doing nothing (as it were).
The authors bring up so-called attribution studies, making the same error of insisting that probabilities greater than 50% are as good as certainty.
Attribution studies are fundamentally flawed in the sense that their interpretations guarantee over-certainty. I mean that they all assume model perfection, and draw their conclusions with that in mind. For details, you can read my paper “The Climate Blame Game”.