
Covid-19 Lockdown Modelling Accused Of ‘Most Devastating Coding Mistake in History’

GWPF Newsletter 16/05/20

1) Modelling That Led To Lockdown Was ‘Totally Unreliable’ And A ‘Buggy Mess’, Say Experts
The Daily Telegraph, 16 May 2020

The code, written by Professor Neil Ferguson and his team at Imperial College London, was impossible to read, scientists claim

The Covid-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions unemployed, has been slammed by a series of experts.

Professor Neil Ferguson’s computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on”.

The model, credited with forcing the Government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco.

“In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”

The comments are likely to reignite a row over whether the UK was right to send the public into lockdown, with conflicting scientific models having suggested people may have already acquired substantial herd immunity and that Covid-19 may have hit Britain earlier than first thought. Scientists have also been split on what the fatality rate of Covid-19 is, which has resulted in vastly different models.

Up until now, though, significant weight has been attached to Imperial’s model, which placed the fatality rate higher than others and predicted that 510,000 people in the UK could die without a lockdown.

It was said to have prompted a dramatic change in policy from the Government, causing businesses, schools and restaurants to be shuttered immediately in March. The Bank of England has predicted that the economy could take a year to return to normal, after facing its worst recession for more than three centuries.

The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.

In its initial form, developers claimed the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, an American developer, who helped clean up the code before it was published online. Yet, the problems appear to go much deeper than messy coding.

Many have claimed that it is almost impossible to reproduce the same results from the same data, using the same code. Scientists from the University of Edinburgh reported such an issue, saying they got different results when they used different machines, and even in some cases, when they used the same machines.

“There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote on GitHub.

After a discussion with one of the GitHub developers, a fix was later provided. This is said to be one of a number of bugs discovered within the system. The GitHub developers explained this by saying that the model is “stochastic”, and that “multiple runs with different seeds should be undertaken to see average behaviour”.
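The GitHub developers’ advice can be illustrated with a toy simulation (a hypothetical sketch, not the Imperial code): fixing the random seed makes any single run exactly repeatable, while averaging over many seeds reveals the model’s typical behaviour.

```python
import random

def simulate_outbreak(days, seed):
    """Toy stochastic branching process (purely illustrative).

    Each infected person infects 0-3 others per step; all randomness
    flows through one generator, so a fixed seed fully determines the run.
    """
    rng = random.Random(seed)
    infected, total = 1, 1
    for _ in range(days):
        new_cases = sum(rng.choice([0, 1, 2, 3]) for _ in range(infected))
        infected = new_cases
        total += new_cases
        if infected == 0:  # outbreak died out
            break
    return total

# A single seeded run is reproducible; the average over many seeds
# shows the "average behaviour" the developers refer to.
runs = [simulate_outbreak(days=10, seed=s) for s in range(100)]
average = sum(runs) / len(runs)
```

Two calls with the same seed return identical totals; two calls with different seeds generally do not, which is why the seed should be a declared input rather than hidden state.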

However, it has prompted questions from specialists, who say “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters…otherwise, there is simply no way of knowing whether they will be reliable.”

It comes amid a wider debate over whether the Government should have relied more heavily on numerous models before making policy decisions.

Sir Nigel Shadbolt, Principal of Jesus College, Oxford, said that “having a diverse variety of models, particularly those that enable policymakers to explore predictions under different assumptions, and with different interventions, is incredibly powerful”.

Like the Imperial code, a rival model by Professor Sunetra Gupta at Oxford University works on a so-called “SIR” approach, in which the population is divided into those who are susceptible, infected and recovered. However, while Gupta made the assumption that 0.1pc of people infected with coronavirus would die, Ferguson placed that figure at 0.9pc.
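The effect of that single assumption can be seen in a minimal deterministic SIR sketch (hypothetical parameter values; not the Gupta or Ferguson code), where projected deaths scale directly with the assumed infection fatality rate.

```python
def sir_deaths(population, beta, gamma, ifr, days, dt=0.1):
    """Minimal SIR model: susceptible -> infected -> recovered.

    Deaths are taken as a fixed fraction (ifr) of everyone who is
    ever infected. beta is the transmission rate, gamma the recovery rate.
    """
    s, i, r = population - 1.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    total_infected = population - s
    return ifr * total_infected

# Identical epidemic dynamics, different fatality assumptions:
# the 9x gap between 0.1pc and 0.9pc carries straight through to deaths.
low = sir_deaths(66_000_000, beta=0.3, gamma=0.1, ifr=0.001, days=365)
high = sir_deaths(66_000_000, beta=0.3, gamma=0.1, ifr=0.009, days=365)
```

With everything else held fixed, the ratio of projected deaths is exactly the ratio of the assumed fatality rates, which is why the two groups’ headline numbers diverged so sharply.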

That led to a dramatic reversal in government policy from attempting to build “herd immunity” to a full-on lockdown. Experts remain baffled as to why the government appeared to dismiss other models.

“We’d be up in arms if weather forecasting was based on a single set of results from a single model and missed taking that umbrella when it rained,” says Michael Bonsall, Professor of Mathematical Biology at Oxford University.

Particular concerns have been raised over Ferguson’s model, with Konstantin Boudnik, vice-president of architecture at WANdisco, saying his track record in modelling doesn’t inspire confidence.

In the early 2000s, Ferguson’s models incorrectly predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu.

“The facts from the early 2000s are just yet another confirmation that their modeling approach was flawed to the core,” says Dr Boudnik. “We don’t know for sure if the same model/code was used, but we clearly see their methodology wasn’t rigorous then and surely hasn’t improved now.”

A spokesperson for the Imperial College COVID19 Response Team said: “The UK Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.

“Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.

“Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5pc in the UK.”

2) Neil Ferguson’s Computer Model Could Be The Most Devastating Modelling Mistake Of All Time
The Daily Telegraph, 16 May 2020

The boss of a top software firm asks why the Government failed to get a second opinion before accepting Imperial College’s Covid modelling

David Richards and Konstantin Boudnik

In the history of expensive software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code.

But nobody died and the only hits were to NASA’s budget and pride. Imperial College’s modelling of non-pharmaceutical interventions for Covid-19, which helped persuade the UK and other countries to bring in draconian lockdowns, could supersede the failed Venus space probe and go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.

Since publication of Imperial’s microsimulation model, those of us with a professional and personal interest in software development have studied the code on which policymakers based their fateful decision to mothball our multi-trillion pound economy and plunge millions of people into poverty and hardship. And we were profoundly disturbed at what we discovered. The model appears to be totally unreliable and you wouldn’t stake your life on it.

First though, a few words on our credentials. I am David Richards, founder and chief executive of WANdisco, a global leader in Big Data software that is jointly headquartered in Silicon Valley and Sheffield. My co-author is Dr Konstantin ‘Cos’ Boudnik, vice-president of architecture at WANdisco, author of 17 US patents in distributed computing and a veteran developer of the Apache Hadoop framework that allows computers to solve problems using vast amounts of data.

Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and, guess what, was the code used for Mariner 1. This outdated language contains inherent problems with its grammar and the way it assigns values, which can give rise to multiple design flaws and numerical inaccuracies. One file alone in the Imperial model contained 15,000 lines of code.

Try unravelling that tangled, buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming. Industry best practice would have 500 separate files instead. In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.

The approach ignores a widely accepted computer science principle known as “separation of concerns”, which dates back to the early 70s and is essential to the design and architecture of successful software systems. The principle guards against what developers call CACE: Changing Anything Changes Everything.
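As a hypothetical illustration of the principle the authors invoke (not a claim about how the Imperial code is actually organised), separation of concerns means configuration, dynamics and reporting live in separate units, each testable on its own:

```python
# Hypothetical sketch of "separation of concerns": each unit has one job,
# so it can be tested in isolation and changed without ripple effects.

def load_parameters():
    """Configuration only: no simulation logic here."""
    return {"population": 1000, "initial_infected": 10,
            "growth_rate": 1.2, "days": 5}

def step(infected, growth_rate, population):
    """Dynamics only: one pure, independently testable update rule."""
    return min(population, infected * growth_rate)

def run(params):
    """Orchestration only: wires parameters into the dynamics."""
    infected = params["initial_infected"]
    for _ in range(params["days"]):
        infected = step(infected, params["growth_rate"], params["population"])
    return infected

def report(infected):
    """Presentation only: formatting never touches model state."""
    return f"projected infections: {infected:.0f}"
```

Because `step` knows nothing about configuration or output, changing how parameters are loaded or results are printed cannot silently change the epidemic dynamics, which is exactly the CACE failure the authors describe.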

Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole. Testing allows for guarantees. It is what you do on a conveyor belt in a car factory. Each and every component is tested for integrity in order to pass strict quality controls. Only then is the car deemed safe to go on the road.

As a result, Imperial’s model is vulnerable to producing wildly different and conflicting outputs based on the same initial set of parameters. Run it on different computers and you would likely get different results. In other words, it is non-deterministic.

As such, it is fundamentally unreliable. It raises the obvious question of why our Government did not get a second opinion before swallowing Imperial’s prescription.

Ultimately, this is a computer science problem and where are the computer scientists in the room? Our leaders did not have the grounding in computer science to challenge the ideas and so were susceptible to the academics’ claims. I suspect the Government saw what was happening in Italy with its overwhelmed hospitals and panicked.

It chose a blunt instrument instead of a scalpel and now there is going to be a huge strain on society. Defenders of the Imperial model argue that because the problem – a global pandemic – is dynamic, then the solution should share the same stochastic, non-deterministic quality.

We disagree. Models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters. Otherwise, there is simply no way of knowing whether they will be reliable.
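The reproducibility test the authors demand can be written down as a simple regression check (a hypothetical stand-in model, not Imperial’s code): with the seed treated as part of the inputs, two identical runs must agree exactly.

```python
import random

def run_model(params, seed):
    """Stand-in for any stochastic simulation: all randomness comes
    from one generator seeded up front, so the output is a pure
    function of (params, seed)."""
    rng = random.Random(seed)
    value = params["initial"]
    for _ in range(params["steps"]):
        value += rng.gauss(0, params["noise"])
    return value

def is_reproducible(params, seed):
    """The basic scientific test: same inputs must yield same outputs."""
    return run_model(params, seed) == run_model(params, seed)

params = {"initial": 100.0, "steps": 1000, "noise": 0.5}
```

A model can be stochastic and still pass this test; what it must not do is draw randomness from hidden state (timestamps, uninitialised memory, thread scheduling) that varies between runs or machines.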

Indeed, many global industries successfully use deterministic models that factor in randomness. No surgeon would put a pacemaker into a cardiac patient knowing it was based on an arguably unpredictable approach for fear of jeopardising the Hippocratic oath. Why on earth would the Government place its trust in the same when the entire wellbeing of our nation is at stake?

David Richards is the founder and chief executive of WANdisco; Dr Konstantin Boudnik is the company’s vice-president of architecture

3) Charles Moore: Lockdown Is Showing Us The Misery That Net Zero Will Demand
The Daily Telegraph, 16 May 2020

Eco-politics succeeds only with voters who feel guilty about being rich. Covid-19 will put paid to that

Roger Harrabin, the BBC’s evangelically green environment analyst, recently wrote this on his employer’s website:

“I’ve just had a light bulb moment. The feisty little wren chirping loudly in the matted ivy outside my back door is telling us something important about global climate change. That’s because, intertwined with the melodious notes of a robin, I can actually hear its song clearly. Normally, both birds are muffled by the insistent rumble of traffic, but the din has been all but extinguished in the peace of lockdown.”

Ah, the peace of lockdown. It is, for us lucky ones, very real. It is two months to the day since I last left my rural county. Never before have I experienced so much quiet here, or brighter stars. My long daily walks are almost mystically beautiful in their combination of light and air, the sound of nature and the silence of machines. If I were Wordsworth, I would give thanks in verse. Like Harrabin, I love hearing more wrens and robins and less traffic, and want it to continue.

What might that involve, though? The light-bulb over Harrabin’s head – powered, of course, by green energy – is telling him that we must, in the new eco-buzz phrase, “Build Back Better”. Governments, in their Covid recovery packages, should support only companies and projects “which decouple economic growth from GHG [greenhouse gas] emissions”. Otherwise, we shall not achieve Net Zero. I am quoting from a recent working paper of the Oxford Smith School of Enterprise and the Environment with the snappy title, “Will Covid-19 fiscal recovery accelerate or retard progress on climate change?” Its authors include the grandest of global greens such as Joseph Stiglitz and Lord Stern.

Their opening paragraph says: “The Covid-19 crisis could mark a turning point in progress on climate change. This year, global greenhouse gas (GHG) emissions will fall by more than in any other year on record. The percentage declines likely in 2020, however, would need to be repeated, year after year, to reach net-zero emissions by 2050. Instead, emissions will rebound once mobility restrictions are lifted and economies recover, unless governments intervene.”

The authors are in a bind. They half-recognise that Covid-19 – not just medically, but socio-economically – is a disaster from which societies will wish to recover. Yet it has brought about what they want. Emissions have fallen unprecedentedly because of the extreme economic contraction it has produced. Focus on their point that such a decline “would need to be repeated, year after year” to save the planet. They want the Covid effect – without, of course, the illness bit – to go on forever.

That effect means two related things. The first is an enormous increase in government control. To fight the disease, we have had to surrender large parts of our freedom to work, trade, associate, travel, worship, even vote (local government elections being postponed) and in many cases our right to a family life.

The second effect is greater poverty. This is caused by the compulsory stoppage of so many businesses, with consequent insolvencies, wage cuts and job losses. The poverty has been mitigated and delayed by government measures. This may not directly damage Harrabin or me as, on full pay, we enjoy the intertwining of chirpy wrens and melodious robins (though we shall surely notice it later in our taxes); but it was shockingly unexpected and is becoming shockingly real. It has also made billions anxious, lonely and gloomy.

Stiglitz, Stern and Co are right that “emissions will rebound once mobility restrictions are lifted and economies recover, unless governments intervene”, but they do not seem to understand what they are saying. Why will emissions rebound? Because people will travel more – especially in cars (which are much safer than public transport against the virus). And why will economies recover? Because growth is a function of activity, and activity is made possible by energy, and globally energy remains about 85 per cent dependent on fossil fuels.

(This applies, by the way, even to eco-activity. Part of the blessed peace of lockdown has been the absence of Extinction Rebellion street protests which cannot be organised without modern transport. The same applies to the planet-saving conferences to which rich and powerful people fly from all over the world. This year, because of Covid-19, Glasgow has been spared the United Nations Climate Change Conference (COP26) in which 196 countries would have met to talk yet again about limiting warming to 1.5C.)

As Lord Lilley, the former Cabinet minister, put it in a Global Warming Policy Forum webinar this week, the coming Covid recession is caused “by a suppression of supply, not by a failure of demand”. In other words, it is not what people wanted. It has been imposed upon them. In a democracy, people rarely vote for what they do not want. After the Covid lockdown, voters will want to get back to work unimpeded and take the full benefits of the collapse of the oil price in falling costs for transport and heating. They will not, you would think, be in the mood to go on paying ever-higher electricity prices for renewables.

Even in goody-goody Germany, this thought is dawning on politicians. This week, Angela Merkel had to give in to her party’s MPs who protested that Germany should not contribute its bit to the European “Green Deal” – agreed shortly before the virus struck – for still faster emissions reductions by 2030, unless all other EU member states do the same. It is an insoluble problem for green politics that they succeed only among voters who feel guilty for being rich. Greenery depends on the consumerism it hates for its very existence. Most voters will now be angry about getting poorer, not guilty about being rich.

How can green policies survive, then? The clue is in that phrase “unless governments intervene”. Only governments can suppress the economic spirits of their people. And the only way they can do so is by exploiting the language of emergency.

That is why the Covid-19 experience appeals to the Net Zero mind-set. Even before the disease came along, the phrase “climate emergency” had been deliberately deployed by activists and accepted by MPs. It was invented to persuade government to coerce public opinion. The remedy, you see, is “led by the science”, which is allegedly “settled”. The message to the people is: lose your rights or lose the planet.

The Covid experience ought to have shown us the difference between a real emergency – a fell plague besetting the world – and a speculative one. Even in the Covid crisis, there is fierce debate about whether such action was necessary. Those doubts should be infinitely stronger in relation to Net Zero. Its entire edifice is based on models – we keep seeing how models can mislead – which make worst-case assumptions about the distant future. Problem, perhaps; emergency, no.

Surely we should have some faith that our developing technology can continue to grow cleaner and quieter. Surely the resources of civilisation can make it easier for Harrabin and me to hear wrens and robins without beggaring humanity in the process. “Our house is on fire!” shouted Greta Thunberg last year. It isn’t, but it has been locked down. Once is enough.

4) Tilak Doshi: Coronavirus And Climate Change: A Tale Of Two Hysterias
Forbes, 14 May 2020

Up to a few months ago, life was normal. Well, sort of. In that pre-coronavirus normalcy, the reigning narrative was that of mankind facing assured destruction if we did not amend our wasteful – read carbon-intensive – ways. Short of a drastic curtailment in our use of fossil fuels, we would all perish in the not too distant future.

How distant depended on who one listened to. Those at the radical end of the spectrum – US Congresswoman Alexandria Ocasio-Cortez, teenage icon Greta Thunberg and the Extinction Rebellion folk among others – gave us a decade or less before we would face the fury of the elements, be they fires, droughts, floods, and other horrors of biblical proportions. The “moderate” position held by the mainstream climate change establishment – ranging from key multilateral organizations such as the UN’s IPCC to the private sector, with oil majors such as Shell and leading environment and social governance (“ESG”) practitioners like Larry Fink, CEO of the world’s largest asset manager BlackRock – held that we had to reach the “net-zero” rate of carbon emissions by 2050 lest the world climate “tip over” to Armageddon.

But then, something happened along the way. Up popped a particularly contagious virus, first in its birthplace in Wuhan, China, and then spreading across the world. In a mere couple of months, the novel coronavirus began to wreak death and economic mayhem, the latter caused primarily by governments panicked into shutting down entire swathes of the economy to “flatten the curve” of infections and prevent health systems from being overwhelmed.

It did not take long after the onset of the global pandemic for people to observe the many parallels between the Covid-19 pandemic and climate change. An invisible novel virus of the SARS family now represents an existential threat to humanity. As does CO2, a colourless trace gas constituting 0.04% of the atmosphere which allegedly serves as the control knob of climate change. Lockdowns are to the pandemic what decarbonization is to climate change. Indeed, lockdowns and decarbonization curtail much the same activities, from tourism and international travel to shopping and having a good time. It would seem that Greta Thunberg’s dreams have come true, and perhaps that is why CNN announced on Wednesday that it is featuring her on a coronavirus town-hall panel alongside health experts.

In response to both threats, governments and their policy experts habitually chant the “follow the science” mantra. In everything from face masks and social distancing (1 or 2 meters, depending on the relevant jurisdiction) to the duration of lockdowns, governments were “led by the science”. California governor Gavin Newsom told protestors last month “We are going to do the right thing, not politics, not protests, but by science”. In banning the sale of mulch and vegetable seeds and such-like as non-essential, Michigan governor Gretchen Whitmer proclaimed in a New York Times op-ed that “each action has been informed by the best science and epidemiology counsel there is.”

But, beyond being a soundbite and a means of obtaining political cover, ‘following the science’ is neither straightforward nor consensual. The diversity of scientific views on Covid-19 became quickly apparent in the dramatic flip-flop of the UK government. In the early stages of the spread of infection, Boris Johnson spoke of “herd immunity”, protecting the vulnerable and common sense (à la Sweden’s leading epidemiologist Professor Johan Giesecke) and rejected banning mass gatherings or imposing social distancing rules. Then, an unpublished bombshell March 16th report by Professor Neil Ferguson of Imperial College London warned of 510,000 deaths if the country did not immediately adopt a suppression strategy. On March 23, the UK government reversed course and imposed one of Europe’s strictest lockdowns. For the US, the professor had predicted 2.2 million deaths absent similar government controls, and here too, Ferguson’s alarmism moved the federal government into lockdown mode.

Unlike climate change models that predict outcomes over a period of decades, however, it takes only days and weeks for epidemiological model forecasts to be falsified by data. Thus, by March 25th, Ferguson’s prediction of half a million fatalities in the UK had been adjusted downward to “unlikely to exceed 20,000”, a reduction by a factor of 25. This drastic reduction was credited to the UK’s lockdown, which, however, had been imposed only two days previously, before any social distancing measures could possibly have had enough time to work.

For those engaged in the fraught debates over climate change over the past few decades, the use of alarmist models to guide policy has been a familiar point of contention. Much as Ferguson’s model drove governments to impose Covid-19 lockdowns affecting nearly 3 billion people on the planet, Professor Michael Mann’s “hockey stick” model was used by the IPCC, mass media and politicians to push the man-made global warming (now called climate change) hysteria over the past two decades.

As politicians abdicate policy formulation to opaque expertise in highly specialized fields such as epidemiology or climate science, a process of groupthink emerges as scientists generate ‘significant’ results which reinforce confirmation bias, affirm the “scientific consensus” and marginalize sceptics.

In a recent interview, Lord Sumption – a former Supreme Court Justice in the UK – had this to say in lambasting the country’s collective hysteria: “Hysteria is infectious. We are working ourselves up into a lather in which we exaggerate the threat and stop asking ourselves whether the cure may be worse than the disease.”

Rather than allocating resources and efforts towards protecting the vulnerable old and infirm, while allowing the rest of the population to carry on with their livelihoods and take individual responsibility for safe socializing, most governments have opted to experiment with top-down, economy-crushing lockdowns. And rather than mitigating real environmental threats, such as the use of traditional biomass for cooking indoors (a major cause of mortality in the developing world) or the trade in wild animals, the climate change establishment advocates decarbonisation (read de-industrialization) to save us from extreme scenarios of global warming. Taking the wheels off entire economies on the basis of wildly exaggerated models is not the way to go.