PCR test sensitivity and false positives

The retired government scientist I have mentioned before has been number crunching again, this time about the PCR test and false positives.

This is what he reports:

I’ve had some cracking discussions with a GP [referred to as DM] of Cumbria regarding nominal test sensitivity, specificity, disease prevalence, bench-marking and gold standards (lack thereof) and decided to treat everyone to a single slide that summarises everything we need to understand about false positives.

In plain English, a false positive is when we tell somebody they are infected with a disease when they are not. All diagnostic tests have their limits and the RT-PCR test is no exception, but what governments and their expert advisors are doing is completely ignoring this basic fact and assuming everybody who tests positive is either sick with COVID symptoms, infectious, or carrying the SARS-CoV-2 virus.

Nothing could be further from the truth, as you are about to see for yourselves.

For this analysis I’ve set test sensitivity at a nominal 80% (a figure supplied by the Centre for Evidence-Based Medicine at Oxford University). This figure tells us how good the test is at detecting the presence of the virus in infected people: an 80% sensitivity means that the test will detect 8 out of every 10 infected people, so 2 out of every 10 will go home being told they don’t have the virus when they do.

RT-PCR test specificity is a most controversial subject, with initial nominal estimates set at 99.9%. This figure is essentially a guess based on previous research, bench studies and hand waving, with experts arguing over what the real figure is. Specificity tells us how good the test is at detecting the absence of the virus in uninfected people: 99.9% specificity means that the test will wrongly flag 1 out of every 1,000 uninfected people as infected, and that person will be told they are carrying the virus when they are not.

What complicates matters is how much disease there is in the population. We call this prevalence, and 1% prevalence means 1 in every 100 people are carrying the virus (SARS-CoV-2) that causes the disease we call COVID-19. When a viral infection is highly prevalent, diagnostic tests work well, but when prevalence starts to wane diagnostic tests start to produce nonsensical results.
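The interplay between sensitivity, specificity and prevalence described above follows directly from Bayes’ theorem. A minimal Python sketch (the function name is my own) shows why low prevalence matters so much:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a person who tests positive truly carries the virus."""
    true_pos = sensitivity * prevalence                # infected AND detected
    false_pos = (1 - specificity) * (1 - prevalence)   # uninfected but flagged
    return true_pos / (true_pos + false_pos)

# High prevalence (10%): nearly every positive result is real.
print(round(positive_predictive_value(0.80, 0.999, 0.10), 3))   # 0.989

# Low prevalence (0.1%): under half of the positives are real,
# even at the optimistic nominal specificity of 99.9%.
print(round(positive_predictive_value(0.80, 0.999, 0.001), 3))  # 0.445
```

The same test, with the same nominal performance figures, goes from almost always right to worse than a coin flip simply because the virus becomes rare.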

Prevalence is another of those controversial guesstimates because it is hard to measure reliably and yet the latest UK government report suggests this may now be down to 0.1% (1 in every 1,000 people).

This chart shows how many false positive cases are generated at differing levels of test specificity, for three levels of prevalence, assuming a nominal sensitivity of 80%. The first thing to notice is how low the red line is compared to the others. This reminds us that few false positives are generated when the virus is rampant at 10% prevalence.

With UK government estimates now down at 0.1% prevalence (green line) we can see that false positives are going to rocket at anything below the nominal 99.9% specificity. Since operational specificity (real-world specificity) is likely down at 97% or even 95%, virtually every positive test result the government is counting as a ‘case’ is a false positive.
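This claim can be checked directly. Reusing the Bayes arithmetic with the figures quoted above (0.1% prevalence, 80% sensitivity, and an assumed 97% operational specificity), the share of positive results that are false works out as follows:

```python
sensitivity = 0.80    # nominal sensitivity (CEBM figure cited above)
specificity = 0.97    # assumed operational specificity
prevalence = 0.001    # 0.1% prevalence (UK government estimate cited above)

true_positives = sensitivity * prevalence               # 0.0008 of those tested
false_positives = (1 - specificity) * (1 - prevalence)  # 0.02997 of those tested
false_share = false_positives / (true_positives + false_positives)

print(f"{false_share:.1%} of positive results are false positives")  # 97.4%
```

In other words, under these assumptions roughly 97 in every 100 reported ‘cases’ would be people who do not carry the virus at all.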

With estimates of prevalence at just 6% for England even during the peak of the outbreak (Ward et al., Nature Communications 12, Article number 905, 2021), and operational specificity likely lying somewhere between 95% and 97%, we can safely assume that lockdown, mandatory masks, social distancing, closure of health services, destruction of businesses, damage to the economy and all the rest of the shit shovelled upon us has been built on nothing more than a gigantic fiction so huge and so fraudulent that the public cannot even see it.

Perhaps they dare not see it, for the price of listening to a few alleged experts has been great and terrible.

Source