Does the UK’s “REACT-2” antibody study prove that a novel virus was in circulation in 2020?

People who believe the fantastical story that a virus “somehow escaped” from a lab in Wuhan, traversed (most of) the globe, and temporarily wiped out the flu nearly everywhere often fall back on the UK’s REACT-2 study as evidence, and specifically this graph contained within one of its published reports:

The interpretation by a number of analysts and commentators – including Dr. Clare Craig, shown above – is this:

Researchers asked people who tested positive for antibodies when they had their symptoms, and this graph depicts the answers. The fact that the first peak depicted on that graph matches the “first wave” supports the assertion that there was a definite spring 2020 surge in the prevalence of the novel virus known as SARS-CoV-2.

(Clare actually goes further in interpreting the above graph. Here, she asserts that the smaller bump in Nov / Dec 2019 symptoms depicted in the graph above is evidence that “COVID was around from Autumn 2019”. However, our present focus is the spring 2020 “surge”.)

What was REACT-2?

The REal-time Assessment of Community Transmission-2 (REACT-2) study was a large-scale community survey designed to measure the prevalence of antibodies to the SARS-CoV-2 virus among adults in England.

This is the 2021 published pre-print from which the above graph is extracted (also available here).

(The official website for the study – containing a comprehensive collection of materials – can be found here, with all study materials here.)

In summary, for round 5 the researchers randomly selected 600,000 individuals from an NHS database, giving a very large initial sample. Each person was sent a lateral flow antibody test, which detects antibodies in a fingerprick blood sample1.

Approximately 172,000 participants completed the test, submitted their results online, and filled out an online symptom questionnaire.

The aforementioned graph can be found on page 22 of the published paper. It is presented as a “reconstructed epidemic curve” of antibody-positive participants who reported symptoms. The years are not shown on the x-axis but we can infer the range as being mid-October 2019 through to early February 2021.

The only mention of Figure 1 in the paper is under “Prevalence” on page 5:

An epidemic curve constructed from date of onset of symptoms in unvaccinated people who were IgG positive shows that the second wave grew more slowly in September to November than the first wave in March-April, and then accelerated in December 2020. (Figure 1)

A clear and complete explanation of the methodology used to construct the curve isn’t provided2 and no raw underlying data is made available.

The questions this cohort were asked can be found in this document:

210114_Study_5_Antibody_Round_5_User_Survey

For those who had “had covid”3, the following list of symptoms (a long list of mundane ailments, many of which are indistinguishable from those of any other cold-like illness) was presented, and participants were asked for the dates on which each symptom started and ended (“as best as you can remember”):

Which of the following symptoms were part of your COVID-19 illness?

Please select all the symptoms you had, whether or not you saw a doctor.

1. Decrease in appetite

2. Nausea and/or vomiting

3. Diarrhoea

4. Abdominal pain/tummy ache

5. Runny nose

6. Sneezing

7. Blocked nose

8. Sore eyes

9. Loss of sense of smell

10. Loss of sense of taste

11. Sore throat

12. Hoarse voice

13. Headache

14. Dizziness

15. Shortness of breath affecting normal activities

16. New persistent cough

17. Tightness in chest

18. Chest pain

19. Fever (feeling too hot)

20. Chills (feeling too cold)

21. Difficulty sleeping

22. Felt more tired than normal

23. Severe fatigue (e.g. inability to get out of bed)

24. Numbness or tingling somewhere in the body

25. Feeling of heaviness in arms or legs

26. Achy muscles

27. Raised, red, itchy areas on the skin

28. Sudden swelling of the face or lips

29. Red/purple sores or blisters on your feet (including toes)

30. Leg swelling (Thrombosis)

31. Other symptom (please specify)

32. None of these

Those who did NOT report having “had covid” were presented with the same list as above, prefaced by:

Have you had any of the following symptoms since November 2019.

Please select all the symptoms you have had, whether or not you saw a doctor.

These respondents were NOT asked to provide an exact date, but rather the month in which these symptoms occurred:

One huge issue with this study is that we don’t have a breakdown of the data by whether or not participants actually thought they had had covid. Hence, we don’t know how much of the symptom data came from those who had “had covid” and thus provided an exact date, and how much came from those who did not think they’d had it and hence simply specified the month or months in which they recalled experiencing symptoms.

This is a major omission which, on its own and without even considering the other problems, essentially invalidates the study. How can such a precise “symptom curve” be constructed when an unknown proportion of the data came from participants who simply recalled, nearly a year after the event (see below), the month in which they had experienced symptoms?
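To see why this matters, here is a minimal sketch in Python of how mixing day-precision and month-precision recall can distort a daily curve. The dates, the split between the two groups, and the mid-month imputation rule are all assumptions for illustration; the REACT-2 materials do not describe how (or whether) month-only answers were converted into dates.

```python
import pandas as pd

# Hypothetical illustration only: REACT-2 does not disclose how month-only
# recollections were converted into dates, nor the split between the groups.
# The mid-month rule below is an assumption made purely for this sketch.

exact_onsets = pd.to_datetime(["2020-03-20", "2020-03-25", "2020-04-02"])  # "had covid": exact dates
month_only   = ["2020-03", "2020-04", "2020-04"]                           # others: month recalled

# One plausible (but unverifiable) treatment: park month-only answers on the 15th.
imputed = pd.to_datetime([m + "-15" for m in month_only])

onsets = pd.Series(1, index=exact_onsets.append(imputed))
daily_counts = onsets.resample("D").sum()
print(daily_counts)

# Every month-only response lands on one artificial day, creating spurious
# spikes; spreading them uniformly instead would create an artificial plateau.
# Either way, the shape of the "curve" depends on an undisclosed modelling choice.
```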

Moreover, a key piece of information from the had covid / didn’t have covid breakdown – namely, whether believing one had “had covid” predicts antibody status – is not reported. We must raise the suspicion that this is because no such link was found, which would call into question whether anything of any significance was being measured at all.

The y-axis on the graph is labelled “seven-day rolling average infections”. The paper gives no sense of how or why a seven-day average was used, why the term “infections” is applied, or why the numbers range from 0 to 125.
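For what it is worth, a curve of this general description could be produced along the following lines. This is only a guess at the kind of calculation involved, using fabricated example dates; it is not the authors’ actual method, which remains undisclosed.

```python
import numpy as np
import pandas as pd

# Guesswork only: the paper does not explain how the "seven-day rolling average"
# was computed. This sketch simply counts reported onset dates per day and
# smooths them with a centred seven-day mean. The dates below are fabricated.

rng = np.random.default_rng(0)
onset_dates = pd.to_datetime("2020-03-01") + pd.to_timedelta(
    rng.integers(0, 60, size=500), unit="D")

daily = pd.Series(1, index=onset_dates).resample("D").sum()
rolling = daily.rolling(window=7, center=True).mean()
print(rolling.dropna().head())

# Whatever the exact recipe, the y-axis would then be counts of recalled
# symptom-onset reports, not measured "infections" as the figure's label implies.
```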

(There is also a mismatch between the prose and graph title, with the authors saying Figure 1 includes unvaccinated individuals testing positive for antibodies but the figure itself identifying “antibody positive participants” – with no distinction by vaccination status.)

So, on its face, it appears the researchers amassed some data about symptoms of cold or flu-like illnesses experienced by a large body of people over the preceding months, and constructed a modelled “epidemic curve” from the subset with “positive antibodies”.

We can’t verify who had which symptoms or on what dates, nor (crucially) how the same observations would compare for those NOT testing positive for antibodies.

Moreover, there are a number of other limitations related to who responded and what they were asked to recollect.

  • Firstly, although the initial invitation was based on a random selection, those who responded are self-selected. They are surely much more likely to be people who truly believed in Covid and considered that any symptoms they had during the spring 2020 “surge” were indeed different in some way from a regular cold or flu-like illness.
  • Secondly, and linked to the point about selection bias above, participants were being asked for the dates on which they experienced symptoms nearly a year after the event. In most cases these symptoms would have been minor and inconsequential, so it is hardly credible that onset dates could have been recalled with accuracy. It seems possible, or even probable, that participants were guided by knowledge of what had happened in spring 2020 into fitting the dates of their symptoms (which themselves may have been the result of the propaganda / nocebo effect4) into the known “first wave”.

The dates for sample collection and questionnaire completion for round 5, which make up the cohort contributing towards the graph in the paper under discussion herein, can be seen below (source):

As can be seen, this was carried out between 25 January 2021 and 8 February 2021, nearly a year after the “first wave” surge5.

For all the above reasons, it is crucial to be able to see the same curve generated for those who did NOT test positive for SARS-CoV-2 antibodies (to see if and how it differs materially), but this has not been published, and nor is the raw data allowing independent researchers to do so available6.

In summary, the curve from the REACT-2 report appears to be meaningless.

What of the antibody tests themselves? Surely, they were measuring something meaningful?

Consider the following claims made in respect of the validation of the test used:

The LFIA (Fortress Diagnostics, Northern Ireland) targeting the spike protein was selected following evaluation of performance characteristics (sensitivity and specificity) against predefined criteria for detection of IgG,(12) and extensive public involvement and user testing.(13)

The LFIA has a clinical sensitivity on finger-prick blood (self-read) for IgG antibodies following natural infection estimated at 84.4% (70.5, 93.5) in RT-PCR confirmed cases in healthcare workers, and specificity 98.6% (97.1, 99.4) in pre-pandemic sera. (12,14)

So, high specificity is claimed – that is, that the test very rarely returns a positive result on samples which do not contain antibodies to the target virus, and is not reacting to something else.
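To put the quoted figures in context, the arithmetic below is a simple sketch (not anything from the paper) of what a sensitivity of 84.4% and a specificity of 98.6% would imply across a cohort of roughly 172,000 participants. The 10% prevalence figure is purely an assumption chosen for illustration.

```python
# Illustrative arithmetic only. Sensitivity (84.4%) and specificity (98.6%) are
# the figures quoted in the paper; the prevalence and cohort size used here are
# assumptions chosen simply to show how the numbers behave.

def antibody_test_breakdown(n_tested, prevalence, sensitivity, specificity):
    truly_positive = n_tested * prevalence
    truly_negative = n_tested * (1 - prevalence)
    true_positives = truly_positive * sensitivity
    false_positives = truly_negative * (1 - specificity)
    ppv = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, ppv

tp, fp, ppv = antibody_test_breakdown(
    n_tested=172_000, prevalence=0.10, sensitivity=0.844, specificity=0.986)

print(f"expected true positives:   {tp:,.0f}")    # ~14,500
print(f"expected false positives:  {fp:,.0f}")    # ~2,200
print(f"positive predictive value: {ppv:.1%}")    # ~87%

# Even at 98.6% specificity, roughly 2,200 of the ~155,000 participants without
# the target antibodies would still test positive for some other reason – which
# is precisely why the cross-reactivity question discussed below matters.
```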

References numbered 12 and 14 are cited in support.

Ref 12 is this: Flower B, Brown JC, Simmons B, Moshe M, Frise R, Penn R, et al. Clinical and laboratory evaluation of SARS-CoV-2 lateral flow assays for use in a national COVID-19 seroprevalence survey. Thorax. 2020 Aug 12:


Ref 14 is this:

Notably the Flower et al paper (ref 12) just says this regarding specificity testing:

Sera for specificity testing were collected prior to August 2019 as part of the Airwaves study from police personnel.

There is no mention of testing for cross-reactivity to any other pathogens known to be associated with identical symptoms.

In this extract from their paper, Flower et al refer (as their ref #10) to MHRA requirements and recommendations in respect of sensitivity and specificity:

That MHRA document is this one:

Target_Product_Profile_Antibody_Tests_To_Help_Determine_If_People_Have_Immunity_To_Sars_Cov_2_Version_2

It is noteworthy that, in a document said to specify minimum standards for these tests, so little can apparently be known about what should be measured; it reads as if the green light is being given to go fishing with a very big net rather than to measure something precise and specific to a particular target.

It is indeed true that the MHRA suggest a minimum of “200 confirmed negatives”. But under “Analytical Specificity” they ALSO suggest that cross-reactivity with other common respiratory pathogens listed in an Annex needs to be checked.
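As an aside on what “200 confirmed negatives” can and cannot establish, the sketch below (a standard exact binomial calculation, not anything taken from the MHRA document) shows the best-case specificity bound such a panel can support.

```python
from scipy.stats import beta

# What can a panel of 200 confirmed-negative samples establish about specificity?
# Exact (Clopper-Pearson) one-sided bound, assuming the best possible outcome of
# zero observed false positives. The panel size is the MHRA minimum quoted above;
# everything else here is illustrative.

n_negatives = 200
observed_false_positives = 0

# 95% upper bound on the false-positive rate given 0 positives out of 200.
upper_fp_rate = beta.ppf(0.95, observed_false_positives + 1,
                         n_negatives - observed_false_positives)
print(f"specificity lower bound: {1 - upper_fp_rate:.1%}")   # ~98.5%

# So even a perfect 0/200 result cannot rule out a false-positive rate of ~1.5%,
# and it says nothing about WHICH non-target antibodies might be cross-reacting –
# hence the Annex's separate suggestion to check common respiratory pathogens.
```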

Of the assay design, choice of antigen and so on, the document comments as follows:

Here’s the list of viruses (essentially the known causes of colds and influenza-like illnesses) specified in the Annex:

So, it appears that the REACT-2 study has not used a test which has been checked for cross-reactivity with other known7 causes of colds and flu-like illnesses. Perhaps the investigators took the word “should” in the above too literally, when logic dictates that it really ought to have been a “must”.

But surely the “pre-pandemic” samples all testing negative shows they were testing something new?

The studies cited as references 12 and 14 (including the Flower et al paper) state that specificity analysis was performed on pre-pandemic sera collected before August 2019 as part of the Airwave Health Monitoring Study.

But, from where were these “prepandemic” samples sourced?

This reference is given in respect of the Airwave study:

That particular study was one conducted “to evaluate possible health risks associated with use of TETRA, a digital communication system used by police forces and other emergency services in Great Britain since 2001. The study has been broadened to investigate more generally the health of the work force.”

The study’s website is here.

The protocol included storing blood samples from participants, and it is these samples which were used for covid antibody test validation. However, we don’t actually know much about how old these samples were (though we do know that they would have been collected between 5 and 16 years prior to 20208), how they were stored, defrosted, and so on.

Specifically, we don’t know whether these samples would have reacted to antibody tests for any other pathogens at all, such as those associated with cold / flu-like illnesses9.

Finally, it’s worth remembering that, in addition to all of the above, the commonly held view of antibody testing as a black-box technology giving a clear, reliable and meaningful “yes/no” result is itself deeply flawed, as we wrote here:

In summary:

Reporting from round 5 of the UK’s “REACT-2” antibody study provides no support for the assertion that a novel virus was in circulation in 2020.

Neither the symptom information nor the antibody testing performed can be regarded as reliable indicators of anything.

A final, frightening thought is this:

Is the level of competence on display here actually the norm with a lot of clinical research?


(Before commenting on this article, please see this disclaimer, with which all authors of this article concur.)

1

This is not the same as a lateral flow antigen (or “rapid antigen”) test, which is designed to detect viral proteins from a nasopharyngeal swab.

2

Maybe the authors used this tool provided by the CDC (h/t to the person who alerted me to this), or something similar.

3

It appears as if having “had covid” includes those self-certifying (ie without the “benefit” of a test); however, since no detailed methodology is available, we don’t know how these data points were ultimately treated.

4

The “nocebo effect” – the counterpart to the placebo effect – is the propensity for negative expectation (in this context the fear generated through intense and sustained sophisticated government propaganda) to worsen existing or create new symptoms. See the articles here.

5

And well over a year after the “Autumn 2019” symptoms would have been experienced.

6

We have written to the authors asking for more information, including the methodology used to construct the curve, and symptom information for antibody negative participants. This article will be updated if we receive any relevant response.

7

Not to mention unknown causes; this is an inherent weakness in the entire model of virus test validation – ie the fact that new, previously unknown, viruses are regularly being identified, but specificity testing can only be performed against known species.

8

This tissue bank registration implies that sample collection started in 2004, and this follow-up study protocol suggests it stopped in 2015.

9

Surely these samples should have been tested for cross-reactivity against a range of antigens; and if no cross-reactivity at all had been found, this should have raised the question of whether the samples would have reacted to anything, since it is inconceivable that many of those participants would not have had antibodies to at least some of those ubiquitous endemic pathogens.

 
