Photographic upscaling, genetic sequencing, and the Covid ‘Virus’

Whilst perusing X, Jonathan was struck by a post promoting a restored 4K version of the Zapruder film of the assassination of JFK.

Viewing the scene naturally raises the question: how are such restorations actually done? How can an image be produced at a higher resolution than the one originally captured?

The process clearly involves adding more pixels and essentially “guessing” what they contain, using some pre-conceived (and today AI-driven) notion of what might have been there had the camera been able to capture it.
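
As a minimal sketch of the classical (non-AI) case, assuming the Pillow library and a hypothetical input file, even simple interpolation-based enlargement manufactures pixel values that were never captured:

```python
# Minimal sketch: even classical (non-AI) upscaling invents pixel values.
# Assumes the Pillow library; "frame.png" is a hypothetical input file.
from PIL import Image

frame = Image.open("frame.png")   # e.g. a low-resolution film scan
w, h = frame.size

# At 4x enlargement, 15 of every 16 output pixels never existed in the
# original; bicubic interpolation fills them with weighted guesses taken
# from neighbouring captured pixels. AI upscalers go further, hallucinating
# plausible detail learned from other images entirely.
upscaled = frame.resize((w * 4, h * 4), resample=Image.BICUBIC)
upscaled.save("frame_upscaled.png")
```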

The same applies to film colourisation (or “colorization” in American usage), which was all the rage a few years ago but has since lost favour, being an example of “presentism”: imposing a set of contemporary beliefs, assumptions, and values onto the past.

Beyond the fact that the output rarely looks real, people seem instinctively to sense there’s something wrong with making guesses and presenting them as “the truth”. Interestingly, good archaeological practice demands that “restored” elements of structures be easily identifiable (by the use of colour or texture) as non-original.

In a blog post discussing various aspects of “upscaling”, the author James Theopistos says this, which we found particularly interesting:

At FinerWorks, we have been testing AI-driven image enlargements to help artists who might have only lower-resolution image files of their artwork but need print-worthy versions. Right now we are not getting consistently good results and here are some of the things we have seen in testing this first hand.

Possible Replacement of Original Details: Since AI upscaling generates new details based on patterns it has learned, it may add or modify parts of an image that weren’t originally there but that it thought should be.

This can be a problem for images where showing original detail is important. For instance, the shadow of a man’s upper lip could be mistaken for a mustache.

The process, and the associated problems, reminds us of genomic sequencing technologies. There too, machines fill gaps, ambiguities, and unknowns by “guessing” from prior patterns and models, a process computer scientists call “imputation”.
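
A deliberately simplified sketch of the idea (the sequences and the gap-filling rule below are invented for illustration): unknown positions are filled in from a pre-chosen reference, so the output inherits whatever the reference assumes.

```python
# Toy sketch of reference-based "imputation" (not a real pipeline):
# unknown bases in a read are filled in from a chosen reference, so the
# result inherits the reference's content at every gap. Sequences invented.
reference = "ATGGAGAGCCTTGTCCCTGGT"   # the prior / template
read      = "ATGGANNNCCTTGTCNNTGGT"   # 'N' marks bases the machine could not call

imputed = "".join(
    ref if base == "N" else base
    for base, ref in zip(read, reference)
)
print(imputed)  # the gaps are gone -- but filled by assumption, not observation
```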

The result can look reasonably authoritative, even precise, but it remains a reconstruction – an inference presented as truth.

Elucidation of “the sequence” (initially 2019-nCoV, later SARS-CoV-2) and the making of “a test” began with:

  • authors of an early testing protocol endorsed by the WHO saying they “relied on social media reports announcing the detection of a SARS-like virus” and “assumed” a SARS-like virus was involved in the “outbreak” reported by Chinese authorities
  • “sequence alignment” (to known viral sequences) of all the various bits of genetic material found in the bronchoalveolar lavage samples obtained from the subject or subjects.

We and others have questioned the order of events and methods employed in these enterprises [1]. In essence, it looks very much to have been a process whereby pre-existing assumptions were used to “fill in gaps” and “correctly order” the fragments found.
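
As a toy sketch of that concern (the fragments, the template, and the exact-match rule are all invented for illustration), ordering fragments by where they best fit a pre-chosen template guarantees an assembly shaped like that template:

```python
# Toy sketch (not the actual sequencing workflow): ordering recovered
# fragments by where they match a pre-chosen "SARS-like" template means the
# final assembly inherits the template's structure. All sequences invented.
reference = "ATGTTTGTTTTTCTTGTTTTATTGCCACTAGTC"   # hypothetical template

fragments = ["CTTGTTTTATT", "ATGTTTGTTTT", "GCCACTAGTC"]   # unordered reads

# Place each fragment at its best (here: exact) match in the reference,
# then read the fragments back in reference order.
ordered = sorted(fragments, key=lambda f: reference.find(f))
assembly = "".join(ordered)
print(assembly)   # "correctly ordered" -- relative to the chosen template
```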

One can perhaps think of the social media reports, and the “hint” to look at SARS-like viruses, as akin to being handed the film footage, pointed to a particular segment, and asked whether it supports a particular scenario or theory.

Unsurprisingly, a SARS-like virus was confirmed. Because SARS sits on the WHO’s International Health Regulations (2005) list of notifiable diseases with potential to trigger a Public Health Emergency of International Concern (PHEIC), we must ask whether establishing the virus’s SARS identity was a priority from the very beginning.

It is curious that the role and relevance of the “social media” sourcing remain something that nobody on any side of the “covid debate” appears willing to address.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC6988269/

The “Corman-Drosten Review”, a critical analysis of the protocol outlined in the above paper, published by 22 scientists in late November 2020, makes no mention of it.

One would have thought that the entire sequencing and test-development process being driven by such an assumption would have been a vital point to explore further – if not flagged as the very first “fatal flaw” of the protocol design.

The omission could well be deliberate, or it may reflect the fact that proponents of sequencing technologies simply do not appreciate the extent of the bias introduced by the choice of “template”.

An interesting recent practical example

This article, published a few months ago, reports on a paper in Nature and advances the following argument:

  • The previously accepted trope that “genetically, humans differ from chimps by only one percent” looks like it may have been wrong – not just by a little, but hugely so – with real differences now measured at around 15 percent.
  • The cause of this was that the chimp genomes were not, in fact, fully sequenced before.
  • Instead, they were assembled, using the human genome as a reference, “which made the ape genomes look more human-like than they actually were.”

Quoting from the article (with our emphasis):

A groundbreaking paper in Nature reports the “Complete sequencing of ape genomes,” including the genomes for chimpanzees, bonobos, gorillas, Bornean orangutans, Sumatran orangutans, and siamangs.

I noted this in an article here yesterday [2], reporting that an evolutionary icon — the famous “1 percent difference” between the human and chimp genomes, touted across the breadth of popular and other scientific writing and teaching — has fallen.

The researchers, for whatever reason — I’m not a mind reader — chose to bury that remarkable finding in technical jargon in their Supplementary Data section. Now for more on the scientific details.

You might be thinking, “Hey, weren’t these genomes sequenced long ago?” The answer is yes but also no. Yes, we had sequenced genomes from these species in the past, but, as the paper explains, “owing to the repetitive nature of ape genomes, complete assemblies have not been achieved. Current references lack sequence resolution of some of the most dynamic genomic regions, including regions corresponding to lineage-specific gene families.”

Or, as an accompanying explainer article puts it:

In the past, scientists had deciphered segments of non-human apes’ genomes, but they had never managed to assemble a complete sequence for any species. In the current study, however, [Kateryna] Makova and her collaborators used advanced sequencing techniques and algorithms that allowed them to read long segments of DNA and assemble them into a sequence that stretched from one end of each chromosome to the other, without any gaps. “This has never been done before,” says Makova.

In other words, the complete ape genomes were never fully sequenced. And they used the human genome as a reference sequence, which made the ape genomes look more human-like than they actually were.

Predictably, there was considerable pushback from “establishment science” against that critique – see here and here, for example. The author responded by updating the offending figure – and adding commentary under it – in his original article, and also with this piece, in which he accused his critics of “changing the subject”.

Without delving too deeply into the rights and wrongs of the various positions on the “correct” way of measuring genetic similarities between species [3], it does seem clear that we understand very little about the functional significance of any observed – or calculated – genomic differences. This stands in stark contrast to the certainty of many assumptions made about the genomic characteristics of viruses, especially in relation to “SARS-CoV-2”.
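
As a toy illustration of why the headline number can move so much (the two aligned sequences below are invented, and real genome comparisons are far more involved), the figure depends heavily on whether insertions and deletions are counted at all:

```python
# Toy sketch of how the metric choice changes a "percent difference".
# Two invented aligned sequences; '-' marks an alignment gap (indel).
seq_a = "ATGCT-GACTTACGGA"
seq_b = "ATGCTAGACT---GGA"

columns = list(zip(seq_a, seq_b))
gap_free = [(a, b) for a, b in columns if a != "-" and b != "-"]

# Metric 1: substitutions only, over gap-free aligned positions --
# the style of comparison behind "~1 percent" figures.
subs = sum(a != b for a, b in gap_free) / len(gap_free)

# Metric 2: every differing column counts, gaps included --
# indel-inclusive comparisons come out far higher.
diffs = sum(a != b for a, b in columns) / len(columns)

print(f"substitutions only: {subs:.1%} | gaps included: {diffs:.1%}")
```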

Conclusion

All in all, the issues with sequencing raised in this essay seem awfully similar to those seen with photographic “upscaling”.

Shouldn’t the same scepticism and warnings apply?

See more here: substack.com

Header image: YouTube

