Why AI ‘misinformation’ algorithms and research are mostly expensive garbage

If ever there was a case of ‘garbage in, garbage out’ then this is it.

And, ultimately, it has all been driven by the objective of censoring information that does not fit the politically correct narrative.

The Hunter Biden laptop story is just one of many stories which were deemed by the mainstream media (and most academics) to be ‘misinformation’ but which were subsequently revealed as true.

Indeed, Mark Zuckerberg has now admitted that Facebook (Meta), along with the other big tech companies, was pressured into censoring the story before the 2020 US election, and was subsequently pressured by the Biden/Harris administration to censor stories about Covid which were wrongly classified as misinformation.

The problem is that the same kind of people who decided what was and was not misinformation (generally people on the political Left) were also the ones who were funded to produce AI algorithms to ‘learn’:

a) which people were ‘spreaders of misinformation’; and

b) what new claims were ‘misinformation’.

Between 2016 and 2022, I attended many research seminars in the UK on using AI and Machine Learning to ‘combat misinformation and disinformation’.

From 2020, the example of Hunter Biden’s laptop was often used as a key ‘learning’ example, so algorithms classified it as ‘misinformation’ with subclassifications like ‘Russian propaganda’ or ‘conspiracy theory’.

Moreover, every presentation I attended invariably started with (and was dominated by) examples of ‘misinformation’ that were claimed to be based on “Trump lies” such as those among what the Washington Post claimed were the “30,573 false or misleading claims made by Trump over 4 years”.

But many of these supposed false or misleading claims were already known to be true to anybody outside of the Guardian/NYT/Washington Post reading bubble.

For example, they claimed that Trump said “Neo-Nazis and white supremacists were very fine people” and that anybody denying this was pushing misinformation, whereas even the far Left-leaning Snopes had debunked that claim in 2017.

Similarly, they claimed that “evidence that Biden had dementia” or that “Biden liked to smell the hair of young girls” was misinformation, despite multiple videos showing exactly that – so, don’t believe your lying eyes. Indeed, as recently as one week before Biden’s dementia could no longer be hidden during his live Presidential debate performance, the mainstream media were adamant that such videos were misinformation ‘cheap fakes’.

But the academics presenting these Trump, Biden, and other political examples ridiculed anybody who dared question the reliability of the self-appointed oracles who determined what was and was not misinformation. At one major conference taking place on Zoom I posted in the chat:

“Is anybody who does not hate Trump welcome in this meeting?” The answer was: “No. Trump supporters are not welcome and if you are one you should leave now.”

Sadly, most academics do not believe in freedom of thought, let alone freedom of expression when it comes to any views that challenge the ‘progressive’ narrative on anything.

In addition to the Biden- and Trump-related ‘misinformation’ stories which turned out to be true, there were also multiple examples of Covid-related stories (such as those claiming very low fatality rates, or questioning the effectiveness and safety of the vaccines) classified as misinformation that also turned out to be true.

In all these cases anybody pushing these stories was classified as a ‘spreader of misinformation’, a ‘conspiracy theorist’, etc. And it is these kinds of assumptions which drove how the AI ‘misinformation’ algorithms developed and implemented by organisations like Facebook and Twitter worked.

Let me give a simplified example. The algorithms generally start with a database of statements which are pre-classified as either ‘misinformation’ (even though many turned out to be true) or ‘not misinformation’ (even though many turned out to be false). For example, the following were classified as misinformation:

  • “Hunter Biden left a laptop with evidence of his criminal behaviour in a repair shop”
  • “The covid vaccines can cause serious injury and death”

The converse of any statement classified as ‘misinformation’ was classified as ‘not misinformation’.

A subset of these statements is used to “train” the algorithm and the rest to “test” it.

So, suppose the laptop statement is one of those used to train the algorithm and the vaccine statement is one of those used to test the algorithm.
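
To make this concrete, here is a minimal Python sketch (mine, not any platform’s actual code) of such a pre-labelled database and train/test split. The statements are just the two examples above plus their converses; a real database would hold far more.

```python
# Illustrative sketch only: a tiny pre-labelled statement database of the
# kind described above. The labels are assigned in advance by the
# 'oracles' and are never questioned by the algorithm.

LABELLED_STATEMENTS = {
    "Hunter Biden left a laptop with evidence of his "
    "criminal behaviour in a repair shop": "misinformation",
    "The covid vaccines can cause serious injury and death": "misinformation",
    # The converse of each 'misinformation' statement is labelled
    # 'not misinformation'.
    "Hunter Biden did not leave a laptop with evidence of his "
    "criminal behaviour in a repair shop": "not misinformation",
    "The covid vaccines cannot cause serious injury and death": "not misinformation",
}

# Split as in the example: train on the laptop statements, test on the
# vaccine statements.
TRAIN = {s: lbl for s, lbl in LABELLED_STATEMENTS.items() if "laptop" in s}
TEST = {s: lbl for s, lbl in LABELLED_STATEMENTS.items() if "vaccines" in s}
```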

Then, because the laptop statement is classified as misinformation, the algorithm learns that people who repost or like a tweet with the laptop statement are ‘misinformation spreaders’. Based on other posts these people make, the algorithm might additionally classify them as, for example, ‘far right’.

The algorithm is likely to find that some people already classified as ‘far right’ or ‘misinformation spreader’ – or people they are connected to – also post a statement like “The covid vaccines can cause serious injury and death”.

In that case the algorithm will have ‘learnt’ that this statement is most likely misinformation. And, hey presto, since it gives the ‘correct’ classification to the ‘test’ statement, the algorithm is ‘validated’.
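
Here is a hypothetical Python sketch of that guilt-by-association step. The user names (user_a, user_b, user_c), the share data, and the follower graph are all invented for illustration, and a real system would use far richer features, but the circular logic is the same.

```python
# Hypothetical sketch of the guilt-by-association step described above.
# All user names, shares, and follower links are invented for illustration.

LAPTOP = ("Hunter Biden left a laptop with evidence of his "
          "criminal behaviour in a repair shop")
VACCINE = "The covid vaccines can cause serious injury and death"

TRAIN = {LAPTOP: "misinformation"}    # used to train
TEST = {VACCINE: "misinformation"}    # held out to 'test' the algorithm

# Who reposted or liked which statement (invented data).
SHARES = {
    "user_a": {LAPTOP, VACCINE},
    "user_b": {VACCINE},
    "user_c": set(),
}

# Who follows whom (invented data).
FOLLOWS = {"user_a": set(), "user_b": {"user_a"}, "user_c": set()}

# Step 1 ('training'): anyone who shared a training statement labelled
# 'misinformation' is flagged as a 'misinformation spreader'.
spreaders = {
    user for user, posts in SHARES.items()
    if any(TRAIN.get(post) == "misinformation" for post in posts)
}

# Step 2 ('testing'): a statement is predicted to be 'misinformation' if
# it is shared by flagged users, or by users who follow flagged users.
# Note that nothing about the statement itself is ever examined.
def predict(statement):
    sharers = {u for u, posts in SHARES.items() if statement in posts}
    linked = {u for u in sharers if FOLLOWS.get(u, set()) & spreaders}
    return "misinformation" if (sharers & spreaders) or linked else "not misinformation"

for statement, label in TEST.items():
    # The prediction matches the pre-assigned label, so the algorithm
    # is declared 'validated'.
    print(predict(statement) == label)  # prints: True
```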

Moreover, when presented with a new test statement such as “The covid vaccines do not stop infection from covid” (which was also pre-classified as ‘misinformation’), the algorithm will also ‘correctly learn’ that this is ‘misinformation’, because it has already ‘learnt’ that the statement “The covid vaccines can cause serious injury and death” is misinformation, and that people who claimed the latter statement – or people connected with them – also claimed the former.
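
In the hypothetical sketch above, that is just another call to predict(): add the ‘infection’ statement to the share sets of user_a or user_b and it, too, comes back as ‘misinformation’, purely because of who shared it.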

The process I have outlined for how AI is designed to detect ‘misinformation’ is also the way that ‘world leading misinformation experts’ set up their experiments to “profile” the “personality type” that is susceptible to misinformation.

The same methods are also now used to profile and monitor people that the academic ‘experts’ claim are ‘far right’ or racist.

Hence, an enormous amount of research money was (and still is) spent on developing ‘clever’ algorithms which simply censor the truth online or promote lies. Much of the funding for this research is justified on the grounds that ‘misinformation’ is now one of the greatest threats to international security.

Indeed, in Jan 2024 the World Economic Forum declared that “misinformation and disinformation were the biggest short term global risks”.

European Commission President Ursula von der Leyen also declared that “misinformation and disinformation are greater threats to the global business community than war and climate change”. In the UK alone, the Government has provided many hundreds of millions of pounds of funding to numerous University research labs working on misinformation. 

In March 2024 the Turing Institute alone (which has several dedicated teams working on this and closely related areas) was awarded £100 million of extra Government funding – it had already received some £700 million since its inception in 2015.

Somewhat ironically, the UK HM Government 2023 National Risk Register includes as a chronic risk:

“artificial intelligence (AI). Advances in AI systems and their capabilities have a number of implications spanning chronic and acute risks; for example, it could cause an increase in harmful misinformation and disinformation”

Yet it continues to prioritise research funding in AI to combat this increased risk of ‘harmful misinformation and disinformation’!

As Mike Benz has made clear in his recent work and interviews (backed up with detailed evidence), almost all of the funding for the universities and research institutes worldwide doing this kind of work, along with the ‘fact checkers’ that use it, comes from the US State Dept, NATO and the British Foreign Office, who, in the wake of the Brexit vote and Trump election in 2016, were determined to stop the rise of ‘populism’ everywhere.

It is this objective which has driven the mad AI race to censor the internet. Look at this video in which Mike Benz walks us through an event that took place in 2019: it was hosted by the Atlantic Council (a NATO front organisation) to train journalists from mainstream organisations all around the world on how to ‘counter misinformation’.

Note how they make it clear that ‘misinformation’ includes, for them, ‘malinformation’, which they define as information that is true but which might harm their own narrative. They explain how to muzzle such ‘malinformation’, especially from the (then) President Trump’s social media posts in advance of the 2020 election.

Despite claims that this did not happen (and indeed any such claims were themselves classified as misinformation), the journalists involved subsequently boasted very publicly that they not only did it but that it prevented Trump’s re-election in 2020.

See more here on Substack.



Comments (3)

  • S.C.

    SOCIAL MEDIA PSYCHOLOGICAL OPERATIONS
    The internet is a psychological war zone. The secret services, military, Vatican, cults, and various police organisations employ thousands upon thousands of social media agents. The role of these agents is to monitor, identify, target, and infiltrate individuals and activist groups opposed to the Luciferian global agenda. Social media agents target those who speak out against such controversial issues as the global child sex trafficking operation, 5G, vaccination, immigration, and the ‘Safe Schools’ program. After six years of battling it out on the internet, the agents’ behavioural patterns have become obvious to me. I conclude that approximately 90 percent of the higher profile alternative media identities are in fact agents, paid to control the narrative regarding controversial topics, with the primary goal of swaying public opinion and subduing the masses from revolting. These agents employ Hegelian Dialectic methods and pretend to argue a topic, when in fact they are all working toward the same goal. They muddy the waters with endless rhetoric until the public is confused and dissuaded from investigating the truth of a matter.
    The following three news articles provide insight into this type of psychological operation. The authorities typically claim these operations exist to target extremists, when in fact they target government dissidents. The terms ‘Russia’ and ‘Isis’ are employed to justify the existence of such Orwellian measures.
    BRITISH ARMY CREATES TEAM OF FACEBOOK WARRIORS
    Ewen MacAskill, The Guardian, 31 Jan 2015.
    The British army is creating a special force of Facebook warriors, skilled in psychological operations and use of social media to engage in unconventional warfare in the information age. The 77th Brigade, to be based in Hermitage, near Newbury, in Berkshire, will be about 1,500-strong and formed of units drawn from across the army. It will formally come into being in April.
    The brigade will be responsible for what is described as non-lethal warfare. Both the Israeli and US army already engage heavily in psychological operations. Against a background of 24-hour news, smartphones and social media, such as Facebook and Twitter, the force will attempt to control the narrative. The 77th will include regulars and reservists and recruitment will begin in the spring. Soldiers with journalism skills and familiarity with social media are among those being sought.

    The Israel Defence Forces have pioneered state military engagement with social media, with dedicated teams operating since Operation Cast Lead, its war in Gaza in 2008-9. The IDF is active on 30 platforms – including Twitter, Facebook, YouTube and Instagram – in six languages… It has been approached by several western countries, keen to learn from its expertise.

    ** From “Eyes Wide Open”, the autobiography of a self-described MK Ultra victim and former child-sex slave, whose name you can find under the title PDF. Ten years ago, I would have been unlikely to believe any of it. Recent events have opened my eyes to the unthinkable level of evil we now must confront. Check it out and decide for yourself.


    • Ken Hughes

      Indeed, I concur. I find the best way to beat this is simply to stop watching mainstream and slightly-off-mainstream news outlets and get all my information (misinformation) from alternative sources. But then I still doubt and test this “information” against my deliberate preconception that all are liars for their own ends.
      Only when you trust no one but yourself, your fallible self, are you protected to the maximum against false information.
      The world has always been run by psychopaths for their own ends and it’s no different today. Why would it be?


  • Tom

    A/I will always be another method they use to get you locked into digital prison. It is nothing more than an advanced search engine and all search engines can be easily manipulated. After interfacing with several A/I agents during the course of regular consumer business, I conclude they should all be fired without pay and benefits. I am 100% certain that A/I gobbledygook like Perplexity is connected to the goog which is the ultimate anti-privacy spy machine.

