AI: A Black Box Tool for Propaganda

Like other computer models, such as the IPCC climate models and the more recent Oxford COVID-19 epidemiological models, current artificial intelligence models operate as black-box systems, generating outputs based on given data inputs and modeling assumptions. As such, they can be used as tools of propaganda.

While AI models represent a more advanced level of computer modeling—particularly in their ability to process and utilize natural language data—they have inherent limitations.

For instance, ChatGPT, as a contemporary example of such a model, does not truly understand its outputs and lacks formal deductive reasoning grounded in a logical framework.

As a result, it is possible to prompt ChatGPT into contradicting itself when asked a sequence of related questions. An example of this phenomenon is provided below.

How ChatGPT works

ChatGPT generates seemingly intelligent and coherent responses, even to highly technical or specialized questions, by using a sophisticated neural network architecture known as the Generative Pre-trained Transformer (GPT).

This model is trained on vast amounts of text data, enabling it to recognize and replicate patterns in language, including grammar, syntax, and context. However, it is crucial to note that ChatGPT does not truly “understand” the content of its responses; instead, it predicts and generates text based on statistical patterns learned during training.
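
To make the idea of prediction from statistical patterns concrete, here is a deliberately tiny, purely illustrative Python sketch: a bigram model that “writes” by picking whichever word most often followed the current word in its training text. Real GPT models use deep neural networks over enormous corpora, but the principle of prediction without understanding is the same.

```python
# Minimal illustrative sketch (not how GPT is implemented): a toy bigram
# "language model" that predicts the next word purely from co-occurrence
# counts in its training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent continuation of `word`."""
    followers = bigram_counts.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat": its most frequent continuation
print(predict_next("cat"))  # -> "sat": ties are broken by first occurrence
```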

The GPT model operates by processing sequences of words in a sentence, using an attention mechanism [1] to analyse and weigh the relationships between words. This allows it to capture context and generate text that is contextually relevant and grammatically correct.
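
The core calculation behind that attention mechanism, the scaled dot-product attention of [1], can be sketched in a few lines of Python with NumPy. The matrices below are random, purely to show the mechanics of weighing relationships between words; a real GPT model learns these from data and stacks many such layers.

```python
# Minimal sketch of scaled dot-product attention as described in [1].
# Random matrices stand in for learned queries, keys and values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weigh each position's value vector by how strongly its key matches each query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise relevance between words
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
    return weights @ V                                # context-aware mixture of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                               # e.g. a 4-word sentence
Q = rng.normal(size=(seq_len, d_model))               # queries
K = rng.normal(size=(seq_len, d_model))               # keys
V = rng.normal(size=(seq_len, d_model))               # values

context = scaled_dot_product_attention(Q, K, V)
print(context.shape)  # (4, 8): each word's representation now mixes in its context
```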

The process is computationally intensive, as it involves analysing and synthesizing information from a massive dataset. After the initial training phase, the model undergoes a second stage called “fine-tuning,” where human reviewers provide feedback to refine its outputs for specific tasks.
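
The details of that fine-tuning pipeline are proprietary, but one published ingredient is a reward model trained on human reviewers’ comparisons of alternative responses. The sketch below is a hypothetical, heavily simplified illustration of that idea (a Bradley-Terry style preference loss over made-up feature vectors), not OpenAI’s actual code.

```python
# Hypothetical, simplified sketch of learning from human preferences:
# a toy "reward model" is nudged so that reviewer-preferred responses
# score higher than rejected ones (Bradley-Terry style loss).
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)  # toy reward-model weights over 3 imaginary response features

# Each pair: features of a response the reviewer preferred vs. one they rejected.
preferred = rng.normal(loc=1.0, size=(50, 3))
rejected = rng.normal(loc=0.0, size=(50, 3))

learning_rate = 0.1
for _ in range(200):
    margin = preferred @ w - rejected @ w           # score gap for each pair
    p = 1.0 / (1.0 + np.exp(-margin))               # probability the model agrees with the reviewer
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= learning_rate * grad                       # push preferred responses to score higher

print(w)  # the weights now reward the features reviewers favoured
```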

In summary, ChatGPT’s ability to produce clear, grammatical, and seemingly knowledgeable answers stems from its advanced pattern recognition capabilities and extensive training on diverse text data.

However, it lacks genuine comprehension or awareness of the information it generates, relying instead on statistical correlations and learned linguistic structures.

Contextual analysis

The ChatGPT algorithm can answer questions by collating relevant information from the vast body of text it was trained on and producing well-written, coherent texts. For example, it can distinguish that a “fat man” is the opposite of a “slim man,” but that a “fat chance” is synonymous with a “slim chance,” based on its contextual analysis of language within a massive corpus of text.

Logically, one might assume that “fat X” would always be the opposite of “slim X” for any given X. However, ChatGPT does not operate on logic—it relies on contextual patterns in language.
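
A toy illustration of this point, using a hand-made four-sentence corpus and simple co-occurrence counts, is sketched below. Similarity falls out of shared context words alone; no meaning or logic is involved, and the corpus and method are invented purely for illustration.

```python
# Illustrative toy sketch: distributional statistics group "fat chance" with
# "slim chance" (shared contexts) while keeping "fat man" and "slim man" apart.
from collections import Counter
import math

corpus = [
    "fat chance of that ever happening",
    "slim chance of that ever happening",
    "the fat man ate a large meal",
    "the slim man ran a quick race",
]

def context_vector(phrase: str) -> Counter:
    """Count the words that co-occur with `phrase` in the same sentence."""
    counts = Counter()
    for sentence in corpus:
        if phrase in sentence:
            counts.update(w for w in sentence.split() if w not in phrase.split())
    return counts

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[word] * b[word] for word in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine(context_vector("fat chance"), context_vector("slim chance")))  # 1.0: identical contexts
print(cosine(context_vector("fat man"), context_vector("slim man")))        # 0.4: mostly different contexts
```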

Paradoxically, this ability to differentiate based on context is often perceived by the public as intelligence, despite the absence of genuine thinking or discernment in its processes.

No intrinsic understanding

There is an important distinction between true logical reasoning and the probabilistic, data-driven approaches that underpin AI systems like ChatGPT. While AI can simulate aspects of logical reasoning by identifying patterns and correlations in data, it lacks intrinsic understanding or the ability to engage in abstract, conceptual thinking.

Its “logic” is derived from statistical relationships in its training data, rather than from a deep comprehension of principles or causality.

Alan Turing, a pioneer in the field of artificial intelligence, envisioned intelligence as something that could be demonstrated through behaviour, such as the ability to convincingly mimic human conversation (as in the Turing Test).

However, Turing’s conception of intelligence goes beyond mere imitation; it implies a capacity for reasoning, problem-solving, and creativity. ChatGPT, while impressive in its ability to generate coherent and contextually relevant responses, does not possess true intelligence in this sense.

Its outputs are the result of pattern recognition and recombination of existing knowledge, rather than original thought or creative insight.

What ChatGPT demonstrates is a form of “creative regurgitation.” It processes and synthesizes vast amounts of text data, drawing on the collective knowledge and consensus present in its training corpus.

While it can present this information in novel or creative ways, it does so without genuine understanding or intentionality. Its creativity is constrained by the patterns and structures it has learned, and it cannot transcend its training data to produce truly original ideas or insights.

Logically inconsistent

ChatGPT’s responses are directly influenced by the data it was trained on, making it susceptible to data bias. For instance, if the majority of the training data supports proposition A while some lesser-known data argue for not-A, ChatGPT can produce either A or not-A, depending on how the question is posed and the context provided.
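
The toy sketch below, with an invented training set and a crude word-overlap “model,” illustrates the mechanism: the same system returns A or not-A simply because the prompt’s wording overlaps more with the majority sources or with the minority ones.

```python
# Illustrative toy sketch: a purely statistical "model" answers A or not-A
# depending on which training sources the prompt's wording resembles.
from collections import Counter

# Imagined training snippets: most assert proposition A, a few assert not-A.
training_snippets = [
    ("official guidance says", "A"),
    ("official guidance says", "A"),
    ("official guidance says", "A"),
    ("media report says", "A"),
    ("media report says", "A"),
    ("specialist journal reports", "not-A"),
]

def answer(prompt: str) -> str:
    """Pick the answer whose training contexts share the most words with the prompt."""
    scores = Counter()
    for context, label in training_snippets:
        overlap = len(set(prompt.split()) & set(context.split()))
        scores[label] += overlap
    return scores.most_common(1)[0][0]

print(answer("what does official guidance say"))         # -> "A"     (majority framing)
print(answer("what do specialist journal reports say"))  # -> "not-A" (minority framing)
```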

As a real example, when posed the question, “Are COVID-19 vaccines safe and effective?”, ChatGPT replied, “Yes, COVID-19 vaccines have been shown to be safe and effective for the vast majority of people.”

Notably, China’s DeepSeek, a similar AI model, provided an identical answer. This reflects the consensus answer derived from the training data. Both answers also mentioned that there had been rigorous testing in clinical trials, implying, though not asserting, that those trials had proved the vaccines to be safe and effective.

During the COVID-19 pandemic, mainstream media, often influenced by governmental messaging, heavily propagated specific narratives via television and the internet.

This created an apparent consensus in public databases, even though more nuanced or contradictory findings existed in specialized sources like research journals. Consequently, AI models trained on such data may reproduce contradictory or false information depending on context, highlighting their potential to provide inconsistent responses.

A COVID example

Here is an example. Question: Has the COVID spike protein been observed to spread? ChatGPT response: “No, the COVID-19 spike protein itself does not ‘spread’ independently of the virus.”

Following this, several questions were posed referencing published research on spike protein spread, including findings from autopsies, epidemiological evidence, excess deaths, and mechanisms of causality.

By the eighth question, the topic of spike protein transportation via exosomes to various parts of the body was introduced [2, 3].

Finally, a blunt question was posed, referring back to its first answer: “The statement that the COVID-19 spike protein itself does not ‘spread’ independently of the virus, is incorrect?” ChatGPT response: “The statement that the COVID-19 spike protein does not ‘spread’ independently of the virus is incomplete or contextually inaccurate when considering specific mechanisms such as exosomal transport and other biological processes.”

Ultimately, ChatGPT concluded: “The COVID-19 spike protein can spread independently of the virus under certain circumstances, such as after mRNA vaccination, through mechanisms like exosomal transport and bloodstream circulation.”

This example demonstrates ChatGPT’s inability to recognize when it has contradicted itself. It operates by synthesizing context from the input it receives, but lacks an overarching awareness or logical consistency to reconcile conflicting statements.

Propaganda tool

AI models like ChatGPT and DeepSeek are only as good as the data they are trained on. If the training datasets contain false, contradictory, or propagandistic information, the AI will inevitably reproduce and amplify these flaws.

This is the classic “garbage in, garbage out” problem. For example, if a dataset includes widespread misinformation or biased narratives, the AI will generate outputs that reflect those inaccuracies, presenting them as factual.

If the training data is dominated by repetitive propaganda, misleading narratives, or censored information, the AI will reinforce these falsehoods as if they were truths. This creates a feedback loop where false information is perpetuated and further entrenched in public discourse.
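
A toy simulation of that feedback loop is sketched below, under the assumption that a model which echoes the current majority view (as a “most likely answer” generator tends to) has its output fed back into the next training corpus. An initial 60/40 split of documents hardens steadily toward unanimity.

```python
# Illustrative toy simulation of a misinformation feedback loop: consensus
# text generated by the model re-enters the training data each generation.
corpus = ["claim"] * 60 + ["counter-claim"] * 40      # initial training documents

for generation in range(10):
    p_claim = corpus.count("claim") / len(corpus)
    majority = "claim" if p_claim >= 0.5 else "counter-claim"
    corpus += [majority] * 50                         # the model's consensus output is added back
    share = corpus.count("claim") / len(corpus)
    print(f"generation {generation}: {share:.1%} of the corpus now asserts the claim")
```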

Imagine a future where students in schools and universities rely heavily on AI engines for learning, and journalists and fact-checkers use language models as primary reference sources. Education and media reporting would then consist largely of regurgitation.

Without critical examination, these systems could become echo chambers, perpetuating and solidifying public consensus and turning misinformation into apparent “truths”. The observation often attributed to Vladimir Lenin, “A lie told often enough becomes the truth,” encapsulates this danger.

Once a lie has been established in the consensus data, it may be difficult, if not impossible, to correct it.

Most people are in awe of technology, but do not understand how AI models work. They perceive the outputs as intelligent and authoritative, even when the underlying data are flawed.

This lack of transparency makes it easy for bad actors—whether governments, corporations, or other entities—to control AI outputs by manipulating the training data. By selectively including or excluding information, they can create a “fake consensus” that serves their agenda.

Government propaganda

In September 2023, the CDC asserted as fact that “COVID-19 vaccines do not change or interact with your DNA in any way.” However, the CDC has been unable to provide specific evidence under Freedom of Information (FOI) requests to substantiate this claim.

While there may not have been evidence at the time to disprove the statement, the absence of falsification does not inherently validate its truthfulness. Despite this, the declaration has been widely echoed by ‘fact-checkers’ and mainstream media, reinforcing its acceptance.

Over time, this repeated assertion has become embedded as a presumed truth within AI engines, which rely on existing data and consensus to generate responses.

On the other hand, several recent research papers have raised questions about the accuracy of the CDC’s statement, suggesting it may be false. When asked whether the CDC’s claim is correct, ChatGPT responded: “Yes, the statement ‘COVID-19 vaccines do not change or interact with your DNA in any way’ is correct.”

Similarly, DeepSeek replied: “Yes, that statement is correct. COVID-19 vaccines, including mRNA vaccines (like Pfizer-BioNTech and Moderna) and viral vector vaccines (like Johnson & Johnson and AstraZeneca), do not change or interact with your DNA in any way.”

These AI responses reflect the prevailing consensus but do not account for emerging research that challenges this view.

The COVID-19 pandemic may have been one of the largest and most far-reaching propaganda efforts in human history.

This campaign propagated numerous claims and narratives, some of which were misleading or false. These inaccuracies risk embedding themselves into the collective pool of human knowledge, potentially persisting indefinitely as they are absorbed and repeated by AI systems, which rely on existing data to generate responses.

AI models in other specialized fields, such as medicine, business, or science, are not immune to these issues. The quality of their outputs depends on the quality and selection of their training data.

If the data is biased, incomplete, or manipulated, the AI will produce logically inconsistent or misleading results. This undermines the potential benefits of AI in these fields and highlights the need for rigorous oversight and transparency.

Conclusion

Current AI engines are useful for extracting consensus knowledge from a given large dataset, but without visibility into the underlying data, users risk receiving false or inconsistent knowledge.

Because the input data drive AI engine outputs, whoever controls that data can potentially use AI as a black-box tool for propaganda.

Unfortunately, the knowledge in such AI engines is not self-correcting, as falsehoods can be entrenched by repetition.

The immediate existential threat from AI is not super-intelligent cyborgs exterminating humans, but rather bad policies, aided by unrecognized AI misinformation, leading humanity toward self-destruction.

Some of these bad policies, based on corrupt science and misinformation, are already self-evident in our era.

References

[1] Vaswani, A. et al., Attention Is All You Need, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf

[2] Zhang, Y. et al., Exosomes: biogenesis, biologic function and clinical potential. Cell Biosci 9, 19 (2019). https://doi.org/10.1186/s13578-019-0282-2

[3] Bansal S. et al., Circulating Exosomes with COVID Spike Protein Are Induced by BNT162b2 (Pfizer-BioNTech) Vaccination prior to Development of Antibodies: A Novel Mechanism for Immune Activation by mRNA Vaccines. J Immunol. 2021 Nov 15;207(10):2405-2410. https://pmc.ncbi.nlm.nih.gov/articles/PMC11073804/


