AI-Designed Viruses: A Virologist’s Warning (In Defense of Virology – Episode 5)

In Episode 5 of In Defense of Virology, Rutgers professor Bryce Nickels speaks with distinguished virologist Simon Wain-Hobson about the risks of applying artificial intelligence to virus design, a subject Simon recently wrote about in his essay "AI assisted design of viruses."
The discussion centers on a recent preprint in which researchers used AI to generate novel bacteriophages (King et al., "Generative design of novel bacteriophages with genome language models," bioRxiv, September 17, 2025). From hundreds of AI-designed candidates, the team recovered 16 fully functional viruses. One replicated faster than its natural reference phage, and six showed remarkable genetic stability, accumulating no detectable mutations at all.
Even for Simon, whose career spans decades of studying viral evolution, the results were shocking. He argues that the researchers’ success has implications far beyond phage biology, suggesting that applying similar AI-driven methods to animal or human viruses could readily generate extremely dangerous new pathogens.
The discussion places these results within a broader (and increasingly urgent) context. Leaders across the AI community are now openly warning about the risks of AI-enabled biology. Most notably, AI pioneer Yoshua Bengio (often described as one of the “godfathers of AI”) wrote in a New York Times op-ed that the implications of this technology are “terrifying.” Similar concerns surfaced at the September Red Lines AI meeting, where participants identified AI-driven pandemics as a top concern.
Simon argues that scientists, left to their own devices, will not voluntarily refrain from applying AI-based viral design to human-relevant pathogens. He calls on research funders to draw firm boundaries by withholding support for work that could escalate existential risk (an approach that would align with the recent executive order on dangerous gain-of-function research).
The episode closes with a broader appeal for scientists to stop prioritizing technical novelty and high-risk experimentation, and instead recommit to the principle of Do No Harm. Simon argues that scientific effort should be directed toward areas (e.g., cancer, neurodegenerative disease, and mental health) where advances can be truly transformative without carrying the risk of catastrophic consequences.
(recorded December 7, 2025)
Timestamps
00:30—Welcome and introduction
01:16—Simon summarizes results of a preprint on AI-designed bacteriophages
08:12—Why the results deeply concern Simon
13:17—Why the AI community is raising similar alarms
16:35—A call for administrators and funders not to fund such high-risk work
18:01—“Curiosity killed the cat”
19:28—Closing remarks
Intro and outro by Tess Parks
Read Simon’s collection of essays on Biosafety Now’s Substack page.
Source: sciencefromthefringe.substack.com
