A Mountaineer’s Guide to Managing Research-Related Risks

I love the mountains. When people hear about what I like to do in the mountains, it’s totally understandable that they assume I’m a risk-taker.

However, I don’t like thrills. I hate being afraid. I’m a relative chickenshit in the world of bold outdoor adventurers.

The only reason I still get outside is that the views are amazing and the risks can be managed.

Thus, I don’t see the exciting things I do as thrill-chasing or risk-taking, but as risk-management enabling unique rewards.

One may look at the photo below and see risk.

Photo: Ian Lange

Again, it’s very natural to see risk, because there is risk. What you don’t see, however, is the risk-management behind the scenes.

Prior to running with the bulls, I read testimonials in English and Spanish on the best tactics and ethics for the event. I studied the route. I pored over countless videos to watch the bulls, enabling me to draw a statistical distribution of the bulls’ trajectories through the course.

I also studied the people in this sprinting mosh pit. The gentleman wearing the red polka-dot shirt at the bottom of the figure below wears the same shirt every time, has been in every run I studied over the past several years, and I assessed that he had one of the most reliable methods for running with the bulls of any runner I studied.

He runs the misnamed “Dead Man’s Corner”, more appropriately called “Curva de la Estafeta”, and he starts his run about 50 m prior to the corner. The first crowd of people starts running, and he stays in place, jumping up and down to look for the bulls. Slowly, the pace of the crowd quickens around him as he jogs while still jumping and looking back.

As he sees the bulls, he stays near the inside corner while sprinting the Curva, allowing him to avoid being trapped and crushed by sliding bulls on the outside of the corner, and often he manages to land the coveted position at the horns of bulls past the Curva.

Having done my analysis of bull trajectories and studied the gentleman’s example, I committed to taking a slightly more conservative path. The morning of my run, I saw the gentleman in the dotted shirt and nodded in deep appreciation.

He prayed, completely unaware of how much he’d taught me through his example. The fireworks went off, and the streets were silent. A palpable cloud of nervousness filled the streets, and the first nervous joggers took flight.

I jumped and looked for the bulls, but also glanced ahead to see the dotted shirt. Screams, shouts, and the cacophony of the stampede reverberated along the walls of the route.

The volume rose and rose and rose, until, suddenly, as I was jogging, jumping, and looking back, I saw the bulls 10 meters behind me, charging fast. I sprinted. I hugged the inside corner as I’d stipulated, but, like many good plans of mice and men, my plan went awry.

The bulls ran an anomalously tight trajectory, brushing right against the inside corner and almost trapping me. Seeing this and needing to improvise, I backed towards the walls until the first bell oxen passed me and then I sprinted behind the horns of the bell oxen on the inside corner.

We turned the Curva together and suddenly I found myself in the open pocket behind the bulls with plenty of space to sprint, hop over a series of runners lying on the ground, and smile as I ran with the bulls.

Well-managed risk yields wild experiences and ephemeral rewards, but it’s the process of managing risk that endures as my true hobby. Risk-management is a way of life to me.

Lately, I’ve served as a contractor for the NIH Office of the Director, providing advice on the executive order on dangerous gain-of-function research – an order intended to manage the risk of biological research modifying pathogens in ways that could cause “significant societal consequences”.

This unique opportunity has been an honor, and while I came into the process with some knowledge and ideas, the process has been an incredible learning opportunity.

As I’ve grappled with science and policy questions alongside experienced veterans of science policy in the US government, I’ve consulted many world experts, learned from people across industry, academia, and government, and evaluated science within my subject-matter expertise to understand this science policy space as best I can.

I’ve tried to recruit diverse views, understand and synthesize them, and distill this mixture of information in my head into drops of informed advice.

One thing that stands out from my experience is that my hobby of risk-management across everything from mountains and bulls to finance and biology positions me to feel a unique part of the elephant that is biosafety and biosecurity policy.

I want to share these personal, loosely held views in hopes of fostering broader discussion. While this topic is fraught with contentious discourse, I encourage folks to please stay courteous – nobody knows the right answer here, and only by listening to others feeling different parts of this science policy elephant, and giving them the benefit of the doubt, can we answer hard questions and forge lasting and effective policy.

Some research carries enormous risks, risks that I believe are not yet well-managed. Research enhancing pathogens capable of causing a pandemic can, by definition, cause a pandemic.

From Ron Fouchier’s 2011 experiment passaging an avian influenza in ferrets to make a mammalian-transmissible bird flu to the 2018 DEFUSE proposal to insert a furin cleavage site in a bat SARS-related coronavirus, biology has found a frontier of research enhancing potential pandemic pathogens that is exceptionally risky.

I have high confidence that this risky research caused the COVID-19 pandemic. Millions of people have died from COVID-19, helping everyone understand the magnitude of research-related risks.

Climbing or running with the bulls could result in my death, along with risks that my family loses a loved one and rescuers trying to help me could be injured in the line of duty. The risks of my hobbies warrant the most prudent risk management practices to stay within my level of acceptable risk.

Yet, the risks in my hobbies are infinitesimal compared to risks faced by the riskiest bioscience research. Some research poses risks so extreme that the best risk-management tool is to simply say “no” to the proposal as surely as I say “no” to climbs outside my comfort zone.

The challenge is: what do you say “yes” to? How do you build confidence in our system for assessing and managing scientific risk? When we say “yes” to some research and “no” to other research, how can we arrive at these judgements in a structured, reliable, and objective way?

Let’s break the task into three parts: (1) how we conceptualize risk, (2) how we assess risk, and (3) how we manage it. If we’ve done justice to conceptualizing, assessing, and managing risk, it’s easier to say “yes” and “no” with confidence.

This doesn’t mean we won’t incur some risk, and it doesn’t mean bad things can’t happen, but it does help us feel that we’ve done our best with all the information available at the time – and that we won’t feel foolish in hindsight.

A Case Study: Oncolytic Viruses

To frame this discussion on what to say “yes” to, I prefer to inhabit a space away from the extremes of “YES” and “NO”, to find the “yes…?” and “…no?”, to seek the grey areas where risk-assessment and risk-management is hard.

The extremes of dangerous gain of function contain research that is so obviously, hideously dangerous – where the answer is such a loud “NO!!!!” – that I would never be comfortable with it being done unless the entire world faced a clear, immediate, and existential risk whose only hope of a solution is the hideously dangerous thing.

On the other extreme is a big, beautiful world of benign biology that doesn’t pose the immediate risk that the experiments being conducted could significantly disrupt society – in this case, the portfolio management problem doesn’t weigh liabilities but instead weighs relative benefits of research proposals.

The best grey area I came up with, and the one for which I don’t have a clear answer but invite others to consider and discuss, is oncolytic viruses. Oncolytic viruses are a remarkable set of engineered or otherwise modified viruses, drawn from diverse branches in the viral taxonomic tree, and repurposed to become exceptionally good at targeting cancer cells to treat cancer.

Once a virus is able to differentiate between cancer and non-cancer cells, a plethora of additional, clever biological modifications may be considered: making the virus better at killing those cancer cells, better at recruiting the immune system to recognize them, more immunoevasive for improved therapeutic efficacy, and more.

Cancer is the second leading cause of death. To note a conflict of interest or a bit of positional awareness: my mom is currently battling pancreatic cancer. While my mom and I are fighting that with metabolic hacking and not oncolytic viruses, and she’s being supported by the best doctors in the world at the Mayo Clinic, I’d be lying if I said the thought didn’t cross my biologist’s mind in an effort to learn whatever’s needed to save my mom.

Curing cancer would be undeniably beneficial. If someone could halve the risk of death for any given type of cancer, they would receive a Nobel Prize. I don’t think anybody can deny that the dream of curing cancer makes the benefits of “gain of function” in oncolytic viruses promising, and given the severity of the disease and the often terminal nature of the diagnosis, clinical trials may be easier to implement and rapid progress made.

Research is happening today to cure cancer with viruses that have gained cool new functions. But is it “dangerous” gain of function?

What are the odds that one of the many types of oncolytic viruses escapes – especially via mutations arising during more widespread clinical use, should these therapies be approved?

How dumb would we feel if the plot of I Am Legend played out, in which an engineered oncolytic virus escaped and a catastrophic pandemic ensued?

How can we conceptualize that risk? How would we assess its likelihood? How can we manage it?

I don’t have answers here. My personal views are not solidified, but I found this broad case study of oncolytic viruses to be uniquely nuanced and useful as I grappled with how to manage scientific risks.

I find this grey area to be a warm hearth around which we can gather for productive discussions.

How do we conceptualize scientific risk?

In finance, it’s common to think of risk as simply the variance of returns or some other measure of the spread – and not the central tendency – of possible outcomes. These quantitative, continuous concepts of risk are particularly valuable when we have data and when risk and reward both fall along the same axis, such as return on investment (ROI).

For most research-related risks in biology, however, we don’t have enough data. If we were working with a pathogen and imagined living in a world of infinite data, we might try to estimate the transmissibility, virulence, and other characteristics or “inherent risks” of the pathogen, then combine those numbers into some expected mortality burden, discounted by the probability of an accident, the conditional probability of an unmitigated outbreak given an accident, and other such “manageable risks” of biological research.
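To make that hypothetical calculus concrete, here is a toy Python sketch of how such “inherent” and “manageable” risks would multiply together. Every function name and number below is a placeholder of mine, not an estimate from any real assessment – and the whole point of this section is that we lack the data to fill these parameters in for real research:

```python
# Toy sketch of the "world of infinite data" risk calculus described above.
# All parameter values are hypothetical placeholders, not real estimates.

def expected_mortality_burden(
    infections_if_outbreak: float,    # expected infections in an unmitigated outbreak
    infection_fatality_rate: float,   # deaths per infection ("inherent risk")
    p_accident: float,                # probability of a lab accident ("manageable risk")
    p_outbreak_given_accident: float, # probability an accident seeds an outbreak
) -> float:
    """Combine inherent and manageable risks into one expected-death figure."""
    deaths_if_outbreak = infections_if_outbreak * infection_fatality_rate
    return deaths_if_outbreak * p_accident * p_outbreak_given_accident

# Hypothetical illustration: tiny probabilities multiplied by enormous
# consequences can still yield a non-trivial expected burden.
burden = expected_mortality_burden(1e9, 0.01, 1e-3, 1e-2)
print(burden)  # 100.0 expected deaths for this single hypothetical project
```

Even as a sketch, the structure shows why the calculus is fragile: the answer is a product of several numbers we cannot measure, so an error of one order of magnitude in any factor shifts the result by the same order of magnitude.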

Biology research also doesn’t have performance indicators as simple as financial ROI. The benefits we dream of when doing research are multidimensional, spanning many forms of countermeasures and therapies.

The risks we fear take the form of every pestilential nightmare our minds can conjure, along with all the collateral damage of disrupted medical systems, civil unrest, distrust in science, and more.

We don’t have data, and even if we did, the risks and benefits wouldn’t fall along a single axis of outcomes like “stocks/health go up” vs. “stocks/health go down”.

How, then, can we conceptualize the risk of research, such as research enhancing a bat SARS-related coronavirus? Wild-type bat viruses may have low transmissibility in humans in their natural state, but could have extraordinary transmissibility after some modifications.

The multiple dimensions of mortality, morbidity, civil unrest, distrust, decline in global standing, and other harms that could result from an accident can’t be projected onto one line of “good” vs. “bad” outcomes.

The vague benefits in this case of modifying bat SARS-related coronaviruses as proposed in DEFUSE make such research a certain “NO!!!” for me, but that’s an easy question. Let’s go back to the hard question in our grey area of oncolytic viruses.

The same potential multidimensional risks exist, these risks likely vary across types of oncolytic viruses in ways that may be difficult to predict, and the risks exist alongside a very plausible benefit.

We can’t estimate the risk of, say, an oncolytic adenovirus evolving beyond the mutations intended to make the virus replication-deficient. Estimating such risks scientifically would require incurring them – conducting the experiments and measuring the results – but that forces us to accept the very risks we’re trying to prevent or avoid altogether.

In mountaineering terms, incurring the risks to estimate them would be equivalent to climbing something to assess whether it is safe enough to climb. If I die on the climb, I have my answer. In the process, I will have failed at risk management.

Due to the lack of data, we probably shouldn’t rely on quantitative concepts of risk.

Our inability to estimate risks and benefits quantitatively doesn’t preclude prudent risk management. In fact, I believe the first step towards making prudent decisions is to recognize the nature of our uncertainty in this problem.

We need to grapple with the fundamental uncertainty of assessing the outcomes of some biological experiments gone wrong, and make decisions in light of that uncertainty.

This is taken from a longer document; read the rest at substack.com

Header image: Art Supplies


PRINCIPIA SCIENTIFIC INTERNATIONAL, legally registered in the UK as a company incorporated for charitable purposes. Head Office: 27 Old Gloucester Street, London WC1N 3AX. 

 
