Do doctors understand test results?
Are doctors confused by statistics? A new book by one prominent statistician says they are – and that this makes it hard for patients to make informed decisions about treatment.
In 1992, shortly after Gerd Gigerenzer moved to Chicago, he took his six-year-old daughter to the dentist. She didn’t have toothache, but he thought it was about time she got acquainted with the routine of sitting in the big reclining chair and being prodded with pointy objects.
The clinic had other ideas. “The dentist wanted to X-ray her,” Gigerenzer recalls. “I told first the nurse, and then him, that she had no pains and I wanted him to do a clinical examination, not an X-ray.”
These words went down as well as a gulp of dental mouthwash. The dentist argued that he might miss something if he didn’t perform an X-ray, and Gigerenzer would be responsible.
But the US Food and Drug Administration advises against using X-rays to screen for problems before a regular examination. Gigerenzer asked him: “Could you please tell me what’s known about the potential harms of dental X-rays for children? For instance, thyroid and brain cancer? Or give me a reference so I can check the evidence?”
But it’s not just that doctors and dentists can’t reel off the relevant stats for every treatment option. Even when the information is placed in front of them, Gigerenzer says, they often can’t make sense of it.
In 2006 and 2007 Gigerenzer gave a series of statistics workshops to more than 1,000 practising gynaecologists, and kicked off every session with the same question:
A 50-year-old woman, no symptoms, participates in routine mammography screening. She tests positive, is alarmed, and wants to know from you whether she has breast cancer for certain or what the chances are. Apart from the screening results, you know nothing else about this woman. How many women who test positive actually have breast cancer? What is the best answer?
Gigerenzer then supplied the assembled doctors with some data about Western women of this age to help them answer his question. (His figures were based on US studies from the 1990s, rounded up or down for simplicity – current stats from Britain’s National Health Service are slightly different).
- The probability that a woman has breast cancer is 1% (“prevalence”)
- If a woman has breast cancer, the probability that she tests positive is 90% (“sensitivity”)
- If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (“false alarm rate”)
In one session, almost half the group of 160 gynaecologists responded that the woman’s chance of having cancer was nine in 10. Only 21% said that the figure was one in 10 – which is the correct answer. That’s a worse result than if the doctors had been answering at random.
The fact that 90% of women with breast cancer get a positive result from a mammogram doesn’t mean that 90% of women with positive results have breast cancer. The high false alarm rate, combined with the disease’s prevalence of 1%, means that roughly nine out of 10 women with a worrying mammogram don’t actually have breast cancer.
It’s a maths puzzle many of us would struggle with. That’s because, Gigerenzer says, setting probabilities out as percentages, although standard practice, is confusing. He campaigns for risks to be expressed as numbers of people instead and, where possible, as diagrams.
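To see how the “numbers of people” framing works, here is a minimal sketch (not from Gigerenzer’s book) that applies the workshop figures to an imaginary group of 1,000 women; the group size and the variable names are my own choices for illustration.

```python
# Illustrative sketch: the mammography question in natural frequencies.
# The figures (1% prevalence, 90% sensitivity, 9% false alarm rate) come
# from the workshop question above; the group of 1,000 women is arbitrary.

prevalence = 0.01        # 1% of women in this age group have breast cancer
sensitivity = 0.90       # 90% of women with cancer test positive
false_alarm_rate = 0.09  # 9% of women without cancer also test positive

women = 1_000

with_cancer = women * prevalence                      # 10 women
without_cancer = women - with_cancer                  # 990 women

true_positives = with_cancer * sensitivity            # 9 women
false_positives = without_cancer * false_alarm_rate   # about 89 women

# Of all the women who test positive, what share actually has cancer?
p_cancer_given_positive = true_positives / (true_positives + false_positives)

print(f"Positive tests: {true_positives + false_positives:.0f} out of {women}")
print(f"Chance of cancer given a positive test: {p_cancer_given_positive:.0%}")
# Prints roughly 9%, i.e. about one in ten - the answer only 21% of the
# gynaecologists in one session got right.
```

Framed this way (roughly 9 of the 98 or so positive results are genuine), the answer of about one in ten is much easier to see than it is from the percentages alone.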
Even so, Gigerenzer says, it’s surprising how few specialists understand the risk a woman with a positive mammogram result is facing – and worrying too. “We can only imagine how much anxiety those innumerate doctors instil in women,” he says. Research suggests that months after a mammogram false alarm, up to a quarter of women are still affected by the process on a daily basis.
Survival rates are another source of confusion for doctors, not to mention journalists, politicians and patients. These are not, as you might assume, simply the opposite of mortality rates – the proportion of the general population who die from a disease. Instead, they describe the outcomes of people who have already been diagnosed with a disease over a fixed period – often five years from the point of diagnosis – and they say nothing about whether patients die from the disease after that period.
Take prostate cancer. In the US, many men choose to be screened for prostate-specific antigens (PSA), which can be an indicator of the disease. In the UK, it’s more common for men to get checked only after they start experiencing problems. British men are consequently diagnosed with prostate cancer later, and so are less likely to survive for five years after diagnosis – but this doesn’t mean that more of them die.
Moreover, many men have “non-progressive” prostate cancer that will never kill them. While screened American men in this situation are marked as having “survived” cancer, unscreened British men aren’t. These two facts explain why five-year survival rates of prostate cancer are much higher in the US than in the UK (99% rather than 81%), while the numbers of deaths every year per 100,000 men are almost the same (23 in the US, 24 in the UK).
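To illustrate why survival and mortality can diverge like this, here is a small made-up example; the numbers are invented purely for illustration and are not the US or UK statistics quoted above. It shows how adding screen-detected, non-progressive cancers to the pool of diagnoses pushes the five-year survival rate up even though exactly the same number of men die.

```python
# Invented numbers purely to illustrate the overdiagnosis effect;
# they are not the real US/UK prostate cancer statistics.

deaths_within_5_years = 200   # men who die of the cancer within five years
progressive_cases = 1_000     # cancers that cause symptoms and get diagnosed anyway

def five_year_survival(diagnosed, deaths):
    """Share of diagnosed men still alive five years after diagnosis."""
    return (diagnosed - deaths) / diagnosed

# Without screening: only progressive cancers are ever diagnosed.
survival_unscreened = five_year_survival(progressive_cases, deaths_within_5_years)

# With screening: the same men are diagnosed, plus 3,000 men whose
# non-progressive cancers would never have harmed them.
overdiagnosed_cases = 3_000
survival_screened = five_year_survival(progressive_cases + overdiagnosed_cases,
                                        deaths_within_5_years)

print(f"Five-year survival without screening: {survival_unscreened:.0%}")  # 80%
print(f"Five-year survival with screening:    {survival_screened:.0%}")    # 95%
print(f"Deaths in both scenarios:             {deaths_within_5_years}")
```

In this toy example the survival rate jumps from 80% to 95% simply because more harmless cancers are counted as diagnoses, while the number of deaths does not change at all; earlier diagnosis (lead-time bias) inflates the figure in the same way.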
One of the Harding Center’s diagrams shows that the risk of death is the same whether men are screened for prostate cancer or not.
So when former New York mayor Rudy Giuliani declared in 2007 that someone’s chance of surviving prostate cancer in the US was twice that of someone using the “socialised medicine” of Britain’s National Health Service, he was wrong. And when, in 1999, there was a furore about Britain’s survival rate for colon cancer (at the time 35%) being half that of the US (60%), experts again ignored the fact that the mortality rate was about the same.
Gigerenzer’s research shows just how confused doctors often are about survival and mortality rates. In a survey of 412 doctors in the US he found three-quarters mistakenly believed that higher survival rates meant more lives were saved. He also found more doctors would recommend a test to a patient on the basis of a higher survival rate, than they would on the basis of a lower mortality rate.
Unsurprisingly, patients’ misconceptions about health risks are even further off the mark than doctors’. Gigerenzer and his colleagues asked over 10,000 men and women across Europe about the benefits of PSA screening and breast cancer screening respectively. Most overestimated the benefits, with respondents in the UK doing particularly badly – 99% of British men and 96% of British women overestimated the benefit of the tests. (Russians did the best, though Gigerenzer speculates that this is not because they get more good information, but because they get less misleading information.)
Read the rest of this article at www.bbc.co.uk