About 10 years ago I went to Osaka to investigate why people with kidney failure in Japan seemed to have better survival five years after starting dialysis treatment compared with people in Britain. Dialysis aims to replicate kidney functions to keep people alive. So you might think that comparing the percentage of people alive five years after starting dialysis would tell us how effective the treatment is in the two countries. So did I. But I was wrong.
A rather nerdy month perched in a windowless hospital office, translating and analyzing audit data from Japan and Britain, not only taught me the Japanese word for kidney ('jinzo') and what time to dash to the cafeteria for the best seaweed salad; it also revealed a simple explanation for the differences in five-year survival rates that I'd found: in Japan, people with kidney failure tended to start treatment at an earlier stage of their disease than people in Britain. It wasn't that people with kidney failure were actually living longer in Japan - it was that they started the clock earlier, so it just looked that way. This is known in the trade as 'lead time bias', and it can cause all sorts of confusion. I realized that this made it fairly meaningless to compare Japan's five-year survival for kidney failure with Britain's. On the other hand, if you started the clock when people's kidneys were at exactly the same stage of failing, the data suggested that the apparent differences largely disappeared. And thus I was introduced to the flaws of five-year survival data.
The same can sometimes be said for cancer screening. A survey published in the Annals of Medicine on 5th March has caused some consternation this week - particularly amongst the stats-nerd element of the Twitter population. The survey asked more than 400 physicians whether they would recommend a screening test associated with an increased five-year survival rate, and the majority said yes. Now we know from the kidneys in Japan and Britain that unless five-year survival refers to a particular stage of a disease, it doesn't tell you very much: its results hinge on when you start the clock. And the whole point of screening is to start the clock earlier, so of course there's an increased five-year survival rate! It's that pesky lead time bias again... Since patients look to their physicians for advice on screening, it is concerning that so many of the physicians surveyed could be taken in by this.
Imagine that Bob and Jane start growing an identical, imaginary type of cancer on the same day. Bob goes for screening one year later, and it's picked up. Bob starts the clock that day, and in five years, he is still alive. His five-year survival is recorded. He dies four years later, having survived a total of ten years since the cancer started, and nine years since his diagnosis. Compare that with Jane who didn't have screening and only notices the lump after seven years. Jane sees her physician and starts her clock immediately. She dies on exactly the same day as Bob due to the cancer that started on exactly the same day as his. But because she only lived three years after her diagnosis, the statisticians report her as not having achieved five-year survival. From this, can you conclude that having the screening prolongs your life? Of course not - Jane might look on paper like she died sooner, but in fact they both developed their cancer on the same day, and both died exactly ten years later. The only difference is that Bob (and the statisticians) knew about his cancer for longer than Jane. And for screening, the key question is whether it is useful to know sooner.
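To make the arithmetic concrete, here is a minimal Python sketch of the Bob and Jane story (the years are the ones from the example above; the function name is mine, purely for illustration):

```python
# Lead time bias in miniature: Bob and Jane develop the same cancer
# at year 0 and both die of it exactly 10 years later. The only
# difference is when the clock starts - i.e. the year of diagnosis.

def five_year_survival(diagnosis_year, death_year):
    """True if the patient is still alive five years after diagnosis."""
    return death_year - diagnosis_year >= 5

death_year = 10       # both die 10 years after the cancer started
bob_diagnosis = 1     # screening picks up Bob's cancer at year 1
jane_diagnosis = 7    # Jane only notices the lump at year 7

print(five_year_survival(bob_diagnosis, death_year))   # Bob: True
print(five_year_survival(jane_diagnosis, death_year))  # Jane: False
```

Bob 'achieves' five-year survival and Jane doesn't, even though their lifespans are identical - the statistic measures when the clock started, not how long they lived.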
For some cancers, it definitely is - screening can save lives. And we know this by moving our attention from the irrelevant five-year survival ('how likely are people diagnosed with this cancer to be alive after five years?') to the more useful mortality rate ('how many people in a given population die from this cancer over a given time?'). If screening improves the mortality rate, it's worthwhile. But if screening increases five-year survival without improving the mortality rate, it's just an expensive way to know you have cancer for a longer time, without living any longer. Thanks, doctor. It is disappointing that in the Annals of Medicine survey, so many physicians wrongly found a reported increase in five-year survival (irrelevant) more persuasive in making decisions about screening than a reported reduction in mortality rate (which actually tells you something useful about whether screening saves lives). Oh dear.
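The gap between the two statistics can be shown with a toy calculation - entirely made-up numbers, assuming the extreme case where screening changes nothing except the diagnosis date:

```python
# Made-up population: 1000 people, 100 of whom harbour the same cancer.
# All 100 die of it 10 years after onset, screened or not. Screening
# only moves diagnosis from year 7 to year 1.

population = 1000
cancers = 100
death_year = 10

def stats(diagnosis_year):
    # five-year survival: fraction of diagnosed patients alive 5 years on
    survivors_at_5 = sum(1 for _ in range(cancers)
                         if death_year - diagnosis_year >= 5)
    five_year_survival = survivors_at_5 / cancers
    # mortality rate: deaths from this cancer across the whole population
    mortality = cancers / population
    return five_year_survival, mortality

print(stats(7))  # unscreened
print(stats(1))  # screened: survival 'improves', mortality is unchanged
```

In this toy world, screening lifts five-year survival from 0% to 100% while the mortality rate sits stubbornly at 10% - exactly the pattern that tells you the screening isn't saving anyone.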
Even worse, perhaps, is that many of these physicians inaccurately believed that because more cancers are picked up in people who have screening than in people who don't, screening must save lives. There are several important reasons why this is wrong, of which I will pick two. First, screening is not 100% accurate. It is not a diagnosis - it tells you whether you have a high or low probability of having a disease. Some people who are told by screening that they may have cancer do not actually have cancer, and the false alarm can cause massive stress and anguish, as well as unnecessary, potentially risky, distressing, and expensive follow-up tests.
Secondly, just because you have cancer doesn't mean there's a benefit in knowing about it early (or at all). A good example is some types of prostate cancer. Albert never finds out he has prostate cancer; it progresses quietly, slowly, and Albert dies many years later of something completely unrelated. Compare that with Bert, who decides to have screening, and it picks up the same sort of prostate cancer - one that would have progressed in just the same way as Albert's. But because he's had the screening, Bert now has to worry about a cancer diagnosis and go through stressful, painful, expensive tests, and treatment that might leave him incontinent, impotent, or worse. And because screening tests aren't perfect, after all of that it might turn out that he didn't even have cancer in the first place. Again, earlier is not always better. And again, five-year survival comparisons are not very helpful here. Bert might be able to claim good five-year survival that he attributes to having his cancer picked up via screening, but in reality it might be exactly the same cancer as Albert's - and Albert doesn't feature on the statisticians' charts at all, living in ignorant bliss, no thanks to screening.
Screening can be great for certain, highly selected diseases: those for which there is an accurate, acceptable test, effective treatment, and evidence that catching the disease at an early stage is likely to prolong life (that is, to improve the mortality rate). But just because the test is simple doesn't mean the repercussions are. Screening for everything just for the sake of it is likely to throw up all sorts of false alarms and uncertainties, alert you to things you didn't need to worry about, or can't do anything about, and lead to all sorts of tests and treatments that might make you more anxious and sick than the diseases you were screened for ever would have. This is why full body scans are a bad idea. With genetic testing fast approaching, this is going to become an increasingly important and common issue. The point of screening is to start the clock early - but only if it's helpful to do so.
When it comes to screening, it's not a bad idea to look before you leap. Don't fall for the old lead time bias, or for a test billed as 'simple' - ask how screening affects the mortality rate, weigh up the pros and cons, and make up your own mind. Personally, I won't be rushing out to buy kits that sequence my DNA. And just as with the kidney failure data, unless I can be persuaded of their usefulness, I will be treating five-year survival statistics that compare screened and unscreened people with a pinch of salt, sprinkled over my Osaka-style seaweed salad.