I don't normally like to criticize the work of others in this blog, other than pointing out from time to time where I may disagree with a particular viewpoint or conclusion. But an abstract that is going to be presented this coming Monday at a meeting of the American Society for Radiation Oncology has received some degree of coverage in the press, and those reports are exceptionally uncritical of what I consider a flawed study.
The headlines are suggesting that the study demonstrates that even though PSA tests don't necessarily save lives, they do lead to a reduction in cancer recurrence, and therefore are valuable. I am of the opinion that no such conclusion can be drawn from this research.
I have no problem with authors doing research and presenting abstracts. That's what we do in medical science. But when studies are promoted, and the foundation of the conclusion is very suspect, and the press does nothing to address the obvious problems with the study, then I become a bit upset.
I need to emphasize that this is an abstract at a medical meeting, not a final publication in a peer-reviewed journal. I also need to underline the fact that all I have is a brief description of the study. Hopefully, some of these criticisms will be addressed when the paper is presented.
The research reports that the investigators examined records of patients who were treated with either surgery or radiation prior to the time that PSA testing for prostate cancer "was first advocated in 1993." They examined the records of 575 men treated from 1986-1992 and another 1146 men treated from 1993-1996, which they describe as the "post-screening era."
They then determined when the men were first diagnosed with metastases, calculating "metastasis-free survival rates" for a maximum period of 10 years after their treatment.
When the authors examined the percentages of men who remained free of metastases in the two groups by ten years of follow-up, they found significantly lower rates in the pre-PSA group: for men at high risk, 58% (pre) vs. 82% (post); for men at intermediate risk, 79% vs. 93%; and for men at low risk, 90% vs. 98%.
Their conclusion is not surprising: "This study suggests that routine screening for (prostate cancer) has resulted in a significant decrease in the risk of a patient developing metastatic disease within 10 years of treatment for prostate cancer after controlling for severity of disease."
OK, but does it?
There is a classic error here that needs to be pointed out, and frankly has me concerned about possible misrepresentation of these results.
You don't have to be a rocket scientist to see that there were a lot more men treated after 1992 than before 1992. Not only that, but the "pre" group spanned 7 years, while the "post" PSA group covered only 4 years.
Do you need an explanation why treatment for prostate cancer accelerated after 1992? Could it possibly be that PSA led to a lot more men receiving treatment?
There is no question that the use of the PSA test dramatically increased the number of men diagnosed with prostate cancer and treated for prostate cancer. And while this was happening, the types and quality of our treatments changed as well, with a more effective surgical approach getting traction in major centers around the country.
Here is the kicker: there were men before 1992 who also had prostate cancer, but that cancer wasn't found because a PSA test wasn't done, or so the authors imply (more about that in a moment). But those men wouldn't have been included as patients with prostate cancer when the percentages were calculated. Had they been included, the percentages of recurrence for the "pre" era would have been substantially lower.
Let me try to show you this in simpler terms:
If you have 100 men, and 50 of them have a disease, but you only find it in 25 of them, and of those 25 only 10 have a "recurrence", then your recurrence rate among the diagnosed men is 40%.
But let's say medicine gets "better" at finding the disease earlier. Now you have 100 men, and 50 of them have the disease. Your new test finds the disease not only in the same 25 men you would have found anyway, but also in an additional 25 men you couldn't find it in before. The same 10 men have a recurrence of the disease, but on this go-around, your recurrence rate among "detected" cases is 20%, or half.
Essentially, in our simple example, you still have 10 men who have recurrence, but because you have a new test, it makes it look like you are doing better (20% vs. 40%) but in fact you haven't made one bit of difference in the actual outcomes for the men.
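For readers who like to see the arithmetic laid out, the hypothetical numbers above (100 men, 50 with disease, 10 recurrences) can be sketched in a few lines of code. These are the illustrative figures from my example, not data from the study itself:

```python
# A minimal sketch of the denominator effect described above.
# The figures are the hypothetical numbers from the example,
# not data from the abstract being discussed.

def recurrence_rate_pct(recurrences, diagnosed):
    """Recurrence rate among diagnosed men, as a percentage."""
    return 100 * recurrences / diagnosed

# Before the new test: only 25 of the 50 diseased men are found.
pre = recurrence_rate_pct(recurrences=10, diagnosed=25)

# After the new test: all 50 are found, but the same 10 men
# still recur, so actual outcomes are unchanged.
post = recurrence_rate_pct(recurrences=10, diagnosed=50)

print(pre, post)  # 40.0 20.0
```

The absolute number of recurrences never changes; only the denominator grows, and the "rate" falls by half.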
That is exactly what could be going on in this study: Find and treat more men with the disease, and you appear to decrease the rate of recurrence when in fact you may not have done anything at all.
That's why I am so concerned about the press coverage this paper is receiving. It looks good, sounds good, may be good, but it is fundamentally flawed. The reporters who covered it should have realized what was going on here and backed off. This is not a paper that should influence anyone about the value of getting or not getting prostate cancer screening. (You may be interested in a press release from another advocacy organization that is also publicizing this study.)
I will acknowledge that there may be a valid argument that PSA testing has indeed decreased the risk of recurrence from prostate cancer. My colleagues and I have on occasion discussed the fact that fewer serious complications from prostate cancer, such as recurrent disease invading the spinal cord and causing paralysis, appear to be occurring based on our own experience. But those reports are just discussions, not scientifically valid truths. It may be that someone can take that thought and do the study to demonstrate that the population-based frequency of serious complications is declining. But that is not what was done in this study.
One other point, probably a minor one: the PSA era began before 1993. It may have picked up steam around that time, but what I don't know is whether 1992-1993 is in fact a legitimate cutoff date for PSA screening. No one turned on a magic switch on January 1, 1993 and declared that all men with prostate cancer would be diagnosed with a PSA test as of that date. I hope that the researchers reviewed the patient records and confirmed that the men in the "pre-PSA" group hadn't received a PSA test before they were referred to the medical center for treatment, and that the men in the "post-PSA" group had their disease diagnosed only because they had a PSA test.
This is why I emphasize so often that new studies with new thoughts have to go through a process of presentation, review, discussion and criticism. In my personal opinion, promoting this study in the press based on such a basic flaw does a disservice to the men in this country who are faced with a dilemma of trying to decide whether or not PSA screening is right for them.