Dr. Len's Cancer Blog

Expert perspective, insight and discussion


The American Cancer Society

Cell Phones And Cancer: Time To Hang Up?

by Dr. Len July 29, 2008

I returned from vacation yesterday to a rash of comments and concerns about the use of cell phones and whether they caused cancer.

 

The news stories picked up on a memo written by the director of the University of Pittsburgh Cancer Institute (UPCI), in which he advised the faculty and staff of the cancer center that, based on his review of the evidence, he was concerned about an increased risk of cancer linked to cell phone use.

 

When I took a more careful look at the memo and the supporting information, I didn’t find any new science on the subject.  It was essentially one more person adding their opinion that there was a risk to using cell phones. 

 

What the memo didn’t say was that there are others—equally expert—who do not agree with the conclusion that cell phones cause cancer.

 

In the meantime, based on the media headlines, many people have become concerned that the cell phones they use every day are a proven cause of a serious disease.

 

Let me say at the beginning that the science on this topic is mixed and that, in the opinion of the experts I rely on, much of it does not support a link between cell phone use and serious disease.

 

The UPCI memo makes several recommendations on what you should do about cell phones: keep your kids from using them, use a Bluetooth or wired headset, don’t use your cell phone in public places, don’t use a home cordless phone (since it also emits microwaves), and use text messaging as a safer alternative.

 

At the bottom of the first page, there is a note that these recommendations were based on advice from an international expert panel, which is available at www.preventingcancernow.org.  When you click on the link, it automatically takes you to another URL: www.environmentaloncology.org, which is the website for the UPCI Center for Environmental Oncology.  There is no separate website that I could find for “preventing cancer now.”

 

I think it is important to read the information on that website carefully.  The authors point out that the science on the topic of cell phone risk is not conclusive at this time.  There is much yet to be learned.

 

What this tells me is that what we are really dealing with here is what is called the “precautionary principle.”  In simple terms, as I understand it, the precautionary principle says that just because you can’t prove something is harmful doesn’t mean it is safe.

 

I will be the first to admit that I am not an expert on cell phone use and its potential harm or lack of harm.  But I do turn to other experts, and many of them do not agree with the alarm that has been sounded over cell phone use in relation to cancer (cell phone use while driving is an entirely different matter).

 

That leaves us in a situation where each person has to make their own decision, and weigh the benefits and risks of using a cell phone or a cordless phone.  If you feel the potential risk outweighs the benefit, you take certain actions.  On the other hand, if you are of the opinion that the absence of strong scientific evidence on the harms of cell phone use is reassuring, you take different actions.

 

The suggestion of abandoning cell phone use and the use of portable handsets in the home and office is not likely to get much traction.  It certainly has gotten a lot of publicity.

 

What does the American Cancer Society have to say about this issue?

 

The Society has taken the position that most current evidence does not link the use of cell phones to cancer.  If you are concerned, then you can take some simple steps to reduce your exposure.

 

The Society also concludes that we don’t know about the risk to younger children, and many researchers are concerned that younger cell phone users may face a higher risk.

 

However, the reality is that cell phones do not emit the type of radiation (ionizing radiation) that is known to cause cancer.  The majority of studies reported to date have not supported a link between cell phone use and cancer, and there has been no increase in brain cancer where cell phones have been used the longest.  Nor has there been an increase in brain cancer here in the United States.

 

There are studies that have shown an increase in benign tumors on the side of the head where people report using their cell phones, but the studies are considered difficult to interpret for a number of reasons.

 

Finally, there are things you can do to reduce your risk, such as using newer digital models, which emit less radiation.  You can limit your child’s use of cell phones or encourage text messaging (which is basically the only way some children use their cell phones anyway).  And you can use a headset, which is what I have done for many years, simply because I find it difficult to constantly hold a cell phone to my head.

 

When you think about it, some of these recommendations are similar to those from UPCI.  But they weren’t issued as a major media event.  They are common sense suggestions you can consider if you are concerned about the risk.

 

But stop using my cordless phone in my house and use only a corded phone or speaker phone?  I doubt there are many people who are going to go that far in terms of changing their behaviors.

 

There are lots of us in medicine who have lots of opinions on a variety of subjects.  We can take a look at the same data and come to different conclusions.  That is not exactly an unusual event in medicine or medical science. Equating opinion to science is fraught with difficulty.

 

From my viewpoint, what we have here is a case where someone of academic stature decided to issue an alert and a call to action.  The memo was based on personal opinion and review of the evidence.  There was no new science, there was no new data.  Plain and simple, it was an opinion.

 

What is problematic is that it is an opinion not shared by many other equally well-qualified experts.  But that wasn’t highlighted in the headlines or the news stories.  You had to dig a bit to find out that the concerns are rooted in precaution rather than in definitive studies on the subject.

 

My suggestion?  Stick to the evidence and the science, state that it’s an opinion when it is an opinion, offer options for action, and let people draw their own conclusions. 

 

This isn’t about “right” vs. “wrong.”  This is about what I think vs. what you think.

 

Without evidence, we are on very shaky ground when it comes to making nationwide public health recommendations that take on the veil of authority when in fact the evidence is not clear and they are based on opinion not supported by clear facts.

 

If we aren’t careful, we will find ourselves living in a very confusing world when it comes to guiding the health of the public.

 

Filed Under:

Other cancers | Prevention | Research

If The World Was Perfect...

by Dr. Len July 15, 2008

How often have you heard the phrase, “If the world were perfect…?”

 

We don’t live in a perfect world, but a recently published study in the medical journal Circulation (and to be published in August in the journal Diabetes Care) shows what would happen if we lived in a perfect world when it comes to the impact of universal, effective medical prevention on the incidence of cardiovascular disease.

 

The heart of the question is what would happen if we, as a country, did everything right when it comes to preventive strategies, in this case for cardiac disease, and whether we would save any money if we did so.

 

Yes, I know this is a cancer blog, but this study has been undertaken as part of a partnership focused on prevention.  That partnership includes the American Cancer Society, the American Heart Association, and the American Diabetes Association.

 

The implications of this research for cardiovascular disease can give us an idea of what we might expect when it comes to cancer prevention and early detection.  Future reports from this group will focus on the value of primary prevention and cancer screening in reducing the risk of disease and deaths from cardiovascular disease, diabetes and cancer.

 

The researchers used a very, very (yes, I said that twice) intricate mathematical model called "Archimedes," which mimics the health and unique characteristics of the adult population of the United States.  It is a very complicated model, and it is certainly impressive in its scope and capabilities.

 

Then, they asked what would happen if everyone in this country followed each of 11 recommended cardiovascular disease prevention activities, or combinations of those activities.

 

These recommendations are probably familiar to many of you, and include such healthy behaviors as not smoking, keeping your blood pressure under 140/90 if you are not diabetic or under 130/90 if you are diabetic, keeping your body mass index under 30 (a BMI of 30 or more defines you as obese), maintaining your cholesterol at certain recommended levels, taking aspirin if you are at high risk of having a heart attack, and so on.

 

What did the researchers find, when applying these recommendations to this computer model of the United States adult population?

 

First, they determined that of the 200 million adults between the ages of 20 and 80 alive in the United States today, 78% were candidates for at least one of the recommended interventions.

 

Then, they calculated how many people would benefit from following these recommendations over the next 30 years, assuming 100% compliance, compared to the situation if we just continued doing what we do today.  That means basically that everyone would do everything right when it came to preventing cardiovascular disease.
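
To make the structure of that comparison concrete, here is a minimal sketch in Python.  It is emphatically not the Archimedes model; the cohort size, the baseline heart attack risk, and the assumed effect of full compliance are all hypothetical numbers chosen only to illustrate the status-quo-versus-full-compliance calculation being described.

```python
# Toy sketch only, NOT the Archimedes model.  Every rate below is made up;
# the point is just the shape of the what-if comparison: follow the same
# simulated cohort for 30 years under status-quo care and again under full
# compliance with prevention guidelines, then compare the event counts.
import random

random.seed(0)

COHORT_SIZE = 100_000            # hypothetical adults followed for 30 years
FOLLOW_UP_YEARS = 30
BASELINE_ANNUAL_RISK = 0.004     # assumed yearly heart attack risk, status quo
ASSUMED_RISK_REDUCTION = 0.5     # assumed effect of full compliance (made up)

def people_with_heart_attacks(annual_risk: float) -> int:
    """Count people who have at least one heart attack during follow-up."""
    events = 0
    for _ in range(COHORT_SIZE):
        if any(random.random() < annual_risk for _ in range(FOLLOW_UP_YEARS)):
            events += 1
    return events

status_quo = people_with_heart_attacks(BASELINE_ANNUAL_RISK)
full_compliance = people_with_heart_attacks(
    BASELINE_ANNUAL_RISK * (1 - ASSUMED_RISK_REDUCTION))

print(f"Heart attacks, status quo:      {status_quo}")
print(f"Heart attacks, full compliance: {full_compliance}")
print(f"Relative reduction:             {1 - full_compliance / status_quo:.0%}")
```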

 

The results?

 

There would be a stunning 63% reduction in the number of heart attacks over the next 30 years, from an estimated 43 million to about 27.4 million, and there would be 31% fewer strokes.

 

The net result would be an increase of an average of 1.3 years of life per person over that time period.  That may not seem like much, but that figure applies to literally millions of people.

 

But, as I noted at the beginning of the blog, we don’t live in a perfect world. 

 

So what would be a reasonable estimate if we employed “best practices?”  In this case, the measure of the best practice is what results we could expect to achieve if we had compliance that was similar to the best health systems in the world (no, that isn’t the United States, my friends).

 

In that scenario, 36% of heart attacks would be prevented as would 20% of the strokes.  Life expectancy would be increased 0.7 years.  Not too shabby, when you consider how many people we are talking about.

 

Not every recommendation was equally effective in reducing heart attacks or strokes.  Some saved money, such as smoking cessation, while others cost only a small amount to implement, such as aspirin therapy for people at high risk of heart attack.  Others were much more expensive, such as lowering LDL cholesterol levels to less than 160 in people at low risk for coronary artery disease.

 

What about costs?  How much would we save if we did everything right?  After all, there would be a huge reduction in the number of people who had heart attacks and strokes.  So we would spend a lot less on medical care, wouldn’t we?

 

The model was not as complete in this regard, since it looked only at the medical costs of treating cardiovascular disease.  For example, it did not include the savings from reduced lost productivity, nor did it include the benefits of smoking cessation on lung cancer incidence and treatment costs.

 

That said, the study found that doing all of these prevention activities would actually cost the country more money for health care.  We would spend an average of $1,700 per person per year, or $283 billion more per year, for a total of $8.5 trillion over 30 years, if we followed all of these recommendations.
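
As a quick arithmetic check (simple multiplication of the figures already quoted, not a re-analysis of the study), the yearly and 30-year totals are consistent with each other:

```python
# Consistency check on the cost figures cited above; no data beyond the
# two reported numbers is used here.
extra_spending_per_year = 283e9      # additional dollars per year, as reported
years = 30
total = extra_spending_per_year * years
print(f"${total / 1e12:.1f} trillion over {years} years")   # about $8.5 trillion
```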

 

I should point out that not everyone agreed that these prevention activities would in fact be more expensive. 

 

Last week I attended a meeting of prevention experts just after this study was released, and some of them felt that there were problems with the assumptions that were made in the study when it came to costs and savings over time. 

 

I am not in a position to refute the conclusions of the authors of the study, and inevitably we are going to have to wrestle with the fact that prevention in health care may not produce as much cost savings as we would expect. 

 

In the press release that accompanied the release of this paper, the three organizations noted, “Moreover, the analysis didn’t include important savings from reductions in nonmedical costs, which could be considerable, but are much more difficult to estimate.  These could include reducing the human and financial burden of care giving for family members or for society, and increasing the productive working lifespan of individuals.”

 

The authors conclude that although on one hand there clearly is a benefit from making the effort to reduce cardiovascular disease by implementing effective health measures, the costs are significant. 

 

“In summary, approximately three-fourths of U.S. adults would benefit from at least one recommended prevention activity to reduce the incidence of cardiovascular disease.  Full deployment of these interventions could potentially prevent approximately two-thirds of MIs (heart attacks) and one-third of strokes.  However, as they are currently delivered, most of the interventions will substantially increase costs.  If our health care system were able to reduce the cost of prevention activities, then the full potential for reducing the burden of CVD could be realized.”

 

We await the follow-up studies on the impact of primary prevention strategies and cancer screening to see how much they would reduce deaths from heart disease, diabetes and cancer.

 

Until then, we can only imagine what it would be like to reduce the burden of illness in a perfect world.  We can always hope…

Filed Under:

Diet | Prevention | Tobacco

Kidney Cancer Vaccine: Much Hype, Little Hope

by Dr. Len July 03, 2008

There is something that fascinates the public about the possibility of treating cancer with a vaccine.  Perhaps that explains why so many abstracts and journal articles about the latest cancer vaccine research find their way into our newspapers, magazines and television reports.

 

A research article appearing online today in the British medical journal The Lancet describes a clinical trial which investigated whether a vaccine called vitespen could improve the survival of patients with primary kidney cancer. 

 

Unfortunately, the study points out—once again—that we may be hopeful that cancer vaccines will work, but we are a long way from success.

 

What is even more startling about this report—aside from the fact that a journal is actually publishing what we call a “negative” clinical trial—is the editorial which discusses the results.  The author of the editorial made some not-so-kind comments about how vaccine companies distort reports of vaccine trials, and how investigators make inappropriate claims regarding their research.

 

The design of the study was straightforward.  The researchers randomly assigned otherwise healthy patients who had newly diagnosed kidney cancers that were confined to the kidney to either receive a vaccine made from their tumor tissue or not receive this vaccine.

 

The vaccine—which was made from something called a “heat shock protein”—was prepared in the lab from cancer tissue removed at the time of surgery.  The study was international, with participants coming from several parts of the world, including the United States, Russia, Poland, Israel and Western Europe.

 

After the vaccine was prepared, it was then shipped back to the doctors caring for the patients and given initially once a week for 4 weeks, then every other week until the vaccine supply ran out.  Both groups of patients—half of whom received the vaccine and half who did not—were followed until there was evidence that their cancer returned.

 

Unfortunately, as pointed out in the journal article and the editorial, there were a number of technical problems with the study.  This meant that there were fewer patients to analyze for the final report.  Ultimately, there were about 360 patients in each group.

 

The results of the study showed that there was no evidence that this vaccine made a significant difference in the time to recurrence for these kidney cancer patients.  Patients who received the vaccine and those who did not recurred at similar intervals.  There was a suggestion that vaccinated patients with earlier stage disease did do better than similar patients in the untreated group, but the differences were not large enough to say that the vaccine was responsible.

 

Then the authors did something that is not uncommon in vaccine studies.  They did an analysis that was not originally planned as part of the initial study design.  This is what the authors and others call a “post-hoc analysis.”

 

These types of analyses are generally frowned upon.  Experience has taught us that they can, not infrequently, lead to erroneous conclusions.  Most researchers have come to the conclusion that you plan your study in advance and decide, before the data come in, which outcomes you will look at when the study is completed.  That approach is generally accepted as removing much of the bias that could otherwise result from changing the rules as you go along during the clinical trial.
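
To see why such analyses are viewed with suspicion, here is a small, purely illustrative simulation (hypothetical numbers, nothing to do with the vitespen data): even when a treatment has no effect at all, examining enough unplanned subgroups will often turn up one that looks statistically significant by chance alone.

```python
# Toy illustration of the post-hoc subgroup problem.  Both "arms" are drawn
# from the same distribution, so the treatment truly does nothing; any
# "significant" subgroup is a false alarm.  All numbers are hypothetical.
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)

def subgroup_looks_significant(n_per_arm: int = 100) -> bool:
    """One null subgroup: identical outcome distribution in treated and control."""
    treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    se = sqrt(stdev(treated) ** 2 / n_per_arm + stdev(control) ** 2 / n_per_arm)
    z = (mean(treated) - mean(control)) / se
    return abs(z) > 1.96                  # nominal two-sided p < 0.05

SUBGROUPS_PER_TRIAL = 20    # unplanned subgroups examined per simulated trial
SIMULATED_TRIALS = 500

trials_with_false_alarm = sum(
    any(subgroup_looks_significant() for _ in range(SUBGROUPS_PER_TRIAL))
    for _ in range(SIMULATED_TRIALS)
)
print("Simulated trials with at least one 'significant' subgroup "
      f"despite no real effect: {trials_with_false_alarm / SIMULATED_TRIALS:.0%}")
```

In expectation, roughly two-thirds of the simulated trials (1 - 0.95^20, about 64%) turn up at least one false alarm, which is exactly why pre-specified analysis plans matter.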

 

If you do a study and find an unplanned outcome with a subset of patients, then the appropriate thing to do is to run another study and prove your theory.  There are several clinical trials going on right now that are outgrowths of such an approach.

 

In this report, the authors did in fact do one of these unplanned analyses, and reported on their results as part of this paper.  They concluded that in patients with “intermediate-risk” kidney cancer the vaccine decreased recurrences compared to control patients who did not receive vaccine.

 

They concluded that, although their vaccine did not improve the outlook for kidney cancer patients when measured by conventionally accepted approaches, “Nevertheless, the observation that treatment with vitespen (the cancer vaccine) confers an apparent clinical benefit to patients with earlier stage disease with better prognosis is biologically plausible and consistent with a wealth of scientific and clinical evidence.”

 

That is somewhat like saying the vaccine didn’t work but it worked.

 

This is no small issue. 

 

There has been a huge fight between vaccine proponents and the Food and Drug Administration regarding a vaccine for the treatment of advanced prostate cancer called Provenge.  That disagreement—which at times has been ugly—resulted from just the same type of post-hoc analysis.

 

In that case, the FDA has decided not to approve the vaccine based on the unplanned analysis, pending the results of additional clinical trials.  To say the least, the proponents of Provenge are not pleased.  They have lobbied Congress, the FDA and everyone they can think of to get the vaccine approved based on that analysis as opposed to waiting for the completion of the clinical trial.

 

The editorial which accompanied the kidney cancer article in The Lancet was vehement on these issues.  In fact, I don’t recall reading many (if any) editorials that have attacked the conclusions of researchers and the sponsoring pharmaceutical company so directly in a major medical publication.

 

The editorial, written by James Yang, MD from the National Cancer Institute, pointed out the results and technical failings of the study.  He also noted that, just like several other cancer vaccines, vitespen looked promising in early phase studies but didn’t pan out when studied as part of a larger clinical trial.

 

Dr. Yang stated that one of the allures of cancer vaccines is that they don’t have many side effects.  “That lack of toxicity also makes it easier to dismiss the fact that only rare objective regressions of metastatic disease were seen…Unfortunately, the results do not support the hypothesis that vitespen is beneficial.”

 

He goes on to note that cancer vaccines simply don’t have the power to help the body fight cancer cells the way it fights an infection.  The theory behind cancer treatment vaccines makes sense, he notes.  It is turning that theory into reality that has been so difficult over these past many years.

 

Then the editorial gets a bit more interesting and less conventional in its statements:

 

“Yet the credibility of the field of cancer immunotherapy is weakened when some investigators, and particularly vaccine companies, cannot accept the results of randomised trials.  There has been extensive use of post-hoc subset analyses to salvage underpowered studies or those that fail to reject the null hypothesis (my note: that means studies that show the vaccines don’t work).  Such practices are akin to shooting the arrow first and being permitted to draw the target afterwards…As Wood and co-workers (the authors of the article) state, ‘interpretation of post-hoc analyses must be approached with caution’, which is largely because subsequent prospective studies often cannot verify the results.”

 

Dr. Yang goes on to chastise the company that has sponsored the research on vitespen for misleading press releases.  Those press releases dealt with the announcement that this vaccine will be available to patients in Russia in the second half of 2008.

 

“One final issue to be raised is the differences seen between results and analyses as presented in peer-reviewed publications and the sometimes selective reporting in company press releases.  In a press release on April 8, 2008, and despite the results of the trial reported today, the manufacturer of vitespen highlighted the approval of vitespen in Russia for patients with intermediate-risk renal cell cancer.   That announcement uses relative rather than absolute risk and fails to point out that the analysis was not pre-specified in the protocol…

 

Commercially driven efforts that spin or obfuscate the conclusions of such a trial should be vigorously resisted because such efforts seriously erode its value.”
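
Dr. Yang’s point about relative versus absolute risk is easy to illustrate with made-up numbers.  The figures below are hypothetical and are not taken from the vitespen trial or the company’s press release; they simply show why a relative figure can sound far more impressive than the absolute difference behind it.

```python
# Hypothetical recurrence rates, chosen only to illustrate the difference
# between relative and absolute risk reduction.
control_recurrence = 0.10    # 10 of every 100 untreated patients recur (made up)
treated_recurrence = 0.08    #  8 of every 100 treated patients recur (made up)

relative_reduction = (control_recurrence - treated_recurrence) / control_recurrence
absolute_reduction = control_recurrence - treated_recurrence

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 20% -- sounds impressive
print(f"Absolute risk reduction: {absolute_reduction:.0%}")   # just 2 percentage points
```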

 

I have been watching the efforts to develop a vaccine to treat cancer for over 30 years. 

 

We have learned much about the human immune system.  We have learned much about the various markers—or antigens—that should make cancer cells appear as different to our bodies as an infection does.  And I have seen failure after failure of vaccine trials to deliver something to patients that results in a consistent, clinically meaningful improvement in their condition and/or their survival.

 

I’m sorry if I seem a bit skeptical, but time and time again when these studies are presented to me for comment I have to express my skepticism, and advise that we must wait for the results of larger, more appropriate clinical trials before we can get truly excited about any particular vaccine. 

 

Almost always when I see these early abstracts, they are accompanied by glowing comments from an investigator suggesting that their vaccine research is a significant breakthrough, when in fact it is not.  They frequently come with a similarly glowing press release from the major research university where the work was done, which further spins the findings to suggest there is more to the report than careful review actually supports.

 

(In the interests of full disclosure, I had previously written a blog post in October of 2005—shortly after this blog was started—based on research and a press release from the same company, which made similar claims for patients treated with a heat shock protein vaccine for advanced melanoma.  At the time, I did try to balance my enthusiasm for the results reported by the company with the same skepticism I am discussing here.  On rereading that post, I am not so certain that I accomplished my goal.)

 

I know how hard the researchers are working on trying to get this right.  And I know how difficult it is for those with cancer to hear media tidbits that this vaccine or that vaccine may help them survive their cancer longer. And I know that the argument is going to be made that the Russians are more forward-thinking in their treatment of kidney cancer, since you can get the vaccine there and not here.

 

But—just as Dr. Yang so succinctly and directly pointed out—there have been too many promising early reports for cancer treatment vaccines that never panned out when tested in the real-life conditions of a large clinical trial.  There are more than a few reputations in cancer research and treatment that have been made on the early promise of vaccines, only to be deflated after years of work when the theory didn’t result in an effective cancer treatment.

 

So the journalists will continue reporting on early vaccine studies that appear promising, and I will be quoted urging caution until we see real results in real clinical trials.  And patients and their families will continue to be drawn into the cycle of hope and hype that has been so destructive in cancer treatment in years past.

 

We owe it to everyone to be as objective as possible when we report our research results or issue our press releases.  As researchers, we must avoid the allure of overpromising and then underdelivering.

 

Ultimately, what the public needs to know are the facts.  Then all of us can focus on making the best decisions and recommendations in caring for our patients.

 

Filed Under:

Other cancers | Treatment | Vaccines

About Dr. Len


J. Leonard Lichtenfeld, MD, MACP - Dr. Lichtenfeld is Deputy Chief Medical Officer for the national office of the American Cancer Society.


 
