- Learning About New Ways to Prevent Cancer
- Look at where the information came from
- Look at the science behind the prevention method
- Types of human studies on cancer risk
- Studies that observe humans
- Human testing: Clinical trials
- A closer look at the evidence
- Other questions about studies on new ways to prevent cancer
- What does this mean to you?
- To learn more
- Appendix A
A closer look at the evidence
If you’re able to find clinical trials that were done on the method you are looking at, it’s important to notice what kind of study was done and see what was compared. You’ll also want to look at some other factors in the study:
Study subjects: If you happened to get your information from a study done on people, this is a good start. But there are many stages a treatment must go through in human tests before it can be used by most doctors to prevent cancer. It’s possible that the study is an early (preliminary) one, or a pilot study. These are small early studies, in which a drug or treatment is tested on a few people just to decide if it’s worth testing on larger numbers of people. (See the “Human testing: Clinical trials” section.) These small studies don’t have enough people in them to show whether a prevention method works.
Control group: A study that has a control group is called a controlled study. This means that the people who got the prevention method were compared to others who didn’t get that prevention method.
Studies that do not have control groups may compare their disease rates with older studies or general information collected on other groups. But these may not offer good comparisons due to differences in the groups of people, which can affect how much cancer will be found. For instance, one group may span different ages, which affects how many people will get cancer during the study period. Different parts of the country have more cases of certain cancers. Some cancers affect one sex more than the other. Some regions (and even entire states) have a higher percentage of smokers than others. And certain subgroups get more exercise and eat healthier than others. These, and many more factors, make it a bad idea to compare a test group to others chosen in a different way or from a different pool of people than the test group.
The best control group is like the test group in every way other than the factor being studied. That’s why better-planned studies start with one group of people and randomly divide them into 2 or more groups, as described below.
Randomization: This means that the prevention method is compared using similar groups of volunteers who were chosen completely by chance to be in one group or the other (they were randomized to a group). This reduces the risk, for instance, that the older people who are at higher risk for cancer mostly end up in one group, which could change the study outcome.
Some of the benefits of randomization include helping to avoid situations which could bias results of a study. For instance, if more young people who start out healthier end up in the group getting the new prevention method, it may make the prevention method look better than it really is. If more people who started out with a higher risk of cancer (such as smokers) end up in the new cancer prevention group, that group may fare worse than the control group. This could make the prevention method look less effective, because it was tested on people who were more likely to get cancer. On the other hand, if more smokers end up in the control group, they may make the test method look better because the control group will likely get more cancer over the years.
To keep the groups balanced, researchers put people into one group or the other by choosing people for each group using methods along the lines of flipping a coin – usually with a computer program. Randomization lowers the odds that one group will be very different from the other. This is why you don’t know, when you agree to take part in a randomized controlled cancer prevention trial, whether you’ll get a standard prevention or the new one that’s being tested. And since there aren’t many known standard preventions for cancer, you may very well end up in a placebo group. When you’re informed about the cancer prevention clinical trial, the study team will tell you if there’s a chance you’ll be in a placebo group.
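The coin-flip-by-computer assignment described above can be sketched in a few lines of Python. This is a simplified illustration, not an actual trial tool; real trials use more sophisticated randomization schemes (such as blocked or stratified randomization):

```python
import random

def randomize(volunteers, seed=None):
    """Assign each volunteer to the 'new method' group or the 'placebo'
    group completely by chance, like flipping a coin for each person."""
    rng = random.Random(seed)  # seeded here only to make the example repeatable
    groups = {"new method": [], "placebo": []}
    for person in volunteers:
        arm = rng.choice(["new method", "placebo"])
        groups[arm].append(person)
    return groups

# With enough volunteers, chance alone keeps the two groups roughly
# balanced in age, smoking status, and other cancer risk factors.
groups = randomize([f"volunteer {i}" for i in range(1000)], seed=42)
print(len(groups["new method"]), len(groups["placebo"]))
```

Because every person has the same 50/50 chance of landing in either group, no one (including the researchers) can steer healthier or higher-risk people into a particular arm.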
Keep in mind that this is very different from clinical trials in which people already have cancer. When treatment clinical trials are randomized, current treatments (not placebos) are used in the control group. This lets the researchers know whether the new treatment works better than the one that’s now being used.
Blinding: This means that the patients don’t know which cancer prevention group they’re in (test group, comparison group, or placebo group). If the patients do know what they’re getting, the study is called an “open label” study. One advantage to a blinded study is that it can help the researchers learn more about side effects. For instance, if patients know that they’re getting placebos, or that they’re getting a vitamin or a known standard treatment, they might not bother to report health problems to the study coordinator. Those who know they may be getting the test drug or treatment are more likely to report nausea, headaches, and fever, even if the problems turn out to be from something else, like food poisoning or the flu. The same is true for serious illnesses, which also can happen with no known reason but may end up being blamed on whatever the person is taking.
You can see that if the treatment group mostly reports new health problems and the control group generally doesn’t, it can make the treatment method look like it has a lot more side effects. This is just one of the ways a patient’s knowledge about what they’re taking can affect a study’s outcome.
Double blinding: This means that neither the researchers nor the patient knows which treatment the patient is getting until after the prevention trial is completed and the observations are on record. This helps to avoid bias in which a researcher expects one group of patients to do better, which can affect the researcher’s observations. In cancer prevention trials, observations are carefully measured and written up. After the study is over, researchers break the code to find out who was in which group. Then the data is analyzed to find out which group (if any) did better than the other.
There is an exception to double blinding, however. In studies where there’s a chance that some harm might take place, a Data and Safety Monitoring group follows the results of the study. They don’t share this information with others unless it appears that harm is being done. For instance, if one group appears to be doing much better or worse than the other after an early review, they may require that the study be un-blinded so that a closer look can be taken at what may be going on. If the study is found to be causing harm (either to those getting the prevention or those not getting it), the study may be stopped before its scheduled ending time.
Statistical significance: The data are carefully looked at to see if the difference between the groups is likely to be due to chance. This is called a test of statistical significance. It means that if one group came out better than the other by a large enough margin, it’s very unlikely that the differences were by chance, and the results are said to be “significant.” Keep in mind this kind of test alone cannot prove that factors besides random chance didn’t bias or confound the results. Careful study planning and precise measurements are used to avoid those factors.
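One common way to test whether a difference between two groups is bigger than chance would likely produce is a two-proportion z-test. The sketch below is a minimal illustration with made-up numbers; real trial analyses use more careful statistical methods:

```python
import math

def two_proportion_z(cancers_a, n_a, cancers_b, n_b):
    """Rough test of whether the difference in cancer rates between two
    groups is larger than chance alone would be likely to produce."""
    p_a, p_b = cancers_a / n_a, cancers_b / n_b
    p_pool = (cancers_a + cancers_b) / (n_a + n_b)  # combined rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal distribution, via the
    # complementary error function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 20 cancers among 1,000 treated people
# vs. 40 cancers among 1,000 people in the control group.
z, p = two_proportion_z(20, 1000, 40, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

By convention, a p-value below 0.05 is called "significant," meaning a difference that large would arise by chance alone less than 5% of the time. As the text notes, this says nothing about bias or confounding; it only addresses random chance.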
Publication and peer review: Publishing the findings in a respected peer-reviewed journal means that the methods and information from the study were looked at by other doctors or scientists. When they look at the information, they want to be sure that the scientific procedures were properly followed. They also keep an eye out for any bias or other factors that would make one group do better than the other for some reason other than the treatment being studied.
The highest standard of proof that a cancer prevention method works is a double-blind randomized clinical trial on humans that has met the strictest standards of scientific method. If blinded studies are not possible, scientific procedures must still be carefully followed to be sure that any differences in outcomes are due to the treatment, and not other factors. This usually allows the study to be published in a respected, peer-reviewed medical journal.
It takes more than one study to prove something really works. Even breakthrough ideas take a lot of testing to show that they work. Since many good ideas don’t pan out for cancer prevention, the failure rate can be high. One study with a good outcome doesn’t mean a cancer prevention method works. Even if a study is done in the most careful manner, future studies that try the same thing sometimes find that they get different results. This can happen because the second clinical trial tests the method on a different group of people that doesn’t respond the same way as the first group. Or the method may be used in a slightly different way, or with some other small difference that may not even have been noticed. Sometimes a treatment looks great in the first study, but then no other study gets the same outcome – meaning that real-life patients couldn’t expect those great results either.
Science builds on the studies in the lab, and sometimes tests in animals. If the cancer prevention method seems to be safe, it’s moved up to test in a small group of people. Getting to this point often takes years. If these results look promising, a phase I clinical trial may be started. At any point, the researchers may find that the cancer prevention method really doesn’t work the way they thought it would. But even if it does, good testing can take a long time.
Publication bias: There’s another problem that can creep in as more studies are published. Sometimes, the studies that show no difference between the treatment and placebo, or the ones that show the placebo group doing better, are not published. After all, it isn’t exactly exciting news when something doesn’t work. But these kinds of studies could really help people who are trying to decide whether it’s worthwhile to take the treatment. Worse, if the only clinical trials published are the ones that show the treatment helps, a person reviewing the published information might not be able to find studies that showed no difference. He or she might conclude that the treatment was helpful, because those are the only studies that were published. This is an example of what is called publication bias.
What if different clinical trials show different outcomes?
If you find clinical trials that show opposite outcomes, it can be very confusing. When there are just a few studies, as there may be on a compound that’s generally thought to be safe, tests on humans may be the first type done. There may not be much understanding of how the compound might work from lab studies or animal studies. Even when the studies are set up well, these clinical trials often end up showing very little difference, if any, between the people who took it and those who didn’t. When the compound really doesn’t have any effect, chance will often tip the scales in one direction or another – sometimes even enough that the results look significant. This means that sometimes the placebo group will do a bit better than the test group, while at other times, the group that gets the new compound does a little better. When results conflict with one another like this, it often means that the treatment has very little effect. Publication bias can mean that you find more studies showing a method worked than studies showing it didn’t work, because most of the studies showing it didn’t work were never published. Or there can be study design problems, and other factors that affected the outcomes.
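The way pure chance can tip results in either direction can be shown with a small simulation. All numbers here are hypothetical: both groups get cancer at exactly the same underlying rate, so any difference between them is chance alone:

```python
import random

def simulate_null_trials(n_trials=20, n_per_group=500, base_rate=0.05, seed=7):
    """Simulate many trials of a compound with NO real effect: both groups
    have the same underlying cancer rate, so only chance separates them."""
    rng = random.Random(seed)  # seeded only so the example is repeatable
    outcomes = []
    for _ in range(n_trials):
        treated = sum(rng.random() < base_rate for _ in range(n_per_group))
        control = sum(rng.random() < base_rate for _ in range(n_per_group))
        outcomes.append("treatment looked better" if treated < control
                        else "placebo looked better or tied")
    return outcomes

results = simulate_null_trials()
print(results.count("treatment looked better"), "of", len(results),
      "simulated trials favored the treatment by chance alone")
```

Even with no real effect, some simulated trials favor the "treatment" and others favor the "placebo" – the same conflicting pattern described above. If only the trials favoring the treatment were published, the evidence would look far stronger than it really is.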
Last Medical Review: 10/09/2014
Last Revised: 05/21/2015