The wide availability of health information — on television, in print, on websites and via search engines — brings with it a new set of challenges for consumers of medical and health information. Sometimes it seems as though we are bombarded with health information, much of it hyped, taken out of context or boiled down to a sensational headline. The Internet, with its thousands of sites putting out information whose quality can be difficult to judge, poses particular challenges.
Consumers need to be able to evaluate different kinds of information and understand how each is generated.
How can a person know if the information they are getting is accurate?
Obviously, the source of the information is a big factor. The National Institutes of Health offers some tips for determining the quality of information on health and medical websites, along with a tutorial to help you evaluate them.
Unfortunately, the devil, as the saying goes, is often in the details. Perhaps the most poorly understood fact about scientific studies is that only a few specific types of research are able to prove cause-and-effect relationships between one thing and another. Often, all a study is really reporting is that there is a relationship, or correlation, between one thing and another. Once again, take as an example our fictional exercise story. Let's say that the researchers in this study followed a group of retirees from age 65 to age 90. The subjects were asked to fill out questionnaires about how many hours per week they exercised; the researchers monitored the retirees and found that those who said they exercised for one hour a day lived an average of 20 years longer than those who did not.
So what is wrong with saying that this study proves that exercise makes you live longer? For one thing, if you look carefully, you will see a possible bias in the selection of subjects; among other differences, people who retire are wealthier than average. Second, the researchers accepted self-reported data about how much subjects exercised; because people know that they are supposed to exercise, they may well exaggerate how much exercise they really did when reporting to a medical professional or researcher. Finally, because of the way it was designed, this study can, at best, do nothing more than establish an association between exercise and longevity. In other words, it shows that the two are related, but not how they are related. Because the study did not attempt to account for the myriad large and small differences between the individuals studied (from medical history to income to ethnicity to education), there is no way it could prove that a) exercise causes b) longer life. In fact, the findings of the study are perfectly consistent with other explanations: perhaps healthier people are simply more able to exercise in the first place, or some third factor, such as income, influences both how much people exercise and how long they live.
Medical and other academic journals in the sciences require that their authors explain the methodology they used. This is done so the methods themselves can be evaluated and so other researchers can try to test or replicate (reproduce) the findings. If other researchers perform similar experiments and cannot come up with similar findings, it is likely that the findings of the first study were affected by some factor other than the one being studied — anything from unaccounted-for qualities of the participants to an accidental flaw in the way the study was done. Suppose one study found that eating a grapefruit every day improved muscle tone in a group of elderly men, but other researchers were unable to replicate these findings. The likelihood is that some other, unidentified factor was the real cause of the improved muscle tone in the first study.
That issue, the fact that even a carefully designed, peer-reviewed "scientific study" — the kind quoted in news articles, on television and websites — can be wrong, is the other reason why it is important to know a little about some of the basic research designs used by medical and other researchers. The rest of this article considers some of the major types of studies and the strengths and weaknesses of each.
In addition, studies build on each other. The findings of researchers studying how cholesterol is metabolized in the body may well become the basis for research on drugs to lower cholesterol. A new discovery that advances our understanding of cholesterol — perhaps locating a gene that predisposes a person to high-cholesterol — may well start a shift in the way the condition is treated. Journals help keep researchers up to date with discoveries in their field and foster this gradual building of knowledge.
For patients and their families needing answers, this can be frustrating. It would be nice if your physician could always give you a clear "right" answer, but often, the answer to a medical question is provisional and based on an evolving body of knowledge. When the patient's own personal characteristics, personality and medical history are factored in, picking the right treatment becomes a complicated and individual matter.
That is where understanding how to evaluate medical information comes in. Obviously, real-life medical decisions need to be made in consultation with the doctors directly involved in the patient's care. But knowing how to weigh medical information for yourself can be a big help.
In this kind of experimental study, researchers separate the people participating in the study, called subjects, into two groups: a control group and an experimental group. The experimental group is the group of subjects receiving the treatment being tested in the study. The control group is a group of subjects who are as similar as possible to the experimental group (in age, health and socioeconomic status, for example) but who are not given the treatment; they serve as a basis for comparison for determining whether the treatment works.
Particularly in medical studies, members of the control group are often given a placebo: an apparent treatment, usually a sugar pill or other harmless substance, given to a subject who believes that it is, or may be, real. Placebos are used to make sure that any effects of the substance being studied are entirely the result of the substance's own action and are not caused by psychological factors, such as wanting to believe that taking a pill will work. Correcting for this placebo effect lets researchers tell whether the real treatment, and not some psychological or external factor, is the true cause of any changes in the experimental group.
Double-blind studies are studies in which neither the subjects nor the researchers conducting the study know who is getting the real treatment and who is getting the placebo. Double-blind studies are useful because the subjects in the experimental and control groups are less likely to be affected by psychological factors, such as their expectations about the treatment they are receiving, that may make the treatment seem more or less effective. The researchers, in turn, are protected from their own preconceptions and biases, which can influence how they treat subjects (such as taking the experimental group more seriously), lead them to subconsciously look for "expected" findings, or cause them to suggest "correct" responses to subjects, intentionally or unintentionally. All of these might affect how subjects respond to the experiment. Controlled, double-blind studies are considered to produce the most reliable findings.
The correlation can also be negative, meaning that when one variable goes up the other goes down. For example, the more often a person eats breakfast, the less likely they are to become obese.
Pay attention the next time your local TV news anchor describes a scientific finding. Often, they will confuse correlation with cause, saying, for example, that eating breakfast actually causes weight loss, when that is not really what the study showed. Even if that were true, it would take a controlled experiment to prove it.
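The breakfast example can be made concrete with a toy simulation. Everything here is invented for illustration: a hidden "health-consciousness" factor drives both how often people eat breakfast and how much they weigh, so the two end up strongly correlated even though, in the simulation, eating breakfast has no effect on weight at all.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(0)

# Hypothetical confounder: each person's overall health-consciousness.
health = [random.gauss(0, 1) for _ in range(5000)]

# Health-conscious people eat breakfast more often (0-7 days per week)...
breakfast_days = [max(0, min(7, round(3.5 + 1.5 * h + random.gauss(0, 1))))
                  for h in health]

# ...and also weigh less. Note: breakfast_days is NOT used here, so
# breakfast has zero causal effect on weight in this simulated world.
weight_kg = [80 - 5 * h + random.gauss(0, 3) for h in health]

r = pearson(breakfast_days, weight_kg)
print(f"correlation: {r:.2f}")  # strongly negative, despite no causation
```

A study of this simulated population would correctly report that frequent breakfast eaters weigh less — and would be completely wrong to conclude that breakfast causes weight loss.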
But, as mentioned before, the relationships suggested by an epidemiological study like the Framingham Heart Study cannot prove cause. At best, epidemiological studies can only point to a plausible cause and effect. Further controlled studies need to be done.
A relatively new kind of study is known as a meta-analysis. This is basically a study of studies. Instead of doing new research, a meta-analysis compares the results of different studies done in different ways at different times and in different places. A good example is a recent meta-analysis that looked at dozens of dietary studies and found an association between higher calcium intake and weight loss. The downside of this kind of study is that the researchers are working entirely from second-hand data collected in different ways by others; it can be very easy for such a study to draw false conclusions by comparing "apples and oranges." So again, testing these findings with further studies is necessary.
Many medical studies discuss risk. It is important to understand that there are two kinds of risk: absolute risk and relative risk. Absolute risk means the chances of developing a disease or other condition within a certain time period. For example, an American woman's lifetime absolute risk of breast cancer is one in nine; that is, one out of nine women will develop breast cancer at some point during their lives. This can also be expressed as a percentage; e.g., 11%. Relative risk is used to compare risk in two different groups of people. For example, studies have shown that women who drink alcohol have a higher risk of developing breast cancer than non-drinkers. If, for example, the drinkers had a 16.5% chance of developing breast cancer (compared with the 11% baseline), then their relative risk would be 50% higher, because 16.5 is one and a half times 11. Unfortunately, it is easy to confuse a "50% higher risk" with a "50% risk."
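The arithmetic behind that distinction is worth seeing once. This sketch uses the article's illustrative numbers (an 11% baseline and a 16.5% risk for drinkers); the figures are for illustration only.

```python
# Absolute vs. relative risk, using the article's example numbers.
baseline_risk = 0.11   # absolute lifetime risk for non-drinkers (~1 in 9)
drinker_risk = 0.165   # absolute lifetime risk for drinkers in the example

relative_risk = drinker_risk / baseline_risk   # ratio of the two risks
increase_pct = (relative_risk - 1) * 100       # "how much higher" in percent

print(f"relative risk: {relative_risk:.2f}")   # 1.50
print(f"increase: {increase_pct:.0f}%")        # 50% higher risk...
print(f"absolute risk: {drinker_risk:.1%}")    # ...but still only a 16.5% risk
```

A "50% higher risk" moves the absolute risk from 11% to 16.5% — nowhere near a "50% risk."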
Finally, an important clue to how much weight to give to a particular study is sample size. Sample size means the number of people in a given study. It is important because the larger the sample size, the more reliable the results. For example, if you tested a new painkiller on a group of 100 people and found that three of them got severe headaches, you would be unsure if headaches were a side effect of the drug. However, if you did the same study with a sample size of 10,000 people and 300 of them developed headaches you would be much more certain that headaches were a real side effect.
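One standard way to quantify the difference sample size makes is to put an approximate 95% confidence interval around the observed side-effect rate. This is a sketch using the textbook normal approximation, not how any particular study reported its results; the 3-in-100 and 300-in-10,000 figures come from the example above.

```python
import math

def ci95(events, n):
    """Approximate 95% confidence interval for a proportion
    (normal approximation, clamped at zero)."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
    return max(0.0, p - 1.96 * se), p + 1.96 * se

# Same observed 3% headache rate, very different certainty:
lo_small, hi_small = ci95(3, 100)            # wide interval
lo_big, hi_big = ci95(300, 10_000)           # narrow interval

print(f"n=100:    {lo_small:.1%} to {hi_small:.1%}")
print(f"n=10,000: {lo_big:.1%} to {hi_big:.1%}")
```

With 100 subjects, the plausible range for the true rate stretches from roughly zero to over 6% — consistent with no side effect at all. With 10,000 subjects, the interval shrinks to a narrow band around 3%, which is why the larger study is so much more convincing.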
Keeping these concerns in mind as you read health information (and other kinds of scientific information) on the Web will help you separate the substantial findings from the sensational. And that's very healthy.