Last updated: Fri, Jun 30, 2017
Selection bias describes what happens when the procedure for selecting a study's subjects yields a non-representative sample of the group being investigated. A representative sample can be obtained by randomly selecting a sufficient number of subjects. A random sample is desirable for at least two reasons.
First, any procedure that selects non-randomly may produce a group of study subjects that isn't representative of the larger group. Those in the study may differ from the larger group in some way that depends on the selection process. Therefore, it won't be correct to say, “We found from our study that such-and-such is true of the larger group.” How incorrect it is will depend upon how different the study group is from the larger group, and nobody may know the answer to that.
The second and probably more serious problem arises when the study group and the control group are selected in different ways. In that case, even a weaker statement may be incorrect: “We found that among our experimental subjects, those who received the treatment fared 12% better than those who didn't” is unreliable because we simply don't know how big a role was played by pre-existing differences between the two groups.
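A toy simulation makes the point concrete. Everything below is hypothetical illustration, not data from any study: the “treatment” here does nothing at all, but because the two groups are recruited from different populations, a sizable apparent effect shows up anyway.

```python
import random

random.seed(0)

# Hypothetical illustration: the true treatment effect is zero, but the
# treated group happens to be recruited from a healthier sub-population.
def outcome(baseline_health):
    # Outcome depends only on baseline health plus noise; the "treatment"
    # contributes nothing.
    return baseline_health + random.gauss(0, 5)

# Treated subjects drawn from a healthier population (mean health 60)...
treated = [outcome(random.gauss(60, 10)) for _ in range(1000)]
# ...controls drawn from a sicker population (mean health 50).
control = [outcome(random.gauss(50, 10)) for _ in range(1000)]

mean_t = sum(treated) / len(treated)
mean_c = sum(control) / len(control)
print(f"Apparent 'treatment effect': {mean_t - mean_c:.1f} points")
# A difference of roughly 10 points appears, entirely an artifact of how
# the two groups were selected.
```

The same selection difference that manufactures an effect here can just as easily inflate, shrink, or mask a real one, and nothing in the outcome data alone reveals which has happened.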
How might you go about obtaining a truly random sample of pain patients? I think the answer is that you couldn't. As a practical matter, pain studies are conducted with small subject groups who are usually drawn from a particular clinic, a particular healthcare system, or at most a particular city, and who are characterized rather loosely. Here is the description of the participants in a study entitled “Changes after Multidisciplinary Pain Treatment in Patient Pain Beliefs and Coping Are Associated with Concurrent Changes in Patient Functioning”:
We analyzed data from 141 patients who participated in the University of Washington (Seattle, WA) outpatient multidisciplinary pain program, enrolled in a study of pain treatment process (Jensen et al., 2001), and provided posttreatment and 12-month follow-up data. During the study enrollment period, 283 patients participated in the pain program. Of these, 197 (70%) agreed to participate, and of these, 141 (72%) provided posttreatment and 12-month follow-up data. About half (51%) of the participants were female and most were Caucasian (90%). Mean (SD) age was 44.7 (10.7) years (range, 21 - 78 years) and median pain duration was 3.2 years (range, 4 months - 48 years). The primary site of pain varied and included the low back (34%), neck (18%), shoulder or arm (13%), leg (12%), head (9%), and other sites (14%). Twenty-nine percent of the study participants were working either full time (18%) or part time (11%), 60% were receiving pain-related disability compensation, and 12% had litigation pending regarding their pain problem at the time of study enrollment.1
While this description of the participants is fairly forthcoming, it leaves some big questions: How do the 70% who agreed to participate differ from those who did not? Why did 28% of that 70% drop out? How were those who abandoned the study different from those who stayed? Almost exactly half of the patients seen during the study enrollment period stayed in the study; the other half didn't.
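The attrition arithmetic is easy to verify from the figures in the quoted passage:

```python
# Figures taken from the study description quoted above.
enrolled = 283    # patients in the pain program during the enrollment period
agreed = 197      # agreed to participate (reported as 70%)
completed = 141   # provided posttreatment and 12-month data (reported as 72%)

print(f"Agreed:    {agreed / enrolled:.1%}")     # prints "69.6%"
print(f"Completed: {completed / agreed:.1%}")    # prints "71.6%"
print(f"Overall:   {completed / enrolled:.1%}")  # prints "49.8%", almost exactly half
```

Compounding the two stages, only about half of the eligible patients contributed data to the final analysis.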
Now look at the study title again. It claims that “Changes after Multidisciplinary Pain Treatment in Patient Pain Beliefs and Coping Are Associated with Concurrent Changes in Patient Functioning.” Suppose you read the study and agreed that the result had been demonstrated for the group who finished it; how confident could you be in extending it to other patients? Can you judge whether you, or the patients you treat, would respond in a similar way? Do the study authors believe that the claim of the title is broadly valid?
Selection bias found its way into this study in at least three ways: 1) the study participants were all patients at the same clinic; 2) 30% of those eligible to enroll declined; 3) 28% of the remainder failed to complete the study.
This is not to say that there is no value in this research. It is, however, to say that it is important to consider selection bias in evaluating the conclusions of a study.