Random Sampling and Random Assignment Examples

 

Parametric and Resampling Statistics (cont):

Random Sampling and Random Assignment

The major assumption behind traditional parametric procedures--more fundamental than normality and homogeneity of variance--is the assumption that we have randomly sampled from some population (usually a normal one). Of course, virtually no study you are likely to run will employ true random sampling, but leave that aside for the moment. To see why this assumption is so critical, consider an example in which we draw two samples, calculate the sample means and variances, and use those as estimates of the corresponding population parameters. For example, we might draw a random sample of anorexic girls (potentially) given one treatment, and a random sample of anorexic girls given another treatment, and use our statistical test to draw inferences about the parameters of the corresponding populations from which the girls were randomly sampled. We would probably like to show that our favorite treatment leads to greater weight gain than the competing treatment, and thus that the mean of the population of all girls given our favorite treatment is greater than the mean of the other population. But statistically, it makes no sense to say that the sample means are estimates of the corresponding population parameters unless the samples are drawn randomly from that (those) population(s). (Using the 12 middle-school girls in your third-period living-arts class is not going to give you a believable estimate of U.S. (let alone world) weights of pre-adolescent girls.) That is why the assumption of random sampling is so critical. In the extreme, if we don't sample randomly, we can't say anything meaningful about the parameters, so why bother? That is part of the argument put forth by the resampling camp.
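To make that estimation step concrete, here is a minimal sketch in Python, using NumPy and an entirely invented "population" of weights rather than data from any actual study. The point is simply that the sample mean and variance are used as estimates of the corresponding population parameters, and that this only makes sense when the sample is drawn at random.

```python
# A minimal sketch of estimation from a random sample (simulated data, not real weights).
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical population of weights; the mean and spread are invented for illustration.
population = rng.normal(loc=38.0, scale=5.0, size=100_000)

# A truly random sample -- the condition the parametric argument relies on.
sample = rng.choice(population, size=30, replace=False)

print("population mean:", population.mean(), " population variance:", population.var())
print("sample mean:    ", sample.mean(), " sample variance (unbiased):", sample.var(ddof=1))
```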

Of course, those of us who have been involved in statistics for any length of time recognize this assumption, but we rarely give it much thought. We assume that our sample, though not really random, is a pretty good example of what we would have if we had the resources to draw truly random samples, and we go merrily on our way, confident in the belief that the samples we actually have are "good enough" for the purpose. That is where the parametric folks and the resampling folks have a parting of the ways.

The parametric people are not necessarily wrong in thinking that on occasion nonrandom sampling is good enough. If we are measuring something that would not be expected to vary systematically among participants, such as the effect of specific stimulus variations on visual illusions, then a convenience sample may give acceptable results. But keep in mind that any inferences we draw are not statistical inferences, but logical inferences. Without random sampling we cannot make a statistical inference about the mean of a larger population. But on nonstatistical grounds it may make good sense to assume that we have learned something about how people in general process visual information. But using that kind of argument to brush aside some of the criticisms of parametric tests doesn't diminish the fact that the resampling approach legitimately differs in its underlying philosophy.

The resampling approach, and for now I mean the randomization test approach and not bootstrapping, really looks at the problem differently. In the first place, people in that area don't give a "population" the centrality that we are used to assigning to it in parametric statistics. They don't speak as if they sit around fondly imagining those lovely bell-shaped distributions with numbers streaming out of them that we often see in introductory textbooks. In fact, they hardly appear to think about populations at all. And they certainly don't think about drawing random samples from those imaginary populations. Those people are as well qualified as statisticians as you could wish, but they don't worry much about estimating parameters, for which you really do need random samples. They just want to know the likelihood of the sample data falling as they did if the treatments were equally effective. And for that, they don't absolutely need to think of populations.

In the history of statistics, the procedures with which we are most familiar were developed on the assumption of random sampling. And they were developed with the expectation that we are trying to estimate the corresponding population mean, variance, or whatever. This idea of "estimation" is central to the whole history of traditional statistics--we estimate population means so that we can (hopefully) conclude that they are different and that the treatments have different effects.

But that is not what the randomization test folks are trying to do. They start with the assumption that samples are probably not drawn randomly, and assume that we have no valid basis (or need) for estimating population parameters. This, I think, is the best reason to think of these procedures as nonparametric procedures, though there are other reasons to call them that. But if we can't estimate population parameters, and thus have no legitimate basis for retaining or rejecting a null hypothesis about those parameters, what basis do we have for constructing any statistical test? It turns out that we have legitimate alternative ways of testing our hypothesis, though I'm not sure that we should even be calling it a null hypothesis.

This difference over the role of random sampling is a critical difference between the two approaches. But that is not all. The resampling people, in particular, care greatly about random assignment. The whole approach is based on the idea of random assignment of cases to conditions. That will appear to create problems later on, but take it as part of the underlying rationale. Both groups certainly think that random assignment to conditions is important, primarily because it rules out alternative explanations for any differences that are found. But the resampling camp goes further and makes it the center point of their analysis. To put it very succinctly, a randomization test works on the logical principle that if cases were randomly assigned to treatments, and if the treatments have absolutely no effect on scores, then a particular score is just as likely to have appeared under one condition as under any other. Notice that the principle of random assignment tells us that if the null hypothesis is true, we could validly shuffle the data and expect to get essentially the same results. This is why random assignment is fundamental to the statistical procedure employed.
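That shuffling logic can be illustrated with a short Python sketch. The scores, group sizes, and the choice of a mean difference as the test statistic are all invented for illustration; the idea is only that, under the null hypothesis, reshuffling the group labels should fairly often produce differences at least as extreme as the one observed.

```python
# A minimal sketch of a randomization (permutation) test on two invented groups of scores.
import numpy as np

rng = np.random.default_rng(seed=1)

treatment_a = np.array([4.1, 5.3, 6.0, 4.8, 5.5])   # hypothetical weight gains
treatment_b = np.array([2.9, 3.6, 4.2, 3.1, 3.8])

observed_diff = treatment_a.mean() - treatment_b.mean()
pooled = np.concatenate([treatment_a, treatment_b])
n_a = len(treatment_a)

n_shuffles = 10_000
count_as_extreme = 0
for _ in range(n_shuffles):
    shuffled = rng.permutation(pooled)               # reassign scores to groups at random
    diff = shuffled[:n_a].mean() - shuffled[n_a:].mean()
    if abs(diff) >= abs(observed_diff):
        count_as_extreme += 1

# Proportion of random reassignments at least as extreme as the observed difference.
print("observed difference:", observed_diff)
print("approximate p-value:", count_as_extreme / n_shuffles)
```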


David C. Howell
University of Vermont
David.Howell@uvm.edu

Random assignment refers to the use of chance procedures in psychology experiments to ensure that each participant has the same opportunity to be assigned to any given group.

Study participants are randomly assigned to different groups, such as the experimental (treatment) group or the control group. Random assignment might involve such tactics as flipping a coin, drawing names out of a hat, rolling dice, or assigning random numbers to participants.

It is important to note that random assignment differs from random selection. While random selection refers to how participants are randomly chosen to represent the larger population, random assignment refers to how those chosen participants are then assigned to experimental groups.

How Does Random Assignment Work in a Psychology Experiment?

To determine if changes in one variable lead to changes in another variable, psychologists must perform an experiment. Researchers often begin by forming a testable hypothesis predicting that one variable of interest will have some impact on another variable.

The variable that the experimenters manipulate in the experiment is known as the independent variable, while the variable that they then measure is known as the dependent variable. While there are different ways to look at relationships between variables, an experiment is the best way to determine whether there is a cause-and-effect relationship between two or more variables.

Once researchers have formulated a hypothesis, conducted background research, and chosen an experimental design, it is time to find participants for their experiment. How exactly do researchers decide who will be part of an experiment? As mentioned previously, this is often accomplished through something known as random selection.

In order to generalize the results of an experiment to a larger group, it is important to choose a sample that is representative of the qualities found in that population. For example, if the total population is 51 percent female and 49 percent male, then the sample should reflect those same percentages. Choosing a representative sample is often accomplished by randomly picking people from the population to be participants in a study. Random selection means that everyone in the group stands an equal chance of being chosen.
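A minimal sketch of random selection, using a made-up roster rather than a real participant pool: because everyone has the same chance of being picked, a reasonably large random sample tends to mirror the composition of the population.

```python
# A minimal sketch of random selection from a hypothetical population (labels only, no real people).
import random

random.seed(7)

# Hypothetical population that is 51% female and 49% male.
population = ["F"] * 5100 + ["M"] * 4900

sample = random.sample(population, k=200)            # random selection without replacement

print("proportion female in sample:", sample.count("F") / len(sample))
```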

Once a pool of participants has been selected, it is time to assign them to groups. By randomly assigning the participants to groups, the experimenters help ensure that the groups are comparable before the independent variable is applied.

Participants might be randomly assigned to the control group, which does not receive the treatment in question, or to the experimental group, which does receive the treatment. Random assignment increases the likelihood that the two groups are the same at the outset, so that any changes that result from the application of the independent variable can be attributed to the treatment of interest.
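Here is a minimal sketch of random assignment in Python, assuming a hypothetical pool of 20 already-selected participants (the names are placeholders). Shuffling the pool and splitting it in half gives each person the same chance of ending up in either group.

```python
# A minimal sketch of random assignment of a selected pool into two groups.
import random

random.seed(3)

pool = [f"participant_{i}" for i in range(1, 21)]    # hypothetical pool of 20 people

random.shuffle(pool)                                 # each ordering is equally likely
midpoint = len(pool) // 2
control_group = pool[:midpoint]                      # does not receive the treatment
experimental_group = pool[midpoint:]                 # receives the treatment of interest

print("control:     ", control_group)
print("experimental:", experimental_group)
```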

An Example of Random Assignment

Imagine that a researcher is interested in learning whether or not drinking caffeinated beverages prior to an exam will improve test performance. After randomly selecting a pool of participants, each person is randomly assigned to either the control group or the experimental group. The participants in the control group consume a placebo drink prior to the exam that does not contain any caffeine. Those in the experimental group, on the other hand, consume a caffeinated beverage before taking the test. Participants in both groups then take the test and the researcher compares the results to determine if the caffeinated beverage had any impact on test performance.
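A short, purely simulated version of this example might look like the sketch below; the test scores, the size of the caffeine effect, and the group sizes are all invented for illustration, not empirical values.

```python
# A minimal end-to-end sketch of the caffeine example with simulated exam scores.
import random
import statistics

random.seed(11)

participants = [f"p{i}" for i in range(1, 41)]       # hypothetical pool of 40 people
random.shuffle(participants)                         # random assignment
control, experimental = participants[:20], participants[20:]

# Simulate exam scores: both groups share a baseline; the experimental group
# gets a hypothetical boost attributed to the caffeinated drink.
control_scores = [random.gauss(70, 8) for _ in control]
experimental_scores = [random.gauss(73, 8) for _ in experimental]

print("mean score (placebo): ", round(statistics.mean(control_scores), 1))
print("mean score (caffeine):", round(statistics.mean(experimental_scores), 1))
# The researcher would then ask whether this difference is larger than chance alone
# would produce -- for example, with a randomization test like the one sketched earlier.
```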

A Word From Verywell

Random assignment plays an important role in the psychology research process. Not only does this process help eliminate possible sources of bias, it also makes it easier to generalize the results of a study to a larger population.

Random assignment helps ensure that the groups in an experiment are comparable at the outset, which also makes them more likely to be representative of the larger population. Through the use of this technique, psychology researchers are able to study complex phenomena and contribute to our understanding of the human mind and behavior.

