Sampling
A sample is a subset of items, objects, or elements from a larger group of interest, called the population. When an observed sample is used to make inferences about the unobserved population, chance factors must be considered and the risk of being wrong must be assessed. Statistical sampling uses probability to measure this uncertainty. Statistical, or probability, sampling implies that every item or subset of items in the population has a mathematically determined likelihood of being selected; which item or items in the population will be selected is left to chance and not to judgment. A sample consisting of numerical values that have meaning on a number line (for example, numbers on a ruler) is assumed to have been generated by a random variable that has a specific probability distribution.
SIMPLE RANDOM SAMPLING
In simple random sampling, each item in the population is equally likely to be selected. For instance, if the population of interest consists of N elements, then each and every possible sample of n elements (where n may equal 1, 2, 3, …, N − 1) should have a probability of 1/Nⁿ of being realized. (Typically, in textbooks, uppercase N is used to denote the number of elements in the population, and lowercase n gives the number of elements in the sample.) An example would be to blindly draw well-shaken numbered slips of paper from a hat or drum one at a time, with each slip placed back in the hat. Replacement ensures that each number is equally likely on each draw, although computerized random number generators are typically used to simulate it.
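As a minimal illustration of the hat-drawing experiment, the Python sketch below draws a sample both with and without replacement; the population and sample sizes are arbitrary choices for the example.

```python
import random

N = 1_000                        # slips numbered 1..N in the hat
population = range(1, N + 1)
n = 30                           # number of slips to draw

# With replacement: each slip goes back in the hat, so every number
# has probability 1/N on every draw.
with_replacement = random.choices(population, k=n)

# Without replacement: each slip is set aside once drawn.
without_replacement = random.sample(population, k=n)

print(with_replacement)
print(without_replacement)
```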
In drawing numbers from a hat or in a laboratory experiment, replacement might seem possible; in actual business and economics practices, however, replacement is seldom possible. If the population of interest is extremely large relative to the sample size, then even though the probability of each sample being selected in repeated sampling does not remain fixed, the changes in probability could be trivial. For instance, if n = 30 and N = 30,000, then for the first sample the probability of drawing 30 items is
n!(N − n)!/N! = 2.8 × 10⁻⁸⁸.
On the second sampling, it is
n!(N − 2n)!/(N − n)! = 3.0 × 10⁻⁸⁸.
As long as N is large relative to n, whether there is or is not replacement will not be critical. In practice, randomization based on the notion of fixed sampling probabilities is more a matter of degree than an absolute.
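The sketch below is a rough check of this point. It computes the formulas above as log-factorials (math.lgamma), since the probabilities themselves are far too small for ordinary floating point, and prints only the ratio of the second probability to the first; the point is that the ratio stays close to 1 when N is large relative to n.

```python
import math

def log_sample_prob(n: int, M: int) -> float:
    # ln[n!(M - n)!/M!], the log-probability of one particular
    # unordered sample of n items drawn from M remaining items.
    return math.lgamma(n + 1) + math.lgamma(M - n + 1) - math.lgamma(M + 1)

n, N = 30, 30_000
first = log_sample_prob(n, N)       # first sample, drawn from all N items
second = log_sample_prob(n, N - n)  # second sample, drawn from the N - n left

# Both probabilities are astronomically small; what matters is how
# little they change between draws when N is large relative to n.
print(math.exp(second - first))     # roughly 1.03, a trivial change
```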
Putting numbered items in a hat to be shaken or into a revolving drum may give the appearance of good mixing, but the resulting selections will not necessarily produce a sample that represents the population. For instance, in 1970, during the Vietnam War, military draft status for induction into the U.S. Armed Forces was determined by the order in which birthdays were drawn from a drum. The Selective Service placed 366 capsules in a drum, each representing a birthday. The drum was turned several times and capsules were selected “at random.” As Norton Starr discusses in detail, this method looked impressive on television, but the results were not a good representation of birthdays. Randomization suggests that among the first 183 birthdays selected, about one-sixth should be from November and December. Of the first 183 days selected, however, 46 were from November and December, well above the expected 30 or 31. This led to speculation that more than simple random sampling error was involved.
SYSTEMATIC SAMPLING
Systematic random sampling involves the selection of every k-th element (or block of elements) from a list of elements, starting with any randomly selected element. Systematic sampling is a popular way of generating samples in accounting and quality-control work. It is typically less expensive to select every k-th element than to search for the n randomly determined items. In systematic sampling, however, only the first of the n items can be considered randomly determined.
Systematic sampling is convenient for populations formed by lists, stacks, or series. For instance, if the population of interest is the students with e-mail addresses at a university, systematic sampling could be used to minimize the likelihood that those with the same last name (who may be relatives) would be included in a sample. However, if the data have cyclical components, then systematic sampling may be inappropriate.
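A minimal sketch of the procedure, assuming an alphabetized list of hypothetical addresses (the example.edu names are invented for the illustration):

```python
import random

def systematic_sample(items, n):
    # Select every k-th element after a random start within the first k;
    # only this starting point is randomly determined.
    k = len(items) // n
    start = random.randrange(k)
    return items[start::k][:n]

# Hypothetical alphabetized list of student e-mail addresses.
emails = sorted(f"student{i:04d}@example.edu" for i in range(10_000))
print(systematic_sample(emails, 25))
```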
STRATIFIED SAMPLING
When the population is known to consist of a number of distinct categories, characteristics, or demographics, the sampling process can be made more efficient (requiring a smaller sample size for the same precision) if the population is divided into subgroups that share the common attribute. For example, in a study of starting salaries, recent university graduates might be grouped, or “stratified,” by their majors. In a study of higher-education costs, universities might be placed in one of two subgroups: public or private. Heterogeneous populations can always be divided into subgroups that are more homogeneous if those in the population can be identified by the characteristic. These more homogeneous subgroups are called “strata.” Simple random sampling, or one of a number of other sampling methods, is then applied to each stratum separately, with the sample size for each stratum made proportional to the stratum’s share of the population.
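A minimal sketch of proportional allocation follows, with invented strata of graduates by major; note that rounding each stratum's allocation can shift the total by one in general.

```python
import random

def stratified_sample(strata: dict, total_n: int) -> dict:
    # Each stratum's sample size is proportional to its share of the
    # population; simple random sampling is applied within each stratum.
    N = sum(len(members) for members in strata.values())
    return {
        label: random.sample(members, round(total_n * len(members) / N))
        for label, members in strata.items()
    }

# Hypothetical population of recent graduates, stratified by major.
strata = {
    "economics": [f"econ{i}" for i in range(600)],
    "biology": [f"bio{i}" for i in range(300)],
    "history": [f"hist{i}" for i in range(100)],
}
sample = stratified_sample(strata, 50)
print({major: len(members) for major, members in sample.items()})  # 30, 15, 5
```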
Cluster Sampling If sampling requires face-to-face communication and the members of a population are physically separated from one another (as in different cities), then it would be very costly to visit randomly selected homes. Instead, sampling might be restricted to a few cities. Within these cities, a surveyor visits specific neighborhoods, interviewing an adult from every i-th house on the j-th street. Such sampling is called cluster or area sampling because groups or clusters of elements are first selected and then elements within a cluster are chosen. At each stage, selection can be random or based on nonrandom judgment.
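A minimal two-stage sketch with an invented frame of cities and households, selecting at random at both stages:

```python
import random

# Hypothetical frame: 40 cities (clusters), each with 500 households.
cities = {f"city{c}": [f"city{c}-house{h}" for h in range(500)]
          for c in range(40)}

# Stage 1: randomly select a few clusters.
chosen = random.sample(list(cities), 4)

# Stage 2: randomly select households within each chosen cluster.
sample = [house for city in chosen
          for house in random.sample(cities[city], 25)]
print(len(sample))  # 4 cities x 25 households = 100
```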
Classical Statistics and Sample Statistics Classical statistics assumes simple random sampling. Random “draws” imply that each sample of size n will yield somewhat different values. Thus, the value of any statistic calculated with sample data (for example, the sample mean) varies from sample to sample. A histogram (graph) of these values provides the sampling distribution of the statistic. Many Web sites show how the distribution of the sample means changes with n. Robert Wolf, for example, has designed “Statutor,” a computer-based tutorial on sampling distributions, estimators, and related topics that can be freely downloaded.
The law of large numbers holds that as n increases, a statistic such as the sample mean (X̄) converges to its true mean (μ). That is, the sampling distribution of the mean collapses on or degenerates to a spike at the population mean. The central limit theorem, on the other hand, states that for many samples of like and sufficiently large size n, the histogram of their sample means will appear to be a normal bell-shaped distribution. As the sample size n is increased, the distribution of the sample mean at first becomes normal (central limit theorem) but then degenerates to the population mean (law of large numbers). Only the standardized mean, (X̄ − μ)/(σ/√n), maintains its shape as n goes to infinity, where σ is the population standard deviation.
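The simulation below is a sketch of both results using only the standard library. It draws repeated samples from a skewed population (an exponential distribution with μ = σ = 1, chosen for the example) and shows the spread of the raw means shrinking while the standardized means keep a spread of about one:

```python
import random
import statistics

MU = SIGMA = 1.0  # mean and standard deviation of an exponential(1) population

def sample_mean(n: int) -> float:
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

for n in (10, 100, 10_000):
    means = [sample_mean(n) for _ in range(2_000)]
    # Law of large numbers: the raw means pile up ever closer to MU,
    # so their spread shrinks like 1/sqrt(n)...
    spread = statistics.stdev(means)
    # ...while the standardized means (X̄ - μ)/(σ/√n) keep a spread near 1,
    # and by the central limit theorem their histogram looks bell-shaped.
    z_spread = statistics.stdev((m - MU) / (SIGMA / n ** 0.5) for m in means)
    print(n, round(spread, 4), round(z_spread, 4))
```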
Sampling and Inferential Statistics in the Social Sciences Unlike laboratory random experiments, sample data in the social sciences are often “opportunistic,” meaning they have been observed with no explicit and certain knowledge as to how they were generated. In such cases, the researcher must have a theory about the data-generating process. For example, one might assume that observation i at time t on dependent variable Yit was generated by k independent variables, X1t, X2t, X3t, …, Xkt, plus an error term εit reflecting random chance factors, where the betas are parameters to be estimated:
Yit = β0 + β1X1t + β2X2t + … + βkXkt + εit
A maximum likelihood estimator of the betas requires the researcher to make an assumption about the error term (for example, εit is normal with mean zero and unit standard deviation) and then have a computer program search for values of the betas that maximize the probability of getting the observed sample values of the Xs and Y. Here the sample Y values are assumed to come from this model, conditional on the values of the Xs, with the randomness in Y generated by the assumed distribution of the epsilon error term. If the assumed population model of the data-generating process is wrong, then the estimated parameters are meaningless. Unlike data obtained from simple random sampling in a laboratory experiment, opportunistic sample data cannot be used to make inferences without a theory about the nature of the data-generating process.
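As a rough sketch of this approach (not the entry's own code), the example below generates data from an assumed two-regressor model with normal, unit-variance errors and recovers the betas by minimizing the negative log-likelihood; NumPy and SciPy are assumed to be available, and under these error assumptions the likelihood search is equivalent to least squares.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Assumed data-generating process: Y = b0 + b1*X1 + b2*X2 + eps,
# with eps normal, mean zero, unit standard deviation.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_betas = np.array([2.0, -1.0, 0.5])
y = X @ true_betas + rng.normal(size=n)

def neg_log_likelihood(betas):
    # Up to constants, -ln L = 0.5 * (sum of squared residuals) when the
    # errors are N(0, 1); maximizing L means minimizing this quantity.
    resid = y - X @ betas
    return 0.5 * np.sum(resid ** 2)

result = minimize(neg_log_likelihood, x0=np.zeros(3))
print(result.x)  # close to the betas that generated the data
```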
SEE ALSO Censoring, Sample; Ex Ante and Ex Post; Exchangeability; Heteroskedasticity; Monte Carlo Experiments; Multicollinearity; Natural Experiments; Policy Experiment; Probability, Limits in; Sample Attrition; Selection Bias; Serial Correlation
BIBLIOGRAPHY
Starr, Norton. 1997. Nonrandom Risk: The 1970 Draft Lottery. Journal of Statistics Education 5 (2). http://www.amstat.org/publications/jse/v5n2/datasets.starr.html.
Wolf, Robert A. Statutor: A Computer-Based Teaching Tool for Statistical Concepts. Ann Arbor: University of Michigan, Department of Biostatistics. http://archives.math.utk.edu/software/msdos/statistics/statutor/.html.
William E. Becker
sampling
Probability sampling requires that each case in the universe being studied have a determinate, or fixed, chance of being selected; probability statistics can then be used to measure quantitatively the risk of drawing the wrong conclusion from samples of various sizes. It seems intuitively obvious that if one in two cases is randomly selected from a population, the risk of the half so selected being unrepresentative of the whole group is far lower than if one in fifty were selected. The higher sampling fraction of one in two must give more reliable information than the sampling fraction of one case in fifty. But the actual sample size is even more important in determining how representative the sample is. A sample of about 2,500 persons has broadly the same reliability and representativeness whether it comes from a population of 100,000 persons or one million persons. Samples of 2,000–2,500 are in fact the most common size for national samples, especially when a fairly narrow range of characteristics is being studied.
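A quick check of that claim, assuming a sample proportion of 0.5 and applying the finite population correction; the figures are illustrative:

```python
import math

def standard_error(p: float, n: int, N: int) -> float:
    # Standard error of a sample proportion, with the finite population
    # correction sqrt((N - n)/(N - 1)) applied.
    return math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))

# The same 2,500-person sample is almost equally reliable whether the
# population numbers 100,000 or one million.
for N in (100_000, 1_000_000):
    print(N, round(standard_error(0.5, 2_500, N), 5))  # 0.00987 vs 0.00999
```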
There are a variety of sample designs. A random sample, or simple random sample, is one in which each case has an equal chance (or equal probability) of selection, so that the techniques of probability statistics can then be applied to the resulting information. A common variation on this is the stratified random sample: the population being studied is first divided into sub-groups or strata, and random sampling is applied within the strata. For example, random sampling might be applied to both the male and female groups of a population of political representatives, but using a sampling fraction of one person in twenty from the numerous male group and a sampling fraction of one person in two from the relatively small female group. Another common variation is two-stage or multi-stage (also known as complex) sampling. For example, random sampling is first used to select a limited number of local areas for a survey, and then a second stage of random sampling is applied to selecting persons or households or companies within the random sample of areas. The two stages can be extended to three or more stages, if necessary, so long as the eventual sample remains large enough to support analysis. All these sampling designs use random sampling in the final selection process, producing a list of persons from the electoral register, household addresses, company names, or other cases which constitute the final issued sample. All of them must be included in the study, with no substitutions allowed, in sharp contrast with the procedures for obtaining quota samples. For this reason, interviewers working on a random sample survey will exert great effort to persuade potential respondents to participate in the study. Failure to achieve interviews with the complete sample can produce non-response bias in the resulting data. The calculation of sampling errors for complex sample designs is statistically far more complicated than in the case of simple random samples.
Once the sampling fraction and sample size are known, probability theory provides the basis for a whole range of statistical inferences to be made about the characteristics of the universe, from the observed characteristics of the sample drawn from it. The standard deviation (see VARIATION) of the distribution of sample means, which is referred to as the standard error of the means for any given characteristic (such as age), can be calculated to assess the reliability of the sample data. Large standard errors reduce our confidence that the sample is fully representative of the complete universe. Similarly, the probability of two samples yielding different measures, and the probability of obtaining particular values of correlation coefficient or other measures of association, can all be calculated. Most of the relevant calculations and significance tests are supplied in the SPSS software package. Statistics textbooks supply details of the underlying calculations.
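As a small illustration of the standard error of a mean, using invented ages for a single sample:

```python
import statistics

ages = [23, 31, 45, 52, 38, 29, 61, 47, 34, 40]  # hypothetical sample
n = len(ages)

# Estimated standard error of the mean: sample standard deviation / sqrt(n).
se = statistics.stdev(ages) / n ** 0.5
print(round(statistics.fmean(ages), 1), round(se, 2))
```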
It must be emphasized that textbooks on sampling and probability statistics are written by statisticians, and refer exclusively to the case of the single random sample on a topic on which the statistician or researcher is entirely ignorant, having absolutely no substantive information other than that supplied by the sample. Deductions and inferences are therefore restricted to those that can be calculated mathematically. It is rare for a sociologist or other social scientist to be in this position. Good researchers bring a great deal of substantive knowledge to bear on assessing the validity and reliability of survey results, and they supplement statistical measures with other methods for increasing confidence in the reliability of sample survey results, and the interpretations placed on them. These methods include triangulation; repeat surveys (as illustrated by opinion polls); literature surveys which yield information on earlier replications; as well as theoretical assessments. Statistical measures of reliability, association, or significance are not the same as assessments of the substantive importance of a result. Social surveys can sometimes be over-engineered, in seeking (for example) to establish whether the exact incidence of something is 31 per cent or 36 per cent, whereas in practice all that matters is whether it is about one-third or about one in thirty.
SAMPLING
In many disciplines, there is often a need to describe the characteristics of some large entity, such as the air quality in a region, the prevalence of smoking in the general population, or the output from a production line of a pharmaceutical company. Due to practical considerations, it is impossible to assay the entire atmosphere, interview every person in the nation, or test every pill. Sampling is the process whereby information is obtained from selected parts of an entity, with the aim of making general statements that apply to the entity as a whole, or an identifiable part of it. Opinion pollsters use sampling to gauge political allegiances or preferences for brands of commercial products, whereas water quality engineers employed by public health departments will take samples of water to make sure it is fit to drink. The process of drawing conclusions about the larger entity based on the information contained in a sample is known as statistical inference.
There are several advantages to using sampling rather than conducting measurements on an entire population. An important advantage is the considerable savings in time and money that can result from collecting information from a much smaller population. When sampling individuals, the reduced number of subjects that need to be contacted may allow more resources to be devoted to finding and persuading nonresponders to participate. The information collected using sampling is often more accurate, as greater effort can be expended on the training of interviewers, more sophisticated and expensive measurement devices can be used, repeated measurements can be taken, and more detailed questions can be posed.
DEFINITIONS
The term "target population" is commonly used to refer to the group of people or entities (the "universe") to which the findings of the sample are to be generalized. The "sampling unit" is the basic unit (e.g., person, household, pill) around which a sampling procedure is planned. For instance if one wanted to apply sampling methods to estimate the prevalence of diabetes in a population, the sampling unit would be persons, whereas households would be the sampling unit for a study to determine the number of households where one or more persons were smokers. The "sampling frame" is any list of all the sampling units in the target population. Although a complete list of all individuals in a population is rarely available, an alphabetic listing of residents in a community or of registered voters are examples of sampling frames.
SAMPLING METHODS
The general goal of all sampling methods is to obtain a sample that is representative of the target population. In other words, apart from random error, the information derived from the sample is expected to be the same had a complete census of the target population been carried out. The procedures used to select a sample require some prior knowledge of the target population, which allows a determination of the size of the sample needed to achieve a reasonable estimate (with accepted precision and accuracy) of the characteristics of the population. Most sampling methods attempt to select units such that each has a definable probability of being chosen. Methods that adopt this approach are called "probability sampling methods." Examples of such methods include simple random sampling, systematic sampling, stratified sampling, and cluster sampling.
A random sample is one where every person (or unit) in the population from which the sample is drawn has some chance of being included in it. Ideally, the selections that make up the sample are made independently; that is, the choice to select one unit will not affect the chance of another unit being selected. A sample selected so that each unit has an equal probability of being chosen is referred to as a simple random sample.
Systematic random sampling involves deciding what fraction of the target population is to be sampled, and then compiling an ordered list of the target population. The ordering may be based on the date a patient entered a clinic, the last surname of patients, or other factors. Then, starting at the beginning of the list, the initial sample unit is randomly selected from within the first k units, and thereafter every kth individual is sampled. Typically, the integer k is estimated by dividing the size of the target population by the desired sample size. This method of sampling is easy to implement in practice, and the sampling frame can be compiled as the study progresses.
A stratified random sample divides the population into distinct nonoverlapping subgroups (strata) according to some important characteristics (e.g., age, income) and then a random sample is selected within each subgroup. The investigator can use this method to ensure that each subgroup of interest is represented in the sample. This method generally produces more precise estimates of the characteristics of the target population, unless very small numbers of units are selected within individual strata.
Cluster sampling may be used if the study units form natural groups or if an adequate list of the entire population is difficult to compile. In a national survey, for example, clusters may comprise individuals in a localized geographic area. The clusters or regions are selected, preferably at random; the persons in each selected region are then enumerated, and random samples are drawn from them. Because sampling is performed at multiple levels, this method is sometimes referred to as multistage sampling.
With nonprobability sampling methods, the probability of being included in the sample is unknown. Examples of this approach include convenience samples and volunteers. These types of samples are prone to bias and cannot be assumed to be representative of the target population. For example, people who volunteer are frequently different in many respects from those who do not. Tests of hypotheses and statistical inference concerning the sampled units and the target population can only be applied with probability sampling methods; there is no way to assess the validity of samples obtained using nonprobability sampling strategies.
VALIDITY AND SOURCES OF ERROR
The distribution of values in any sample, no matter how it is selected, will differ from the distribution in another sample chosen by chance alone. The larger the sample, the more likely it is that the sample reflects the characteristic of interest in the target population. However, there are sources of error not related to sampling that may bias comparisons between the sampled units and the target population. First, coverage error (selection bias) may arise when the sampling frame does not fully cover the target population. Second, nonresponse bias may occur when sampled individuals cannot be reached or will not provide the information requested. Bias is present if respondents differ systematically from the individuals who do not respond. Finally, the measuring device may not be able to accurately determine the characteristics being measured.
Paul J. Villeneuve
(see also: Statistics for Public Health; Stratification of Data; Survey Research Methods)
sampling
1. (time quantization) A process by which the value of an analog, or continuous, signal is “examined” at discrete fixed intervals of time. The resulting sampled value will normally be held constant until the next sampling instant, and may be converted into a digital form using an A/D converter for subsequent processing by a computer.
The rate at which a given analog signal is sampled must be a certain minimum value, dependent upon the bandwidth of the analog signal; this ensures that none of the information in the analog signal is lost. The sampling rate may also affect the stability of an analog system if the system is to be controlled by a computer. See also Nyquist's criterion.
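A minimal sketch of such time quantization, using a sine wave as a stand-in for the analog signal (a real system would read the values from an A/D converter); the frequencies are arbitrary choices for the example:

```python
import math

signal_hz = 5.0         # bandwidth of the "analog" signal
sample_rate_hz = 50.0   # comfortably above the Nyquist rate of 2 x 5 Hz
interval = 1.0 / sample_rate_hz

# Values examined at discrete, fixed instants t = 0, T, 2T, ...
samples = [math.sin(2 * math.pi * signal_hz * k * interval) for k in range(20)]
print([round(s, 3) for s in samples])
```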
2. The act of selecting items for study in such a way that the measurements made on the items in the sample will provide information about similar items not in the sample. Items can be people, machines, periods of time, fields of corn, games of chance, or whatever is being studied. Sample size is the number of items included in the sample. If the variance of the measurement (see measures of variation) is approximately known, the variance of the sample mean is the population variance divided by the sample size. This formula can then be used to indicate an appropriate sample size.
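For instance, a sketch of one common use of that formula, where the 1.96 multiplier for 95 percent confidence and the illustrative numbers are assumptions added here:

```python
import math

def required_sample_size(sigma: float, margin: float, z: float = 1.96) -> int:
    # Var(sample mean) = sigma**2 / n, so keeping z * sigma / sqrt(n)
    # within the desired margin of error gives n >= (z * sigma / margin)**2.
    return math.ceil((z * sigma / margin) ** 2)

print(required_sample_size(sigma=15.0, margin=2.0))  # 217
```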
A population is a complete set of items about which information is required. It must be defined before selecting the sample or results may be ill-defined. The sample is the basis for inference about probability distributions of measurements on the population. Problems of sampling include avoidance of bias and selection of enough samples to ensure adequate precision.
Random sampling is the process that results in each item having the same probability of inclusion in the sample. Items may be selected with the aid of tables of random numbers or with mechanical devices such as cards or coins.
Systematic sampling selects items in some regular manner. It is valid when the order in which items are encountered is irrelevant to the question under study, but can be an unintentional source of bias.
sampling
sam·pling / ˈsampling/ • n. 1. the taking of a sample or samples. 2. the technique of digitally encoding music or sound and reusing it as part of a composition or recording.