AP Statistics Lectures
by Arnold Kling

Sampling Distributions

Sampling is part of the scientific process. To estimate the chemical properties of a substance, chemists use a sample of that substance. To estimate the political opinions of a population, pollsters survey a sample from that population.

Unknown Parameters, Observations, Statistics, and Samples

We need to distinguish between a parameter and a statistic. A parameter is a characteristic of the population that, in practice, we could only know if we were omniscient. A statistic is a quantity computed from observed data, which we can use to estimate a parameter.

For example, many Democrats claim that more Florida voters intended to vote for Gore than for Bush, but because of the Butterfly Ballot and other factors their intentions were not counted. From a statistical perspective, this claim cannot be verified with certainty. The number of voters who intended to vote for Gore is a parameter. You would have to be omniscient to know what it was.

On the other hand, there are plenty of statistics that we can use to estimate the intent of voters. The machine count is one statistical estimate. Any of the manual recounts represents another estimate.

Most statistical estimates are based on samples. For example, the networks originally "called" Florida for Gore based on "exit polls," which were conducted with a small sample of voters as they left the voting booths. We say that the networks used a statistic (exit polls) to estimate the parameter (the proportion of people who voted for Gore) that can only be known by someone who is omniscient.

The result of a single experiment designed to pin down an unknown parameter is called an observation. For example, if I cool water and take a measurement of the temperature at which it freezes, that is an observation. If a TV network research organization interviews a single voter about whether she voted for Gore or Bush, then that is an observation.

A set of observations is called a sample. If the TV network research organization asks 3,500 voters whether they voted for Gore or Bush, then the research organization has a sample of 3,500 observations.

Biased Statistics

A statistic whose expected value is equal to the parameter is said to be unbiased. Clearly, this is a desirable characteristic of a statistic. Another desirable characteristic is low variability. If a statistic is unbiased and has low variability, then it provides a reliable estimate of the parameter.

How could a statistic be biased? There are many ways, and we will discuss them more next semester when we talk about designing a study.

Every ten years, the United States government takes a census of the population. The intent is to have the head of every household fill out a form with information about the household. This information is used to count the population.

Critics of the United States census complain that blacks respond to the census at a lower rate than whites. Do you think that census takers are biased against blacks? Do you think that the census could be biased in the statistical sense of the term?

Here is a well-known example of biased statistics. Sometimes, you will see a report claiming that men have, say, 5 times as many heterosexual partners as women. Such reports appear because journalists take biased statistics at face value.

Suppose that an omniscient voyeur makes a note any time a man and a woman hook up for the first time. The omniscient voyeur adds up all of the different couples who have sex, and we call this number n. All of the heterosexual relationships in the population are included in n.

If you divide n by the number of males in the population, then you have the true parameter for the average number of sex partners per male. If you divide n by the number of females in the population, then you have the true parameter for the average number of sex partners per female. If the number of females and the number of males in the population are even roughly equal, then it is impossible for males to have 5 times the number of sex partners as females.
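To make the arithmetic concrete, here is a quick sketch in Python with made-up numbers (an equal population of 100,000 males and 100,000 females, and n = 1,000,000 pairings). The ratio of the two averages is always the number of females divided by the number of males:

    # Made-up numbers, purely for illustration.
    n = 1_000_000        # total couples counted by the omniscient voyeur
    males = 100_000      # number of males in the population
    females = 100_000    # number of females in the population

    avg_partners_per_male = n / males      # 10.0
    avg_partners_per_female = n / females  # 10.0

    # The ratio of the two averages is females/males, so a 5-to-1 gap
    # would require five times as many females as males.
    print(avg_partners_per_male / avg_partners_per_female)  # 1.0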

It does seem to be the case that if you take a survey of men and women you find that on average men report many more sex partners than women. Because this is mathematically absurd, it would seem safe to say that men have an upward bias in their report of sexual partners, or women have a downward bias, or both.

Calculating Sampling Variability of Sample Proportion

The variability of a parameter estimate goes down as the sample size goes up. This relationship is systematic, and it forms the basis for most of the reporting about statistical research. Concepts such as confidence intervals, hypothesis tests, and tests of significance all rely on the theory of sampling variability.

At a party, I could ask a sample of girls to dance. I could take the percentage of girls who accept as an estimate of the probability that any girl at the party will dance with me.

The estimate of the probability that a girl will dance with me is written as p̂, pronounced "p hat." What is its variability?

What we are doing is taking the average of a set of observations from the binomial distribution. If X is a binomial random variable with probability of success p, and n is the number of observations, then our estimate, p̂, is equal to X/n.

In chapter 7, we learned that if Y = X/n, then E(Y) = E(X)/n and the variance of Y = (1/n²)(variance of X). Using the formulas for the mean and variance of a binomial, we have

E(p̂) = np/n = p
Var(p̂) = np(1−p)/n² = p(1−p)/n

Note:

  1. The variance of the sum of n iid (independent, identically distributed) variables goes up with n, but the variance of the average of n iid variables goes down with n.

  2. As usual, the standard deviation is the square root of the variance.
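Here is a minimal simulation sketch of these formulas, assuming (purely for illustration) a true acceptance probability of p = 0.3 and samples of n = 50 girls. The empirical mean and variance of p̂ should land close to p and p(1−p)/n:

    import numpy as np

    rng = np.random.default_rng(0)

    p = 0.3            # assumed true probability of a "yes" (made up)
    n = 50             # number of girls asked at each party
    trials = 100_000   # number of simulated parties

    # Each trial draws X ~ Binomial(n, p) and records p-hat = X/n.
    x = rng.binomial(n, p, size=trials)
    p_hat = x / n

    print("mean of p-hat:    ", p_hat.mean())     # close to p = 0.3
    print("variance of p-hat:", p_hat.var())      # close to p(1-p)/n
    print("p(1-p)/n:         ", p * (1 - p) / n)  # = 0.0042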

Sampling Distribution of Mean

Often, we estimate an unknown parameter by taking its average value over a number of observations. For example, we could ask a sample of households in the zip code 20902 to tell us their annual income. We could use the average of these observations as an estimate of average household income in the entire zip code.

The values of the unknown parameters, the true mean and variance of income in the zip code, are μX and σ²X, respectively. If we take a sample and use the average income in the sample as an estimate, μ̂, then the distribution of μ̂ is given by

E(μ̂) = μX
Var(μ̂) = σ²X/n

Again, the variance of the estimate goes down with the sample size. The standard deviation of the estimate will go down with the square root of the sample size.
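A short simulation can illustrate the square-root rule, using a made-up income distribution with μX = 80,000 and σX = 30,000. Each time the sample size quadruples, the standard deviation of the sample mean should roughly halve:

    import numpy as np

    rng = np.random.default_rng(0)

    mu_x, sigma_x = 80_000, 30_000  # made-up mean and SD of household income

    for n in (25, 100, 400):
        # 10,000 samples of size n; compute the mean of each sample.
        sample_means = rng.normal(mu_x, sigma_x, size=(10_000, n)).mean(axis=1)
        print(n, sample_means.std(), sigma_x / np.sqrt(n))

The empirical standard deviation tracks σX/√n: about 6,000 at n = 25, 3,000 at n = 100, and 1,500 at n = 400, halving each time n quadruples.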

It is interesting and important that the size of the underlying population does not affect the variability of the sampling estimate. The standard deviation of the mean of a sample of 100 observations will be the same, regardless of whether the underlying population is 10,000 people or 10,000,000 people.
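This, too, can be checked by simulation. The sketch below draws samples of 100, without replacement, from two made-up income populations, one 100 times larger than the other; the standard deviations of the resulting sample means come out essentially identical (the same holds for a population of 10,000,000, which is merely slower to simulate):

    import numpy as np

    rng = np.random.default_rng(0)

    # Two made-up income populations, one 100 times larger than the other.
    small_pop = rng.lognormal(11, 0.8, size=10_000)
    large_pop = rng.lognormal(11, 0.8, size=1_000_000)

    def sd_of_sample_mean(population, n=100, trials=1_000):
        # Draw `trials` samples of size n without replacement;
        # return the standard deviation of the sample means.
        means = [rng.choice(population, size=n, replace=False).mean()
                 for _ in range(trials)]
        return np.std(means)

    print("population 10,000:   ", sd_of_sample_mean(small_pop))
    print("population 1,000,000:", sd_of_sample_mean(large_pop))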

Asking a random sample of 500 people out of a population of 10,000,000 is more accurate than asking a random sample of 200 people out of a population of 100,000. Does that seem right to you? Why or why not?