A t-test cookbook

As an example, we will use problem 11.9 from the book, in which we have four observations:

1.6, 1.7, 1.8, 1.9

To calculate the mean, x̄, from a sample of data, take the sum of the observations and divide by n (we knew that). In the example, we get x̄ = 1.75.
To calculate the sample variance, s², we take the sum of squared deviations from the mean and divide by **n-1**. In the example, we take (1.6 - 1.75)² + (1.7 - 1.75)² + (1.8 - 1.75)² + (1.9 - 1.75)², which equals .05, and divide by 3, to get .01667.
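The mean-and-variance arithmetic above can be checked with a short sketch using only the Python standard library; the numbers are the ones from the example.

```python
# Mean and sample variance of the four observations from the example.
from statistics import mean, variance

data = [1.6, 1.7, 1.8, 1.9]

xbar = mean(data)        # sum of observations divided by n -> 1.75
s2 = variance(data)      # sum of squared deviations divided by n - 1 -> .01667
```

Note that `statistics.variance` already divides by n-1 (the sample variance); `statistics.pvariance` is the version that divides by n.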

To find t* for a 90 percent confidence interval for the mean, we go to the table and use the degrees of freedom, n-1, which equals 3 in the example. We look along the bottom of the table for 90 percent confidence, and then read up that column to the row with the correct number of degrees of freedom. With 3 degrees of freedom, t* equals 2.353.

To calculate the margin of error for a confidence interval, we take t* and multiply it by s/(square root of n). In the example, s = .129 and the square root of n = 2, so we multiply 2.353 by .0645 to get .1518.

To calculate a confidence interval, we add and subtract the margin of error from the sample mean. In the example, the interval is 1.75 ± .152, or 1.60 to 1.90.
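The confidence-interval recipe above can be sketched in a few lines; t* = 2.353 is the 90 percent, 3-degrees-of-freedom table value quoted in the text.

```python
# 90% confidence interval for the mean of the example data.
import math

data = [1.6, 1.7, 1.8, 1.9]
n = len(data)
xbar = sum(data) / n
# sample standard deviation: sqrt of (sum of squared deviations / (n - 1))
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))   # about .129

t_star = 2.353                       # from the t table, df = n - 1 = 3
margin = t_star * s / math.sqrt(n)   # about .152

lower, upper = xbar - margin, xbar + margin   # about 1.60 to 1.90
```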

For a hypothesis test, we set a significance level, α. For example, we might set α = .05 for a 5 percent significance level. If we are doing a one-tail test, then 5 percent is the same as the upper-tail probability, p, and we read the critical value t* from the table. If there are 3 degrees of freedom, then t* = 2.353.

We do a one-tail test when we test a null hypothesis that the mean equals μ₀ against an alternative hypothesis that the true mean is greater than μ₀. (Or we do a one-tail test against the alternative that the true mean is less than μ₀.)

We do a two-tail test when the alternative hypothesis is that the mean does not equal μ₀. In that case, we divide the significance level by 2 before we look up the critical value t*. That is, if the significance level is 5 percent, then we look up the upper-tail probability of 2.5 percent, or .025.

The actual t-statistic (as opposed to t*) is calculated by comparing the sample mean to the null hypothesis value, divided by the standard error of the mean. Suppose that in our example the null hypothesis is that the mean is 1.65. We calculate t as the difference between x̄ and 1.65, divided by s/(square root of n). In this example, we get (1.75 - 1.65)/(.129/2) = 1.55.

Using the table at the back of the book, the P-value for the data can only be bracketed, not read off exactly (it can be computed exactly using the calculator with stats/t-test/stats). Continuing our example, we take the row for 3 degrees of freedom and try to find the values inside the table that bracket 1.55. The values are 1.250 and 1.638. Reading up to the top of the table, these correspond to upper-tail probabilities of .15 and .10, respectively, which means that the P-value falls somewhere between .10 and .15.
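The t-statistic and the table-bracketing step can be sketched as follows; the bracket values (1.250, 1.638) and their tail probabilities (.15, .10) are the df = 3 row quoted in the text.

```python
# t-statistic for H0: mean = 1.65, and a table-style bracket for the P-value.
import math

data = [1.6, 1.7, 1.8, 1.9]
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

mu0 = 1.65                             # null-hypothesis mean
t = (xbar - mu0) / (s / math.sqrt(n))  # about 1.55

# df = 3 row of the table: (critical value, upper-tail probability)
row = [(1.250, 0.15), (1.638, 0.10), (2.353, 0.05), (3.182, 0.025)]
p_above = [p for crit, p in row if crit <= t][-1]   # .15 (from 1.250)
p_below = [p for crit, p in row if crit > t][0]     # .10 (from 1.638)
# so the P-value falls between p_below and p_above, i.e. between .10 and .15
```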

Earlier, we saw that with 3 degrees of freedom and a 5 percent significance level, t* for a one-tail test is 2.353. Since we just calculated t as 1.55, and this is less than t*, we fail to reject the null hypothesis.

Since the P-value falls between .10 and .15, we would reject the null hypothesis at a significance level of 15 percent but not at a level of 10 percent.

To reject the hypothesis that the mean is 1.65 at a 5 percent level, we would need a value of t of at least 2.353. We want to see what this cutoff means in terms of natural units, which means that we multiply t* by the standard error of the mean and add the result to the null-hypothesis mean. In this case, the cutoff in terms of natural units is 1.65 + 2.353(s/(square root of n)), or 1.65 + 2.353(.129/2) = 1.80.

Now that we know the cutoff for rejecting the null hypothesis, we can calculate the power of the test against a specific alternative hypothesis. For example, suppose that the alternative hypothesis is that the true mean is 2.1.

We need to know the probability that we would observe a sample mean greater than or equal to the cutoff of 1.80 under the assumption that 2.1 is the true mean. To do this, we calculate the value of t as (1.80 - 2.1)/(s/(square root of n)), which is -.3/(.129/2) = -4.65. By symmetry, the probability of falling below the cutoff equals the upper-tail probability beyond 4.65, so we try to find the upper-tail probabilities that bracket 4.65 in the row with 3 degrees of freedom. The two values closest to 4.65 are 4.541 and 5.841, corresponding to .01 and .005, which means that the power of the test against this specific alternative is somewhere between .99 and .995.
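The cutoff-and-power arithmetic can be checked numerically. This sketch recomputes the cutoff from the formula 1.65 + t*(s/√n) and brackets the power using the df = 3 table row quoted in the text.

```python
# Cutoff in natural units, and a table-style bracket for the power of the test.
import math

s, n = 0.129, 4
se = s / math.sqrt(n)            # standard error, about .0645
mu0, t_star = 1.65, 2.353        # null mean; one-tail 5% critical value, df = 3

cutoff = mu0 + t_star * se       # reject H0 when xbar exceeds this; about 1.80

mu_alt = 2.1                     # the specific alternative hypothesis
t_alt = (cutoff - mu_alt) / se   # about -4.65

# Bracket |t_alt| in the df = 3 table row; power = 1 - upper-tail probability.
row = [(2.353, 0.05), (3.182, 0.025), (4.541, 0.01), (5.841, 0.005)]
tail_hi = [p for crit, p in row if crit <= abs(t_alt)][-1]   # .01
tail_lo = [p for crit, p in row if crit > abs(t_alt)][0]     # .005
power_lo, power_hi = 1 - tail_hi, 1 - tail_lo                # .99 to .995
```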

When we are dealing with a binomial type of variable, we do not need to worry about t-tests. That is because we can estimate the population variance as p(1-p). Therefore, the standard error of the sample proportion is the square root of p(1-p)/n, and we go back to using the normal distribution, or z-statistics.

For example, suppose that we estimate a sample proportion of .47 with a sample size of 50. Then our estimate of the standard error of the mean is the square root of (.47)(.53)/50, or .07

To calculate a 90 percent confidence interval, we look at the table for z* for 90%, which is the familiar 1.645.

To calculate the margin of error for this confidence level, we multiply z* by the standard error of the mean: 1.645 times .07 = .115.

Suppose that the null hypothesis is that the proportion is .35. We can calculate a z-statistic for our data by taking (.47 - .35)/.07, or 1.71.

With a z-statistic of 1.71, we can reject the null hypothesis in favor of a one-tailed alternative at a 5 percent level, but not at a 2.5 percent level. (We cannot reject the null hypothesis against a two-tailed alternative at a 5 percent level).
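The z-test above can be sketched as follows; as in the text, the standard error is computed from the sample proportion (the unrounded value gives z of about 1.70 rather than 1.71).

```python
# z-statistic for H0: proportion = .35, and the 5% critical-value comparisons.
import math

p_hat, n, p0 = 0.47, 50, 0.35
se = math.sqrt(p_hat * (1 - p_hat) / n)   # about .07, as in the text
z = (p_hat - p0) / se                     # about 1.70

reject_one_tail_5pct = z > 1.645          # True: reject for a one-tail test
reject_two_tail_5pct = abs(z) > 1.96      # False: cannot reject two-tailed
```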