### [A, SfS] Chapter 5: Confidence Intervals: 5.3: CI for mean difference

### Confidence Interval for the Population Mean Difference for a Quantitative Variable

In this lesson, you will learn how to estimate the average difference based on paired data.


Another setting in which we might need a confidence interval involves *paired data*, that is, paired measurements of one continuous variable #X# on a random sample from some population. The paired data could represent measurements made:

- On the same subjects *before* and *after* some event (repeated measures);
- On *dyads*, i.e., two separate subjects that are bound together in some way, such as romantic couples, identical twins, mother and child, etc. In this case the subject is the dyad, *not* the individuals in the dyad;
- On the same subjects exposed to two different conditions at two different times (not necessarily in the same order), such as watching a happy video and watching a sad video;
- On the same subjects at two different (but related) locations, such as the left arm and right arm of a person, or the front tire and back tire of a bicycle.

In such cases the paired measurements are not independent, so we cannot directly use a method that requires independent measurements. But usually what we actually want to estimate is the *mean difference* in the paired measurements if they were to be recorded for the entire population, which we denote #\mu_D#.

Research questions might be:

- Among primary-school children, is there a difference between the pupil dilation in the left eye and the pupil dilation in the right eye after exposure to light?
- Among identical twins, is there a difference in IQ scores?
- Do heart patients experience a decrease in systolic blood pressure after using a new treatment?
- Are heart rates more elevated when watching a scary clip from a horror movie than when watching an action clip from a super-hero movie?

In each of these scenarios, we would be interested in the *difference* between the paired measures, rather than the measures themselves.

**Mean and Variance of the Difference**

Suppose #(X_{1A},X_{1B}),(X_{2A},X_{2B}),...,(X_{nA},X_{nB})# represents paired measurements of a quantitative variable #X# on a random sample of #n# subjects (which could be dyads) from some population.

For #i = 1,...,n#, let \[D_i = X_{iA} - X_{iB}\] Then \[\bar{D} =\cfrac{1}{n} \sum_{i=1}^n D_i \,\,\,\,\,\,\,\,\,\,\,\,\,\, \text{and} \,\,\,\,\,\,\,\,\,\,\,\,\,\, s^2_D = \cfrac{1}{n-1}\sum_{i=1}^n (D_i - \bar{D})^2\]

are the **sample mean difference** and **sample variance of the differences**, respectively.
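These two formulas are easy to compute in code. The following Python sketch (the function name is my own; the lesson itself uses R) implements them directly:

```python
def diff_stats(x_a, x_b):
    """Sample mean difference and sample variance of the differences
    for paired measurements (X_iA, X_iB)."""
    d = [a - b for a, b in zip(x_a, x_b)]                # D_i = X_iA - X_iB
    n = len(d)
    d_bar = sum(d) / n                                   # sample mean difference
    s2_d = sum((di - d_bar) ** 2 for di in d) / (n - 1)  # divisor n - 1, per the formula
    return d_bar, s2_d
```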


The paired-data setting has thus been converted into the previous setting, where we had single measurements of a continuous variable on a random sample from a single population, so we can use the same method, depending on whether or not we can assume that the differences have a normal distribution on the population, and whether or not the sample size is large. It is very unlikely that the population variance #\sigma^2_D# of the difference is known, so #s^2_D# is used in its place.

**Confidence Interval for the Population Mean Difference**

If the sample size #n# is considered large enough for the Central Limit Theorem to apply, a #(1 - \alpha)100\%# confidence interval for the population mean difference #\mu_D# is computed as:

\[(l,u) = \bigg(\bar{D} - z_{\alpha /2}\cfrac{s_D}{\sqrt{n}},\,\,\,\,\,\,\,\bar{D} + z_{\alpha /2}\cfrac{s_D}{\sqrt{n}}\bigg)\]
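As an illustrative sketch (the function name and the use of Python's standard library are my own; the lesson uses R), this large-sample interval can be computed with `statistics.NormalDist`:

```python
from math import sqrt
from statistics import NormalDist

def z_ci_mean_diff(d, alpha=0.05):
    """Large-sample (1 - alpha)100% CI for the population mean difference,
    given the list of paired differences d. Assumes n is large enough
    for the Central Limit Theorem to apply."""
    n = len(d)
    d_bar = sum(d) / n
    s_d = sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))
    z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2}; about 1.96 for 95%
    margin = z * s_d / sqrt(n)
    return d_bar - margin, d_bar + margin
```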

If the sample size #n# is not large but we can assume that the differences are normally distributed on the population, a #(1 - \alpha)100\%# confidence interval for the population mean difference #\mu_D# is computed as:

\[(l,u) = \bigg(\bar{D} - t_{n-1,\alpha /2}\cfrac{s_D}{\sqrt{n}},\,\,\,\,\,\,\,\bar{D} + t_{n-1,\alpha /2}\cfrac{s_D}{\sqrt{n}}\bigg)\]
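Similarly, the small-sample t-based interval can be sketched in Python using SciPy's `t.ppf` for the quantile (again, the function name is my own and the lesson itself uses R):

```python
from math import sqrt
from scipy import stats

def t_ci_mean_diff(d, alpha=0.05):
    """(1 - alpha)100% t-based CI for the population mean difference,
    given the list of paired differences d. Assumes the differences
    are normally distributed in the population."""
    n = len(d)
    d_bar = sum(d) / n
    s_d = sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # t_{n-1, alpha/2}
    margin = t_crit * s_d / sqrt(n)
    return d_bar - margin, d_bar + margin
```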

For example, an education researcher wants to find out if scores on a standardized mathematics test improve after students use an online platform to review the tested mathematics content. She collects the following measurements on a sample of #9# students:

| Student | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Before | 78 | 64 | 71 | 53 | 82 | 69 | 70 | 47 | 66 |
| After | 79 | 63 | 76 | 60 | 82 | 74 | 73 | 48 | 63 |


Then she computes the differences (After - Before):

| Student | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Differences | 1 | -1 | 5 | 7 | 0 | 5 | 3 | 1 | -3 |


The sample mean difference is #\bar{D} = 2#, with a standard deviation of #s_D \approx 3.24#. Since the sample size is small, she must assume that the population distribution of the differences is normal. She wants a #95\%# CI for the mean difference #\mu_D#.

For a #95\%# confidence level, #\alpha = 0.05#, so the needed quantile is #t_{n-1,\alpha/2} = t_{8,0.025}#.

This quantile can be calculated in #\mathrm{R}# using:

> qt(0.025, 8, lower.tail = FALSE)

to get #2.306#.

So the margin of error of the #95\%# CI is:

\[t_{n-1, \alpha/2}\, \cfrac{s_D}{\sqrt{n}} = (2.306)\bigg(\cfrac{3.24}{\sqrt{9}}\bigg) \approx 2.49\]

And the #95\%# CI is: \[(l,u) = (2 - 2.49,\,\,\,\,\,\,\,2+2.49) = (-0.49,\,\,\,\,\,\,\,4.49)\]
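The arithmetic above can be verified in a few lines of Python (a check added here; the lesson's own computation uses R only for the quantile):

```python
from math import sqrt

d = [1, -1, 5, 7, 0, 5, 3, 1, -3]   # differences (After - Before)
n = len(d)
d_bar = sum(d) / n                   # sample mean difference: 2.0
s_d = sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))     # ~3.24
t_crit = 2.306                       # t_{8, 0.025}, from the R call above
margin = t_crit * s_d / sqrt(n)      # ~2.49
ci = (round(d_bar - margin, 2), round(d_bar + margin, 2))  # (-0.49, 4.49)
```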

Since this #95\%# CI includes zero and negative values, it is plausible that the population mean difference is zero or negative, meaning that the online platform may have had no effect on the scores, or even a harmful one. However, most of the interval lies above zero, so it is also quite plausible that the platform improved test scores. In short, the study is inconclusive.
