# Against All Odds: Inside Statistics

## Glossary

### A

### Addition Rule

If *C* and *D* are mutually exclusive events, then *P*(*C* or *D*) = *P*(*C*) + *P*(*D*).

### Adequacy of a Linear Model

A line is adequate to describe the pattern in a set of data points provided the data have linear form. A residual plot is a good way of checking adequacy.

### Alternative Hypothesis or *H*_{a}

The claim in a significance test that we are trying to gather evidence for – the researcher’s point of view. The alternative hypothesis is contradictory to *H*_{0} and is judged the more plausible claim when *H*_{0} is rejected.

### ANOVA

Analysis of variance (ANOVA) is a technique used to analyze variation in data in order to test whether three or more population means are equal.

### Assumptions of the Linear Regression Model

- The observed response *y* for any value of *x* varies according to a normal distribution. Repeated responses (*y*-values) are independent of each other.
- The mean response, *μ*_{y}, has a straight-line relationship with *x*: *μ*_{y} = *α* + *βx*.
- The standard deviation of *y*, *σ*, is the same for all values of *x*.

### B

### Bar Chart

Graph of a frequency distribution for categorical data. Each category is represented by a bar whose area is proportional to the frequency, relative frequency, or percent of that category. If the categorical variable is ordinal, the logical order of the categories should be preserved in the bar chart.

### Between-Groups Variation

A measure of the spread of the group means about the grand mean, the mean of all the observations. It is measured by the mean square for groups, *MSG*.

### Biased Sample

A sample in which some individuals or groups from the population are less likely to be selected than others due to some attribute.

### Binomial Distribution

In a binomial setting with *n* trials and probability of success *p*, the distribution of *x* = the number of successes. Shorthand notation for this distribution is b(*n*, *p*). The probabilities *p*(*x*) for the binomial distribution with parameters *n* and *p* can be calculated using the following formula:

*p*(*x*) = [*n*!/(*x*!(*n* – *x*)!)] *p*^{x}(1 – *p*)^{n – x}, for *x* = 0, 1, 2, …, *n*
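
As a quick illustration, the binomial probability formula can be evaluated in plain Python (a sketch using only the standard library; the function name `binomial_pmf` is ours, not from the source):

```python
from math import comb  # n-choose-x

def binomial_pmf(x, n, p):
    """P(X = x) for a binomial random variable with parameters n and p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Probability of exactly 2 successes in 5 trials with p = 0.5:
# comb(5, 2) * 0.5**2 * 0.5**3 = 10 * 0.03125 = 0.3125
print(binomial_pmf(2, 5, 0.5))
```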

### Binomial Random Variable

The number of successes, *x*, in a binomial setting with *n* trials with probability of success *p*. The mean and standard deviation of a binomial random variable *x* can be calculated as follows:

*μ* = *np* and *σ* = √(*np*(1 – *p*))

### Binomial Setting

A setting in which there is a fixed number, *n*, of independent trials. Each trial can result in only one of two outcomes, success or failure, and the probability of success, *p*, is the same for each trial.

### Bivariate Data

Measurements or observations are recorded on two attributes for each individual or subject under study.

### Boxplot (or Box-and-Whisker Plot)

Graphical representation of the five-number summary. The basic boxplot consists of a box that extends from the first quartile to the third quartile with whiskers that extend from each box end to the minimum and maximum data values. The basic boxplot can be modified to include identification of mild and extreme outliers. (Unit 5)

### C

### Categorical Variable

Variable whose values are classifications or categories. Gender, occupation, and eye color are examples of categorical variables.

### Census

An attempt to gather information about every individual in a population.

### Center Line

The center line on a control chart is generally the target value or mean of the quality characteristic being sampled.

### Central Limit Theorem

If the sample size *n* is large (say *n* > 30), then the sampling distribution of the sample mean *x̄* of *n* independent observations from the same population has an *approximate normal* distribution. If the population mean and standard deviation are *μ* and *σ*, respectively, then *x̄* has an approximate normal distribution with mean *μ* and standard deviation *σ*/√*n*.
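
A small simulation makes the theorem concrete (a sketch with arbitrary choices: a uniform(0, 1) population, samples of size 40, and 2000 repetitions):

```python
import random
import statistics

random.seed(1)  # reproducible run

# Uniform(0, 1) population: mean 0.5, standard deviation sqrt(1/12) ≈ 0.289.
sample_means = [
    statistics.mean(random.random() for _ in range(40))  # one sample mean
    for _ in range(2000)                                 # many samples
]

# By the CLT the sample means should center near mu = 0.5 with
# standard deviation near sigma / sqrt(n) = 0.289 / sqrt(40) ≈ 0.046.
print(statistics.mean(sample_means), statistics.stdev(sample_means))
```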

### Chi-Square Test Statistic for Independence

The chi-square test for independence is used for categorical variables. For testing the null hypothesis *H*_{0}: no association between the variables or *H*_{0}: variables are independent, the chi-square test statistic is computed as follows:

*χ*² = Σ (observed count – expected count)² / expected count

where the sum is taken over all cells of the two-way table. If the null hypothesis is true, *χ*² will have a chi-square distribution with degrees of freedom (*r* – 1)(*c* – 1), where *r* and *c* are the number of rows and columns in the two-way table, respectively.
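
For a small two-way table, the expected counts and the chi-square statistic can be computed by hand; the sketch below uses made-up counts:

```python
# Hypothetical observed counts in a 2x2 two-way table.
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected count for each cell: (row total * column total) / grand total.
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Sum (observed - expected)^2 / expected over all cells.
chi_square = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(chi_square, df)  # 4.0 with df = 1
```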

### Common Cause Variation

Variation due to day-to-day factors that influence the process.

### Complement of an Event *A*

An event that consists of all the outcomes in the sample space that are not in *A*. If *B* is the complement of *A*, then *B* = not *A*.

### Complement Rule

For any event *C*, *P*(not *C*) = 1 – *P*(*C*).

### Complementary Events

Two events are complementary if they are mutually exclusive and combining their outcomes into a single set gives the entire sample space.

### Conditional Distribution

There are two sets of conditional distributions for a two-way table:

- distributions of the row variable for each fixed level of the column variable
- distributions of the column variable for each fixed level of the row variable

Conditional distributions provide one way to explore the relationship between the row and column variables.

### Confidence Interval

An interval estimate computed from sample data that gives a range of plausible values for a population parameter. The interval is constructed so that the value of the parameter will be captured between the endpoints of the interval with a chosen level of confidence.

### Confidence Interval for *μ* (*t*-interval)

When *σ* is unknown, the sample size *n* is small, and the population distribution is approximately normal, a *t*-confidence interval for *μ* is given by the following formula:

*x̄* ± *t*^{*}(*s*/√*n*)

where *t*^{*} is a *t*-critical value associated with the confidence level and determined from a *t*-distribution with *df* = *n* – 1 degrees of freedom.

### Confidence Interval for *μ* (*z*-interval)

When *σ* is known and either the sample size *n* is large or the population distribution is normal, a confidence interval for *μ* is given by the following formula:

*x̄* ± *z*^{*}(*σ*/√*n*)

where *z*^{*} is a *z*-critical value (from a standard normal distribution) associated with the confidence level.

### Confidence Interval for *p*

In situations where the sample size *n* is large, a confidence interval for the population proportion *p* is given by the following formula:

*p̂* ± *z*^{*}√(*p̂*(1 – *p̂*)/*n*)

where *p̂* is the sample proportion and *z*^{*} is the *z*-critical value (from a standard normal distribution) associated with the confidence level.
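
As a sketch (the helper name and the example counts are ours), the large-sample interval can be computed directly:

```python
from math import sqrt

def proportion_ci(successes, n, z_star=1.96):
    """Large-sample z confidence interval for a population proportion
    (z_star = 1.96 gives roughly 95% confidence)."""
    p_hat = successes / n
    margin = z_star * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical poll: 520 successes out of n = 1000.
lo, hi = proportion_ci(520, 1000)
print(round(lo, 3), round(hi, 3))  # roughly (0.489, 0.551)
```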

### Confidence Interval for Population Slope *β*

A confidence interval for the population slope *β* is given by the following formula:

*b* ± *t*^{*}*s*_{b}

where *t*^{*} is a *t*-critical value associated with the confidence level and determined from a *t*-distribution with *df* = *n* – 2; *b* is the least-squares estimate of the population slope calculated from the data, and *s*_{b} is the standard error of *b*.

### Confidence Level

A number that provides information on how much confidence we have in the method used to construct a confidence interval estimate of a population parameter. It is the long-run success rate (success means capturing the parameter in the interval) of the method used to construct the confidence interval.

### Confounding Factors

Two (or more) factors (explanatory variables) are confounded when their effects on a response variable are intertwined and cannot be distinguished from each other.

### Continuous Random Variable

A random variable that can take on values that include an interval. The number of possible distinct outcomes is uncountable; there are too many possible values to put them all in a list.

### Control Charts

Charts used to monitor the output of a process. The charts are designed to signal when the process has been disturbed so that it is now out of control or is about to go out of control.

### Control Group

A group in an experiment that does not receive the treatment under study. The control group could receive a placebo to hide the fact that no treatment is being given. In an active control group, the subjects receive what might be considered the existing standard treatment.

### Control Limits

The upper control limit (UCL) and lower control limit (LCL) on a control chart are generally set ±3 *σ*/√*n* from the center line.

### Convenience Sampling

A sampling design in which the pollster selects a sample that is easy to obtain, such as friends, family, co-workers, and so forth.

### Correlation

Denoted by *r*, correlation measures the direction and strength of a linear relationship between two quantitative variables. The formula for computing Pearson’s correlation coefficient is:

*r* = [1/(*n* – 1)] Σ [(*x*_{i} – *x̄*)/*s*_{x}][(*y*_{i} – *ȳ*)/*s*_{y}]
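
The formula translates directly into code; here is a sketch using only the standard library (the function name `pearson_r` is ours):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient for paired quantitative data."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    syy = sum((y - y_bar) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear, increasing: 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # perfectly linear, decreasing: -1.0
```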

### D

### Decision Rules

A set of rules that identify from a control chart when a process is becoming unstable or going out of control.

### Degrees of Freedom for Test for Independence

(*r* – 1)(*c* – 1), where the numbers *r* and *c* are the number of rows and columns in the two-way table, respectively.

### Dependent Events

Two events are dependent if the fact that one of the events occurs does affect the probability that the other occurs. Events that are not dependent are independent.

### Dependent Variable

A variable whose outcome we would like to predict based on another variable (independent variable). The dependent variable is always plotted on the vertical axis of a scatterplot. Also called a response variable.

### Deviations from the Mean

The deviations of each data value from the sample mean: *x*_{1} – *x̄*, *x*_{2} – *x̄*, …, *x*_{n} – *x̄*.

### Discrete Random Variable

A random variable that can take on only a countable number of distinct values – in other words, it is possible to list all possible values. Any random variable that can take on only a finite number of values is a discrete random variable.

### Distribution

Description of the possible values a variable assumes and how often these values occur.

### Dotplot

Graphical display of quantitative data in which each observation (or a group of a specified number of observations) is represented by a dot above a horizontal axis.

### Double-Blind Experiment

An experiment in which neither the subjects nor the individuals measuring the response know which subjects are assigned to which treatment.

### E

### Empirical Rule (68-95-99.7% Rule)

Rule that gives the approximate percentage of data that fall within one standard deviation (68%), two standard deviations (95%), and three standard deviations (99.7%) of the mean. This rule should be applied only when the data are approximately normal.

### Estimated Regression Line

The estimated regression line for the linear regression model is the least-squares line, *ŷ* = *a* + *bx*.

### Expected Counts

The number of observations that would be expected to fall into each cell (or class) of a two-way table if the null hypothesis is true. The expected counts for the chi-square test for independence are computed as follows:

expected count = (row total × column total) / grand total

### Experimental Study

A study in which researchers deliberately apply some treatment to the subjects in order to observe their responses. The purpose is to study whether the treatment causes a change in the response.

### Explanatory Variable

Variable that is used to predict the response variable. The explanatory variable is always plotted on the horizontal axis of a scatterplot. Also called Independent Variable.

### F

### *F*-Test Statistic

The test statistic is the ratio of the *MSG* to the *MSE*, *F* = *MSG*/*MSE*, which is used for testing *H*_{0}: *μ*_{1} = *μ*_{2} = … = *μ*_{k}. When *H*_{0} is true, *F* has an *F* distribution with numerator *df* = *k* – 1 and denominator *df* = *N* – *k*, where *k* is the number of groups and *N* is the total number of observations.
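
A minimal worked example (made-up data, three groups of three observations) shows how *MSG*, *MSE*, and *F* fit together:

```python
groups = [[5, 6, 7], [8, 9, 10], [11, 12, 13]]  # hypothetical data

k = len(groups)                          # number of groups
N = sum(len(g) for g in groups)          # total number of observations
grand_mean = sum(sum(g) for g in groups) / N
group_means = [sum(g) / len(g) for g in groups]

# Between-groups: spread of the group means about the grand mean.
msg = sum(len(g) * (m - grand_mean) ** 2
          for g, m in zip(groups, group_means)) / (k - 1)

# Within-groups: spread of observations about their own group mean.
mse = sum((x - m) ** 2
          for g, m in zip(groups, group_means) for x in g) / (N - k)

f_stat = msg / mse
print(f_stat)  # 27.0, with df = (k - 1, N - k) = (2, 6)
```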

### Factors

The explanatory variables in an observational study or an experiment. Also called the independent variables.

### First Quartile or Q1

The one-quarter point in an ordered set of quantitative data. To compute Q1, calculate the median of the lower half of the ordered data.

### Five-Number Summary

A five number summary of a quantitative data set consists of the following: minimum, first quartile (Q1), median, third quartile (Q3), maximum.
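
The five-number summary can be computed with the quartile convention used in this glossary (quartiles as medians of the lower and upper halves of the ordered data); the helper names below are ours:

```python
def median(values):
    """Median of a list: middle value, or average of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def five_number_summary(values):
    """(min, Q1, median, Q3, max), with quartiles taken as medians
    of the lower and upper halves of the ordered data."""
    s = sorted(values)
    n = len(s)
    lower, upper = s[: n // 2], s[(n + 1) // 2:]
    return min(s), median(lower), median(s), median(upper), max(s)

print(five_number_summary([1, 3, 4, 7, 8, 9, 12, 15]))
# (1, 3.5, 7.5, 10.5, 15)
```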

### Frequency Distribution

A table that displays frequencies of data falling into categories or class intervals.

### H

### Histogram

Graphical representation of a frequency distribution. Bars are drawn over each class interval on a number line. The areas of the bars are proportional to the frequencies with which data fall into the class intervals.

### I

### In Control

The state of a process that is running smoothly, with its variables staying within an acceptable range.

### Independent Events

Two events are independent if the fact that one of the events occurs does not affect the probability that the other occurs.

### Independent Variable

Variable that is used to predict the dependent variable. The independent variable is always plotted on the horizontal axis of a scatterplot. Also called Explanatory Variable.

### Interquartile Range or IQR

A measure of the spread of the middle half of the data: IQR = Q3 – Q1. The IQR is a resistant measure of the variability of a data set.

### J

### Joint Distribution of Two Categorical Variables

A two-way table of counts gives the joint distribution of two categorical variables. The joint distribution can be converted to percentages by dividing each cell count by the grand total and then multiplying by 100%.

### L

### Least-Squares Regression

A method for finding the best-fitting curve to a given set of data points by minimizing the sum of the squares of the residual errors (SSE).

### Least-Squares Regression Line

The least-squares line is the line that makes the sum of the squares of the residual errors (SSE) as small as possible. The equation of the least-squares line has the form *ŷ* = *a* + *bx*, where the intercept *a* and slope *b* can be calculated from *n* data pairs (*x*, *y*) using the following formulas:

*b* = Σ(*x*_{i} – *x̄*)(*y*_{i} – *ȳ*) / Σ(*x*_{i} – *x̄*)² and *a* = *ȳ* – *bx̄*
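
The slope and intercept formulas are easy to check numerically; below is a sketch on data that lie exactly on the line *y* = 1 + 2*x*:

```python
def least_squares(xs, ys):
    """Intercept a and slope b of the least-squares line y-hat = a + b*x."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    a = y_bar - b * x_bar
    return a, b

a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0, recovering y = 1 + 2x exactly
```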

### Level

One of the possible values or settings that a factor can assume.

### Linear Form

A scatterplot has linear form when the dots appear to be randomly scattered on either side of a straight line.

### Linear Regression Model

The simple linear regression model assumes that for each value of *x* the observed values of the response variable *y* are normally distributed about a mean *μ*_{y} that has the following linear relationship with *x*: *μ*_{y} = *α* + *βx*.

### Lurking Variable

A variable that is not among the explanatory or response variables in a study but that may influence the relationship between those variables.

### M

### Margin of Error

For confidence intervals of the form point estimate ± margin of error, the margin of error gives the range of values above and below the point estimate. The margin of error is the half-width of the confidence interval.

### Marginal Distribution

A distribution computed from a two-way table of counts by dividing the row or column totals by the overall total. Often the marginal distributions are expressed as percentages.

### Marginal Totals

The sum of the row entries or the sum of the column entries in a two-way table of counts.

### Matched-Pairs *t*-Test Statistic

In testing *H*_{0}: *μ*_{D} = *μ*_{D0}, where *μ*_{D} is the population mean difference, the *t*-test statistic is given by

*t* = (*x̄*_{D} – *μ*_{D0}) / (*s*_{D}/√*n*)

where *x̄*_{D} and *s*_{D} are the mean and standard deviation of the sample differences. If the differences are approximately normally distributed and the null hypothesis is true, then *t* has a *t*-distribution with *df* = *n* – 1 degrees of freedom.

### Mean

The arithmetic average or balance point of sample data. To calculate the mean, sum the data values and divide the sum by the number of data values.

If the sample consists of observations *x*_{1}, *x*_{2}, …, *x*_{n}, then the sample mean is

*x̄* = (*x*_{1} + *x*_{2} + … + *x*_{n})/*n*

### Mean of a Discrete Random Variable *x*

Given a probability distribution, *p*(*x*), the mean is calculated as follows:

*μ* = Σ *x* *p*(*x*), where the sum is taken over all possible values of *x*

### Median

A resistant measure of center of a data set. The median separates the upper half of the data from the lower half. To calculate the median, order the data from smallest to largest and count up (*n* + 1)/2 places in the ordered list.

### Mode

The data value in a quantitative data set that occurs most frequently.

### Multiplication Rule

If *C* and *D* are independent, then *P*(*C* and *D*) = *P*(*C*)*P*(*D*).

### Multistage Sampling

A sampling design that begins by dividing the population into clusters. In stage one, the pollster chooses a (random) sample of clusters. In subsequent stages, samples are chosen from each of the selected clusters.

### Multivariate Data

Data that consists of measurements or observations recorded on two or more attributes for each individual or subject under study.

### Mutually Exclusive Events

Events that have no outcomes in common. Events that are disjoint.

### N

### Negative Association

Two variables have negative association if above-average values of one accompany below-average values of the other, and vice versa. In a scatterplot, a negative association would appear as a pattern of dots in the upper left to the lower right.

### Nonlinear Form

Often scatterplots do not have linear form. Instead the data might form a curved pattern. In that case, we say the scatterplot has nonlinear form.

### Normal Curve

Bell-shaped curve. The center line of the normal curve is at the mean *μ*. The change-of-curvature in the bell-shaped curve occurs at *μ* – *σ* and *μ* + *σ* where *σ* is the standard deviation.

### Normal Density Curve

A normal curve scaled so that the area under the curve is 1.

### Normal Distribution

Distribution that is described by a normal density curve. Any particular normal distribution is completely specified by two numbers, its mean *μ* and standard deviation *σ*.

### Normal Quantile Plot

Also known as **normal probability plot**. A graphical method for assessing whether data come from a normal distribution. The plot compares the ordered data with what would be expected of perfectly normal data. A normal quantile plot that shows a roughly linear pattern suggests that it is reasonable to assume the data come from a normal distribution.

### Null Hypothesis or *H*_{0}

The claim tested by a significance test. Usually the null hypothesis is a statement about “no effect” or “no change.” The null hypothesis has the following form: *H*_{0}: population parameter = hypothesized value.

### O

### Observational Study

A study in which researchers observe subjects and measure variables of interest. However, the researchers do not try to influence the responses. The purpose is to *describe* groups of subjects under different situations.

### Observed Counts

The number of observations that fall into each cell (or class) of a two-way table.

### One-Sided Alternative Hypothesis

The alternative hypothesis in a significance test is one-sided if it states that either a parameter is greater than or a parameter is less than the null hypothesis value.

### One-Way ANOVA

An analysis of variance in which one factor is thought to be related to the response variable.

### Out of Control

The state of a process that is no longer in control. The process has become unstable or its variables are no longer within an acceptable range.

### Outlier

Data value that lies outside the overall pattern of the other data values.

### P

### Paired *t*-Confidence Interval for *μ*_{D}

When data are matched pairs, and the standard deviation of the population differences *σ*_{D} is unknown, a *t*-confidence interval estimate of the population mean difference, *μ*_{D}, is given by the formula:

*x̄*_{D} ± *t*^{*}(*s*_{D}/√*n*)

where *t*^{*} is a *t*-critical value associated with the confidence level and determined from a *t*-distribution with *df* = *n* – 1, and *x̄*_{D} and *s*_{D} are the mean and standard deviation of the sample differences.

### Percentile

A value such that a certain percentage of observations from the distribution falls at or below that value. The *p*^{th} percentile of a data set is a value such that *p*% of the observations fall at or below that value.

### Pie Chart

Graph of a frequency distribution for categorical data. Each category is represented by a slice of pie in which the area of the slice is proportional to the frequency or relative frequency of that category.

### Placebo

Something that is identical in appearance to the treatment received by the treatment group. Placebos are meant to be ineffectual and are given as control treatments.

### Point Estimate

A single number based on sample data (a statistic) that represents a plausible value for a population parameter.

### Population

The entire group of objects or individuals about which information is wanted.

### Population Proportion

For a population that is divided into two categories, success and failure, based on some characteristic, the population proportion, *p*, is:

*p* = (number of successes in the population) / (population size)

### Population Regression Line

The population regression line, *μ*_{y} = *α* + *βx*, describes how the mean response *μ*_{y} varies as *x* changes.

### Positive Association

Two variables have positive association if above-average values of one tend to accompany above-average values of the other and below-average values of one tend to accompany below-average values of the other. In a scatterplot, a positive association would appear as a pattern of dots in the lower left to the upper right.

### Probability

A measure of how likely it is that something will happen or something is true. Probabilities are always between 0 and 1. Events with probabilities closer to 0 are less likely to happen and events with probabilities closer to 1 are more likely to happen.

### Probability Distribution

A list of the possible values of a discrete random variable together with the probabilities associated with those values.

### Process

Chain of steps that turns inputs into outputs.

### Prospective Study

A study that starts with a group and watches for outcomes (for example, the development of cancer or remaining cancer-free) during the study period and relates this to suspected risk or protection factors that might be linked to the outcomes.

### *P*-value

The probability, computed under the assumption that the null hypothesis is true, of observing a value from the test statistic’s distribution that is at least as extreme as the value of the test statistic that was actually observed.

### Q

### Quantitative Variable

Variable whose values are numbers obtained from measurements or counts. Height, weight, and points scored at a basketball game are examples of quantitative variables.

### R

### Random Phenomenon

A situation in which the possible outcomes are known but we do not know which one will occur. If the situation is repeated over and over, a regular pattern to the outcomes emerges over the long run.

### Random Variable

A variable whose possible values are numbers associated with outcomes of a random phenomenon.

### Range

Measure of the variability of a quantitative data set from its extremes: range = maximum – minimum.

### Regression Line

A straight line that describes how a response variable *y* is related to an explanatory variable *x*.

### Representative Sample

A sample that accurately reflects the members of the entire population.

### Residual Error

A residual error is the vertical deviation of a data point from the regression model: residual error = actual *y* – predicted *y*.

### Resistant Measure

A statistic that measures some aspect of a distribution (such as its center) that is relatively unaffected by a small subset of extreme data values. For example, the median is a resistant measure of the center of a distribution while the mean is not a resistant measure of center.

### Response Variable

The variable used to measure the outcome of a study, which we attempt to explain or predict using one or more independent variables (factors). The response variable is always plotted on the vertical axis of a scatterplot. Also called the dependent variable.

### Retrospective Study

A study that starts with an outcome (for example, two groups of people, a cancer group and a non-cancer group) and then looks back to examine exposures to suspected risk or protection factors that might be linked to that outcome. (Unit 14)

### Run Chart

A plot of data values versus the order in which these values were collected.

### S

### Sample

The part of the population that is actually examined in a study.

### Sample Mean

One measure of center of a data set. The mean is the arithmetic average or balance point of a set of data. To calculate the mean, sum the data and divide by the number of data items:

*x̄* = (*x*_{1} + *x*_{2} + … + *x*_{n})/*n*

### Sample Proportion

The sample proportion, *p̂*, from a sample of size *n* is:

*p̂* = *x*/*n*, where *x* is the number of successes in the sample

### Sample Standard Deviation

One measure of variability of a data set. The standard deviation has the same units as the data values. To calculate the standard deviation, take the square root of the sample variance:

*s* = √[Σ(*x*_{i} – *x̄*)²/(*n* – 1)]

### Sample Variance

One measure of variability of a data set. To calculate the variance, sum the squared deviations from the mean and divide by the number of data values minus one:

*s*² = Σ(*x*_{i} – *x̄*)²/(*n* – 1)
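
The mean, variance, and standard deviation definitions above can be checked together on a small data set (made-up values):

```python
from math import sqrt

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample
n = len(data)

mean = sum(data) / n                                      # balance point
variance = sum((x - mean) ** 2 for x in data) / (n - 1)   # divide by n - 1
std_dev = sqrt(variance)                                  # same units as data

print(mean, variance, std_dev)  # 5.0, 32/7 ≈ 4.571, ≈ 2.138
```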

### Sampling Bias

Occurs when a sample is collected in such a way that some individuals in the population are less likely to be included in the sample than others. Because of this, information gathered from the sample will be slanted toward those who are more likely to be part of the sample.

### Sampling Design

Plan of how to select the sample from the population.

### Sampling Distribution

The distribution of the values of a sample statistic (such as *x̄*, the median, or *s*) over many, many random samples chosen from the same population.

### Sampling Distribution of the Sample Mean

The distribution of *x̄* over a very large number of samples. If *x̄* is the mean of a simple random sample (SRS) of size *n* from a population having mean *μ* and standard deviation *σ*, then the mean and standard deviation of *x̄* are:

*μ*_{x̄} = *μ* and *σ*_{x̄} = *σ*/√*n*

Furthermore, if the population distribution is normal, then the distribution of *x̄* is normal.

### Sampling Distribution of the Sample Proportion

When the sample size *n* is large, the sampling distribution of the sample proportion *p̂* is approximately normally distributed with the following mean and standard deviation:

*μ*_{p̂} = *p* and *σ*_{p̂} = √(*p*(1 – *p*)/*n*)

### Scatterplot

A graphical display of bivariate quantitative data in which each observation (*x*, *y*) is plotted in the plane.

### Self-Selecting Sampling

A sampling design in which the sample consists of people who respond to a request for participation in the survey. (Also called voluntary sampling.)

### Significance Level

In a significance test, the highest *p*-value for which we will reject the null hypothesis.

### Significance Test

A method that uses sample data to decide between two competing claims, called hypotheses, about a population parameter.

### Simple Random Sample of Size *n*

A sample of *n* individuals from the population chosen in such a way that every set of *n* individuals has an equal chance to be in the sample actually selected.

### Simple Random Sampling

A sampling design that chooses a sample of size *n* using a method in which all possible samples of size *n* are equally likely to be selected.

### Single-Blind Experiment

An experiment in which the subjects do not know which treatment they are receiving but the individuals measuring the response do know which subjects were assigned to which treatments.

### Skewed Right or Left

A unimodal distribution is **skewed to the right** if the right tail of the distribution is longer than the left and is **skewed to the left** if the left tail of the distribution is longer than the right. (Unit 3)

### Special Cause Variation

Variation due to sudden, unexpected events that affect the process.

### Standard Deviation of a Discrete Random Variable *x*

Given a probability distribution, *p*(*x*), the standard deviation, *σ*, is calculated as follows:

*σ* = √[Σ(*x* – *μ*)² *p*(*x*)]

### Standard Error of the Estimate

A point estimate of *σ*, which is a measure of how much the observations vary about the regression line. The standard error of the estimate, *s*_{e}, is computed as follows:

*s*_{e} = √[Σ(*y* – *ŷ*)²/(*n* – 2)] = √[SSE/(*n* – 2)]

### Standard Error of the Slope *b*

The estimated standard deviation of *b*, the least-squares estimate for the population slope *β*, is:

*s*_{b} = *s*_{e}/√[Σ(*x* – *x̄*)²]

### Standard Normal Distribution

Normal distribution with *μ* = 0 and *σ* = 1.

### Standard Normal Quantiles

The *z*-values that divide the horizontal axis of a standard normal density curve into intervals such that the areas under the density curve over each of the intervals are equal.

### Stemplot (or Stem-and-Leaf Plot)

Graphical tool for organizing quantitative data in order from smallest to largest. The plot consists of two columns, one for the stems (leading digit(s) of the observations) and the other for the leaves (trailing digit(s) for each observation listed beside corresponding stem). Stemplots are a useful tool for conveying the shape of relatively small data sets and identifying outliers.

### Strata

The non-overlapping groups used in a stratified sampling plan.

### Stratified Random Sample

A stratified sampling plan in which the sample is obtained by taking *random* samples from each of the strata.

### Stratified Sampling

A sampling plan that is used to ensure that specific non-overlapping groups of the population are represented in the sample. The non-overlapping groups are called strata. Samples are taken from each stratum.

### Symmetric Distribution

Shape of a distribution of a quantitative variable in which the lower half of the distribution is roughly a mirror image of the upper half.

### T

### *t*-Confidence Interval for *μ*

When *σ* is unknown, the sample size *n* is small, and the population distribution is approximately normal, a *t*-confidence interval for *μ* is given by the following formula:

*x̄* ± *t*^{*}(*s*/√*n*)

where *t*^{*} is a *t*-critical value associated with the confidence level and determined from a *t*-distribution with *df* = *n* – 1 degrees of freedom.

### *t*-Distribution

*t*-distributions are bell-shaped and centered at zero, similar to the standard normal density curve. Compared to the standard normal distribution, a *t*-distribution has more area under its tails. The shape of a *t*-distribution, and how closely it resembles the standard normal distribution, is controlled by a number called its **degrees of freedom** (*df*). A *t*-distribution with *df* > 30 is very close to a standard normal distribution.

### *t*-Test Statistic

In testing *H*_{0}: *μ* = *μ*_{0}, where *μ* is the population mean, the formula for the *t*-test statistic is:

*t* = (*x̄* – *μ*_{0}) / (*s*/√*n*)

The *t*-test is used in situations where the population standard deviation *σ* is unknown, the sample size *n* is small, and the population has a normal distribution. If the null hypothesis is true, *t* has a *t*-distribution with *df* = *n* – 1 degrees of freedom.
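
The statistic is straightforward to compute from a sample (made-up measurements; the function name `t_statistic` is ours):

```python
from math import sqrt

def t_statistic(sample, mu_0):
    """One-sample t-test statistic for H0: mu = mu_0 when sigma is unknown."""
    n = len(sample)
    x_bar = sum(sample) / n
    s = sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))
    return (x_bar - mu_0) / (s / sqrt(n)), n - 1  # statistic and df

# Hypothetical sample, testing H0: mu = 5.0.
t, df = t_statistic([5.1, 4.9, 5.3, 5.2, 4.8, 5.0], 5.0)
print(round(t, 3), df)  # ≈ 0.655 with df = 5
```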

### *t*-Test Statistic for the Slope

In testing *H*_{0}: *β* = *β*_{0}, where *β* is the population slope, the formula for the *t*-test statistic is:

*t* = (*b* – *β*_{0}) / *s*_{b}

If *H*_{0} is true, *t* has a *t*-distribution with *df* = *n* – 2, where *n* is the number of (*x*, *y*)-pairs in the sample. The usual null hypothesis is *H*_{0}: *β* = 0, which says that the straight-line dependence on *x* has no value in predicting *y*.

### Test of Hypotheses

Another name for a significance test; see Significance Test.

### Test Statistic

A statistic computed from sample data that is used to decide between the null and alternative hypotheses in a significance test.

### Third Quartile or Q3

The three-quarter point in an ordered set of quantitative data. To compute Q3, calculate the median of the upper half of the ordered data.

### Treatment

The experimental condition applied to the subjects in an experiment.

### Two-Sample *t*-Confidence Interval for *μ*_{1} – *μ*_{2}

A two-sample *t*-confidence interval estimate of the difference in population means is given by the formula:

(*x̄*_{1} – *x̄*_{2}) ± *t*^{*}√(*s*_{1}²/*n*_{1} + *s*_{2}²/*n*_{2})

To determine the degrees of freedom (*df*) associated with *t*^{*}, the *t*-critical value associated with the confidence level: (1) use technology or (2) use a conservative approach and let *df* = the smaller of *n*_{1} – 1 and *n*_{2} – 1.

### Two-Sample *t*-Procedures

Two-sample *t*-procedures are used to test or estimate *μ*_{1} – *μ*_{2}, the difference of two population means. The required data consist of two independent simple random samples of sizes *n*_{1} and *n*_{2}, one from each of the populations (or treatments).

### Two-Sample *t*-Test Statistic

In testing *H*_{0}: *μ*_{1} – *μ*_{2} = *d*, where *μ*_{1} and *μ*_{2} are the means of two populations, the formula for the two-sample *t*-test statistic is:

*t* = [(*x̄*_{1} – *x̄*_{2}) – *d*] / √(*s*_{1}²/*n*_{1} + *s*_{2}²/*n*_{2})

To determine the degrees of freedom (*df*) associated with *t*: (1) use technology or (2) use a conservative approach and let *df* = the smaller of *n*_{1} – 1 and *n*_{2} – 1.
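
The two-sample statistic with the conservative degrees of freedom can be sketched as follows (made-up samples; the function name is ours):

```python
from math import sqrt

def two_sample_t(sample1, sample2, d=0.0):
    """Two-sample t statistic for H0: mu1 - mu2 = d (unpooled variances)."""
    def mean_var(s):
        n = len(s)
        m = sum(s) / n
        return m, sum((x - m) ** 2 for x in s) / (n - 1), n

    m1, v1, n1 = mean_var(sample1)
    m2, v2, n2 = mean_var(sample2)
    t = (m1 - m2 - d) / sqrt(v1 / n1 + v2 / n2)
    df = min(n1 - 1, n2 - 1)  # conservative choice of degrees of freedom
    return t, df

t, df = two_sample_t([6, 7, 8, 9], [4, 5, 6, 7])
print(round(t, 3), df)  # ≈ 2.191 with df = 3
```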

### Two-Sided Alternative Hypothesis

The alternative hypothesis in a significance test is two-sided if it states that the parameter is different from (not equal to) the null hypothesis value.

### Two-Way Table of Counts (Frequencies)

A table with *r* rows and *c* columns that organizes data on two categorical variables taken from the same individuals or subjects. Values of the row variable label the rows of the table; values of the column variable label the columns of the table.

### U

### V

### Variable

Describes some characteristic or attribute of interest that can vary in value.

### Variance of a Discrete Random Variable *x*

Given a probability distribution, *p*(*x*), the variance is calculated as follows:

*σ*² = Σ(*x* – *μ*)² *p*(*x*)

### Voluntary Sampling

A sampling design in which the sample consists of people who respond to a request for participation in the survey. Also called self-selecting sampling.

### W

### Within-Groups Variation

A measure of the spread of individual data values within each group about the group mean. It is measured by the mean square error, *MSE*.

### X

### *x̄* Charts

A plot of means of successive samples versus the order in which the samples were taken.

### Z

### *z*-Score

Transformation of a data value *x* into its deviation from the mean measured in standard deviations. To calculate a *z*-score for a data value *x*, subtract the mean and divide by the standard deviation:

*z* = (*x* – *x̄*)/*s*
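
Standardizing a small data set shows the idea (made-up values):

```python
from math import sqrt

data = [10, 12, 14, 16, 18]  # hypothetical sample
n = len(data)
mean = sum(data) / n
s = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Each z-score is the data value's distance from the mean in standard deviations.
z_scores = [(x - mean) / s for x in data]
print([round(z, 3) for z in z_scores])  # [-1.265, -0.632, 0.0, 0.632, 1.265]
```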

### *z*-Test Statistic

In testing *H*_{0}: *μ* = *μ*_{0}, where *μ* is the population mean, the formula for the *z*-test statistic is:

*z* = (*x̄* – *μ*_{0}) / (*σ*/√*n*)

The *z*-test statistic is used in situations where the population standard deviation *σ* is known and either the population has a normal distribution or the sample size *n* is large.

### *z*-Test Statistic for Proportions

In testing *H*_{0}: *p* = *p*_{0}, where *p* is the population proportion, the formula for the *z*-test statistic is:

*z* = (*p̂* – *p*_{0}) / √(*p*_{0}(1 – *p*_{0})/*n*)

The *z*-test is used in situations where the sample size *n* is large.