The MAD (mean absolute deviation) is a measure of the variation in a data set about the mean. Professional statisticians more commonly use two other measures of variation: the variance and the standard deviation.
The method for calculating variance is very similar to the method you just used to calculate the MAD. First, let's go back to Line Plot A from Problem E1:
The first step in calculating the variance is the same one you used to find the MAD: Find the deviation for each value in the set (i.e., how much each value differs from the mean). The deviations for this data set are plotted below:
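This first step is easy to check with a few lines of Python. The data set below is hypothetical (the values from Line Plot A are not reproduced here), but the procedure is the same for any data:

```python
# Hypothetical data set; substitute the values from the line plot.
data = [2, 4, 4, 6, 9]

mean = sum(data) / len(data)             # mean = 5.0
deviations = [x - mean for x in data]    # how far each value is from the mean
print(deviations)                        # [-3.0, -1.0, -1.0, 1.0, 4.0]
```

Note that the deviations always sum to zero, which is why we need the absolute values (for the MAD) or the squares (for the variance) to measure spread.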
The next step in calculating the variance is to square each deviation. Note the difference between this and the MAD, which requires us to find the absolute value of each deviation. The squares of the deviations for this data set are plotted below:
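The contrast between the two approaches can be sketched side by side, again with a hypothetical data set:

```python
data = [2, 4, 4, 6, 9]                   # hypothetical data set
mean = sum(data) / len(data)
deviations = [x - mean for x in data]

squared = [d ** 2 for d in deviations]   # used for the variance
absolute = [abs(d) for d in deviations]  # used for the MAD
print(squared)                           # [9.0, 1.0, 1.0, 1.0, 16.0]
print(absolute)                          # [3.0, 1.0, 1.0, 1.0, 4.0]
```

Squaring, unlike taking absolute values, gives extra weight to values far from the mean: a deviation of 4 contributes 16 to the squared list but only 4 to the absolute list.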
The final step is to find the variance by calculating the mean of the squares. As usual, find the mean by adding all the values and then dividing by how many there are. Here is a table for this calculation:
The mean of the squared deviations is 38 / 9 = 4 2/9, or approximately 4.22. This value is the variance for this data set. As with the MAD, the variance is a measure of variation about the mean. Data sets with more variation will have a higher variance.
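The full calculation can be checked in Python. The data set here is hypothetical, since the line plot's values are not reproduced in the text, but the steps mirror the table above:

```python
data = [2, 4, 4, 6, 9]                   # hypothetical data set
mean = sum(data) / len(data)
squared_deviations = [(x - mean) ** 2 for x in data]

# The variance is the mean of the squared deviations.
variance = sum(squared_deviations) / len(data)
print(variance)                          # 5.6
```

Python's standard library also provides this directly as `statistics.pvariance(data)`, the population variance computed here.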
The variance is the mean of the squared deviations, so you could also say that it represents the average of the squared deviations. The problem with using the variance as a measure of variation is that it is in squared units. To gauge a typical (or standard) deviation, we would need to calculate the square root of the variance. This measure, the square root of the variance, is called the standard deviation of a data set.
For the data set given above, the standard deviation is the square root of 4.22, which is approximately 2.05. Note that this value is fairly close to the MAD of 1.78 that we calculated earlier.
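Using the numbers from the text (a sum of squared deviations of 38 across 9 values), the final step is a single square root:

```python
import math

variance = 38 / 9              # the variance computed above, about 4.22
std_dev = math.sqrt(variance)
print(round(std_dev, 2))       # 2.05

# Compare with the MAD of about 1.78 found earlier: the two measures are
# close but not identical, because squaring weights large deviations more
# heavily than taking absolute values does.
```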
The standard deviation, first introduced in the late 19th century, has become the most frequently used measure of variation in statistics today. For example, the SAT is scaled so that its mean is 500 points and its standard deviation is 100 points. IQ tests are created with an expected mean of 100 and a standard deviation of 15.