Algorithms for calculating variance play a major role in statistical computing. A key problem in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
I. Naïve algorithm
The formula for calculating the variance of an entire population of size n is:

$$\sigma^2 = \frac{\sum_{i=1}^{n} x_i^2 - \big(\sum_{i=1}^{n} x_i\big)^2 / n}{n}$$

The formula for calculating an unbiased estimate of the population variance from a finite sample of n observations is:

$$s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \big(\sum_{i=1}^{n} x_i\big)^2 / n}{n - 1}$$

Therefore a naïve algorithm to calculate the estimated variance is given by the following pseudocode:

    n = 0
    sum = 0
    sum_sqr = 0
    foreach x in data:
      n = n + 1
      sum = sum + x
      sum_sqr = sum_sqr + x*x
    end for
    mean = sum/n
    variance = (sum_sqr - sum*mean)/(n - 1)

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
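For concreteness, here is the naïve algorithm as a minimal Python sketch (the function name naive_variance is mine; the post itself gives only pseudocode):

    def naive_variance(data):
        # Single pass: accumulate the running count, sum, and sum of squares.
        n = 0
        sum_x = 0.0
        sum_sqr = 0.0
        for x in data:
            n += 1
            sum_x += x
            sum_sqr += x * x
        mean = sum_x / n
        # Divide by n instead of n - 1 for a finite-population variance.
        return (sum_sqr - sum_x * mean) / (n - 1)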
Because sum_sqr and sum*mean can be very similar numbers, the precision of the result can be much less than the inherent precision of the floating-point arithmetic used to perform the computation. This is particularly bad if the standard deviation is small relative to the mean.
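To see the failure concretely, run a small hypothetical data set through the naive_variance sketch above. The exact output depends on the platform, but with 64-bit floats the subtraction cancels nearly all significant bits:

    # These three values have sample variance exactly 21, but their squares
    # are near 1e18, so sum_sqr and sum_x*mean agree in roughly their first
    # seventeen digits and the difference is dominated by rounding error.
    data = [1e9 + 4, 1e9 + 7, 1e9 + 13]
    print(naive_variance(data))  # badly wrong; can even come out negative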
II. Two-pass algorithm

An alternative approach, using a different formula for the variance, is given by the following pseudocode:

    n = 0
    sum1 = 0
    foreach x in data:
      n = n + 1
      sum1 = sum1 + x
    end for
    mean = sum1/n

    sum2 = 0
    foreach x in data:
      sum2 = sum2 + (x - mean)^2
    end for
    variance = sum2/(n - 1)

This algorithm is often more numerically reliable than the naïve algorithm I for large sets of data, although it can be worse if much of the data is very close to, but not precisely equal to, the mean and some of it is quite far away from it.
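The same computation in Python (a minimal sketch; the input must support two traversals, so a list rather than a one-shot stream):

    def two_pass_variance(data):
        # First pass: the sample mean. Second pass: squared deviations.
        n = len(data)
        mean = sum(data) / n
        return sum((x - mean) ** 2 for x in data) / (n - 1)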
The results of both of these simple algorithms (I and II) can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
IIa. Compensated variant
The compensated-summation version of the algorithm above reads:

    n = 0
    sum1 = 0
    foreach x in data:
      n = n + 1
      sum1 = sum1 + x
    end for
    mean = sum1/n

    sum2 = 0
    sumc = 0
    foreach x in data:
      sum2 = sum2 + (x - mean)^2
      sumc = sumc + (x - mean)
    end for
    variance = (sum2 - sumc^2/n)/(n - 1)
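A Python sketch of the same variant (names are mine). The idea behind the correction term: if the computed mean were exact, the deviations would sum to zero, so sumc estimates the accumulated error of the mean and sumc^2/n removes its contribution from the sum of squares:

    def compensated_variance(data):
        n = len(data)
        mean = sum(data) / n
        sum2 = 0.0  # sum of squared deviations from the computed mean
        sumc = 0.0  # residual of the deviations; zero if mean were exact
        for x in data:
            d = x - mean
            sum2 += d * d
            sumc += d
        return (sum2 - sumc * sumc / n) / (n - 1)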
III. On-line algorithm
It is often useful to be able to compute the variance in a single pass, inspecting each value $x_i$ only once; for example, when the data are being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the mean and (estimated) variance of the sequence for an additional element $x_\text{new}$. Here, $m$ denotes the estimate of the population mean (using the sample mean), $s^2_{n-1}$ the estimate of the population variance (denominator $n-1$), $s^2_n$ the sample variance (denominator $n$), and $n$ the number of elements in the sequence before the addition:

$$m_\text{new} = m + \frac{x_\text{new} - m}{n+1}$$

$$s^2_{n,\text{new}} = \frac{n\, s^2_n + (x_\text{new} - m)(x_\text{new} - m_\text{new})}{n+1}$$

$$s^2_{n-1,\text{new}} = \frac{(n-1)\, s^2_{n-1} + (x_\text{new} - m)(x_\text{new} - m_\text{new})}{n}$$

In pseudocode, carrying the running sum of squared deviations $M_2 = \sum_i (x_i - m)^2$ instead of the variance itself:
    n = 0
    mean = 0
    M2 = 0
    foreach x in data:
      n = n + 1
      delta = x - mean
      mean = mean + delta/n
      M2 = M2 + delta*(x - mean)  // This expression uses the new value of mean
    end for
    variance_n = M2/n
    variance = M2/(n - 1)

This algorithm is much less prone to loss of precision due to massive cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
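A Python rendering of this online update, wrapped in a small class so it can consume a stream (class and method names are mine):

    class RunningVariance:
        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.M2 = 0.0  # running sum of squared deviations from the mean

        def push(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.M2 += delta * (x - self.mean)  # uses the updated mean

        def variance(self):
            return self.M2 / (self.n - 1)  # estimated (sample) variance

        def variance_n(self):
            return self.M2 / self.n        # population variance

Values are fed in one at a time with push, and the variance can be read off at any point after the second value.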
A slightly more convenient form allows one to calculate the standard deviation without having to explicitly calculate the new mean. If $n$ is the number of elements in the sequence after the addition of the new element and $m$ is the mean before it, then one has

$$s^2_\text{new} = \left(1 - \frac{1}{n-1}\right) s^2 + \frac{(x_\text{new} - m)^2}{n}$$

for the denominator-$(n-1)$ estimate; the new standard deviation is its square root.
IV. Weighted incremental algorithm
When the observations are weighted, West (1979) [3] suggests this incremental algorithm:

    n = 0
    foreach x, weight in data:
      if n = 0 then
        n = 1
        mean = x
        S = 0
        sumweight = weight
      else
        n = n + 1
        temp = weight + sumweight
        S = S + sumweight*weight*(x - mean)^2 / temp
        mean = mean + (x - mean)*weight / temp
        sumweight = temp
      end if
    end for
    variance = S * n / ((n - 1) * sumweight)  // if sample is the population, omit n/(n-1)
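The same algorithm transcribed into Python (a sketch; pairs is assumed to yield (value, weight) tuples):

    def weighted_variance(pairs):
        n = 0
        mean = 0.0
        S = 0.0
        sumweight = 0.0
        for x, weight in pairs:
            if n == 0:
                n = 1
                mean = x
                S = 0.0
                sumweight = weight
            else:
                n += 1
                temp = weight + sumweight
                S += sumweight * weight * (x - mean) ** 2 / temp
                mean += (x - mean) * weight / temp
                sumweight = temp
        # Omit the n/(n - 1) factor if the sample is the whole population.
        return S * n / ((n - 1) * sumweight)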
V. Parallel algorithm
Chan et al. [4] note that the above on-line algorithm III is a special case of an algorithm that works for any partition of the sample $X$ into sets $X_A$, $X_B$:

$$\delta = m_B - m_A$$

$$m_X = m_A + \delta\,\frac{n_B}{n_X}$$

$$M_{2,X} = M_{2,A} + M_{2,B} + \delta^2\,\frac{n_A n_B}{n_X}$$

where $n_X = n_A + n_B$. This may be useful when, for example, multiple processing units can be assigned to discrete parts of the input.
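In Python, the merge step is only a few lines (a sketch; the function name combine is mine). Each processor runs algorithm III over its own part of the data, and the resulting (n, mean, M2) summaries are then folded together pairwise:

    def combine(n_a, mean_a, M2_a, n_b, mean_b, M2_b):
        n = n_a + n_b
        delta = mean_b - mean_a
        mean = mean_a + delta * n_b / n
        M2 = M2_a + M2_b + delta * delta * n_a * n_b / n
        return n, mean, M2

The sample variance of the whole data set is then the final M2 / (n - 1).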
Va. Higher-order statistics
Terriberry [5] extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:

$$M_{3,X} = M_{3,A} + M_{3,B} + \delta^3\,\frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\delta\,\frac{n_A M_{2,B} - n_B M_{2,A}}{n_X}$$

$$M_{4,X} = M_{4,A} + M_{4,B} + \delta^4\,\frac{n_A n_B (n_A^2 - n_A n_B + n_B^2)}{n_X^3} + 6\delta^2\,\frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\delta\,\frac{n_A M_{3,B} - n_B M_{3,A}}{n_X}$$

Here $M_k = \sum_i (x_i - m)^k$ is the sum of $k$-th powers of differences from the mean, giving

skewness: $g_1 = \dfrac{\sqrt{n}\, M_3}{M_2^{3/2}}$,

kurtosis: $g_2 = \dfrac{n\, M_4}{M_2^2} - 3$.
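These merge rules translate directly into Python (a sketch; the function names and the five-tuple layout are mine). A single observation x is summarized as (1, x, 0.0, 0.0, 0.0), so folding observations one at a time into a running summary also recovers a one-pass algorithm for all four moments:

    from math import sqrt

    def combine_moments(a, b):
        n_a, mean_a, M2_a, M3_a, M4_a = a
        n_b, mean_b, M2_b, M3_b, M4_b = b
        n = n_a + n_b
        delta = mean_b - mean_a
        mean = mean_a + delta * n_b / n
        M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
        M3 = (M3_a + M3_b
              + delta**3 * n_a * n_b * (n_a - n_b) / n**2
              + 3 * delta * (n_a * M2_b - n_b * M2_a) / n)
        M4 = (M4_a + M4_b
              + delta**4 * n_a * n_b * (n_a**2 - n_a * n_b + n_b**2) / n**3
              + 6 * delta**2 * (n_a**2 * M2_b + n_b**2 * M2_a) / n**2
              + 4 * delta * (n_a * M3_b - n_b * M3_a) / n)
        return n, mean, M2, M3, M4

    def skewness(summary):
        n, _, M2, M3, _ = summary
        return sqrt(n) * M3 / M2**1.5

    def kurtosis(summary):
        n, _, M2, _, M4 = summary
        return n * M4 / M2**2 - 3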