A group of measurements is called a population. As a population grows large, its distribution approaches a smooth curve. For many measurements this curve is (or is assumed to be) a Gaussian.
A Gaussian is a symmetrical "bell shaped" curve with a maximum at the center. The center corresponds to the "mean" or average of the population. The half width of the Gaussian is usually taken as the standard deviation, σ: 68% of the population is within ±1σ of the mean, 95% is within ±2σ, and 99.7% is within ±3σ.
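These percentages can be checked with the error function: the fraction of a Gaussian population within ±k standard deviations of the mean is erf(k/√2). A minimal Python sketch:

    from math import erf, sqrt

    # Fraction of a Gaussian population within +/- k standard
    # deviations of the mean: erf(k / sqrt(2)).
    for k in (1, 2, 3):
        print(f"within +/- {k} sigma: {erf(k / sqrt(2)):.4f}")
    # within +/- 1 sigma: 0.6827
    # within +/- 2 sigma: 0.9545
    # within +/- 3 sigma: 0.9973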
For a single measurement, the best guess for the true mean of the population is the value of the measurement itself. There is no "internal" information about the standard deviation, σ, but there is usually external information. For example, you may know that the measurements were made with a meter that has an accuracy of ±2%, or that similar measurements made previously had an accuracy of ±5%. Note that the error in a measurement is not how close that particular measurement came to the "right" answer; it is a statement about probabilities:
68% of the time the measurement will be closer than one standard deviation (±1σ) to the true mean, 95% of the time it will be closer than two standard deviations (±2σ), and 99.7% of the time it will be closer than three standard deviations (±3σ).
Rules for averaging
For multiple (independent, or "uncorrelated") measurements with the same error, the best guess for the true mean is the average of the measurements. The standard deviation, σ, in this case can be estimated from the data themselves, by fitting the population to a Gaussian; this is a built-in calculation on most scientific calculators. If there are N uncorrelated measurements, all with the same accuracy, then the error in the average is σ/√N. In other words, 68% of the time the average will be closer than ±1σ/√N to the true mean, 95% of the time it will be closer than ±2σ/√N, and 99.7% of the time it will be closer than ±3σ/√N.
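A minimal Python sketch of this procedure, using made-up measurement values:

    from math import sqrt

    # Hypothetical repeated measurements of the same quantity.
    data = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 10.0]
    N = len(data)

    mean = sum(data) / N
    # Sample standard deviation (what a calculator's statistics mode computes).
    sigma = sqrt(sum((x - mean) ** 2 for x in data) / (N - 1))
    # The error in the average shrinks as 1/sqrt(N).
    error_of_mean = sigma / sqrt(N)

    print(f"mean = {mean:.3f} +/- {error_of_mean:.3f}  (sigma = {sigma:.3f})")
    # mean = 10.000 +/- 0.071  (sigma = 0.200)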
In order for averaging to work, the measurements must be truly independent. For example, if they were all made with the same meter that had a 5% error, the average can still be off by 5% no matter how many measurements are averaged.
More generally, if there are several independent measurements of the same true mean with different errors, a better estimate can be made by averaging them. However, the more accurate measurements should be weighted more heavily in the average. We have
    \bar{x} = \sigma^2 \left( \frac{x_1}{\sigma_1^2} + \frac{x_2}{\sigma_2^2} + \cdots + \frac{x_N}{\sigma_N^2} \right),
    \qquad \text{where} \qquad
    \frac{1}{\sigma^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2} + \cdots + \frac{1}{\sigma_N^2}
If all the σᵢ are the same, then x̄ is the simple average, and σ is reduced by a factor of √N, in agreement with the statements above.
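A short Python sketch of this weighted average (the measurement values and errors are hypothetical):

    from math import sqrt

    def weighted_average(values, sigmas):
        """Combine independent measurements x_i with errors sigma_i."""
        weights = [1.0 / s ** 2 for s in sigmas]
        sigma2 = 1.0 / sum(weights)  # combined variance
        x_avg = sigma2 * sum(w * x for w, x in zip(weights, values))
        return x_avg, sqrt(sigma2)

    # The more accurate measurement dominates:
    print(weighted_average([10.0, 10.6], [0.1, 0.3]))   # ~ (10.06, 0.095)
    # Equal errors reduce to the simple average with sigma/sqrt(N):
    print(weighted_average([10.0, 10.2], [0.2, 0.2]))   # ~ (10.10, 0.141)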
Rules for adding errors
If two (or more) quantities are added, their errors are also added. If the quantities are uncorrelated, then the errors add in quadrature:

    \sigma = \sqrt{\sigma_1^2 + \sigma_2^2 + \cdots}
If the quantities are correlated, then the errors add linearly:

    \sigma = \sigma_1 + \sigma_2 + \cdots
The errors are treated the same way if the quantities are subtracted. One useful application of this is to test whether two quantities are the same: subtract one from the other, and see whether the difference is consistent with 0 to within one standard deviation, two standard deviations, etc.
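A small Python sketch of both rules, applied to the consistency test just described (the two measurements are hypothetical):

    from math import sqrt

    def add_errors(sigmas, correlated=False):
        """Error of a sum or difference of quantities."""
        if correlated:
            return sum(sigmas)                        # correlated: add linearly
        return sqrt(sum(s ** 2 for s in sigmas))      # uncorrelated: quadrature

    # Are two measurements of the same quantity consistent?
    a, sigma_a = 10.3, 0.2
    b, sigma_b = 10.0, 0.15
    diff = a - b
    sigma_diff = add_errors([sigma_a, sigma_b])       # assume uncorrelated
    print(f"difference = {diff:.2f} +/- {sigma_diff:.2f} "
          f"({abs(diff) / sigma_diff:.1f} sigma from zero)")
    # difference = 0.30 +/- 0.25 (1.2 sigma from zero)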
If two (or more) quantities are multiplied, then the formulas above still hold, but for the percentage (relative) errors. For example, if a quantity with an error of 3% is multiplied by a quantity with an error of 4%, then the product has an error of 5% if they are uncorrelated, and 7% if they are correlated. The errors are treated the same way if one quantity is divided by another.
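The 3% and 4% example can be checked directly in Python:

    from math import sqrt

    # Relative (percentage) errors combine for products and quotients
    # the same way absolute errors do for sums.
    pct1, pct2 = 3.0, 4.0
    print(f"uncorrelated: {sqrt(pct1 ** 2 + pct2 ** 2):.0f}%")  # uncorrelated: 5%
    print(f"correlated:   {pct1 + pct2:.0f}%")                  # correlated:   7%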
Random events
Some measurements determine the average rate at which randomly occurring events take place. For example, one might wish to determine how many cosmic rays pass through a counter each minute, or how many raindrops fall on one sidewalk square in a given time. Some low-current measurements have similar properties: if electrons arrive at random times, how many pass a given point in a circuit each second?
These measurements follow what is called a Poisson distribution, which is similar to a Gaussian distribution although it is not strictly symmetric. It turns out that if N random events occur in some particular interval, then the standard deviation, σ, is estimated by

    \sigma = \sqrt{N}
This estimate of the error can then be used just as before, like any other standard deviation.
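For example, a hypothetical counting measurement in Python:

    from math import sqrt

    # Counting experiment: N events observed in one interval
    # (e.g. cosmic-ray counts in an hour; the number is made up).
    N = 144
    sigma = sqrt(N)            # Poisson estimate of the standard deviation
    print(f"N = {N} +/- {sigma:.0f}  ({100 * sigma / N:.1f}% relative error)")
    # N = 144 +/- 12  (8.3% relative error)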
There is a variation of the equation above for the case where there is a maximum possible value for N. If that maximum is M, then

    \sigma = \sqrt{N \left( 1 - \frac{N}{M} \right)}
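A quick numerical comparison of the two estimates, with made-up counts (the correction matters most when N approaches M):

    from math import sqrt

    N, M = 90, 100
    print(f"plain Poisson estimate: {sqrt(N):.2f}")                # 9.49
    print(f"with maximum M = {M}:  {sqrt(N * (1 - N / M)):.2f}")   # 3.00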
An interesting property of the Poisson distribution is that it predicts the probability of observing no events during a time when N events are expected on average. That probability, P(0), is given by

    P(0) = e^{-N}
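A few representative values, computed in Python:

    from math import exp

    # Probability of seeing zero events when N are expected on average.
    for N in (1, 3, 5):
        print(f"expect {N}: P(0) = {exp(-N):.4f}")
    # expect 1: P(0) = 0.3679
    # expect 3: P(0) = 0.0498
    # expect 5: P(0) = 0.0067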