What Is Variance?
Variance is a measure of the dispersion of a random variable or of a set of data. In probability theory, variance measures the degree of deviation of a random variable from its mathematical expectation (that is, its mean). In statistics, the sample variance is the average of the squared differences between each sample value and the mean of all sample values. In many practical problems, this degree of deviation is itself the quantity of interest.
- The term "variance" was first introduced by Ronald Fisher in his 1918 paper "The Correlation Between Relatives on the Supposition of Mendelian Inheritance".
- Variance is defined and computed differently in descriptive statistics and in probability theory.
- In descriptive statistics, variance measures how far each variable (observed value) deviates from the population mean. Because the deviations from the mean always sum to zero, they are squared first; and because the sum of squared deviations grows with the number of observations, it is averaged. Statistics therefore uses the mean squared deviation to describe the degree of variation. The population variance is

  $$\sigma^2 = \frac{\sum_{i=1}^{N}(X_i - \mu)^2}{N}$$
- In practical work, when the population mean is difficult to obtain, sample statistics are used in place of the population parameters. With a correction for the resulting bias (dividing by n − 1 instead of n, known as Bessel's correction), the sample variance is

  $$S^2 = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n - 1}$$
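As a minimal sketch of the two formulas above (the data values here are invented for illustration; NumPy's `ddof` argument selects the divisor, `ddof=0` for the population formula and `ddof=1` for Bessel's correction):

```python
import numpy as np

# Hypothetical data values, invented for illustration.
data = np.array([2.1, 2.4, 1.9, 2.2, 2.4])

mu = data.mean()
dev_sq = (data - mu) ** 2

pop_var = dev_sq.sum() / len(data)           # divide by N (population formula)
sample_var = dev_sq.sum() / (len(data) - 1)  # divide by n - 1 (Bessel's correction)

# NumPy computes the same quantities via its ddof argument.
assert np.isclose(pop_var, data.var(ddof=0))
assert np.isclose(sample_var, data.var(ddof=1))
print(pop_var, sample_var)
```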
- 1. Let C be a constant; then D(C) = 0, since a constant never deviates from its mean.
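A small numeric check of this property, together with the standard scaling rule D(aX + b) = a²·D(X) (a well-known identity added here for context, not taken from the source's list):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

# D(C) = 0: a constant has zero variance.
c = np.full_like(x, 7.0)
print(c.var())  # 0.0

# D(aX + b) = a**2 * D(X): shifting does not change the variance,
# scaling multiplies it by the square of the factor.
a, b = 3.0, 1.0
print((a * x + b).var(), a**2 * x.var())  # equal up to floating-point rounding
```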
- Suppose the true length of a part is a, and two instruments, A and B, each measure it 10 times. The measurement results X are plotted as points on a coordinate axis (figure omitted):
- Instrument A's measurements: scattered around a.
- Instrument B's measurements: all exactly a.
- The mean of the measurements is a for both instruments. Yet if we use these results to judge the two instruments, we would clearly consider instrument B the better one, because its measurements are concentrated near the mean.
- It is therefore necessary to study how far a random variable deviates from its mean. What quantity should measure this deviation? A natural candidate is E[|X − E[X]|], the mean absolute deviation of X from its mean E[X]. However, the absolute value makes this expression inconvenient to work with, so the numerical characteristic E[(X − E[X])²] is usually used instead; this is the variance.
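A short sketch in the spirit of the instrument example (the readings are made up for illustration): two samples with the same mean a, where both the mean absolute deviation E[|X − E[X]|] and the variance E[(X − E[X])²] separate the dispersed sample from the concentrated one:

```python
import numpy as np

a = 10.0  # true length of the part

# Hypothetical readings: instrument A scatters around a,
# instrument B returns a every time (values invented for illustration).
A = np.array([9.7, 10.3, 9.8, 10.2, 9.9, 10.1, 9.6, 10.4, 9.8, 10.2])
B = np.full(10, a)

for name, x in (("A", A), ("B", B)):
    mean = x.mean()
    mad = np.abs(x - mean).mean()    # E[|X - E[X]|], mean absolute deviation
    var = ((x - mean) ** 2).mean()   # E[(X - E[X])^2], variance
    print(name, mean, mad, var)
# Both instruments have mean a, but instrument B shows zero spread.
```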
- Variance is a measure of the difference between the actual values and the expected value.
- When the data are spread out (that is, they fluctuate widely around the average), the sum of the squared differences between each data point and the average is large, and the variance is large; when the data are concentrated, that sum of squares is small. Therefore, the larger the variance, the greater the fluctuation of the data; the smaller the variance, the smaller the fluctuation. [6]
- Variance thus expresses not only how far a sample deviates from its mean but also the degree of fluctuation within the sample. It can also be understood as the expected fluctuation of the sample values relative to one another, a conclusion that holds for this second-order statistical moment. [7]
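The "fluctuation relative to one another" reading can be made precise with an identity of my own formulation, not stated in the source: the population variance equals half the mean squared difference over all ordered pairs of sample values, Var(X) = ½ · mean over i, j of (xᵢ − xⱼ)². A quick numeric check:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)

# Half the mean squared difference over all ordered pairs (i, j)...
pairwise = 0.5 * ((x[:, None] - x[None, :]) ** 2).mean()

# ...equals the population variance (ddof=0).
assert np.isclose(pairwise, x.var(ddof=0))
print(pairwise, x.var(ddof=0))
```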