Uncertainty of measurement
The uncertainty of measurement denotes the upper and lower limits of the expected deviation from the value measured during a measurement (the actual value).
This measured value is then compared with the nominal value (the expected value) for the measuring task and can be assessed by specifying and considering tolerance limits, taking agreed decision rules into account.
Table of contents
- What is uncertainty of measurement?
- Dealing with uncertainties of measurement confidently
- How is the uncertainty of measurement determined?
- What is the importance of uncertainty of measurement in practice?
What is uncertainty of measurement?
Definition according to ISO standard ISO/IEC Guide 98-3 (Guide to the Expression of Uncertainty in Measurement (GUM)): "Uncertainty of measurement: parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand."
The actual value from a measurement, for example the pH value of a liquid or the diameter of an item inspected, is termed the measured value or actual value. This value is stated in its units of measure (e.g. in cm). The value to be met is termed the nominal value or true value.
It is the nature of all physical materials that they never behave exactly the same and exactly as specified. Because of these imprecisely quantifiable known and unknown deviations, the true value can never be determined with absolute accuracy in practice. Instead, several measurements will yield slightly different results. This phenomenon is termed scatter, and the measured values are therefore subject to a measurement deviation (random error) by which they differ from the nominal value/true value.

The scatter, or standard deviation, is thus a measure of the random distribution of the measurement results around the mean value. The extent of the scatter specifies the precision of the measurements, or more accurately the precision of the measurement method. The deviation of the actual measurement results from the true value, on the other hand, is termed correctness.
Accuracy is to be considered the generic term for precision and correctness and consists of a combination of the systematic and the random measurement deviation.
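The distinction can be made concrete with a small sketch: precision corresponds to the scatter of repeated measurements around their mean, correctness to the offset of that mean from the true value. The readings below are hypothetical, assumed only for illustration:

```python
import statistics

# Hypothetical repeat measurements of a nominal 25.00 mm diameter
true_value = 25.00
measurements = [25.02, 25.03, 25.01, 25.04, 25.02, 25.03]

mean_value = statistics.mean(measurements)   # centre of the scatter
precision = statistics.stdev(measurements)   # scatter -> precision
bias = mean_value - true_value               # offset from true value -> correctness

print(f"mean      = {mean_value:.3f} mm")    # 25.025 mm
print(f"precision = {precision:.4f} mm")     # ~0.0105 mm (standard deviation)
print(f"offset    = {bias:+.3f} mm")         # +0.025 mm (systematic part)
```

Here the instrument scatters only slightly (good precision) but reads consistently high (poor correctness) — the two effects are independent.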
How does Quality Analysis deal with uncertainties of measurement?
At Quality Analysis we have internal rules for dealing with uncertainties of measurement. In this way, we create transparency for our customers and ensure that it is clear to them what our measurement results mean.
Measurement deviation, uncertainty of measurement and measurement error
What is the difference between measurement error, uncertainty of measurement and measurement deviation?
Measurement error is an older term that is sometimes used as a synonym for uncertainty of measurement; at other times, measurement deviation is also called measurement error. It can also refer to errors due to negligence, for instance incorrect handling of the sample by the user, errors during the calibration of a test instrument, or incorrect adjustment. Because the term measurement error is ambiguous, it is no longer used in official standards.
Nevertheless, this word continues to appear in everyday use. Every measurement method and every type of measurement acquisition involves a system-related measurement deviation. This deviation is always the same: it has no effect on the precision of a value, but it shifts the results away from the true value, always in the same direction and by the same amount. In everyday use this is termed a systematic error (strictly, "systematic deviation" would be correct). As a consequence, the measurement method and/or the measuring instruments are required as meta-information so that different measurement series are comparable.
Uncertainty of measurement
The uncertainty of measurement describes the approximate amount by which the actual value differs from the true value. The uncertainty is always positive and is stated without a sign. As a rule it follows a normal distribution and can be determined using statistical methods and/or interlaboratory comparisons and proficiency tests.
The measurement deviation is defined as the difference between the current actual value and the true value. In other words: the current actual value consists of the true value plus systematic deviation(s) plus random errors. Here it is to be noted that the uncertainty of measurement is always stated as a positive value. During the calculations it must therefore be taken into account whether this value must be added or subtracted.
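This decomposition can be written out directly. All numbers below are hypothetical; in practice the true value and the individual error components are unknown and can only be estimated:

```python
# Hypothetical decomposition of a single reading; the true value and the
# error components are assumed here purely for illustration.
true_value = 10.000
systematic = 0.012      # e.g. a constant offset of the instrument
random_error = -0.004   # scatter of this particular reading

actual_value = true_value + systematic + random_error
deviation = actual_value - true_value   # signed measurement deviation

print(f"actual value = {actual_value:.3f}")  # 10.008
print(f"deviation    = {deviation:+.3f}")    # +0.008
```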
Dealing with uncertainties of measurement confidently
Uncertainty of measurement is therefore an inherent factor in every measurement and must be taken into account each time while determining measurands, for instance manufacturing dimensions. The methods for determining the uncertainty are based either on purely statistical calculations (type A) or on experience and information from calibration certificates and manuals (type B). The correct classification of this information requires practical experience. This is particularly the case if general experience about the behaviour or the characteristics of the sample material is relied upon for uncertainties of measurement of type B.
An accredited test laboratory such as Quality Analysis is able to determine uncertainties of measurement correctly and to give you the certainty you need. We can look back at many years of experience in all specialist areas. Our customers profit from our experience, in particular if the uncertainty of measurement is to be determined based on a method in ISO/IEC Guide 98-3 type B when sound knowledge about the characteristics of the material inspected and the inspection method is essential.
So that we can fulfil the expectations of our customers in relation to an uncertainty of measurement as low as possible, we at Quality Analysis can draw on many years of experience in all specialist areas for the selection of inspection methods and measuring instruments. This competence, combined with our modern, highly precise test instruments, ensures precise, correct results with the lowest possible uncertainties of measurement, irrespective of whether in optical metrology or industrial metrology.
How is the uncertainty of measurement determined?
The individual components of the input variables for the measurement must each be determined separately to ascertain the uncertainty of a measurement. This task can be undertaken in two ways, specified and described in ISO/IEC Guide 98-3 (the so-called Guide to the Expression of Uncertainty in Measurement, "GUM") as type A and type B:
The determination of the uncertainty of measurement according to type A is a statistical analysis. Here a measurement is undertaken several times to obtain several independent measured values.
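In the simplest case, this type A evaluation reduces to the experimental standard deviation of the mean, u = s/√n. A sketch with hypothetical repeat readings:

```python
import math
import statistics

# Hypothetical repeat readings of the same quantity
readings = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]
n = len(readings)

s = statistics.stdev(readings)   # standard deviation of single readings
u_a = s / math.sqrt(n)           # Type A standard uncertainty of the mean

print(f"mean = {statistics.mean(readings):.4f}")  # 10.0000
print(f"s    = {s:.4f}")                          # 0.0200
print(f"u_A  = {u_a:.4f}")                        # 0.0071
```

Because u_A shrinks with √n, repeating the measurement is the most direct lever for reducing the type A contribution.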
Uncertainty of measurement of type B is not based on statistical parameters, but instead on experience from previous measurements and/or general knowledge: the characteristics and behaviour of materials, the inhomogeneity of samples, the characteristics of the inspection method and the sampling, the effects of ambient conditions (e.g. climate) and much more. Type B also includes the use of corresponding values from documentation such as calibration certificates, manuals or the manufacturer's information about the accuracy of the test instrument.
Further actions for the calculation of the uncertainty of measurement
To determine the total uncertainty of measurement from all the individual standard uncertainties, each individual uncertainty must be described using a probability function or statistical model. The individual uncertainties are then placed in relation to each other and combined, taking the related sensitivities into account (calculations for type B).
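Combining individual standard uncertainties weighted by their sensitivity coefficients amounts to a root sum of squares. A sketch of such an uncertainty budget, with purely hypothetical contributions:

```python
import math

# Hypothetical uncertainty budget:
# (sensitivity coefficient c_i, standard uncertainty u_i)
budget = [
    (1.0, 0.005),  # e.g. repeatability (type A)
    (1.0, 0.003),  # e.g. calibration of the reference (type B)
    (0.5, 0.008),  # e.g. temperature influence, damped by its sensitivity (type B)
]

# Combined standard uncertainty: root sum of squares of c_i * u_i
u_c = math.sqrt(sum((c * u) ** 2 for c, u in budget))
print(f"u_c = {u_c:.4f}")  # 0.0071
```

The quadrature sum assumes the contributions are uncorrelated; correlated inputs would require additional covariance terms.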
According to method A, the standard deviation is calculated from the measured values of a large number of repeat measurements using the underlying statistical model. Multiplying this standard deviation by the coverage factor k=2 then yields the expanded uncertainty of measurement.
For example: a standard uncertainty of u=0.1 mm signifies ±0.1 mm in practice. Multiplied by the coverage factor k=2, the expanded uncertainty of measurement is U=0.2 mm, which corresponds to an interval of ±0.2 mm.
As a rule, a confidence level of 95% is used. Usually a normal (Gaussian) distribution is assumed; however, other distributions, e.g. Student's t-distribution, are also common.
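The expansion step itself is a single multiplication; assuming a normal distribution, k=2 corresponds to roughly 95% coverage:

```python
u = 0.1    # standard uncertainty in mm (the example above)
k = 2      # coverage factor, ~95 % coverage for a normal distribution
U = k * u  # expanded uncertainty

print(f"U = {U:.1f} mm, i.e. an interval of +/-{U:.1f} mm")  # 0.2 mm
```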
What is the importance of uncertainty of measurement in practice?
Because the uncertainty of measurement is not a fixed value, but is instead an interval that can deviate upward or downward from the expected value, it is not possible to simply "eliminate" the uncertainty of measurement from the measurement result. Instead the deviation must be taken into account in the form of a tolerance.
An example: an automotive manufacturer specifies internally a particular tolerance for the diameter of engine pistons. The calculated uncertainty of measurement is subtracted from this value, resulting in a new - lower - tolerance. In this way it is ensured that the pistons fit in the engine despite the uncertainty of measurement. Pistons of the corresponding diameter are then ordered from a supplier based on this new tolerance. During quality control, the supplier must in turn take an uncertainty of measurement into account, which means that the supplier reduces the tolerance in production once again.
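This narrowing of the acceptance zone (a guard-band approach, in the spirit of decision rules such as those in ISO 14253-1) can be sketched with hypothetical numbers:

```python
# Hypothetical piston diameter tolerance (all values in mm)
nominal = 90.000
tolerance = 0.020   # permitted deviation +/- from nominal
U = 0.004           # expanded uncertainty of the measurement process

# Guard band: shrink the acceptance zone by U at each limit
acceptance = tolerance - U

def conforms(measured: float) -> bool:
    """Accept only readings whose uncertainty interval lies fully in tolerance."""
    return abs(measured - nominal) <= acceptance

print(conforms(90.015))  # True  - safely inside the reduced zone
print(conforms(90.018))  # False - within tolerance, but not provably so
```

The second reading lies inside the original tolerance, yet its uncertainty interval reaches beyond the limit, so conformity cannot be demonstrated - exactly the situation the reduced tolerance guards against.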
Of course, in practice the tolerances should be kept as low as possible so that the pistons seal correctly. For this reason it is in turn imperative to keep the uncertainty of measurement as low as possible and to determine it as precisely as possible. The challenge in all this is that numerous parameters affect the uncertainty of measurement, and determining them as accurately as possible requires considerable experience.