Industries Needs: November 2021

Tuesday, November 30, 2021

Errors during the measurement process

Rogue data points

In a set of measurements subject to random error, measurements with a very large error sometimes occur at random and unpredictable times, where the magnitude of the error is much larger than could reasonably be attributed to the expected random variations in measurement value. Sources of such abnormal error include sudden transient voltage surges on the mains power supply and incorrect recording of data (e.g. writing down 146.1 when the actual measured value was 164.1). It is accepted practice in such cases to discard these rogue measurements, and a threshold level of a ±3σ deviation is often used to determine what should be discarded. It is extremely rare for measurement errors to exceed ±3σ limits when only normal random effects are affecting the measured value.
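As a minimal sketch of the ±3σ screening rule (the function name and the readings are illustrative, not from the text), note that σ must be estimated from a reasonably large set of readings for the rule to work, since a rogue point in a very small set inflates the estimate of σ itself:

```python
from statistics import mean, stdev

def discard_rogue(measurements, threshold=3.0):
    """Return the measurements whose deviation from the mean is
    within threshold * standard deviation (the +/-3 sigma rule)."""
    m = mean(measurements)
    s = stdev(measurements)
    return [x for x in measurements if abs(x - m) <= threshold * s]

# 20 genuine readings near 164.0, plus 164.1 mis-recorded as 146.1
readings = [164.0, 164.1, 163.9, 164.2, 163.8, 164.0, 164.1, 163.9,
            164.3, 163.7, 164.0, 164.2, 163.8, 164.1, 163.9, 164.0,
            164.1, 163.9, 164.2, 163.8, 146.1]
print(discard_rogue(readings))
```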

Special case when the number of measurements is small

When the number of measurements of a quantity is particularly small and statistical analysis of the distribution of error values is required, problems can arise when using standard Gaussian tables in terms of z as defined in equation (3.16) because the mean of only a small number of measurements may deviate significantly from the true measurement value. In response to this, an alternative distribution function called the Student-t distribution can be used which gives a more accurate prediction of the error distribution when the number of samples is small. This is discussed more fully in Miller (1990).
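As an illustration of how much the confidence interval must widen for small samples, the two-tailed 95% critical values of the Student-t distribution can be compared with the Gaussian value of 1.96 (the table values below are standard published figures, hard-coded here rather than computed):

```python
# Two-tailed 95% critical values of the Student-t distribution for a few
# degrees of freedom, versus the Gaussian (large-sample) value of 1.96.
t_95 = {2: 4.303, 5: 2.571, 10: 2.228, 30: 2.042}
GAUSSIAN_95 = 1.960

for dof, t in sorted(t_95.items()):
    widening = t / GAUSSIAN_95
    print(f"n-1 = {dof:2d}: t = {t:.3f}  ({widening:.2f}x wider than Gaussian)")
```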

3.6 Aggregation of measurement system errors

Errors in measurement systems often arise from two or more different sources, and these must be aggregated in the correct way in order to obtain a prediction of the total likely error in output readings from the measurement system. Two different forms of aggregation are required. Firstly, a single measurement component may have both systematic and random errors and, secondly, a measurement system may consist of several measurement components that each have separate errors.

3.6.1 Combined effect of systematic and random errors

If a measurement is affected by both systematic and random errors that are quantified as ±x (systematic errors) and ±y (random errors), some means of expressing the combined effect of both types of error is needed. One way of expressing the combined error would be to sum the two separate components of error, i.e. to say that the total possible error is e = ±(x + y). However, a more usual course of action is to express the likely maximum error as follows:

                                                        e = ±√(x² + y²)
It can be shown (ANSI/ASME, 1985) that this is the best expression for the error statistically, since it takes account of the reasonable assumption that the systematic and random errors are independent and so are unlikely to both be at their maximum or minimum value at the same time.
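The root-sum-square aggregation can be sketched in a few lines; the example values are illustrative only:

```python
from math import sqrt

def combined_error(systematic, random_):
    """Root-sum-square aggregation of independent systematic and
    random error components: e = sqrt(x**2 + y**2)."""
    return sqrt(systematic**2 + random_**2)

# With x = 3 and y = 4 units the combined error is 5, noticeably less
# than the pessimistic linear sum of 7
print(combined_error(3.0, 4.0))
```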

 

Sunday, November 28, 2021

Errors during the measurement process

 

Distribution of manufacturing tolerances

Many aspects of manufacturing processes are subject to random variations caused by factors that are similar to those that cause random errors in measurements. In most cases, these random variations in manufacturing, which are known as tolerances, fit a Gaussian distribution, and the previous analysis of random measurement errors can be applied to analyse the distribution of these variations in manufacturing parameters.

Example 3.5

An integrated circuit chip contains 10⁵ transistors. The transistors have a mean current gain of 20 and a standard deviation of 2. Calculate the following:

(a) the number of transistors with a current gain between 19.8 and 20.2

(b) the number of transistors with a current gain greater than 17

Solution

(a) The proportion of transistors where 19.8 < gain < 20.2 is:

                    P[X < 20.2] - P[X < 19.8] = P[z < 0.1] - P[z < -0.1] (for z = (X - µ)/σ)

For X = 20.2; z = 0.1 and for X = 19.8; z = - 0.1

From tables, P[z < 0.1] = 0.5398 and thus P[z < - 0.1] = 1 - P[z < 0.1] = 1 - 0.5398 = 0.4602

Hence, P[z < 0.1] - P[z < -0.1] = 0.5398 - 0.4602 = 0.0796

Thus 0.0796 × 10⁵ = 7960 transistors have a current gain in the range from 19.8 to 20.2.

(b) The number of transistors with gain >17 is given by:

                            P[X > 17] = 1 - P[X < 17] = 1 - P[z < -1.5] = P[z < +1.5] = 0.9332

Thus, 93.32%, i.e. 93 320 transistors have a gain >17.
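Both answers can be checked without tables by computing F(z) from the error function; full-precision evaluation differs slightly from the four-figure table values (giving about 7966 for part (a) rather than 7960):

```python
from math import erf, sqrt

def phi(z):
    """Standard Gaussian c.d.f. F(z) computed from the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma, n = 20.0, 2.0, 10**5

# (a) proportion with gain between 19.8 and 20.2
p_a = phi((20.2 - mu) / sigma) - phi((19.8 - mu) / sigma)
# (b) proportion with gain greater than 17
p_b = 1.0 - phi((17.0 - mu) / sigma)

print(round(p_a * n), round(p_b * n))
```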

Goodness of fit to a Gaussian distribution

All of the analysis of random deviations presented so far only applies when the data being analysed belongs to a Gaussian distribution. Hence, the degree to which a set of data fits a Gaussian distribution should always be tested before any analysis is carried out. This test can be carried out in one of three ways:

(a) Simple test: The simplest way to test for Gaussian distribution of data is to plot a histogram and look for a ‘Bell-shape’ of the form shown earlier in Figure 3.5. Deciding whether or not the histogram confirms a Gaussian distribution is a matter of judgement. For a Gaussian distribution, there must always be approximate symmetry about the line through the centre of the histogram, the highest point of the histogram must always coincide with this line of symmetry, and the histogram must get progressively smaller either side of this point. However, because the histogram can only be drawn with a finite set of measurements, some deviation from the perfect shape of the histogram as described above is to be expected even if the data really is Gaussian.

(b) Using a normal probability plot: A normal probability plot involves dividing the data values into a number of ranges and plotting the cumulative probability of summed data frequencies against the data values on special graph paper. This line should be a straight line if the data distribution is Gaussian. However, careful judgement is required since only a finite number of data values can be used and therefore the line drawn will not be entirely straight even if the distribution is Gaussian. Considerable experience is needed to judge whether the line is straight enough to indicate a Gaussian distribution. This will be easier to understand if the data in measurement set C is used as an example. Using the same five ranges as used to draw the histogram, the following table is first drawn:


The normal probability plot drawn from the above table is shown in Figure 3.9. This is sufficiently straight to indicate that the data in measurement set C is Gaussian.
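The graph-paper construction can also be reproduced numerically: plot the ordered data against theoretical Gaussian quantiles and look for a straight line. A sketch using Python's statistics.NormalDist follows; the plotting-position formula (i − 0.5)/n is one common convention among several:

```python
from statistics import NormalDist

# Measurement set C from section 3.5.1
set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408, 406,
         410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

# Ordered data against theoretical Gaussian quantiles; for Gaussian data
# the (quantile, value) points should fall close to a straight line.
data = sorted(set_c)
n = len(data)
quantiles = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

for q, x in zip(quantiles, data):
    print(f"{q:+.2f}  {x}")
```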

(c) Chi-squared test: A further test that can be applied is based on the chi-squared (χ²) distribution. This is beyond the scope of this book but full details can be found in Caulcott (1973).



 

Standard error of the mean

The foregoing analysis has examined the way in which measurements with random errors are distributed about the mean value. However, we have already observed that some error remains between the mean value of a set of measurements and the true value, i.e. averaging a number of measurements will only yield the true value if the number of measurements is infinite. If several subsets are taken from an infinite data population, then, by the central limit theorem, the means of the subsets will be distributed about the mean of the infinite data set. The error between the mean of a finite data set and the true measurement value (mean of the infinite data set) is defined as the standard error of the mean, α. This is calculated as:

                                                    α = σ/√n                                                      (3.18)

α tends towards zero as the number of measurements in the data set expands towards infinity. The measurement value obtained from a set of n measurements, x1, x2, … xn,

can then be expressed as:

                                                          x = xmean ± α

For the data set C of length measurements used earlier, n = 23, σ = 1.88 and α = 0.39. The length can therefore be expressed as 406.5 ± 0.4 (68% confidence limit). However, it is more usual to express measurements with 95% confidence limits (±2σ boundaries). In this case, 2σ = 3.76, 2α = 0.78 and the length can be expressed as 406.5 ± 0.8 (95% confidence limits).
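The figures quoted for set C can be reproduced with a short script using Python's statistics module:

```python
from math import sqrt
from statistics import mean, stdev

# Measurement set C from section 3.5.1
set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408, 406,
         410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

sigma = stdev(set_c)               # standard deviation, ~1.88
alpha = sigma / sqrt(len(set_c))   # standard error of the mean, eq. (3.18)

print(f"mean = {mean(set_c):.1f}, sigma = {sigma:.2f}, alpha = {alpha:.2f}")
print(f"68% limits: {mean(set_c):.1f} +/- {alpha:.1f}")
print(f"95% limits: {mean(set_c):.1f} +/- {2 * alpha:.1f}")
```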

Estimation of random error in a single measurement

In many situations where measurements are subject to random errors, it is not practical to take repeated measurements and find the average value. Also, the averaging process becomes invalid if the measured quantity does not remain at a constant value, as is usually the case when process variables are being measured. Thus, if only one measurement can be made, some means of estimating the likely magnitude of error in it is required. The normal approach to this is to calculate the error within 95% confidence limits, i.e. to calculate the value of the deviation D such that 95% of the area under the probability curve lies within limits of ±D. These limits correspond to a deviation of ±1.96σ. Thus, it is necessary to maintain the measured quantity at a constant value whilst a number of measurements are taken in order to create a reference measurement set from which σ can be calculated. Subsequently, the maximum likely deviation in a single measurement can be expressed as: Deviation = ±1.96σ. However, this only expresses the maximum likely deviation of the measurement from the calculated mean of the reference measurement set, which is not the true value as observed earlier. Thus the calculated value for the standard error of the mean has to be added to the likely maximum deviation value. Thus, the maximum likely error in a single measurement can be expressed as:

                                         Error = ±(1.96σ + α)                                                          (3.19)

Example 3.4

Suppose that a standard mass is measured 30 times with the same instrument to create a reference data set, and the calculated values of σ and α are σ = 0.43 and α = 0.08. If the instrument is then used to measure an unknown mass and the reading is 105.6 kg, how should the mass value be expressed?

Solution Using (3.19), 1.96σ + α = 0.92. The mass value should therefore be expressed as: 105.6 ± 0.9 kg.
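Example 3.4 amounts to a one-line application of equation (3.19):

```python
# Maximum likely error in a single measurement, equation (3.19):
# Error = +/-(1.96 * sigma + alpha)
sigma, alpha = 0.43, 0.08          # from the 30-reading reference set
error = 1.96 * sigma + alpha
reading = 105.6

print(f"{reading} +/- {error:.1f} kg")
```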

Before leaving this matter, it must be emphasized that the maximum error specified for a measurement is only specified for the confidence limits defined. Thus, if the maximum error is specified as ±1% with 95% confidence limits, this means that there is still 1 chance in 20 that the error will exceed ±1%.



Standard Gaussian tables

A standard Gaussian table, such as that shown in Table 3.1, tabulates F(z) for various values of z, where F(z) is given by:

                                    F(z) = [1/√(2π)] ∫-∞..z exp(-z²/2) dz                                    (3.16)

Thus, F(z) gives the proportion of data values that are less than or equal to z. This proportion is the area under the curve of F(z) against z that is to the left of z. Therefore, the expression given in (3.15) has to be evaluated as [F(z2) - F(z1)]. Study of Table 3.1 shows that F(z) = 0.5 for z = 0. This confirms that, as expected, the number of data values ≤ 0 is 50% of the total. This must be so if the data only has random errors. It will also be observed that Table 3.1, in common with most published standard Gaussian tables, only gives F(z) for positive values of z. For negative values of z, we can make use of the following relationship because the frequency distribution curve is normalized:

 

                                                        F(-z) = 1 - F(z)                                                              (3.17)

(F(-z) is the area under the curve to the left of (-z), i.e. it represents the proportion of data values ≤ -z.)
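Equation (3.17) and the tabulated values of F(z) can be reproduced with the error function available in most maths libraries; a short Python sketch:

```python
from math import erf, sqrt

def F(z):
    """Standard Gaussian c.d.f., the quantity tabulated in Table 3.1."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(F(0.0), 4))     # 0.5 exactly, as expected
print(round(F(1.0), 4))     # 0.8413
print(round(F(-1.0), 4))    # 0.1587 = 1 - F(1), equation (3.17)
```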

Example 3.3

How many measurements in a data set subject to random errors lie outside deviation boundaries of +σ and -σ, i.e. how many measurements have a deviation greater than |σ|?

Solution

The required number is represented by the sum of the two shaded areas in Figure 3.8. This can be expressed mathematically as:

                    P[E < -σ] + P[E > +σ] = P[z < -1] + P[z > +1]

Using (3.17), P[z < -1] = 1 - P[z < +1] = 1 - F(1) = 1 - 0.8413 = 0.1587, and similarly P[z > +1] = 1 - P[z < +1] = 0.1587.
(This last step is valid because the frequency distribution curve is normalized such that the total area under it is unity.)

Thus

                                    P[E < -σ] + P[E > +σ] = 0.1587 + 0.1587 = 0.3174 ≈ 32%

i.e. 32% of the measurements lie outside the ±σ boundaries, and therefore 68% of the measurements lie inside.

The above analysis shows that, for Gaussian-distributed data values, 68% of the measurements have deviations that lie within the bounds of ±σ. Similar analysis shows that boundaries of ±2σ contain 95.4% of data points, and extending the boundaries to ±3σ encompasses 99.7% of data points. The probability of any data point lying outside particular deviation boundaries can therefore be expressed by the following table.

Deviation boundaries    % of data points within boundary    Probability of any particular data point being outside boundary
±σ                      68.0%                               0.32
±2σ                     95.4%                               0.046
±3σ                     99.7%                               0.003
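These tail probabilities follow directly from F(z); a small sketch using the error function:

```python
from math import erf, sqrt

def outside(k):
    """Probability of a deviation beyond +/- k sigma for Gaussian data."""
    F = 0.5 * (1.0 + erf(k / sqrt(2.0)))
    return 2.0 * (1.0 - F)

for k in (1, 2, 3):
    print(f"+/-{k} sigma: {outside(k):.4f}")
```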
 

Saturday, November 27, 2021

Errors during the measurement process

 

Gaussian distribution

Measurement sets that only contain random errors usually conform to a distribution with a particular shape that is called Gaussian, although this conformance must always be tested (see the later section headed ‘Goodness of fit’). The shape of a Gaussian curve is such that the frequency of small deviations from the mean value is much greater than the frequency of large deviations. This coincides with the usual expectation in measurements subject to random errors that the number of measurements with a small error is much larger than the number of measurements with a large error. Alternative names for the Gaussian distribution are the Normal distribution or Bell-shaped distribution. A Gaussian curve is formally defined as a normalized frequency distribution that is symmetrical about the line of zero error and in which the frequency and magnitude of quantities are related by the expression:

                                    F(x) = [1/(σ√(2π))] exp[-(x - m)²/(2σ²)]                                    (3.11)

where m is the mean value of the data set x and the other quantities are as defined before. Equation (3.11) is particularly useful for analysing a Gaussian set of measurements and predicting how many measurements lie within some particular defined range. If the measurement deviations D are calculated for all measurements such that D = x - m, then the curve of deviation frequency F(D) plotted against deviation magnitude D is a Gaussian curve known as the error frequency distribution curve. The mathematical relationship between F(D) and D can then be derived by modifying equation (3.11) to give:

                                    F(D) = [1/(σ√(2π))] exp[-D²/(2σ²)]                                    (3.12)
The shape of a Gaussian curve is strongly influenced by the value of σ, with the width of the curve decreasing as σ becomes smaller. As a smaller σ corresponds with the typical deviations of the measurements from the mean value becoming smaller, this confirms the earlier observation that the mean value of a set of measurements gets closer to the true value as σ decreases.

 If the standard deviation is used as a unit of error, the Gaussian curve can be used to determine the probability that the deviation in any particular measurement in a Gaussian data set is greater than a certain value. By substituting the expression for F(D) in (3.12) into the probability equation (3.9), the probability that the error lies in a band between error levels D1 and D2 can be expressed as:

                    P(D1 ≤ D ≤ D2) = [1/(σ√(2π))] ∫D1..D2 exp[-D²/(2σ²)] dD                    (3.13)
Solution of this expression is simplified by the substitution:

                                                     z = D/σ                                                                       (3.14)

The effect of this is to change the error distribution curve into a new Gaussian distribution that has a standard deviation of one (σ = 1) and a mean of zero. This new form, shown in Figure 3.7, is known as a standard Gaussian curve, and the dependent



variable is now z instead of D. Equation (3.13) can now be re-expressed as:
                    P(D1 ≤ D ≤ D2) = P(z1 ≤ z ≤ z2) = [1/√(2π)] ∫z1..z2 exp(-z²/2) dz                    (3.15)
Unfortunately, neither equation (3.13) nor (3.15) can be solved analytically using tables of standard integrals, and numerical integration provides the only method of solution. However, in practice, the tedium of numerical integration can be avoided when analysing data because the standard form of equation (3.15), and its independence from the particular values of the mean and standard deviation of the data, means that standard Gaussian tables that tabulate F(z) for various values of z can be used.
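As an illustration of the numerical integration just mentioned, a simple trapezoidal rule applied to the standard Gaussian density reproduces the tabulated probabilities; the step count here is an arbitrary choice:

```python
from math import exp, pi, sqrt

def gaussian_pdf(z):
    """Standard Gaussian probability density (sigma = 1, mean = 0)."""
    return exp(-z * z / 2.0) / sqrt(2.0 * pi)

def integrate(f, a, b, steps=10000):
    """Simple trapezoidal integration of f between a and b."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

# Probability of a deviation between z = -1 and z = +1 (~0.68)
p = integrate(gaussian_pdf, -1.0, 1.0)
print(round(p, 4))
```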




3.5.2 Graphical data analysis techniques – frequency distributions

Graphical techniques are a very useful way of analysing the way in which random measurement errors are distributed. The simplest way of doing this is to draw a histogram, in which bands of equal width across the range of measurement values are defined and the number of measurements within each band is counted. Figure 3.5 shows a histogram for set C of the length measurement data given in section 3.5.1, in which the bands chosen are 2 mm wide. For instance, there are 11 measurements in the range between 405.5 and 407.5 and so the height of the histogram for this range is 11 units. Also, there are 5 measurements in the range from 407.5 to 409.5 and so the height of the histogram over this range is 5 units. The rest of the histogram is completed in a similar fashion. (N.B. The scaling of the bands was deliberately chosen so that no measurements fell on the boundary between different bands and caused ambiguity about which band to put them in.) Such a histogram has the characteristic shape shown by truly random data, with symmetry about the mean value of the measurements.
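The band counting described above can be reproduced in a few lines; the band edges are chosen as in the text so that no value falls on a boundary:

```python
# Measurement set C (section 3.5.1); bands 2 mm wide
set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408, 406,
         410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

edges = [401.5, 403.5, 405.5, 407.5, 409.5, 411.5]
counts = [sum(1 for x in set_c if lo < x < hi)
          for lo, hi in zip(edges, edges[1:])]
print(counts)
```

The middle band (405.5 to 407.5) holds 11 measurements and the band above it (407.5 to 409.5) holds 5, matching the histogram heights quoted in the text.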

As it is the actual value of measurement error that is usually of most concern, it is often more useful to draw a histogram of the deviations of the measurements



from the mean value rather than to draw a histogram of the measurements themselves. The starting point for this is to calculate the deviation of each measurement away from the calculated mean value. Then a histogram of deviations can be drawn by defining deviation bands of equal width and counting the number of deviation values in each band. This histogram has exactly the same shape as the histogram of the raw measurements except that the scaling of the horizontal axis has to be redefined in terms of the deviation values (these units are shown in brackets on Figure 3.5).

Let us now explore what happens to the histogram of deviations as the number of measurements increases. As the number of measurements increases, smaller bands can be defined for the histogram, which retains its basic shape but then consists of a larger number of smaller steps on each side of the peak. In the limit, as the number of measurements approaches infinity, the histogram becomes a smooth curve known as a frequency distribution curve as shown in Figure 3.6. The ordinate of this curve is the frequency of occurrence of each deviation value, F(D), and the abscissa is the magnitude of deviation, D.

The symmetry of Figures 3.5 and 3.6 about the zero deviation value is very useful for showing graphically that the measurement data only has random errors. Although these figures cannot easily be used to quantify the magnitude and distribution of the errors, very similar graphical techniques do achieve this. If the height of the frequency distribution curve is normalized such that the area under it is unity, then the curve in this form is known as a probability curve, and the height F(D) at any particular deviation magnitude D is known as the probability density function (p.d.f.). The condition that


the area under the curve is unity can be expressed mathematically as:

                                                    ∫-∞..+∞ F(D) dD = 1

The probability that the error in any one particular measurement lies between two levels D1 and D2 can be calculated by measuring the area under the curve contained between two vertical lines drawn through D1 and D2, as shown by the right-hand hatched area in Figure 3.6. This can be expressed mathematically as:

                                    P(D1 ≤ D ≤ D2) = ∫D1..D2 F(D) dD

Of particular importance for assessing the maximum error likely in any one measurement is the cumulative distribution function (c.d.f.). This is defined as the probability of observing a value less than or equal to D0, and is expressed mathematically as:
                                    P(D ≤ D0) = ∫-∞..D0 F(D) dD
Thus, the c.d.f. is the area under the curve to the left of a vertical line drawn through D0, as shown by the left-hand hatched area on Figure 3.6.

The deviation magnitude Dp corresponding with the peak of the frequency distribution curve (Figure 3.6) is the value of deviation that has the greatest probability. If the errors are entirely random in nature, then the value of Dp will equal zero. Any non-zero value of Dp indicates systematic errors in the data, in the form of a bias that is often removable by recalibration.

 


Friday, November 26, 2021

Errors during the measurement process


3.5.1 Statistical analysis of measurements subject to random errors

Mean and median values

The average value of a set of measurements of a constant quantity can be expressed as either the mean value or the median value. As the number of measurements increases, the difference between the mean value and median values becomes very small. However, for any set of n measurements x1, x2 … xn of a constant quantity, the most likely true value is the mean given by:
                                    xmean = (x1 + x2 + · · · + xn)/n                                    (3.4)
This is valid for all data sets where the measurement errors are distributed equally about the zero error value, i.e. where the positive errors are balanced in quantity and magnitude by the negative errors.

The median is an approximation to the mean that can be written down without having to sum the measurements. The median is the middle value when the measurements in the data set are written down in ascending order of magnitude. For a set of n measurements x1, x2 … xn of a constant quantity, written down in ascending order of magnitude, the median value is given by:
                                    xmedian = x(n+1)/2                                    (3.5)
Thus, for a set of 9 measurements x1, x2 … x9 arranged in order of magnitude, the median value is x5. For an even number of measurements, the median value is midway between the two centre values, i.e. for 10 measurements x1 … x10, the median value is given by: (x5 + x6)/2.

Suppose that the length of a steel bar is measured by a number of different observers and the following set of 11 measurements are recorded (units mm). We will call this measurement set A.

                          398 420 394 416 404 408 400 420 396 413 430                       (Measurement set A)

Using (3.4) and (3.5), mean = 409.0 and median = 408. Suppose now that the measurements are taken again using a better measuring rule, and with the observers taking more care, to produce the following measurement set B:

                                409 406 402 407 405 404 407 404 407 407 408                 (Measurement set B)

For these measurements, mean = 406.0 and median = 407. Which of the two measurement sets A and B, and the corresponding mean and median values, should we have most confidence in? Intuitively, we can regard measurement set B as being more reliable since the measurements are much closer together. In set A, the spread between the smallest (394) and largest (430) value is 36, whilst in set B, the spread is only 7.

 Thus, the smaller the spread of the measurements, the more confidence we have in the mean or median value calculated.

Let us now see what happens if we increase the number of measurements by extending measurement set B to 23 measurements. We will call this measurement set C.

                      409 406 402 407 405 404 407 404 407 407 408 406 410 406 405 408

                                   406 409 406 405 409 406 407                                             (Measurement set C)

Now, mean = 406.5 and median = 406.

This confirms our earlier statement that the median value tends towards the mean value as the number of measurements increases.
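The mean and median values quoted for the three measurement sets can be checked with Python's statistics module:

```python
from statistics import mean, median

set_a = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_b = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]
set_c = set_b + [406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

for name, data in (("A", set_a), ("B", set_b), ("C", set_c)):
    print(f"set {name}: mean = {mean(data):.1f}, median = {median(data)}")
```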

Standard deviation and variance

Expressing the spread of measurements simply as the range between the largest and smallest value is not in fact a very good way of examining how the measurement values are distributed about the mean value. A much better way of expressing the distribution is to calculate the variance or standard deviation of the measurements. The starting point for calculating these parameters is to calculate the deviation (error) di of each measurement xi from the mean value xmean:

                                    di = xi - xmean

The variance V and the standard deviation σ are then calculated from these deviations:

                                    V = (d1² + d2² + · · · + dn²)/(n - 1);    σ = √V
Example 3.2

Calculate V and σ for measurement sets A, B and C above.

Solution

First, draw a table of measurements and deviations for set A (mean = 409 as calculated earlier):


Note that the smaller values of V and σ for measurement set B compared with A correspond with the respective size of the spread in the range between maximum and minimum values for the two sets.

Thus, as V and σ decrease for a measurement set, we are able to express greater confidence that the calculated mean or median value is close to the true value, i.e. that the averaging process has reduced the random error value close to zero.

Comparing V and σ for measurement sets B and C, V and σ get smaller as the number of measurements increases, confirming that confidence in the mean value increases as the number of measurements increases.

We have observed so far that random errors can be reduced by taking the average (mean or median) of a number of measurements. However, although the mean or median value is close to the true value, it would only become exactly equal to the true value if we could average an infinite number of measurements. As we can only make a finite number of measurements in a practical situation, the average value will still have some error. This error can be quantified as the standard error of the mean, which will be discussed in detail a little later. However, before that, the subject of graphical analysis of random measurement errors needs to be covered. 
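The variance and standard deviation figures for the three sets can be verified in the same way; note that statistics.variance and statistics.stdev use the (n - 1) divisor:

```python
from statistics import stdev, variance

set_a = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_b = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]
set_c = set_b + [406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

for name, data in (("A", set_a), ("B", set_b), ("C", set_c)):
    print(f"set {name}: V = {variance(data):.1f}, sigma = {stdev(data):.2f}")
```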



 

3.3.4 Calibration

Instrument calibration is a very important consideration in measurement systems and calibration procedures are considered in detail in Chapter 4. All instruments suffer drift in their characteristics, and the rate at which this happens depends on many factors, such as the environmental conditions in which instruments are used and the frequency of their use. Thus, errors due to instruments being out of calibration can usually be rectified by increasing the frequency of recalibration.

 

3.3.5 Manual correction of output reading

In the case of errors that are due either to system disturbance during the act of measurement or due to environmental changes, a good measurement technician can substantially reduce errors at the output of a measurement system by calculating the effect of such systematic errors and making appropriate correction to the instrument readings. This is not necessarily an easy task, and requires all disturbances in the measurement system to be quantified. This procedure is carried out automatically by intelligent instruments.

 

3.3.6 Intelligent instruments

Intelligent instruments contain extra sensors that measure the value of environmental inputs and automatically compensate the value of the output reading. They have the ability to deal very effectively with systematic errors in measurement systems, and errors can be attenuated to very low levels in many cases. A more detailed analysis of intelligent instruments can be found in Chapter 9.

 

3.4 Quantification of systematic errors

Once all practical steps have been taken to eliminate or reduce the magnitude of systematic errors, the final action required is to estimate the maximum remaining error that may exist in a measurement due to systematic errors. Unfortunately, it is not always possible to quantify exact values of a systematic error, particularly if measurements are subject to unpredictable environmental conditions. The usual course of action is to assume mid-point environmental conditions and specify the maximum measurement error as ±x% of the output reading to allow for the maximum expected deviation in environmental conditions away from this mid-point. Data sheets supplied by instrument manufacturers usually quantify systematic errors in this way, and such figures take account of all systematic errors that may be present in output readings from the instrument.

 

3.5 Random errors

Random errors in measurements are caused by unpredictable variations in the measurement system. They are usually observed as small perturbations of the measurement either side of the correct value, i.e. positive errors and negative errors occur in approximately equal numbers for a series of measurements made of the same constant quantity. Therefore, random errors can largely be eliminated by calculating the average of a number of repeated measurements, provided that the measured quantity remains constant during the process of taking the repeated measurements. This averaging process of repeated measurements can be done automatically by intelligent instruments, as discussed in Chapter 9. The degree of confidence in the calculated mean/median values can be quantified by calculating the standard deviation or variance of the data, these being parameters that describe how the measurements are distributed about the mean value/median. All of these terms are explained more fully in section 3.5.1.

3.3.3 High-gain feedback

The benefit of adding high-gain feedback to many measurement systems is illustrated by considering the case of the voltage-measuring instrument whose block diagram is shown in Figure 3.3. In this system, the unknown voltage Ei is applied to a motor of torque constant Km, and the induced torque turns a pointer against the restraining action of a spring with spring constant Ks. The effect of environmental inputs on the



motor and spring constants is represented by variables Dm and Ds. In the absence of environmental inputs, the displacement of the pointer X0 is given by: X0 = KmKsEi. However, in the presence of environmental inputs, both Km and Ks change, and the relationship between X0 and Ei can be affected greatly. Therefore, it becomes difficult or impossible to calculate Ei from the measured value of X0. Consider now what happens if the system is converted into a high-gain, closed-loop one, as shown in Figure 3.4, by adding an amplifier of gain constant Ka and a feedback device with gain constant Kf. Assume also that the effect of environmental inputs on the values of Ka and Kf are represented by Da and Df. The feedback device feeds back a voltage E0 proportional to the pointer displacement X0. This is compared with the unknown voltage Ei by a comparator and the error is amplified. Writing down the equations of the system, we have:

                                    E0 = KfX0;    X0 = (Ei - E0)KaKmKs

Thus: X0 = (Ei - KfX0)KaKmKs, which can be rearranged as:

                                    X0 = [KaKmKs/(1 + KfKaKmKs)]Ei

Because Ka is very large (it is a high-gain amplifier), KfKaKmKs >> 1 and this reduces to: X0 = Ei/Kf.
 This is a highly important result because we have reduced the relationship between X0 and Ei to one that involves only Kf. The sensitivity of the gain constants Ka, Km and Ks to the environmental inputs Da, Dm and Ds has thereby been rendered irrelevant, and we only have to be concerned with one environmental input Df. Conveniently, it is usually easy to design a feedback device that is insensitive to environmental inputs: this is much easier than trying to make a motor or spring insensitive. Thus, high-gain feedback techniques are often a very effective way of reducing a measurement system’s sensitivity to environmental inputs. However, one potential problem that must be mentioned is that there is a possibility that high-gain feedback will cause instability in the system. Therefore, any application of this method must include careful stability analysis of the system.
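The insensitivity to environmental inputs can be illustrated numerically with the closed-loop relationship X0 = KaKmKsEi/(1 + KfKaKmKs); the gain-constant values below are illustrative, as none are given in the text:

```python
def pointer_displacement(Ei, Km, Ks, Ka, Kf):
    """Closed-loop displacement X0 = Ka*Km*Ks*Ei / (1 + Kf*Ka*Km*Ks)."""
    forward_gain = Ka * Km * Ks
    return forward_gain * Ei / (1.0 + Kf * forward_gain)

Ei, Kf = 5.0, 0.1

# Perturb the motor constant Km by 20%: with a high amplifier gain Ka the
# output barely moves, because X0 ~= Ei / Kf regardless of Km and Ks
for Km in (1.0, 1.2):
    x0 = pointer_displacement(Ei, Km=Km, Ks=2.0, Ka=10000.0, Kf=Kf)
    print(f"Km = {Km}: X0 = {x0:.4f} (ideal Ei/Kf = {Ei / Kf})")
```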


 

3.2.3 Wear in instrument components

Systematic errors can frequently develop over a period of time because of wear in instrument components. Recalibration often provides a full solution to this problem.

 

3.2.4 Connecting leads

In connecting together the components of a measurement system, a common source of error is the failure to take proper account of the resistance of connecting leads (or pipes in the case of pneumatically or hydraulically actuated measurement systems). For instance, in typical applications of a resistance thermometer, it is common to find that the thermometer is separated from other parts of the measurement system by perhaps 100 metres. The resistance of such a length of 20 gauge copper wire is 7 Ω, and there is a further complication that such wire has a temperature coefficient of 1 mΩ/°C. Therefore, careful consideration needs to be given to the choice of connecting leads. Not only should they be of adequate cross-section so that their resistance is minimized, but they should be adequately screened if they are thought likely to be subject to electrical or magnetic fields that could otherwise cause induced noise. Where screening is thought essential, then the routing of cables also needs careful planning. In one application in the author’s personal experience involving instrumentation of an electric-arc steel making furnace, screened signal-carrying cables between transducers on the arc furnace and a control room at the side of the furnace were initially corrupted by high amplitude 50 Hz noise. However, by changing the route of the cables between the transducers and the control room, the magnitude of this induced noise was reduced by a factor of about ten.

 

3.3 Reduction of systematic errors

The prerequisite for the reduction of systematic errors is a complete analysis of the measurement system that identifies all sources of error. Simple faults within a system, such as bent meter needles and poor cabling practices, can usually be readily and cheaply rectified once they have been identified. However, other error sources require more detailed analysis and treatment. Various approaches to error reduction are considered below.

 

3.3.1 Careful instrument design

Careful instrument design is the most useful weapon in the battle against environmental inputs, by reducing the sensitivity of an instrument to environmental inputs to as low a level as possible. For instance, in the design of strain gauges, the element should be constructed from a material whose resistance has a very low temperature coefficient (i.e. the variation of the resistance with temperature is very small). However, errors due to the way in which an instrument is designed are not always easy to correct, and a choice often has to be made between the high cost of redesign and the alternative of accepting the reduced measurement accuracy if redesign is not undertaken.

 

3.3.2 Method of opposing inputs

The method of opposing inputs compensates for the effect of an environmental input in a measurement system by introducing an equal and opposite environmental input that cancels it out. One example of how this technique is applied is in the type of millivoltmeter shown in Figure 3.2. This consists of a coil suspended in a fixed magnetic field produced by a permanent magnet. When an unknown voltage is applied to the coil, the magnetic field due to the current interacts with the fixed field and causes the coil (and a pointer attached to the coil) to turn. If the coil resistance Rcoil is sensitive to temperature, then any environmental input to the system in the form of a temperature change will alter the value of the coil current for a given applied voltage and so alter the pointer output reading. Compensation for this is made by introducing a compensating resistance Rcomp into the circuit, where Rcomp has a temperature coefficient that is equal in magnitude but opposite in sign to that of the coil. Thus, in response to an increase in temperature, Rcoil increases but Rcomp decreases, and so the total resistance remains approximately the same.
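The cancellation can be sketched numerically. All of the values below (100 Ω for each resistance, a coefficient of 0.004/°C) are illustrative assumptions, not taken from Figure 3.2:

```python
# Sketch of the method of opposing inputs: Rcomp has a temperature
# coefficient equal in magnitude but opposite in sign to Rcoil's, so
# the series total barely changes with temperature. All numeric values
# are illustrative assumptions.

def series_resistance(temp_rise, r_coil=100.0, r_comp=100.0, alpha=0.004):
    r_coil_t = r_coil * (1 + alpha * temp_rise)  # coil resistance rises
    r_comp_t = r_comp * (1 - alpha * temp_rise)  # compensator falls by the same amount
    return r_coil_t + r_comp_t

print(series_resistance(0))    # total at reference temperature
print(series_resistance(25))   # total after a 25 degC rise: unchanged
```

Exact cancellation holds only while the two coefficients remain equal in magnitude and linear over the temperature range; in practice the compensation is approximate, as the text's "remains approximately the same" indicates.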


 

3.2.2 Errors due to environmental inputs

An environmental input is defined as an apparently real input to a measurement system that is actually caused by a change in the environmental conditions surrounding the measurement system. The fact that the static and dynamic characteristics specified for measuring instruments are only valid for particular environmental conditions (e.g. of temperature and pressure) has already been discussed at considerable length in Chapter 2. These specified conditions must be reproduced as closely as possible during calibration exercises because, away from the specified calibration conditions, the characteristics of measuring instruments vary to some extent and cause measurement errors. The magnitude of this environment-induced variation is quantified by the two constants known as sensitivity drift and zero drift, both of which are generally included in the published specifications for an instrument. Such variations of environmental conditions away from the calibration conditions are sometimes described as modifying inputs to the measurement system because they modify the output of the system. When such modifying inputs are present, it is often difficult to determine how much of the output change in a measurement system is due to a change in the measured variable and how much is due to a change in environmental conditions. This is illustrated by the following example. Suppose we are given a small closed box and told that it may contain either a mouse or a rat. We are also told that the box weighs 0.1 kg when empty. If we put the box onto bathroom scales and observe a reading of 1.0 kg, this does not immediately tell us what is in the box because the reading may be due to one of three things:

(a) a 0.9 kg rat in the box (real input)

(b) an empty box with a 0.9 kg bias on the scales due to a temperature change (environmental input)

(c) a 0.4 kg mouse in the box together with a 0.5 kg bias (real + environmental inputs).

Thus, the magnitude of any environmental input must be measured before the value of the measured quantity (the real input) can be determined from the output reading of an instrument.

 In any general measurement situation, it is very difficult to avoid environmental inputs, because it is either impractical or impossible to control the environmental conditions surrounding the measurement system. System designers are therefore charged with the task of either reducing the susceptibility of measuring instruments to environmental inputs or, alternatively, quantifying the effect of environmental inputs and correcting for them in the instrument output reading. The techniques used to deal with environmental inputs and minimize their effect on the final output measurement follow a number of routes as discussed below.



3.2 Sources of systematic error

Systematic errors in the output of many instruments are due to factors inherent in the manufacture of the instrument arising out of tolerances in the components of the instrument. They can also arise due to wear in instrument components over a period of time. In other cases, systematic errors are introduced either by the effect of environmental disturbances or through the disturbance of the measured system by the act of measurement. These various sources of systematic error, and ways in which the magnitude of the errors can be reduced, are discussed below.

3.2.1 System disturbance due to measurement

Disturbance of the measured system by the act of measurement is a common source of systematic error. If we were to start with a beaker of hot water and wished to measure its temperature with a mercury-in-glass thermometer, then we would take the thermometer, which would initially be at room temperature, and plunge it into the water. In so doing, we would be introducing a relatively cold mass (the thermometer) into the hot water and a heat transfer would take place between the water and the thermometer. This heat transfer would lower the temperature of the water. Whilst the reduction in temperature in this case would be so small as to be undetectable by the limited measurement resolution of such a thermometer, the effect is finite and clearly establishes the principle that, in nearly all measurement situations, the process of measurement disturbs the system and alters the values of the physical quantities being measured.
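The thermometer example can be quantified with a simple heat balance: at equilibrium the final temperature is the heat-capacity-weighted average of the two initial temperatures. The water volume, the temperatures and the thermometer heat capacity below are illustrative assumptions; only the qualitative effect is taken from the text:

```python
# Heat-balance sketch of the thermometer example: final temperature
# after a room-temperature thermometer equilibrates with hot water.
# All numeric values are illustrative assumptions.

C_WATER = 0.2 * 4186      # J/K: 0.2 kg of water, c = 4186 J/(kg*K)
C_THERMOMETER = 2.0       # J/K: assumed heat capacity of a small thermometer
T_WATER, T_ROOM = 80.0, 20.0

# Weighted average of initial temperatures by heat capacity
t_final = (C_WATER * T_WATER + C_THERMOMETER * T_ROOM) / (C_WATER + C_THERMOMETER)
print(f"water cools from {T_WATER} to {t_final:.2f} degC")
```

With these assumed values the water cools by roughly 0.14 °C: finite, but below the resolution of a typical mercury-in-glass thermometer, exactly as the text argues.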

A particularly important example of this occurs with the orifice plate. This is placed into a fluid-carrying pipe to measure the flow rate, which is a function of the pressure that is measured either side of the orifice plate. This measurement procedure causes a permanent pressure loss in the flowing fluid. The disturbance of the measured system can often be very significant.

Thus, as a general rule, the process of measurement always disturbs the system being measured. The magnitude of the disturbance varies from one measurement system to the next and is affected particularly by the type of instrument used for measurement. Minimizing the disturbance of measured systems is an important consideration in instrument design. However, an accurate understanding of the mechanisms of system disturbance is a prerequisite for this.

Measurements in electric circuits

In analysing system disturbance during measurements in electric circuits, Thévenin’s theorem (see Appendix 3) is often of great assistance. For instance, consider the circuit shown in Figure 3.1(a) in which the voltage across resistor R5 is to be measured by a voltmeter with resistance Rm. Here, Rm acts as a shunt resistance across R5, decreasing the resistance between points AB and so disturbing the circuit. Therefore, the voltage Em measured by the meter is not the value of the voltage E0 that existed prior to measurement. The extent of the disturbance can be assessed by calculating the open-circuit voltage E0 and comparing it with Em.

Thévenin’s theorem allows the circuit of Figure 3.1(a) comprising two voltage sources and five resistors to be replaced by an equivalent circuit containing a single resistance and one voltage source, as shown in Figure 3.1(b). For the purpose of defining the equivalent single resistance of a circuit by Thévenin’s theorem, all voltage sources are represented just by their internal resistance, which can be approximated to zero, as shown in Figure 3.1(c). Analysis proceeds by calculating the equivalent resistances of sections of the circuit and building these up until the required equivalent resistance of the whole of the circuit is obtained. Starting at C and D, the circuit to the left of C and D consists of a series pair of resistances (R1 and R2) in parallel with R3, and the equivalent resistance can be written as:

                    1/RCD = 1/(R1 + R2) + 1/R3  or  RCD = (R1 + R2)R3/(R1 + R2 + R3)

Moving now to A and B, the circuit to the left consists of a pair of series resistances (RCD and R4) in parallel with R5. The equivalent circuit resistance RAB can thus be written as:

                          1/RAB = 1/(RCD + R4) + 1/R5  or  RAB = (R4 + RCD)R5/(R4 + RCD + R5)

Substituting for RCD using the expression derived previously, we obtain:

 RAB = [{(R1 + R2)R3/(R1 + R2 + R3)} + R4]R5/[{(R1 + R2)R3/(R1 + R2 + R3)} + R4 + R5]     (3.1)

Defining I as the current flowing in the circuit when the measuring instrument is connected to it, we can write:

                                                         I = E0 /(RAB + Rm) ,

and the voltage measured by the meter is then given by:

                                                       Em = RmE0 /(RAB + Rm) .

In the absence of the measuring instrument and its resistance Rm, the voltage across AB would be the equivalent circuit voltage source whose value is E0. The effect of measurement is therefore to reduce the voltage across AB by the ratio given by:

                                                        Em/E0 = Rm/(RAB + Rm)                    (3.2)

It is thus obvious that as Rm gets larger, the ratio Em/E0 gets closer to unity, showing that the design strategy should be to make Rm as high as possible to minimize disturbance of the measured system. (Note that we did not calculate the value of E0, since this is not required in quantifying the effect of Rm.)

Example 3.1

Suppose that the components of the circuit shown in Figure 3.1(a) have the following values:


R1 = 400 Ω; R2 = 600 Ω; R3 = 1000 Ω; R4 = 500 Ω; R5 = 1000 Ω

The voltage across AB is measured by a voltmeter whose internal resistance is 9500 Ω. What is the measurement error caused by the resistance of the measuring instrument?

Solution

Proceeding by applying Thévenin’s theorem to find an equivalent circuit to that of Figure 3.1(a) of the form shown in Figure 3.1(b), and substituting the given component values into the equation for RAB (3.1), we obtain:

                                                         RAB = 500 Ω

From equation (3.2), we have:

                                                         Em /E0 = Rm /(RAB + Rm)

The measurement error is given by (E0 – Em):

                                                   E0 - Em = E0 { 1 - Rm /(RAB + Rm) }

Substituting in values:

 

E0 - Em = E0 {1 - 9500/10 000} = 0.05E0

Thus, the error in the measured value is 5%.
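The loading calculation in Example 3.1 can be checked numerically. The component values used below (R1 = 400 Ω, R2 = 600 Ω, R3 = 1000 Ω, R4 = 500 Ω, R5 = 1000 Ω) are the ones consistent with the RAB = 500 Ω implied by the 9500/10 000 ratio above:

```python
# Thevenin equivalent resistance RAB seen from the voltmeter terminals
# in Figure 3.1(a), and the ratio Em/E0 by which measurement reduces
# the voltage across AB.

def parallel(a, b):
    return a * b / (a + b)

def loading_ratio(r1, r2, r3, r4, r5, rm):
    r_cd = parallel(r1 + r2, r3)    # circuit to the left of C-D
    r_ab = parallel(r_cd + r4, r5)  # circuit to the left of A-B
    return rm / (r_ab + rm)         # Em / E0

print(loading_ratio(400, 600, 1000, 500, 1000, 9500))  # prints 0.95
```

This reproduces Em/E0 = 9500/10 000 = 0.95, i.e. the 5% measurement error.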

At this point, it is interesting to note the constraints that exist when practical attempts are made to achieve a high internal resistance in the design of a moving-coil voltmeter. Such an instrument consists of a coil carrying a pointer mounted in a fixed magnetic field. As current flows through the coil, the field it generates interacts with the fixed field and causes the coil, and the pointer it carries, to turn in proportion to the applied current (see Chapter 6 for further details). The simplest way of increasing the input impedance (the resistance) of the meter is either to increase the number of turns in the coil or to construct the same number of coil turns with a higher-resistance material. However, either of these solutions decreases the current flowing in the coil, giving less magnetic torque and thus decreasing the measurement sensitivity of the instrument (i.e. for a given applied voltage, we get less deflection of the pointer). This problem can be overcome by changing the spring constant of the restraining springs of the instrument, such that less torque is required to turn the pointer by a given amount. However, this reduces the ruggedness of the instrument and also demands better pivot design to reduce friction. This highlights a very important but tiresome principle in instrument design: any attempt to improve the performance of an instrument in one respect generally decreases the performance in some other aspect. This is an inescapable fact of life with passive instruments such as the type of voltmeter mentioned, and is often the reason for the use of alternative active instruments such as digital voltmeters, where the inclusion of auxiliary power greatly improves performance.

Bridge circuits for measuring resistance values are a further example of the need for careful design of the measurement system. The impedance of the instrument measuring the bridge output voltage must be very large in comparison with the component resistances in the bridge circuit. Otherwise, the measuring instrument will load the circuit and draw current from it. This is discussed more fully in Chapter 7.
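The bridge-loading effect can be sketched in the same Thévenin style as the voltmeter example. The Wheatstone bridge configuration and all component values below are illustrative assumptions, not taken from Chapter 7:

```python
# Sketch of bridge loading: the Wheatstone bridge's Thevenin equivalent
# as seen by the meter, and how finite meter impedance Rm scales the
# output. All bridge values are illustrative assumptions.

def parallel(a, b):
    return a * b / (a + b)

def bridge_output(vs, r1, r2, r3, r4, rm):
    v_th = vs * (r2 / (r1 + r2) - r4 / (r3 + r4))  # open-circuit bridge voltage
    r_th = parallel(r1, r2) + parallel(r3, r4)     # Thevenin resistance of the bridge
    return v_th * rm / (r_th + rm)                 # voltage the meter actually sees

vs = 10.0
# Slightly unbalanced bridge: 120-ohm arms, one arm at 121 ohm
v_ideal = bridge_output(vs, 120, 121, 120, 120, rm=1e12)   # near-ideal meter
v_loaded = bridge_output(vs, 120, 121, 120, 120, rm=1000)  # 1 kohm meter
print(v_ideal, v_loaded)
```

With these assumed values, a 1 kΩ meter reads the roughly 21 mV bridge output about 10% low, which is why the meter impedance must be very large compared with the bridge resistances.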
