

Errors during the measurement process

Rogue data points

In a set of measurements subject to random error, measurements with a very large error sometimes occur at random and unpredictable times, where the magnitude of the error is much larger than could reasonably be attributed to the expected random variations in measurement value. Sources of such abnormal error include sudden transient voltage surges on the mains power supply and incorrect recording of data (e.g. writing down 146.1 when the actual measured value was 164.1). It is accepted practice in such cases to discard these rogue measurements, and a threshold level of ±3σ deviation is often used to determine what should be discarded. It is extremely rare for measurement errors to exceed ±3σ limits when only normal random effects are affecting the measured value.
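As an illustrative sketch of this discard rule (the function name, the threshold default and the data values below are invented for illustration, not taken from the text):

```python
import statistics

def reject_rogue(readings, threshold=3.0):
    """Discard readings whose deviation from the mean exceeds threshold * sigma."""
    mean = statistics.mean(readings)
    sigma = statistics.stdev(readings)
    return [r for r in readings if abs(r - mean) <= threshold * sigma]

# 20 normal readings around 406, plus one rogue data point (146.1 recorded
# instead of a value near 406)
readings = [405, 406, 407, 406, 405, 408, 406, 407, 405, 406,
            407, 406, 405, 408, 406, 407, 406, 405, 407, 406, 146.1]
cleaned = reject_rogue(readings)
```

Note that a single rogue point inflates the estimate of σ itself, so in practice the test may need to be applied iteratively, or to a reference estimate of σ obtained from clean data.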

Special case when the number of measurements is small

When the number of measurements of a quantity is particularly small and statistical analysis of the distribution of error values is required, problems can arise when using standard Gaussian tables in terms of z as defined in equation (3.16) because the mean of only a small number of measurements may deviate significantly from the true measurement value. In response to this, an alternative distribution function called the Student-t distribution can be used which gives a more accurate prediction of the error distribution when the number of samples is small. This is discussed more fully in Miller (1990).

3.6 Aggregation of measurement system errors

Errors in measurement systems often arise from two or more different sources, and these must be aggregated in the correct way in order to obtain a prediction of the total likely error in output readings from the measurement system. Two different forms of aggregation are required. Firstly, a single measurement component may have both systematic and random errors and, secondly, a measurement system may consist of several measurement components that each have separate errors.

3.6.1 Combined effect of systematic and random errors

If a measurement is affected by both systematic and random errors that are quantified as ±x (systematic errors) and ±y (random errors), some means of expressing the combined effect of both types of error is needed. One way of expressing the combined error would be to sum the two separate components of error, i.e. to say that the total possible error is e = ±(x + y). However, a more usual course of action is to express the likely maximum error as follows:

                                      e = ±√(x² + y²)
It can be shown (ANSI/ASME, 1985) that this is the best expression for the error statistically, since it takes account of the reasonable assumption that the systematic and random errors are independent and so are unlikely to both be at their maximum or minimum value at the same time.
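The root-sum-square combination of independent error components can be sketched in a few lines (the function name and example values are my own):

```python
import math

def combined_error(x, y):
    """Combine independent systematic (x) and random (y) error bounds
    as the root-sum-square e = sqrt(x**2 + y**2)."""
    return math.sqrt(x**2 + y**2)

# e.g. a systematic error of +/-0.3 and a random error of +/-0.4
# combine to +/-0.5, less pessimistic than the simple sum +/-0.7
e = combined_error(0.3, 0.4)
```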





 

Distribution of manufacturing tolerances

Many aspects of manufacturing processes are subject to random variations caused by factors that are similar to those that cause random errors in measurements. In most cases, these random variations in manufacturing, which are known as tolerances, fit a Gaussian distribution, and the previous analysis of random measurement errors can be applied to analyse the distribution of these variations in manufacturing parameters.

Example 3.5

An integrated circuit chip contains 10⁵ transistors. The transistors have a mean current gain of 20 and a standard deviation of 2. Calculate the following:

(a) the number of transistors with a current gain between 19.8 and 20.2

(b) the number of transistors with a current gain greater than 17

Solution

(a) The proportion of transistors where 19.8 < gain < 20.2 is:

                    P[X < 20.2] - P[X < 19.8] = P[z < 0.1] - P[z < -0.1] (for z = (X - µ)/σ)

For X = 20.2, z = 0.1 and for X = 19.8, z = -0.1.

From tables, P[z < 0.1] = 0.5398 and thus P[z < - 0.1] = 1 - P[z < 0.1] = 1 - 0.5398 = 0.4602

Hence, P[z < 0.1] - P[z < -0.1] = 0.5398 - 0.4602 = 0.0796

Thus 0.0796 × 10⁵ = 7960 transistors have a current gain in the range from 19.8 to 20.2.

(b) The number of transistors with gain >17 is given by:

                            P[X > 17] = 1 - P[X < 17] = 1 - P[z < -1.5] = P[z < +1.5] = 0.9332

Thus, 93.32%, i.e. 93 320 transistors have a gain >17.
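The table lookups in Example 3.5 can be checked numerically: F(z) is expressible through the error function erf, so no table is needed (small rounding differences against four-figure tables are expected):

```python
import math

def F(z):
    """Standard Gaussian cumulative distribution function F(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, n = 20.0, 2.0, 10**5

# (a) proportion with 19.8 < gain < 20.2
p_a = F((20.2 - mu) / sigma) - F((19.8 - mu) / sigma)

# (b) proportion with gain > 17
p_b = 1.0 - F((17.0 - mu) / sigma)
```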

Goodness of fit to a Gaussian distribution

All of the analysis of random deviations presented so far only applies when the data being analysed belongs to a Gaussian distribution. Hence, the degree to which a set of data fits a Gaussian distribution should always be tested before any analysis is carried out. This test can be carried out in one of three ways:

(a) Simple test: The simplest way to test for Gaussian distribution of data is to plot a histogram and look for a ‘Bell-shape’ of the form shown earlier in Figure 3.5. Deciding whether or not the histogram confirms a Gaussian distribution is a matter of judgement. For a Gaussian distribution, there must always be approximate symmetry about the line through the centre of the histogram, the highest point of the histogram must always coincide with this line of symmetry, and the histogram must get progressively smaller either side of this point. However, because the histogram can only be drawn with a finite set of measurements, some deviation from the perfect shape of the histogram as described above is to be expected even if the data really is Gaussian.

(b) Using a normal probability plot: A normal probability plot involves dividing the data values into a number of ranges and plotting the cumulative probability of summed data frequencies against the data values on special graph paper. The plotted line should be straight if the data distribution is Gaussian. However, careful judgement is required, since only a finite number of data values can be used and therefore the line drawn will not be entirely straight even if the distribution is Gaussian. Considerable experience is needed to judge whether the line is straight enough to indicate a Gaussian distribution. This will be easier to understand if the data in measurement set C is used as an example. Using the same five ranges as used to draw the histogram, the following table is first drawn:


The normal probability plot drawn from the above table is shown in Figure 3.9. This is sufficiently straight to indicate that the data in measurement set C is Gaussian.

(c) Chi-squared test: A further test that can be applied is based on the chi-squared (χ²) distribution. This is beyond the scope of this book but full details can be found in Caulcott (1973).



 

Standard error of the mean

The foregoing analysis has examined the way in which measurements with random errors are distributed about the mean value. However, we have already observed that some error remains between the mean value of a set of measurements and the true value, i.e. averaging a number of measurements will only yield the true value if the number of measurements is infinite. If several subsets are taken from an infinite data population, then, by the central limit theorem, the means of the subsets will be distributed about the mean of the infinite data set. The error between the mean of a finite data set and the true measurement value (mean of the infinite data set) is defined as the standard error of the mean, α. This is calculated as:

                                                    α = σ/√n                                                      (3.18)

α tends towards zero as the number of measurements in the data set expands towards infinity. The measurement value obtained from a set of n measurements, x1, x2, … xn,

can then be expressed as:

                                                          x = xmean ± α

For the data set C of length measurements used earlier, n = 23, σ = 1.88 and α = 0.39. The length can therefore be expressed as 406.5 ± 0.4 (68% confidence limit). However, it is more usual to express measurements with 95% confidence limits (±2σ boundaries). In this case, 2σ = 3.76, 2α = 0.78 and the length can be expressed as 406.5 ± 0.8 (95% confidence limits).
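These figures for measurement set C can be reproduced directly from the data listed earlier (`statistics.stdev` uses the (n - 1) divisor):

```python
import math
import statistics

set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408, 406,
         410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

mean = statistics.mean(set_c)            # ~406.5
sigma = statistics.stdev(set_c)          # standard deviation, ~1.88
alpha = sigma / math.sqrt(len(set_c))    # standard error of the mean, eq. (3.18), ~0.39
```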

Estimation of random error in a single measurement

In many situations where measurements are subject to random errors, it is not practical to take repeated measurements and find the average value. Also, the averaging process becomes invalid if the measured quantity does not remain at a constant value, as is usually the case when process variables are being measured. Thus, if only one measurement can be made, some means of estimating the likely magnitude of error in it is required. The normal approach to this is to calculate the error within 95% confidence limits, i.e. to calculate the value of the deviation D such that 95% of the area under the probability curve lies within limits of ±D. These limits correspond to a deviation of ±1.96σ. Thus, it is necessary to maintain the measured quantity at a constant value whilst a number of measurements are taken in order to create a reference measurement set from which σ can be calculated. Subsequently, the maximum likely deviation in a single measurement can be expressed as: Deviation = ±1.96σ. However, this only expresses the maximum likely deviation of the measurement from the calculated mean of the reference measurement set, which is not the true value as observed earlier. Thus the calculated value for the standard error of the mean has to be added to the likely maximum deviation value. Thus, the maximum likely error in a single measurement can be expressed as:

                                         Error = ±(1.96σ + α)                                                          (3.19)

Example 3.4

Suppose that a standard mass is measured 30 times with the same instrument to create a reference data set, and the calculated values of σ and α are σ = 0.43 and α = 0.08. If the instrument is then used to measure an unknown mass and the reading is 105.6 kg, how should the mass value be expressed?

Solution Using (3.19), 1.96σ + α = 0.92. The mass value should therefore be expressed as: 105.6 ± 0.9 kg.
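Equation (3.19) for Example 3.4, as a one-line check (the function name is my own):

```python
import math

def max_single_measurement_error(sigma, n):
    """Maximum likely error in a single reading at 95% confidence, eq. (3.19)."""
    alpha = sigma / math.sqrt(n)   # standard error of the mean, eq. (3.18)
    return 1.96 * sigma + alpha

# Example 3.4: sigma = 0.43 from a 30-reading reference set
error = max_single_measurement_error(0.43, 30)   # ~0.92
```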

Before leaving this matter, it must be emphasized that the maximum error specified for a measurement is only specified for the confidence limits defined. Thus, if the maximum error is specified as ±1% with 95% confidence limits, this means that there is still 1 chance in 20 that the error will exceed ±1%.





Standard Gaussian tables

A standard Gaussian table, such as that shown in Table 3.1, tabulates F(z) for various values of z, where F(z) is given by:

                                      F(z) = [1/√(2π)] ∫ from -∞ to z of exp(-z²/2) dz                       (3.16)

Thus, F(z) gives the proportion of data values that are less than or equal to z. This proportion is the area under the curve of F(z) against z that is to the left of z. Therefore, the expression given in (3.15) has to be evaluated as [F(z2) - F(z1)]. Study of Table 3.1 shows that F(z) = 0.5 for z = 0. This confirms that, as expected, the number of data values ≤ 0 is 50% of the total. This must be so if the data only has random errors. It will also be observed that Table 3.1, in common with most published standard Gaussian tables, only gives F(z) for positive values of z. For negative values of z, we can make use of the following relationship because the frequency distribution curve is normalized:

 

                                                        F(-z) = 1 - F(z)                                                              (3.17)

(F(-z) is the area under the curve to the left of (-z), i.e. it represents the proportion of data values ≤ -z.)

Example 3.3

How many measurements in a data set subject to random errors lie outside deviation boundaries of +σ and -σ, i.e. how many measurements have a deviation greater than |σ|?

Solution

The required number is represented by the sum of the two shaded areas in Figure 3.8. This can be expressed mathematically as:

                                    P[E < -σ] + P[E > +σ] = P[z < -1] + P[z > +1]

Using (3.17), P[z < -1] = F(-1) = 1 - F(1) = 1 - 0.8413 = 0.1587

and P[z > +1] = 1 - P[z < +1] = 1 - F(1) = 1 - 0.8413 = 0.1587

(This last step is valid because the frequency distribution curve is normalized such that the total area under it is unity.)

Thus

                                    P[E < -σ] + P[E > +σ] = 0.1587 + 0.1587 = 0.3174 ≈ 32%

i.e. 32% of the measurements lie outside the ±σ boundaries, and thus 68% of the measurements lie inside.

The above analysis shows that, for Gaussian-distributed data values, 68% of the measurements have deviations that lie within the bounds of ±σ. Similar analysis shows that boundaries of ±2σ contain 95.4% of data points, and extending the boundaries to ±3σ encompasses 99.7% of data points. The probability of any data point lying outside particular deviation boundaries can therefore be expressed by the following table:

Deviation boundaries        Probability of any particular data point being outside the boundary
±σ                          32.0%
±2σ                         4.6%
±3σ                         0.3%
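These tail probabilities follow directly from the standard Gaussian c.d.f.; a short numeric check using the error function:

```python
import math

def prob_outside(k):
    """Probability that a Gaussian deviation lies outside +/- k standard deviations."""
    F = 0.5 * (1.0 + math.erf(k / math.sqrt(2.0)))   # F(k), the c.d.f. at z = k
    return 2.0 * (1.0 - F)                           # two equal tails by symmetry

one_sigma = prob_outside(1)     # ~0.317
two_sigma = prob_outside(2)     # ~0.046
three_sigma = prob_outside(3)   # ~0.003
```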


 



 

Gaussian distribution

Measurement sets that only contain random errors usually conform to a distribution with a particular shape that is called Gaussian, although this conformance must always be tested (see the later section headed ‘Goodness of fit’). The shape of a Gaussian curve is such that the frequency of small deviations from the mean value is much greater than the frequency of large deviations. This coincides with the usual expectation in measurements subject to random errors that the number of measurements with a small error is much larger than the number of measurements with a large error. Alternative names for the Gaussian distribution are the Normal distribution or Bell-shaped distribution. A Gaussian curve is formally defined as a normalized frequency distribution that is symmetrical about the line of zero error and in which the frequency and magnitude of quantities are related by the expression:

                                      F(x) = [1/(σ√(2π))] exp[-(x - m)²/(2σ²)]                       (3.11)

where m is the mean value of the data set x and the other quantities are as defined before. Equation (3.11) is particularly useful for analysing a Gaussian set of measurements and predicting how many measurements lie within some particular defined range. If the measurement deviations D are calculated for all measurements such that D = x - m, then the curve of deviation frequency F(D) plotted against deviation magnitude D is a Gaussian curve known as the error frequency distribution curve. The mathematical relationship between F(D) and D can then be derived by modifying equation (3.11) to give:

                                      F(D) = [1/(σ√(2π))] exp[-D²/(2σ²)]                       (3.12)
The shape of a Gaussian curve is strongly influenced by the value of σ, with the width of the curve decreasing as σ becomes smaller. As a smaller σ corresponds with the typical deviations of the measurements from the mean value becoming smaller, this confirms the earlier observation that the mean value of a set of measurements gets closer to the true value as σ decreases.

If the standard deviation is used as a unit of error, the Gaussian curve can be used to determine the probability that the deviation in any particular measurement in a Gaussian data set is greater than a certain value. By substituting the expression for F(D) in (3.12) into the probability equation (3.9), the probability that the error lies in a band between error levels D1 and D2 can be expressed as:

                                      P(D1 ≤ D ≤ D2) = [1/(σ√(2π))] ∫ from D1 to D2 of exp[-D²/(2σ²)] dD                       (3.13)
Solution of this expression is simplified by the substitution:

                                                     z = D/σ                                                                       (3.14)

The effect of this is to change the error distribution curve into a new Gaussian distribution that has a standard deviation of one (σ = 1) and a mean of zero. This new form, shown in Figure 3.7, is known as a standard Gaussian curve, and the dependent



variable is now z instead of D. Equation (3.13) can now be re-expressed as:

                                      P(D1 ≤ D ≤ D2) = P(z1 ≤ z ≤ z2) = [1/√(2π)] ∫ from z1 to z2 of exp(-z²/2) dz                       (3.15)

Unfortunately, neither equation (3.13) nor (3.15) can be solved analytically using tables of standard integrals, and numerical integration provides the only method of solution. However, in practice, the tedium of numerical integration can be avoided when analysing data because the standard form of equation (3.15), and its independence from the particular values of the mean and standard deviation of the data, means that standard Gaussian tables that tabulate F(z) for various values of z can be used.
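It is worth seeing the numerical integration that stands behind such tables; a simple trapezoidal evaluation of F(z) (the step count and the finite lower cutoff standing in for -∞ are arbitrary choices of mine):

```python
import math

def F_numeric(z, lo=-10.0, steps=20000):
    """Trapezoidal integration of the standard Gaussian p.d.f. from lo (~ -inf) to z."""
    h = (z - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        weight = 0.5 if i in (0, steps) else 1.0   # trapezoid rule end-point weights
        total += weight * math.exp(-x * x / 2.0)
    return total * h / math.sqrt(2.0 * math.pi)
```

For instance, F_numeric(0.0) ≈ 0.5000 and F_numeric(1.0) ≈ 0.8413, matching a standard Gaussian table.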




3.5.2 Graphical data analysis techniques – frequency distributions

Graphical techniques are a very useful way of analysing the way in which random measurement errors are distributed. The simplest way of doing this is to draw a histogram, in which bands of equal width across the range of measurement values are defined and the number of measurements within each band is counted. Figure 3.5 shows a histogram for set C of the length measurement data given in section 3.5.1, in which the bands chosen are 2 mm wide. For instance, there are 11 measurements in the range between 405.5 and 407.5 and so the height of the histogram for this range is 11 units. Also, there are 5 measurements in the range from 407.5 to 409.5 and so the height of the histogram over this range is 5 units. The rest of the histogram is completed in a similar fashion. (N.B. The scaling of the bands was deliberately chosen so that no measurements fell on the boundary between different bands and caused ambiguity about which band to put them in.) Such a histogram has the characteristic shape shown by truly random data, with symmetry about the mean value of the measurements.

As it is the actual value of measurement error that is usually of most concern, it is often more useful to draw a histogram of the deviations of the measurements from the mean value rather than to draw a histogram of the measurements themselves. The starting point for this is to calculate the deviation of each measurement away from the calculated mean value. Then a histogram of deviations can be drawn by defining deviation bands of equal width and counting the number of deviation values in each band. This histogram has exactly the same shape as the histogram of the raw measurements except that the scaling of the horizontal axis has to be redefined in terms of the deviation values (these units are shown in brackets on Figure 3.5).

Let us now explore what happens to the histogram of deviations as the number of measurements increases. As the number of measurements increases, smaller bands can be defined for the histogram, which retains its basic shape but then consists of a larger number of smaller steps on each side of the peak. In the limit, as the number of measurements approaches infinity, the histogram becomes a smooth curve known as a frequency distribution curve as shown in Figure 3.6. The ordinate of this curve is the frequency of occurrence of each deviation value, F(D), and the abscissa is the magnitude of deviation, D.
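The band-counting procedure behind Figure 3.5 can be expressed directly, using the set C data and the 2 mm bands described above (band edges chosen, as in the text, so that no measurement falls on a boundary):

```python
set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408, 406,
         410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

def histogram_counts(data, edges):
    """Count how many measurements fall in each band (edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(edges) - 1):
            if edges[i] < x < edges[i + 1]:
                counts[i] += 1
    return counts

edges = [401.5, 403.5, 405.5, 407.5, 409.5, 411.5]   # 2 mm wide bands
counts = histogram_counts(set_c, edges)              # [1, 5, 11, 5, 1]
```

The middle band reproduces the 11 measurements between 405.5 and 407.5 and the next band the 5 measurements between 407.5 and 409.5 quoted in the text.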

The symmetry of Figures 3.5 and 3.6 about the zero deviation value is very useful for showing graphically that the measurement data only has random errors. Although these figures cannot easily be used to quantify the magnitude and distribution of the errors, very similar graphical techniques do achieve this. If the height of the frequency distribution curve is normalized such that the area under it is unity, then the curve in this form is known as a probability curve, and the height F(D) at any particular deviation magnitude D is known as the probability density function (p.d.f.). The condition that


the area under the curve is unity can be expressed mathematically as:

                                      ∫ from -∞ to +∞ of F(D) dD = 1

The probability that the error in any one particular measurement lies between two levels D1 and D2 can be calculated by measuring the area under the curve contained between two vertical lines drawn through D1 and D2, as shown by the right-hand hatched area in Figure 3.6. This can be expressed mathematically as:

                                      P(D1 ≤ D ≤ D2) = ∫ from D1 to D2 of F(D) dD                       (3.9)

Of particular importance for assessing the maximum error likely in any one measurement is the cumulative distribution function (c.d.f.). This is defined as the probability of observing a value less than or equal to D0, and is expressed mathematically as:

                                      P(D ≤ D0) = ∫ from -∞ to D0 of F(D) dD

Thus, the c.d.f. is the area under the curve to the left of a vertical line drawn through D0, as shown by the left-hand hatched area on Figure 3.6.

The deviation magnitude Dp corresponding with the peak of the frequency distribution curve (Figure 3.6) is the value of deviation that has the greatest probability. If the errors are entirely random in nature, then the value of Dp will equal zero. Any non-zero value of Dp indicates systematic errors in the data, in the form of a bias that is often removable by recalibration.






3.5.1 Statistical analysis of measurements subject to random errors

Mean and median values

The average value of a set of measurements of a constant quantity can be expressed as either the mean value or the median value. As the number of measurements increases, the difference between the mean and median values becomes very small. However, for any set of n measurements x1, x2 … xn of a constant quantity, the most likely true value is the mean given by:

                                      xmean = (x1 + x2 + ··· + xn)/n                       (3.4)
This is valid for all data sets where the measurement errors are distributed equally about the zero error value, i.e. where the positive errors are balanced in quantity and magnitude by the negative errors.

The median is an approximation to the mean that can be written down without having to sum the measurements. The median is the middle value when the measurements in the data set are written down in ascending order of magnitude. For a set of n measurements x1, x2 … xn of a constant quantity, written down in ascending order of magnitude, the median value is given by:

                                      xmedian = x(n+1)/2                       (3.5)
Thus, for a set of 9 measurements x1, x2 … x9 arranged in order of magnitude, the median value is x5. For an even number of measurements, the median value is midway between the two centre values, i.e. for 10 measurements x1 … x10, the median value is given by: (x5 + x6)/2.
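A small sketch of the median rule just described:

```python
def median(values):
    """Middle value of the sorted data; mean of the two centre values when n is even."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2
```

For 9 values the result is the 5th in ascending order; for 10 values it is midway between the 5th and 6th.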

Suppose that the length of a steel bar is measured by a number of different observers and the following set of 11 measurements are recorded (units mm). We will call this measurement set A.

                          398 420 394 416 404 408 400 420 396 413 430                       (Measurement set A)

Using (3.4) and (3.5), mean = 409.0 and median = 408. Suppose now that the measurements are taken again using a better measuring rule, and with the observers taking more care, to produce the following measurement set B:

                                409 406 402 407 405 404 407 404 407 407 408                 (Measurement set B)

For these measurements, mean = 406.0 and median = 407. Which of the two measurement sets A and B, and the corresponding mean and median values, should we have most confidence in? Intuitively, we can regard measurement set B as being more reliable since the measurements are much closer together. In set A, the spread between the smallest (394) and largest (430) value is 36, whilst in set B, the spread is only 7.

 Thus, the smaller the spread of the measurements, the more confidence we have in the mean or median value calculated.

Let us now see what happens if we increase the number of measurements by extending measurement set B to 23 measurements. We will call this measurement set C.

                      409 406 402 407 405 404 407 404 407 407 408 406 410 406 405 408

                                   406 409 406 405 409 406 407                                             (Measurement set C)

Now, mean = 406.5 and median = 406.

This confirms our earlier statement that the median value tends towards the mean value as the number of measurements increases.
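These mean and median values for sets A, B and C are easy to verify with the standard library, using the data exactly as listed above:

```python
import statistics

set_a = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_b = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]
set_c = set_b + [406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

results = {
    "A": (statistics.mean(set_a), statistics.median(set_a)),                 # (409.0, 408)
    "B": (statistics.mean(set_b), statistics.median(set_b)),                 # (406.0, 407)
    "C": (round(statistics.mean(set_c), 1), statistics.median(set_c)),       # (406.5, 406)
}
```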

Standard deviation and variance

Expressing the spread of measurements simply as the range between the largest and smallest value is not in fact a very good way of examining how the measurement values are distributed about the mean value. A much better way of expressing the distribution is to calculate the variance or standard deviation of the measurements. The starting point for calculating these parameters is to calculate the deviation (error) di of each measurement xi from the mean value xmean:

                                      di = xi - xmean

The variance (V) is then given by the sum of the squares of the deviations divided by (n - 1), and the standard deviation (σ) is simply the square root of the variance:

                                      V = (d1² + d2² + ··· + dn²)/(n - 1);    σ = √V

Example

Calculate V and σ for measurement sets A, B and C.

Solution

First, draw a table of measurements and deviations for set A (mean = 409 as calculated earlier):


Note that the smaller values of V and σ for measurement set B compared with A correspond with the respective size of the spread in the range between maximum and minimum values for the two sets.

Thus, as V and σ decrease for a measurement set, we are able to express greater confidence that the calculated mean or median value is close to the true value, i.e. that the averaging process has reduced the random error value close to zero.

Comparing V and σ for measurement sets B and C, V and σ get smaller as the number of measurements increases, confirming that confidence in the mean value increases as the number of measurements increases.

We have observed so far that random errors can be reduced by taking the average (mean or median) of a number of measurements. However, although the mean or median value is close to the true value, it would only become exactly equal to the true value if we could average an infinite number of measurements. As we can only make a finite number of measurements in a practical situation, the average value will still have some error. This error can be quantified as the standard error of the mean, which will be discussed in detail a little later. However, before that, the subject of graphical analysis of random measurement errors needs to be covered. 
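Assuming the (n - 1) definitions of V and σ given above (which `statistics.variance` and `statistics.stdev` implement), the comparison for sets A, B and C works out to V ≈ 137, 4.2 and 3.5 respectively (σ ≈ 11.7, 2.05 and 1.88), computed directly from the listed data:

```python
import statistics

set_a = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_b = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]
set_c = set_b + [406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

# variance V uses the (n - 1) divisor; standard deviation sigma = sqrt(V)
stats = {name: (statistics.variance(data), statistics.stdev(data))
         for name, data in (("A", set_a), ("B", set_b), ("C", set_c))}
```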





 

3.3.4 Calibration

Instrument calibration is a very important consideration in measurement systems and calibration procedures are considered in detail in Chapter 4. All instruments suffer drift in their characteristics, and the rate at which this happens depends on many factors, such as the environmental conditions in which instruments are used and the frequency of their use. Thus, errors due to instruments being out of calibration can usually be rectified by increasing the frequency of recalibration.

 

3.3.5 Manual correction of output reading

In the case of errors that are due either to system disturbance during the act of measurement or due to environmental changes, a good measurement technician can substantially reduce errors at the output of a measurement system by calculating the effect of such systematic errors and making appropriate correction to the instrument readings. This is not necessarily an easy task, and requires all disturbances in the measurement system to be quantified. This procedure is carried out automatically by intelligent instruments.

 

3.3.6 Intelligent instruments

Intelligent instruments contain extra sensors that measure the value of environmental inputs and automatically compensate the value of the output reading. They have the ability to deal very effectively with systematic errors in measurement systems, and errors can be attenuated to very low levels in many cases. A more detailed analysis of intelligent instruments can be found in Chapter 9.

 

3.4 Quantification of systematic errors

Once all practical steps have been taken to eliminate or reduce the magnitude of systematic errors, the final action required is to estimate the maximum remaining error that may exist in a measurement due to systematic errors. Unfortunately, it is not always possible to quantify exact values of a systematic error, particularly if measurements are subject to unpredictable environmental conditions. The usual course of action is to assume mid-point environmental conditions and specify the maximum measurement error as ±x% of the output reading to allow for the maximum expected deviation in environmental conditions away from this mid-point. Data sheets supplied by instrument manufacturers usually quantify systematic errors in this way, and such figures take account of all systematic errors that may be present in output readings from the instrument.

 

3.5 Random errors

Random errors in measurements are caused by unpredictable variations in the measurement system. They are usually observed as small perturbations of the measurement either side of the correct value, i.e. positive errors and negative errors occur in approximately equal numbers for a series of measurements made of the same constant quantity. Therefore, random errors can largely be eliminated by calculating the average of a number of repeated measurements, provided that the measured quantity remains constant during the process of taking the repeated measurements. This averaging process of repeated measurements can be done automatically by intelligent instruments, as discussed in Chapter 9. The degree of confidence in the calculated mean/median values can be quantified by calculating the standard deviation or variance of the data, these being parameters that describe how the measurements are distributed about the mean value/median. All of these terms are explained more fully in section 3.5.1.




3.3.3 High-gain feedback

The benefit of adding high-gain feedback to many measurement systems is illustrated by considering the case of the voltage-measuring instrument whose block diagram is shown in Figure 3.3. In this system, the unknown voltage Ei is applied to a motor of torque constant Km, and the induced torque turns a pointer against the restraining action of a spring with spring constant Ks. The effect of environmental inputs on the



motor and spring constants is represented by variables Dm and Ds. In the absence of environmental inputs, the displacement of the pointer X0 is given by: X0 = KmKsEi. However, in the presence of environmental inputs, both Km and Ks change, and the relationship between X0 and Ei can be affected greatly. Therefore, it becomes difficult or impossible to calculate Ei from the measured value of X0. Consider now what happens if the system is converted into a high-gain, closed-loop one, as shown in Figure 3.4, by adding an amplifier of gain constant Ka and a feedback device with gain constant Kf. Assume also that the effect of environmental inputs on the values of Ka and Kf are represented by Da and Df. The feedback device feeds back a voltage E0 proportional to the pointer displacement X0. This is compared with the unknown voltage Ei by a comparator and the error is amplified. Writing down the equations of the system, we have:

                                      E0 = KfX0;    X0 = (Ei - E0)KaKmKs = (Ei - KfX0)KaKmKs

Rearranging gives:

                                      X0 = [KaKmKs/(1 + KfKaKmKs)]Ei

Because Ka is very large (it is a high-gain amplifier), KfKaKmKs >> 1 and this expression reduces to:

                                      X0 ≈ Ei/Kf
This is a highly important result because we have reduced the relationship between X0 and Ei to one that involves only Kf. The sensitivity of the gain constants Ka, Km and Ks to the environmental inputs Da, Dm and Ds has thereby been rendered irrelevant, and we only have to be concerned with one environmental input Df. Conveniently, it is usually easy to design a feedback device that is insensitive to environmental inputs: this is much easier than trying to make a motor or spring insensitive. Thus, high-gain feedback techniques are often a very effective way of reducing a measurement system’s sensitivity to environmental inputs. However, one potential problem that must be mentioned is that there is a possibility that high-gain feedback will cause instability in the system. Therefore, any application of this method must include careful stability analysis of the system.
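A numeric illustration of why the closed-loop arrangement helps (the gain values below are arbitrary choices of mine, and the closed-loop relation X0 = KaKmKsEi/(1 + KfKaKmKs) follows from the comparator/amplifier/feedback description above): with a large amplifier gain Ka, the displacement depends almost only on Kf, so even a sizeable drift in the motor constant Km barely changes the reading.

```python
def pointer_displacement(Ei, Ka, Km, Ks, Kf):
    """Closed-loop pointer displacement X0 = Ka*Km*Ks*Ei / (1 + Kf*Ka*Km*Ks)."""
    forward_gain = Ka * Km * Ks
    return forward_gain * Ei / (1.0 + Kf * forward_gain)

Ei, Kf = 2.0, 0.5
nominal = pointer_displacement(Ei, Ka=1e6, Km=1.0, Ks=1.0, Kf=Kf)   # ~ Ei/Kf = 4.0
drifted = pointer_displacement(Ei, Ka=1e6, Km=1.1, Ks=1.0, Kf=Kf)   # Km drifted by 10%
```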




 

3.2.3 Wear in instrument components

Systematic errors can frequently develop over a period of time because of wear in instrument components. Recalibration often provides a full solution to this problem.

 

3.2.4 Connecting leads

In connecting together the components of a measurement system, a common source of error is the failure to take proper account of the resistance of connecting leads (or pipes in the case of pneumatically or hydraulically actuated measurement systems). For instance, in typical applications of a resistance thermometer, it is common to find that the thermometer is separated from other parts of the measurement system by perhaps 100 metres. The resistance of such a length of 20 gauge copper wire is 7 Ω, and there is a further complication that such wire has a temperature coefficient of 1 mΩ/°C. Therefore, careful consideration needs to be given to the choice of connecting leads. Not only should they be of adequate cross-section so that their resistance is minimized, but they should be adequately screened if they are thought likely to be subject to electrical or magnetic fields that could otherwise cause induced noise. Where screening is thought essential, then the routing of cables also needs careful planning. In one application in the author’s personal experience involving instrumentation of an electric-arc steel making furnace, screened signal-carrying cables between transducers on the arc furnace and a control room at the side of the furnace were initially corrupted by high amplitude 50 Hz noise. However, by changing the route of the cables between the transducers and the control room, the magnitude of this induced noise was reduced by a factor of about ten.

 

3.3 Reduction of systematic errors

The prerequisite for the reduction of systematic errors is a complete analysis of the measurement system that identifies all sources of error. Simple faults within a system, such as bent meter needles and poor cabling practices, can usually be readily and cheaply rectified once they have been identified. However, other error sources require more detailed analysis and treatment. Various approaches to error reduction are considered below.

 

3.3.1 Careful instrument design

Careful instrument design is the most useful weapon in the battle against environmental inputs, by reducing the sensitivity of an instrument to environmental inputs to as low a level as possible. For instance, in the design of strain gauges, the element should be constructed from a material whose resistance has a very low temperature coefficient (i.e. the variation of the resistance with temperature is very small). However, errors due to the way in which an instrument is designed are not always easy to correct, and a choice often has to be made between the high cost of redesign and the alternative of accepting the reduced measurement accuracy if redesign is not undertaken.
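As a rough numerical illustration of the material-choice point, the sketch below compares the fractional resistance change of two candidate gauge materials over a modest temperature rise, using R(T) = R0(1 + αΔT). The coefficient values are typical handbook figures, assumed here for illustration:

```python
# Sketch comparing gauge-element materials (coefficients are typical
# handbook figures, assumed for illustration, not from the text).
alpha_copper = 3.9e-3      # per degC
alpha_constantan = 2.0e-5  # per degC: why constantan-type alloys suit gauges

dT = 10.0  # degC temperature rise
for name, alpha in [("copper", alpha_copper), ("constantan", alpha_constantan)]:
    # fractional resistance change, R(T)/R0 - 1 = alpha * dT
    print(name, alpha * dT)
```

For a 10 °C rise the copper element's resistance shifts by almost 4%, roughly two hundred times more than the constantan-type alloy, so a strain reading from a copper element would be swamped by the temperature effect.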

 

3.3.2 Method of opposing inputs

The method of opposing inputs compensates for the effect of an environmental input in a measurement system by introducing an equal and opposite environmental input that cancels it out. One example of how this technique is applied is in the type of millivoltmeter shown in Figure 3.2. This consists of a coil suspended in a fixed magnetic field produced by a permanent magnet. When an unknown voltage is applied to the coil, the magnetic field due to the current interacts with the fixed field and causes the coil (and a pointer attached to the coil) to turn. If the coil resistance Rcoil is sensitive to temperature, then any environmental input to the system in the form of a temperature change will alter the value of the coil current for a given applied voltage and so alter the pointer output reading. Compensation for this is made by introducing a compensating resistance Rcomp into the circuit, where Rcomp has a temperature coefficient that is equal in magnitude but opposite in sign to that of the coil. Thus, in response to an increase in temperature, Rcoil increases but Rcomp decreases, and so the total resistance remains approximately the same.
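The cancellation can be sketched numerically. The resistance values and the coefficient below are assumed for illustration only; the point is that when Rcomp's absolute temperature coefficient is equal in magnitude and opposite in sign to Rcoil's, the total circuit resistance, and hence the coil current for a given applied voltage, holds steady:

```python
# Sketch of the opposing-inputs method (all values assumed for illustration).
def r_coil(dT, r0=100.0, k=0.4):  # k in ohms/degC
    return r0 + k * dT            # coil resistance rises with temperature

def r_comp(dT, r0=50.0, k=0.4):
    return r0 - k * dT            # compensating resistance falls equally

for dT in (0.0, 10.0, 25.0):
    total = r_coil(dT) + r_comp(dT)
    print(dT, total)  # total resistance is the same at every temperature
```

In a real instrument the cancellation is only approximate, since the two components may sit at slightly different temperatures and real coefficients are not perfectly matched, but the residual error is far smaller than the uncompensated one.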
