"It is better to be roughly right than precisely wrong." –Alan Greenspan
Measurement uncertainty
Some numbers are exact: Maria has 3 siblings, and 2 + 2 = 4. All measurements, however, have a degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis.

A complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or with a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or with the results of other experiments?" This question is fundamental for deciding whether a scientific hypothesis is confirmed or refuted.

When we make a measurement, we generally assume that some exact or true value exists, based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:
(1)
measurement = (best estimate ± uncertainty) units
Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass for the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to 2 decimal places, you could report the mass as
m = 17.43 ± 0.01 g.
Suppose you use the same electronic balance and obtain several more readings: 17.46 g, 17.42 g, 17.44 g, so that the average mass appears to be in the range of 17.44 ± 0.02 g.
By now you may feel confident that you know the mass of this ring to the nearest hundredth of a gram, but how do you know that the true value definitely lies between 17.43 g and 17.45 g? To check, you weigh the ring on another balance, and it reads 17.22 g. This value is well outside the range given by the first balance, and under normal circumstances you might not care, but you want to be fair to your friend. So what do you do now? The answer lies in knowing something about the accuracy of each instrument.

To answer these questions, we should first define the terms accuracy and precision:

Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy.
Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement among independent measurements of the same quantity; also, the reliability or reproducibility of the result.
The uncertainty estimate associated with a measurement should account for both the accuracy and the precision of the measurement.
(2)
Relative uncertainty = uncertainty / measured quantity

Example:
m = 75.5 ± 0.5 g
has a fractional uncertainty of:
0.5 g / 75.5 g = 0.0066 = 0.7%
(3)
Relative error = (measured value − expected value) / expected value

If the expected value of m is 80.0 g, then the relative error is:
(75.5 − 80.0) / 80.0 = −0.056 = −5.6%
Note: The minus sign indicates that the measured value is less than the expected value.
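Equations (2) and (3) are simple ratios. As a quick sketch (not part of the original text; the helper names are our own), they can be computed directly:

```python
# Hypothetical helpers illustrating equations (2) and (3).

def relative_uncertainty(uncertainty, measured):
    # Equation (2): uncertainty divided by the measured quantity.
    return uncertainty / measured

def relative_error(measured, expected):
    # Equation (3): signed deviation from the expected value.
    return (measured - expected) / expected

# The m = 75.5 +/- 0.5 g example, with an expected value of 80.0 g:
print(f"relative uncertainty = {relative_uncertainty(0.5, 75.5):.4f}")
print(f"relative error = {relative_error(75.5, 80.0):.4f}")
```

The sign of the relative error carries information (which way the measurement deviates), so it is not taken as an absolute value.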
When analyzing experimental data, it is important that you understand the difference between precision and accuracy. Precision indicates the quality of the measurement, without any guarantee that the measurement is "correct." Accuracy, on the other hand, assumes that there is an ideal value, and tells how far your answer is from that ideal, "right" answer. These concepts are directly related to random and systematic measurement errors.

Types of errors
Measurement errors can be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).
Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).
Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.
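This last point can be illustrated with a quick simulation (a sketch with made-up numbers, not part of the original text): averaging suppresses random scatter, but leaves a systematic offset untouched.

```python
import random

random.seed(0)
true_value = 10.0   # made-up "true" quantity
bias = 0.3          # a systematic error: every reading is shifted by +0.3

def reading():
    # Each reading has a random error (gaussian, sd 0.5) plus the fixed bias.
    return true_value + bias + random.gauss(0, 0.5)

# Averaging many readings shrinks the random part toward zero...
avg = sum(reading() for _ in range(10_000)) / 10_000
# ...but the average converges to 10.3, not to the true value 10.0.
print(f"average of 10,000 readings: {avg:.2f}")
```

No amount of extra data moves the average off the biased value; only a calibration correction can.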
Our goal is to reduce as many sources of error as we can through careful measurement, and to keep track of those errors that we cannot eliminate. It is useful to know the types of errors that may occur, so that we can recognize them when they arise. Common sources of error in physics laboratory experiments:
Incomplete definition (may be systematic or random) – One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.

Failure to account for a factor (usually systematic) – The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being tested. For instance, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may fail to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorming should be done before beginning the experiment, in order to plan for and account for the confounding factors before taking data. Sometimes a correction can be applied to a result after the fact to account for an error that was not detected earlier.

Environmental factors (systematic or random) – Be aware of errors introduced by your immediate working environment. You may need to take account of, or protect your experiment from, vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.

Instrument resolution (random) – All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case).
One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced, and the magnitude of the unknown quantity can be found by comparison with a measurement standard. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.

Calibration (systematic) – Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing it with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full-scale reading), so that larger values result in larger absolute errors.

Zero offset (systematic) – When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failing to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.

Physical variations (random) – It is always wise to obtain multiple measurements over the widest possible range. Doing so often reveals variations that might otherwise go undetected.
These variations may call for closer examination, or they may be combined to find an average value.

Parallax (systematic or random) – This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or too low (some analog meters have mirrors to help with this alignment).

Instrument drift (systematic) – Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.

Lag time and hysteresis (systematic) – Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a reading that is too high or too low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis, where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.

Personal errors come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.
Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.
Estimation of experimental uncertainty for a single measurement
Every measurement you make comes with some degree of uncertainty, no matter how precise your measurement tool is. So how do you determine and report this uncertainty?
The uncertainty of a single measurement is limited by the precision and accuracy of the measuring device, along with other factors that may affect the experimenter's ability to make the measurement.
For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be ± 5 mm, but if you used a Vernier caliper, the uncertainty could be reduced to maybe ± 2 mm.
The limiting factor with the meter stick is parallax, while the second case is limited by the ambiguity in the definition of the tennis ball's diameter (it's fuzzy!). In both of these cases, the uncertainty is greater than the smallest divisions marked on the measuring tool (likely 1 mm and 0.05 mm respectively). Unfortunately, there is no general rule for determining the uncertainty in all measurements. The experimenter is the one who can best evaluate and quantify the uncertainty of a measurement based on all the possible factors that affect the result. Therefore, the person making the measurement has the obligation to make the best judgment possible and report the uncertainty in a way that clearly explains what the uncertainty represents:

(4)
measurement = (measured value ± standard uncertainty) unit of measurement
where the ± standard uncertainty indicates approximately a 68% confidence interval (see the sections on standard deviation and on reporting uncertainties).
Example: diameter of tennis ball =
6.7 ± 0.2 cm.
Estimation of uncertainty in repeated measurements
Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ± 0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of 5 measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.
(5)
Average (mean) = (x_1 + x_2 + … + x_N) / N
For this situation, the best estimate of the period is the average, or mean, x̄.
Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers, which should be examined to determine whether they are bad data points that should be omitted from the average or valid measurements that require further investigation). In general, the more times you repeat a measurement, the better the estimate. However, be careful not to waste time taking more measurements than are necessary for the precision required.
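For the five pendulum readings above, equation (5) is a one-liner (a sketch, not part of the original text):

```python
# Mean of the five period readings quoted above (equation (5) with N = 5).
readings = [0.46, 0.44, 0.45, 0.44, 0.41]  # seconds
average = sum(readings) / len(readings)
print(f"T = {average:.2f} s")
```

The standard library `statistics.mean` would do the same job; the explicit sum keeps the correspondence with equation (5) visible.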
As another example, consider measuring the width of a piece of paper using a meter stick. Taking care to keep the meter stick parallel to the edge of the paper (to avoid a systematic error that would cause the measured value to be consistently larger than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).
(6)
Average = (sum of observed widths) / (no. of observations) = 155.96 cm / 5 = 31.19 cm
This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate, because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value?

One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.
(7)
d = ( |x_1 − x̄| + |x_2 − x̄| + … + |x_N − x̄| ) / N
However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and it is used because of its association with the normal distribution, which is frequently encountered in statistical analyses.
Standard deviation
To calculate the standard deviation of a sample of N measurements:

1. Add together all the measurements and divide by N to get the average, or mean.
2. Now subtract this average from each of the N measurements to obtain N "deviations".
3. Square each of these N deviations and add them all together.
4. Divide this result by (N − 1) and take the square root.
We can write out the formula for the standard deviation as follows. Let the N measurements be called x_1, x_2, …, x_N. Let the average of the N values be called x̄. Then each deviation is given by δx_i = x_i − x̄, for i = 1, 2, …, N. The standard deviation is:

(8)
s = √[ (δx_1² + δx_2² + … + δx_N²) / (N − 1) ]
In our example above, the average width x̄ is 31.19 cm. The average deviation is d = 0.086 cm, and the standard deviation is s = 0.12 cm.
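The four numbered steps above translate directly into code. This sketch (not part of the original text) applies them to the five pendulum period readings quoted earlier, since the individual paper-width readings are not listed here:

```python
import math

def sample_std(xs):
    # The four steps for the sample standard deviation, equation (8).
    n = len(xs)
    mean = sum(xs) / n                        # step 1: the average
    deviations = [x - mean for x in xs]       # step 2: the N deviations
    total = sum(d * d for d in deviations)    # step 3: square them and sum
    return math.sqrt(total / (n - 1))         # step 4: divide by N - 1, take root

periods = [0.46, 0.44, 0.45, 0.44, 0.41]  # pendulum readings, in seconds
print(f"s = {sample_std(periods):.3f} s")
```

The standard library equivalent is `statistics.stdev`, which also uses the N − 1 denominator.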

The significance of the standard deviation is this: if the measurement errors are random, about 68% of the readings will fall within one standard deviation of the mean, about 95% will fall within two standard deviations, x̄ ± 2s, and nearly all (99.7%) of the readings will fall within 3 standard deviations of the mean. The smooth curve superimposed on the histogram is the Gaussian or normal distribution predicted by theory for measurements involving random errors. As more and more measurements are made, the histogram will more closely follow the bell-shaped Gaussian curve, but the standard deviation of the distribution will remain approximately the same.
Figure 1
Standard deviation of the mean (standard error)
If we now report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).
(9)
σ_x̄ = s / √N

The standard error is smaller than the standard deviation by a factor of 1/√N. In the previous example, with s = 0.12 cm and N = 5, the standard error is 0.12 cm / √5 = 0.05 cm, so we report:
Average paper width = 31.19 ± 0.05 cm.
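Equation (9) applied to the paper-width numbers above (a sketch, not from the original text):

```python
import math

s = 0.12   # sample standard deviation from the width example, in cm
N = 5      # number of observations

standard_error = s / math.sqrt(N)   # equation (9)
print(f"width = 31.19 +/- {standard_error:.2f} cm")
```

Because of the 1/√N factor, quadrupling the number of readings only halves the standard error, which is why the text warns against taking more data than the required precision justifies.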
Anomalous data
The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.
Fractional uncertainty revisited
When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example:

(10)
Fractional uncertainty = uncertainty / average = 0.05 cm / 31.19 cm = 0.0016 ≈ 0.2%
Note that the fractional uncertainty is dimensionless, but it is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement is "good to about 1 part in 500" or "precise to about 0.2%". The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using the result of a measurement, as discussed in the next section.
Propagation of uncertainty
Suppose we want to determine a quantity f, which depends on x and maybe several other variables y, z, etc. We want to know the error in f if we measure x, y, … with errors σ_x, σ_y, …

Examples:

(11)
f = x y (area of a rectangle)

(12)
f = p cos θ (x-component of momentum)

(13)
f = x / t (velocity)
For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:

(14)
δf = (df/dx) δx

Thus, taking the square and the average:

(15)
(δf)² = (df/dx)² (δx)²

and using the definition of σ, we get:

(16)
σ_f = (df/dx) σ_x
Examples:

(a) f = √x

(17)
df/dx = 1 / (2√x)

(18)
σ_f = σ_x / (2√x), or σ_f/f = (1/2) (σ_x/x)

(b) f = x²

(19)
df/dx = 2x

(20)
σ_f/f = 2 (σ_x/x)

(c) f = cos θ

(21)
df/dθ = −sin θ

(22)
σ_f = |sin θ| σ_θ, or σ_f/f = |tan θ| σ_θ

Note: in this situation, σ_θ must be in radians.
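Results like (22) can be spot-checked numerically. The Monte Carlo sketch below (our own illustration, not part of the original text) draws θ from a normal distribution and compares the observed spread of cos θ against the prediction of equation (22):

```python
import math
import random

random.seed(1)
theta = math.radians(25)        # central value of the angle
sigma_theta = math.radians(1)   # its uncertainty, in radians as required

# Simulate many measurements of theta and propagate each through cos().
values = [math.cos(random.gauss(theta, sigma_theta)) for _ in range(100_000)]
mean = sum(values) / len(values)
spread = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))

predicted = abs(math.sin(theta)) * sigma_theta   # equation (22)
print(f"simulated sigma_f = {spread:.4f}, predicted = {predicted:.4f}")
```

The two numbers agree closely because the calculus-based formula is a first-order (linear) approximation, which is excellent for small σ_θ.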
In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

(23)
δf = (∂f/∂x) δx + (∂f/∂y) δy

The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average:

(24)
(δf)² = (∂f/∂x)² (δx)² + (∂f/∂y)² (δy)² + 2 (∂f/∂x)(∂f/∂y) δx δy

If the measurements of x and y are uncorrelated, then the average of δx δy = 0, and we get:

(25)
σ_f = √[ (∂f/∂x)² σ_x² + (∂f/∂y)² σ_y² ]
Examples:

(a) f = x + y

(26)
∂f/∂x = 1, ∂f/∂y = 1

(27)
∴ σ_f = √( σ_x² + σ_y² )
When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is simply the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value as long as the constant is an exact value.
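The RSS rule is a one-liner in code. A sketch (with illustrative made-up values, not from the text):

```python
import math

def rss(*sigmas):
    # Root sum of squares of independent absolute uncertainties, equation (27).
    return math.sqrt(sum(s * s for s in sigmas))

# f = x + y with x = 3.0 +/- 0.2 and y = 4.0 +/- 0.3 (made-up values):
print(f"sigma_f = {rss(0.2, 0.3):.2f}")   # smaller than the plain sum, 0.5
```

Because the terms add in quadrature, the combined uncertainty is always less than or equal to the straight sum, with equality only when one term dominates completely.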
(b) f = x y

(28)
∂f/∂x = y, ∂f/∂y = x

(29)
∴ σ_f = √( y² σ_x² + x² σ_y² )

Dividing the previous equation by f = x y, we get:

(30)
σ_f/f = √[ (σ_x/x)² + (σ_y/y)² ]
(c) f = x / y

(31)
∂f/∂x = 1/y, ∂f/∂y = −x/y²

(32)
∴ σ_f = √( σ_x²/y² + x² σ_y²/y⁴ )

Dividing the previous equation by f = x/y, we get:

(33)
σ_f/f = √[ (σ_x/x)² + (σ_y/y)² ]
When multiplying (or dividing) independent measurements, the relative uncertainty of the product (or quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is just the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.
Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term.

Example: Find the uncertainty in v, where v = a t, with a = 9.8 ± 0.1 m/s² and t = 3.4 ± 0.1 s.

(34)
σ_v/v = √[ (σ_a/a)² + (σ_t/t)² ] = √[ (0.010)² + (0.029)² ] = 0.031, so v = 33.3 ± 1.0 m/s

Notice that the relative uncertainty in t (2.9%) is significantly greater than the relative uncertainty in a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 3%).

Graphically, the RSS is like the Pythagorean theorem:
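The same computation in code (a sketch, not part of the original text, assuming a = 9.8 ± 0.1 m/s² and t = 3.4 ± 0.1 s, the values consistent with the quoted 1.0% and 2.9% relative uncertainties):

```python
import math

a, sigma_a = 9.8, 0.1   # acceleration, m/s^2
t, sigma_t = 3.4, 0.1   # time, s

v = a * t
relative = math.sqrt((sigma_a / a) ** 2 + (sigma_t / t) ** 2)   # equation (34)
sigma_v = relative * v
print(f"v = {v:.1f} +/- {sigma_v:.1f} m/s (relative uncertainty {relative:.3f})")
```

Note how the 2.9% term dominates the quadrature sum, which is the point the text makes: the result is only about as good as its least precise factor.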
Figure 2
The total uncertainty is the length of the hypotenuse of a right triangle whose legs are the lengths of the individual uncertainty components.
Time saving approach:"A chain is only as strong as its weakest link."
If one of the uncertainty terms is more than three times greater than the other terms, the square root formula can be ignored and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without sacrificing accuracy when estimating total uncertainty.
The upper-lower bound method of uncertainty propagation
An alternative, and sometimes simpler, procedure to the tedious propagation of uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best and worst case scenarios. For example, suppose you measured an angle to be: θ = 25° ± 1°, and you needed to find f = cos θ, then:
(35)
f_max = cos(24°) = 0.9135

(36)
f_min = cos(26°) = 0.8988

(37)
∴ f = 0.906 ± 0.007
where 0.007 is one-half of the difference between f_max and f_min. Note that even though θ was measured to only 2 significant figures, f is known to 3 figures. Using the propagation of uncertainty law instead:

σ_f = |sin θ| σ_θ = (0.423)(π/180) = 0.0074
(the same result as above). The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.
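The upper-lower bound recipe is easy to automate for a function that is monotonic over the interval. A sketch (the helper and its name are our own, not from the original text):

```python
import math

def upper_lower_bound(func, x, dx):
    # Evaluate func at both ends of [x - dx, x + dx]. Assumes func is
    # monotonic on that interval; otherwise interior extremes are missed.
    ends = (func(x - dx), func(x + dx))
    f_min, f_max = min(ends), max(ends)
    return (f_max + f_min) / 2, (f_max - f_min) / 2

best, half_range = upper_lower_bound(math.cos, math.radians(25), math.radians(1))
print(f"f = {best:.3f} +/- {half_range:.3f}")
```

This reproduces the cos θ example above. For a non-monotonic function (e.g. cos θ across θ = 0), the extreme values can lie inside the interval, so the endpoints alone are not sufficient.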
The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the ranges of these uncertain terms can be used to predict the upper and lower bounds on the total expense.
Significant figures
The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left and the last digit. For example, 0.44 has two significant figures, and the number 66.770 has 5 significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has 2 significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures).

When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For example, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula: A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision, to within a fraction of a square millimeter! Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m².

From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:
The number of significant figures implies an approximate relative uncertainty:

1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%
Using significant figures for simple propagation of uncertainty
By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math functions, all without the use of complicated formulas for propagating uncertainties.
For multiplication and division, the number of reliably known significant figures in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.
Example:

   6.6       (2 significant figures)
× 7328.7     (5 significant figures)
= 48369.42 = 48 × 10³ (2 significant figures)
For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.
Examples:

  223.64          5560.5
+  54           +    0.008
= 278           = 5560.5
Uncertainty, significant figures, and rounding
For the same reason that it is dishonest to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:

(38)
measured density = 8.93 ± 0.475328 g/cm³   WRONG!
The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ± 50%, because of all the various sources of error, none of which can be known exactly. Therefore, uncertainty values should be stated to only one significant figure (or perhaps 2 significant figures if the first digit is a 1).

Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures. Indeed, for a standard deviation estimated from N readings, the fractional uncertainty of the uncertainty itself is approximately

σ_s/s ≈ 1 / √( 2(N − 1) )

which is about 35% for N = 5.
measured density = 8.9 ± 0.5 g/cm^{3}.
RIGHT!

An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.
In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.
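The rounding convention above can be expressed in code. This sketch (our own helper, not a standard function) rounds the uncertainty to one significant figure (two if its leading digit is a 1) and then rounds the value to the same decimal place:

```python
import math

def round_result(value, uncertainty):
    # Decimal exponent of the uncertainty's leading digit.
    exponent = math.floor(math.log10(abs(uncertainty)))
    leading = int(abs(uncertainty) / 10 ** exponent)
    sig_figs = 2 if leading == 1 else 1      # keep 2 figures if it leads with 1
    decimals = (sig_figs - 1) - exponent     # decimal places to keep
    return round(value, decimals), round(uncertainty, decimals)

print(round_result(8.93, 0.475328))   # the density example: (8.9, 0.5)
```

Note that Python's built-in `round` uses round-half-to-even, which differs from the round-half-up convention only in the exact-half cases.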
Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases exponentially with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.
Combining and reporting uncertainties
In 1993, the International Organization for Standardization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here are a few key points from this 100-page guide, which can be found in modified form on the NIST website.

When reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty u_c of the value. The total uncertainty is found by combining the uncertainty components based on the two types of uncertainty analysis:

- Type A evaluation of standard uncertainty: a method of evaluating uncertainty by the statistical analysis of a series of observations. This method primarily includes random errors.
- Type B evaluation of standard uncertainty: a method of evaluating uncertainty by means other than the statistical analysis of a series of observations. This method includes systematic errors and any other uncertainty factors that the experimenter believes are important.
Conclusion: "When do measurements agree with each other?"
We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or with the results of other experiments?"

Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure would be to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and to try to discover the source of the discrepancy if the difference is truly significant.

To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website. Here are some examples using this graphical analysis tool:
Figure 3

A = 1.2 ± 0.4
B = 1.8 ± 0.4

These measurements agree within their uncertainties, despite the fact that the percentage difference between their central values is 40%. However, with half the uncertainty (± 0.2), these same measurements disagree, since their uncertainties do not overlap. Further investigation would be needed to determine the cause of the discrepancy. Perhaps the uncertainties were underestimated; there may have been a systematic error that was not considered; or there may be a true difference between these values.
Figure 4
An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about a 5% probability) that the values are the same.

For the example above, with σ = 0.4:
|1.2 − 1.8| / √(0.4² + 0.4²) = 0.6 / 0.57 = 1.1

With σ = 0.2:
|1.2 − 1.8| / √(0.2² + 0.2²) = 0.6 / 0.28 = 2.1
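This agreement test is easy to script (a sketch; the function name is our own, not from the original text):

```python
import math

def agreement_ratio(a, sigma_a, b, sigma_b):
    # |A - B| divided by the combined standard uncertainty of the two values.
    return abs(a - b) / math.sqrt(sigma_a ** 2 + sigma_b ** 2)

# A = 1.2 and B = 1.8, first with uncertainties of 0.4, then 0.2:
print(f"{agreement_ratio(1.2, 0.4, 1.8, 0.4):.1f}")   # about 1.1: plausible agreement
print(f"{agreement_ratio(1.2, 0.2, 1.8, 0.2):.1f}")   # about 2.1: likely discrepant
```

The two calls reproduce the 1.1 and 2.1 ratios worked out above.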
References
Baird, D.C. Experimentation: An Introduction to Measurement Theory and Experiment Design, 3rd ed. Prentice Hall: Englewood Cliffs, 1995.

Bevington, Phillip and Robinson, D. Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. McGraw-Hill: New York, 1991.

ISO. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO) and International Committee on Weights and Measures (CIPM): Switzerland, 1993.

Lichten, William. Data and Error Analysis, 2nd ed. Prentice Hall: Upper Saddle River, NJ, 1999.

NIST. Essentials of Expressing Measurement Uncertainty. http://physics.nist.gov/cuu/Uncertainty/

Taylor, John. An Introduction to Error Analysis, 2nd ed. University Science Books: Sausalito, 1997.
Copyright © 2011 Advanced Instructional Systems, Inc. and the University of North Carolina.