TechTip: Accuracy, Precision, Resolution, and Sensitivity

Objective

This document explains the difference between the terms accuracy, precision, resolution, and sensitivity as applied to a measurement system.

Intended Audience

This document is intended for users who operate and interpret the results of a DAQ measurement system.

Overview

Instrument manufacturers usually supply specifications for their equipment that define its accuracy, precision, resolution, and sensitivity. Unfortunately, these specifications are not uniform from one manufacturer to another, nor are they always expressed in the same terms. Moreover, even when they are given, do you know how they apply to your system and to the variables you are measuring? Some specifications are given as worst-case values, while others take into consideration your actual measurements.

Accuracy

Accuracy can be defined as the amount of uncertainty in a measurement with respect to an absolute standard. Accuracy specifications usually contain the effect of errors due to gain and offset parameters. Offset errors can be given as a unit of measurement such as volts or ohms and are independent of the magnitude of the input signal being measured. An example might be given as ±1.0 millivolt (mV) of offset error, regardless of the range or gain settings. In contrast, gain errors do depend on the magnitude of the input signal and are expressed as a percentage of the reading, such as ±0.1%. Total accuracy is therefore equal to the sum of the two: ±(0.1% of input + 1.0 mV). An example of this is illustrated in Table 1.

Table 1. Readings as a function of accuracy
Conditions: input 0-10 V, accuracy = ±(0.1% of input + 1 mV)

Input Voltage   Range of Readings within the Accuracy Specification
0 V             -1 mV to +1 mV
5 V             4.994 V to 5.006 V (±6 mV)
10 V            9.989 V to 10.011 V (±11 mV)
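To make the arithmetic concrete, here is a minimal Python sketch of the Table 1 calculation; the function name and default values are ours, chosen to match the ±(0.1% of input + 1 mV) example above.

def accuracy_bounds(reading_v, gain_error_pct=0.1, offset_error_v=0.001):
    """Return the (low, high) bounds on a reading for a given accuracy spec."""
    uncertainty = abs(reading_v) * gain_error_pct / 100 + offset_error_v
    return reading_v - uncertainty, reading_v + uncertainty

for v in (0.0, 5.0, 10.0):
    lo, hi = accuracy_bounds(v)
    print(f"{v:4.0f} V: {lo:.3f} V to {hi:.3f} V")  # matches Table 1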

Precision

Precision describes the reproducibility of a measurement. For example, if a steady-state signal is measured many times and the values are closely grouped, the measurement has a high degree of precision, or repeatability. The values do not have to be close to the true value, only to each other. Accuracy, by contrast, is the difference between the average of the measurements and the true value.
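A short, self-contained illustration of the distinction (the readings below are invented for the example): a tightly grouped set of readings that sits away from the true value is precise but not accurate.

import statistics

true_value = 5.000
readings = [5.102, 5.101, 5.103, 5.102, 5.101]  # tightly grouped, but offset

mean = statistics.mean(readings)
spread = statistics.stdev(readings)   # precision (repeatability)
offset = mean - true_value            # accuracy (closeness to the true value)

print(f"precision (std dev):  {spread * 1e3:.2f} mV")  # small -> precise
print(f"accuracy (mean error): {offset * 1e3:.1f} mV") # large -> inaccurate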

Resolution

Resolution can be expressed in two ways:

  • It is the ratio of the maximum signal measured to the smallest part that can be resolved - usually with an analog-to-digital (A/D) converter.
  • It is the degree to which a change can theoretically be detected, usually expressed as a number of bits. This relates the number of bits of resolution to the actual voltage measurements.

In order to determine the resolution of a system in terms of voltage, we have to make a few calculations. First, assume a measurement system capable of making measurements across a ±10 V range (20 V span) using a 16-bit A/D converter. Next, determine the smallest possible increment we can detect at 16 bits. That is, 2^16 = 65,536, or 1 part in 65,536, so 20 V ÷ 65,536 = 305 microvolts (µV) per A/D count. Therefore, the smallest theoretical change we can detect is 305 µV.
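The same calculation in a few lines of Python (values taken from the example above):

span_v = 20.0         # ±10 V range -> 20 V span
bits = 16
counts = 2 ** bits    # 65,536 discrete A/D levels
lsb_v = span_v / counts
print(f"1 LSB = {lsb_v * 1e6:.0f} µV")  # ~305 µV per A/D count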

Unfortunately, other factors, such as noise, enter the equation and diminish the theoretical number of bits that can be used. A data acquisition system specified to have 16-bit resolution may also contain 16 counts of noise. Considering this noise, the 16 counts equal 4 bits (2^4 = 16); therefore, the 16 bits of resolution specified for the measurement system are diminished by four bits, and the A/D converter actually resolves only 12 bits, not 16.
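In code, the bookkeeping looks like this (a sketch using the counts from the example):

import math

bits = 16
noise_counts = 16
effective_bits = bits - math.log2(noise_counts)  # 16 - 4 = 12 bits
print(f"effective resolution ≈ {effective_bits:.0f} bits")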

A technique called averaging can improve the resolution, but it sacrifices speed. Averaging reduces the noise by the square root of the number of samples; it requires multiple readings to be added together and then divided by the total number of samples. For example, in a system with three bits of noise (2^3 = 8, that is, eight counts of noise), averaging 64 samples would reduce the noise contribution to one count, since √64 = 8 and 8 ÷ 8 = 1. However, this technique cannot reduce the effects of non-linearity, and the noise must have a Gaussian distribution.
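Here is a minimal sketch of the averaging arithmetic, using the three-bits-of-noise example; it assumes, as noted above, that the noise is Gaussian:

import math

noise_counts = 8    # three bits of noise: 2^3 = 8 counts
samples = 64
reduced = noise_counts / math.sqrt(samples)  # 8 / √64 = 1 count
print(f"noise after averaging {samples} samples: {reduced:.0f} count(s)")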

Sensitivity

Sensitivity is an absolute quantity: the smallest absolute amount of change that can be detected by a measurement. Consider a measurement device that has a ±1.0 volt input range and ±4 counts of noise. If the A/D converter resolution is 2^12, the peak-to-peak sensitivity will be ±4 counts × (2 V ÷ 4096), or ±1.9 mV p-p. This dictates the smallest change in the sensor signal that can be detected. For example, take a sensor that is rated for 1000 units with an output voltage of 0-1 volt (V). This means that at 1 volt the equivalent measurement is 1000 units, or 1 mV equals one unit. However, the sensitivity is 1.9 mV p-p, so it will take a change of two units before the input detects it.
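The same arithmetic as a short Python sketch (values from the example above; the variable names are ours):

range_span_v = 2.0     # ±1.0 V input range -> 2 V span
adc_counts = 2 ** 12   # 12-bit converter
noise_counts = 4       # ±4 counts of noise
sensitivity_v = noise_counts * (range_span_v / adc_counts)
print(f"sensitivity ≈ ±{sensitivity_v * 1e3:.2f} mV p-p")  # ≈ ±1.9 mV p-p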

The Measurement Computing USB-1608G Series Example

Let’s use the USB-1608G DAQ device and determine its resolution, accuracy, and sensitivity. (Refer to Table 2 and Table 3, below, for its specifications.) Consider a sensor that outputs a signal between 0 and 3 volts and is connected to the USB-1608G analog input. We will determine the accuracy at two conditions: Condition No. 1 when the sensor output is 200 mV, and Condition No. 2 when the sensor output is 3.0 volts.

Accuracy: The USB-1608G uses a 16-bit A/D converter

Condition No. 1: 200 mV measurement on a ±1 volt single-ended range

  • Temperature = 25 °C
  • Resolution = 2 V ÷ 2^16 = 30.5 µV
  • Sensitivity = 30.5 µV × 1.36 LSB rms = 41.5 µV rms
  • Gain Error = 0.024% × 200 mV = ±48 µV
  • Offset Error = ±245 µV
  • Linearity Error = 0.0076% of range = 76 µV
  • Total Error = 48 µV + 245 µV + 76 µV = 369 µV

Therefore a 200 mV reading could fall within a range of 199.631 mV to 200.369 mV.

Condition No. 2: 3.0 V measurement on a ±5 volt single-ended range

  • Temperature = 25 °C
  • Resolution = 10 V ÷ 2^16 = 152.6 µV
  • Sensitivity = 152.6 µV × 0.91 LSB rms = 138.8 µV rms
  • Gain Error = 0.024% × 3.0 V = 720 µV
  • Offset Error = 686 µV
  • Linearity Error = 0.0076% of range = 380 µV
  • Total Error = 720 µV + 686 µV + 380 µV = 1.786 mV

Therefore, a 3.0 V reading could fall within a range of 2.9982 V to 3.0018 V.
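The two conditions can be reproduced with a small helper function; the function and its name are ours, while the gain, offset, and INL numbers come from Table 2:

def total_error_v(reading_v, range_v, gain_pct, offset_uv, inl_pct):
    """Sum gain, offset, and linearity (INL) errors, in volts."""
    gain = reading_v * gain_pct / 100
    offset = offset_uv * 1e-6
    inl = range_v * inl_pct / 100   # INL is specified as a % of range
    return gain + offset + inl

# Condition No. 1: 200 mV on the ±1 V range
e1 = total_error_v(0.200, 1.0, 0.024, 245, 0.0076)
print(f"Condition 1 total error: {e1 * 1e6:.0f} µV")   # ~369 µV

# Condition No. 2: 3.0 V on the ±5 V range
e2 = total_error_v(3.0, 5.0, 0.024, 686, 0.0076)
print(f"Condition 2 total error: {e2 * 1e3:.3f} mV")   # ~1.786 mV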

Summary Analysis:

Accuracy: Consider Condition No. 1. The total accuracy is 369 µV ÷ 2 V × 100 = 0.0184% (relative to the 2 V span of the ±1 V range).

Accuracy: Consider Condition No. 2. The total accuracy is 1.786 mV ÷ 10 V × 100 = 0.0179% (relative to the 10 V span of the ±5 V range).

Effective Resolution: The USB-1608G has a specification of 16 bits of theoretical resolution. However, the effective resolution is the ratio between the maximum signal being measured and the smallest voltage that can be resolved, i.e., the sensitivity. For example, if we consider Condition No. 2, divide the measured signal value by the sensitivity value, 3.0 V ÷ 138.8 µV ≈ 21,600; converting to the equivalent bit value gives log2(21,600) ≈ 14.4 bits of effective resolution. To further improve the effective resolution, consider averaging the values as previously discussed.
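As a quick check of the effective-resolution arithmetic (values from Condition No. 2):

import math

signal_v = 3.0
sensitivity_v = 138.8e-6           # rms noise on the ±5 V range
ratio = signal_v / sensitivity_v   # ~21,600 distinguishable levels
print(f"effective resolution ≈ {math.log2(ratio):.1f} bits")  # ~14.4 bits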

Sensitivity: The most sensitive measurement is made on the ±1 volt range, where the noise is only 41.5 µV rms, whereas the sensitivity of the ±5 volt range is 138.8 µV rms. In general, select the smallest range that captures the sensor's full output, since it offers the best sensitivity. For example, if the output signal is 0-3 volts, select the ±5 volt range instead of the ±10 volt range.

Table 2. Analog input DC measurement specifications (all values are ±)

Range   Gain Error       Offset       INL Error     Absolute Accuracy    Gain Temp. Coeff.    Offset Temp. Coeff.
        (% of reading)   Error (µV)   (% of range)  at Full Scale (µV)   (% of reading/°C)    (µV/°C)
±10 V   0.024            915          0.0076        4075                 0.0014               47
±5 V    0.024            686          0.0076        2266                 0.0014               24
±2 V    0.024            336          0.0076        968                  0.0014               10
±1 V    0.024            245          0.0076        561                  0.0014               5

Table 3. Noise performance

Range   Counts   LSB rms
±10 V   6        0.91
±5 V    6        0.91
±2 V    7        1.06
±1 V    9        1.36

More Information

Please reach out to our support team on the Digilent Forum if you have any questions: https://forum.digilent.com/

Additional TechTips are available here in Digilent Reference: App Notes, Tech Tips, and Other Documents