While checking the Home Depot website for a new thermostat, I found a unit with a weight of 8.899999 oz. I doubt Home Depot has a scale that measures millionths of an ounce. But because those digits exist, some people will believe they represent an accurate measurement.
That's the problem with accuracy and resolution—people often think they represent the same thing. At its simplest, resolution tells you how finely you can represent a quantity and accuracy tells you how much of the measurement really has value.
Companies that sell instruments such as data-acquisition boards and digital multimeters (DMMs) often make a big deal of the fact that a given card provides a 16-, 18-, or 24-bit analog-to-digital converter (ADC). But they may not tell users, at least not clearly, how many of those bits they actually can use. In the thermostat example, it's safe to assume only the two most significant digits have any value.
Keep in mind that all ADCs have built-in inaccuracies because they digitize a signal in discrete steps. There's just no way the output can perfectly represent an analog input signal. A 12-bit converter with a 0–10 V range, for example, provides a least-significant bit (LSB) of 2.44 mV, so the ADC can digitize values only in 2.44 mV steps: 2.44 mV, 4.88 mV, 7.32 mV,

Perfection Please: A perfect ADC produces a unique digital code as an analog input increases through small voltage increments. Each increment equals the voltage represented by the converter's least-significant bit. For a 12-bit converter with a 0–10 V range, that comes to 2.44 mV per step.

and so on. Through its entire 10 V range, a measurement can never be more accurate than ±1/2 LSB, or ±1.22 mV.
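The step-size arithmetic is simple enough to sketch. The function name below is my own; the 12- and 16-bit figures over a 0–10 V range come from the text.

```python
# Minimal sketch: quantization step size and worst-case error for an ideal ADC.

def lsb_volts(full_scale: float, bits: int) -> float:
    """Voltage represented by one least-significant bit."""
    return full_scale / (2 ** bits)

step_12 = lsb_volts(10.0, 12)   # ~2.44 mV per step
step_16 = lsb_volts(10.0, 16)   # ~0.15 mV per step

# An ideal converter can never be more accurate than half a step.
print(f"12-bit: {step_12 * 1e3:.2f} mV/step, worst-case error ±{step_12 / 2 * 1e3:.2f} mV")
print(f"16-bit: {step_16 * 1e3:.3f} mV/step, worst-case error ±{step_16 / 2 * 1e3:.3f} mV")
```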
Today, the cost of a 16-bit ADC exceeds that of a 12-bit device by only a few dollars. So users ask, "Why can't manufacturers use 16-bit ADCs and give us four more bits?" They can, but adding four bits of resolution doesn't automatically guarantee more accuracy. The 2.44 mV steps in a 12-bit ADC shrink to 0.15 mV steps in a 16-bit ADC. Unless board manufacturers also improve the performance of the ADC's supporting circuits, which costs money, those extra bits amount to naught. In particular, they must pay careful attention to stability and noise reduction.
Watch for Errors
Several characteristics of the ADCs themselves, if not accounted for, can introduce errors and reduce accuracy. These errors stem from offset, gain, temperature drift, and nonlinear performance.
Although they don't often measure such errors, users should know they exist. Offset refers to a fixed difference between an actual signal and what an ADC measures. The offset remains constant throughout the ADC's range, and designers can usually reduce it electronically. After designers remove any offset error, a gain error may remain. It represents a difference between the slope of an ideal ADC's measurement steps and those in a real ADC. As with offset error, designers can reduce gain error through circuit trimming. The use of matched components and circuit designs that minimize the effect of temperature changes helps overcome problems of thermal drift.
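The article describes hardware trimming, but the arithmetic behind an offset-and-gain correction is worth seeing. Purely as an illustration, here is a two-point software correction; every reading in it is an invented value.

```python
# Hypothetical sketch: removing offset and gain error with a two-point
# calibration against known reference voltages.

def make_corrector(ref_lo, raw_lo, ref_hi, raw_hi):
    """Build a linear correction from two reference measurements."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return lambda raw: gain * raw + offset

# Suppose the board reads 0.012 V at a 0 V reference and 9.970 V at 10 V.
correct = make_corrector(0.0, 0.012, 10.0, 9.970)
mid = correct(5.0)   # a raw mid-scale reading, corrected
print(f"Corrected mid-scale reading: {mid:.4f} V")
```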

Noise Helps: Adding a bit of controlled noise to an ADC's input can force a signal above or below each step.

After designers remove offset, thermal, and gain errors, they still may face nonlinearity errors. Differential nonlinearity (DNL) occurs at the transitions between steps; DNL values specify the difference between an actual step width and the ideal width of 1 LSB. DNL errors vary from step to step.
Integral nonlinearity (INL) relates all the DNL errors to the ideal performance of an ADC, and ADC suppliers can represent INL in two ways. The first technique plots the best straight-line fit for all the DNL errors and relates this to the straight line produced by a perfect ADC. The second technique draws a straight line between the DNL value at the start and end of the measurement range. Manufacturers of analog-measurement cards strive to find ADCs that minimize any effects of DNL and INL. Specification sheets should list the values for these types of errors, and a DNL of ±1/2 LSB and an INL of ±1 to ±2 LSB are reasonable for a high-accuracy measuring instrument.
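To make the two definitions concrete, here is a sketch that derives per-code DNL and endpoint-style INL from a list of measured code-transition voltages. The function and the transition values are hypothetical, not from the article.

```python
# Hypothetical sketch: DNL per code and cumulative (endpoint) INL, in LSB,
# computed from measured code-transition voltages.

def dnl_inl(transitions, lsb):
    """Step widths come from adjacent transitions; DNL is each width's
    error in LSB, and endpoint INL is the running sum of DNL errors."""
    widths = [hi - lo for lo, hi in zip(transitions, transitions[1:])]
    dnl = [w / lsb - 1.0 for w in widths]
    inl, running = [], 0.0
    for d in dnl:
        running += d
        inl.append(running)
    return dnl, inl

lsb = 2.44e-3  # 12-bit converter over 0-10 V, as in the article
transitions = [0.0, 2.50e-3, 4.80e-3, 7.40e-3, 9.80e-3]  # made-up measurements
dnl, inl = dnl_inl(transitions, lsb)
print("DNL (LSB):", [round(d, 3) for d in dnl])
print("INL (LSB):", [round(i, 3) for i in inl])
```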
Simplify the Specs
These error specifications can get confusing and users often wonder why manufacturers can't specify performance in simpler terms. Actually, some do; they specify a signal-to-noise ratio (SNR) that can simplify comparisons. Every measuring device has a noise floor, below which it can't make accurate measurements. The ratio of the maximum measurable signal to the noise floor provides the signal-to-noise ratio in units of decibels (dB).
Given a measuring instrument's SNR—also called its dynamic range—you can calculate a quantity called the effective number of bits (ENOB). This value tells you how many of the bits in a given system provide accurate information. Here's an example that compares two hypothetical data-acquisition systems:
System    ADC Bits    Theoretical SNR (dB)    Actual SNR (dB)
A         16          96                      92
B         18          108                     90
At first glance it might appear the 18-bit system has an advantage—two more bits and only a slightly smaller SNR than the 16-bit system. You can use the formula below to relate a system's SNR to its number of effective bits, n:
n = (SNR - 1.76 dB)/6.02 dB
So for the two systems, the 16-bit ADC yields 15 effective bits and the 18-bit system can provide only 14.7 effective bits. It looks like the 16-bit system will perform slightly better than the 18-bit system, and probably at lower cost. Real ADC accuracy is usually lower, and never higher, than the number of bits produced by an ADC.
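The comparison above is one line of arithmetic; here is the article's formula applied to both hypothetical systems from the table.

```python
# Sketch: effective number of bits (ENOB) from a measured SNR, per the
# formula n = (SNR - 1.76 dB) / 6.02 dB.

def enob(snr_db: float) -> float:
    """Effective number of bits from a measured SNR in dB."""
    return (snr_db - 1.76) / 6.02

system_a = enob(92.0)   # 16-bit ADC, measured SNR 92 dB
system_b = enob(90.0)   # 18-bit ADC, measured SNR 90 dB
print(f"System A: {system_a:.1f} effective bits")  # ~15.0
print(f"System B: {system_b:.1f} effective bits")  # ~14.7
```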
Engineers make realistic comparisons based on signaltonoise ratios because manufacturers make these measurements on an entire system, from the analog signal inputs through a multiplexer and amplifier to an ADC. And the SNR measurements account for any noise generated by the surrounding circuits. Thus the SNR most accurately represents how equipment will operate in actual use.
For a Few Bits More
After taking great pains to reduce errors and noise, some manufacturers actually add a small amount of noise to incoming signals to better resolve them. This technique, called dithering, sounds counterintuitive, but in some situations, it can work well.
Say an incoming dc signal with slight fluctuations exists between two adjacent ADC steps. The ADC produces only one output value, x, because the signal never crosses the threshold for the ADC's next highest step, x + 1 LSB. Adding 1/2 LSB rms of Gaussian white noise to the signal may at times force it above the ADC's next level. Thus a series of measurements will include some values for x and some for x + 1 LSB. Averaging these values can help resolve an unknown signal with better accuracy. But keep in mind that the ADC did not instantaneously measure just one value. Instead, software applied a statistical technique to several acquired values to derive information about the original dc input.
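The effect is easy to demonstrate in simulation. The sketch below models an ideal 12-bit quantizer, a dc input chosen (hypothetically) to sit between two steps, and 1/2 LSB rms of Gaussian dither; averaging the dithered samples lands much closer to the true value than any single undithered reading can.

```python
# Sketch of dithering: Gaussian noise plus averaging resolves a dc value
# that falls between two ADC steps. All signal values are invented.
import random

random.seed(1)
lsb = 2.44e-3       # 12-bit ADC step over 0-10 V
signal = 5.0002     # dc input between two adjacent steps

def quantize(v, step):
    """Ideal ADC: round the input to the nearest step."""
    return round(v / step) * step

# Without dither, every sample digitizes to the same code.
plain = quantize(signal, lsb)

# With ~1/2 LSB rms of Gaussian noise, samples split between adjacent
# codes; averaging many of them converges toward the true dc value.
samples = [quantize(signal + random.gauss(0.0, lsb / 2), lsb)
           for _ in range(10_000)]
dithered = sum(samples) / len(samples)

print(f"Undithered reading:  {plain * 1e3:.3f} mV (true {signal * 1e3:.3f} mV)")
print(f"Dithered average:    {dithered * 1e3:.3f} mV")
```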