The Case of the Data Spike

DN Staff

August 16, 2010


By Stephen Tilghman

We created a measurement circuit for a data recording instrument used to measure pressure in an oil well. The pressure was represented by an unknown frequency that had to be measured very accurately. The design counted N cycles of the unknown pressure signal against a high-frequency clock whose value was known with high accuracy and temperature stability. There is a small error in the measured time due to imperfect syncing of the two signals at the start and to clock counts that complete after the Nth cycle of the input signal, but it was accurate enough for the recorder. The initial tests gave data results that appeared well within the error band we wanted.
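To make the counting technique concrete, here is a minimal Python sketch of the arithmetic, assuming a 10 MHz reference clock and N = 1000 gated cycles; the article does not give the actual values used in the instrument.

# Reciprocal (period-averaging) frequency measurement, sketched in Python.
# F_REF and N_CYCLES are assumed values used only for illustration.
F_REF = 10_000_000       # reference clock frequency in Hz (assumed)
N_CYCLES = 1000          # input-signal cycles gated per reading (assumed)

def unknown_frequency(ref_counts: int) -> float:
    """Input frequency from the reference-clock counts accumulated
    while N_CYCLES cycles of the input signal were counted."""
    gate_time = ref_counts / F_REF    # seconds spanned by the N input cycles
    return N_CYCLES / gate_time       # input frequency in Hz

# A 5 kHz input gives 2000 reference counts per input period:
print(unknown_frequency(2_000_000))   # 5000.0 with an exact count
print(unknown_frequency(2_000_001))   # ~4999.9975 -- the small +/-1-count sync error

The small error from an extra or missing reference count is the quantization the design accepted; the spikes described below turned out to be a much larger error.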

The first real test of the circuit design ran for several hours. As we read the data we noticed spikes ___|___ in the data. Analysis of the spikes showed that the difference between the stable value being measured and the spike was one cycle of the input signal. The circuit had been designed to use the Q and /Q outputs of a D flip-flop to start the two chains of counters and take care of the asynchronous nature of the two signals. The other engineers on the project could not figure out where the extra-cycle problem was coming from. I suggested that the start process was not working as expected and was asked how that could be. I did not have an immediate answer, but decided to build a simulation of the circuit.

For inputs I used a pair of CD4098 one-shot pulse generators, one simulating the input signal and the other the measure signal. I set both to fire from a signal that simulated the start-measure command. Last, I built the timing circuits with variable resistors so that I could change the phase relationship between the two signals, and I used a scope to look at them. As I varied the phase relationship I discovered that when the CD4013 D flip-flop was clocked to start a new measurement, an input rising edge in the first 150 nanoseconds after the start signal caused the measured frequency to lose a cycle of measure time. A look at the CD4013 showed that there could be up to 150 nanoseconds of delay between the Q output going high and the /Q output going low. This caused the loss of the first cycle of the input and thus the spikes. A later analysis showed that the ratio of spikes to total run time matched the ratio of the 150-nanosecond window to the period of the measured signal. The problem was the asynchronous nature of the two signals beating together and occasionally meeting inside the 150-nanosecond window.
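That ratio is easy to check numerically: because the two signals are asynchronous, the phase of the next input rising edge relative to the start command is effectively uniform over one input period, so the chance of landing in the dead window is simply 150 ns divided by the input period. A quick Monte Carlo sketch in Python follows; the 5 kHz input frequency is an assumed value, not from the article.

import random

T_DEAD = 150e-9          # Q-to-/Q propagation delay of the CD4013 (from the article)
F_INPUT = 5_000          # input-signal frequency in Hz (assumed for illustration)
T_INPUT = 1.0 / F_INPUT  # input period in seconds

TRIALS = 1_000_000
spikes = 0
for _ in range(TRIALS):
    # Phase of the next input rising edge relative to the start command,
    # uniformly distributed because the two signals are asynchronous.
    edge_delay = random.uniform(0.0, T_INPUT)
    if edge_delay < T_DEAD:
        spikes += 1      # edge fell inside the dead window: one cycle is lost

print("simulated spike rate:", spikes / TRIALS)
print("predicted spike rate:", T_DEAD / T_INPUT)   # 150 ns / input period

The simulated and predicted rates agree, which is the same relationship we observed between the number of spikes and the total run time.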

We resolved the problem by using a pair of gates, a NAND and a NOR, to hold the start process off until the input signal was low, assuring that the two counting chains would clock well after the 150-nanosecond transition time.
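The article does not give the exact gate wiring, so the following is only a behavioral sketch of the idea in Python, with illustrative names: the start command is passed to the flip-flop only while the input signal is low, which (per the fix described above) assures the counting chains clock well after the 150-nanosecond transition window.

def release_start(start_requested: bool, input_is_high: bool) -> bool:
    """Behavioral model of the hold-off: pass the start command through
    only while the input signal is low, so the next input rising edge
    arrives well after the Q-to-/Q transition has settled."""
    return start_requested and not input_is_high

print(release_start(True, True))    # False -- hold off, input is still high
print(release_start(True, False))   # True  -- safe to arm the counter chains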
