Using DSP for precise zero crossing measurement?

John,

If you want to measure the time difference between two sine waves in noise
and accuracy is your primary objective, then you should start with the
“optimal” solution to the problem and not with an ad-hoc technique
such as measuring the zero crossings.

In the simplest scenario, if your model looks like:
s(t) = s(t;A1,A2,t1,t2) = A1 sin(w (t-t1)) + A2 sin(w (t-t2))
r(t)=s(t)+n(t)
and if you assume Gaussian noise n(t), then
if your observation window is t in (0,T),
the (maximum-likelihood) estimate should minimize
the squared Euclidean distance
int_0^T |r(t) - s(t)|^2 dt
or in discrete time (with appropriate sampling Ts)
sum_k |r(k Ts)-s(k Ts)|^2
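In discrete time the score can be written down directly. A minimal numpy sketch (the function name and any test values are mine; actually minimizing it over the parameters is a separate step):

```python
import numpy as np

def score(params, r, t, w):
    """Squared Euclidean distance between the observation r and the model
    s(t; A1, A2, t1, t2) = A1 sin(w(t-t1)) + A2 sin(w(t-t2)).
    The ML estimate is whatever parameter set minimizes this."""
    A1, A2, t1, t2 = params
    s = A1 * np.sin(w * (t - t1)) + A2 * np.sin(w * (t - t2))
    return np.sum(np.abs(r - s) ** 2)
```

Any numerical minimizer (grid search, Gauss-Newton, etc.) can then be run over this score.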

There are several simplifications you can try (e.g., if you know
a priori that the two amplitudes are the same, or if you know one
of t1 or t2), depending on the required accuracy, but
this is the safest way to approach the problem.

Achilleas

Achilleas, thanks for the answer, which it will take me a little while
to digest as I’m not a mathematician – more accurately, I’m a
mathematical anti-prodigy! However, I see generally what you’re saying
and measuring zero crossings isn’t the way it “has” to be done, it’s the
way it’s been done in the past for this application.

One factor is that we will not know that the two amplitudes are the
same; while we can aim for that, in some cases they may be several dB
apart because of the nature of the device under test and the reference
signal. And, we don’t want to add additional amplifiers, etc. into the
system unless we really have to because they could contribute their own
noise to the result.

My posting a few minutes ago describes in more detail just what we’re
trying to accomplish. Hopefully that will help explain the goal.

Thanks!

John

John,

Achilleas shows how to compute the “score” for the parameters A1, A2,
t1, and t2 and that the problem is to minimize the distance between r
and s, but he does not say how to re-estimate or compute the best set of
parameters. That is all the meat of the problem (the minimization
of this L^2 norm). Since you are the smartest lawyer with no
engineering training I have ever known, insofar as your engineering
abilities are concerned, you can probably do this, but I suspect some
further guidance will make this easier.

73’s
Bob (N4HY)

Discuss-gnuradio mailing list
[email protected]
Discuss-gnuradio Info Page


Robert W. McGwier, Ph.D.
Center for Communications Research
805 Bunn Drive
Princeton, NJ 08540
(609)-924-4600
(sig required by employer)

On Mon, 2006-09-18 at 12:32 -0400, Achilleas A. wrote:

s(t) = s(t;A1,A2,t1,t2) = A1 sin(w (t-t1)) + A2 sin(w (t-t2))
r(t)=s(t)+n(t)

John,

Does your model look like the one Achilleas described above, or is it
like the following?

s1(t) = A1 sin(w1(t-t1)+phi1); r1(t) = s1(t) + n1(t)
s2(t) = A2 sin(w2(t-t2)+phi2); r2(t) = s2(t) + n2(t)

In this model, you have separate observations of each sinusoid. i.e.,
r1(t) and r2(t) respectively.

Bob, Achilleas -

On an intuitive level, it seems to me that (ML) estimating the
parameters of s1(t) and s2(t) from r1(t) and r2(t) respectively, instead
of jointly from r(t) = s1(t)+s2(t)+n(t), would result in better
accuracy. Do you agree? At the very least, I think an adaptive
technique would converge faster if only three parameters needed to be
estimated instead of six. (Obviously, you would estimate both signals in
parallel to get dphi = phi1-phi2).
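The per-channel idea is easy to sketch when w is known: each observation is linear in (a, b) with r(t) ≈ a sin(wt) + b cos(wt), so ordinary least squares gives the phase directly. A sketch under that known-w assumption (the function name is mine):

```python
import numpy as np

def phase_of(r, t, w):
    """Least-squares fit r(t) ~ a sin(wt) + b cos(wt) = A sin(wt + phi);
    returns phi = atan2(b, a).  Assumes the frequency w is already known."""
    H = np.column_stack([np.sin(w * t), np.cos(w * t)])
    a, b = np.linalg.lstsq(H, r, rcond=None)[0]
    return np.arctan2(b, a)
```

Then dphi = phase_of(r1, t, w) - phase_of(r2, t, w), and the time offset follows as -dphi/w; the three parameters per channel never need to be searched jointly.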

-Lee

If I understand correctly (always a big assumption), it’s the second
model. The two sine waves are electrically independent, and my
assumption is that they would be analyzed by separate, but coherently
clocked, ADCs as two channels.

I also should add that while the signals are generally sinusoidal, in the
real world they will have some harmonic and other content – but we do
not care about high-frequency components; in fact, the whole point of
the exercise is to look at low-frequency elements of nominally 1 Hz and
lower by determining the Allan Variance at tau = 1 second and greater.

John

On Mon, 2006-09-18 at 14:08 -0400, Lee P. wrote:

s1(t) = A1 sin(w1(t-t1)+phi1); r1(t) = s1(t) + n1(t)
s2(t) = A2 sin(w2(t-t2)+phi2); r2(t) = s2(t) + n2(t)

whoops. should have been

s1(t) = A1 sin(w1 t + phi1) or A1 sin(w1(t-t1))
s2(t) = A2 sin(w2 t + phi2) or A2 sin(w2(t-t2))

but you get the point…

Lee:

The marginal distributions for the parameters never contain as much
information as the joint probability distributions except those
conditions where the underlying problem is truly separable. It is
almost always better from an information theoretic and probabilistic
point of view to use the joint conditional distributions for the
parameters given the observations and do the estimation jointly.

In this case however, we are actually making measurements on two
independent signals, and the observations are r1,r2 as you say. It
would be better in this case, since they are truly separated, to
measure each independently. The joint observation process would arise
if John measured the difference signal through a mixer or time interval
counter, etc.

Achilleas identified incorrect parameters given John’s statement of the
problem. Achilleas used amplitude and phase as the parameters and the
original statement of John’s problem has constant 1V peak to peak as the
amplitude and the unknowns are frequency and phase. Your formulation
of the problem is correct, but it is more general than John’s statement
of the problem, since you include A1, A2 as (possibly) different when
John’s statement of the problem is that A1 = A2 = 1.

Estimating w and phi in the presence of noise n(t) in the model

r(t) = sin(wt + phi) + n(t)

is just about the oldest problem in town. Let us consider John’s original
problem given the system he claims he has. Since John’s statement is that he
given the system he claims he has. Since John’s statement is that he
is doing the measurements on each separately using a coherent system,
he can repeatedly estimate w and phi using FFT’s and downsampling.
One way to reduce the impact of the noise given a fixed size FFT, is to
use the coherence as stated and to do long term autocorrelations, where
the autocorrelations are computed using FFT’s and then simply added,
complex bin by bin. This coherent addition of the correlations will
produce a very long term autocorrelation where accuracy of the estimates
from this process goes up like N where N is the number of FFT’s added.
THIS ASSUMES THE SYSTEM IS REALLY COHERENT FOR THAT N * FFTsize SAMPLES
and THE NOISE REALLY IS ZERO MEAN GAUSSIAN. Phase noise, drift,
non-Gaussian noise, etc. will destroy this coherence assumption and the
Gaussian properties we use in the analysis. He can reduce the ultimate
computational complexity by mixing, downsampling and doing the FFT
again and then mixing, downsampling and doing the FFT again, etc. until
the final bin traps his w to sufficient accuracy for his needs and then
phi is simply read off from the FFT coefficient. The mixing and
downsampling would be a usable approach. Careful bookkeeping must be
done on the phase group delay through the downsampling filters should
this approach be used or phi will be in error by the neglected phase
group delay.

This is one approach that I believe John can take and it is pretty
simple to put together even if it is not necessarily the most
computationally benign. He can grab tons of samples and do this in
Octave on his favorite Linux computer. In the case the signals are not
actually 1V pk-pk, this will also yield the amplitude since the power
of the sinusoid as measured by the FFT’s in use above will yield the
result for the amplitude. If this is to be done real time, then a
cascade of power of 2 reductions in sample rate and frequency offset can
be done until the parameters are trapped sufficiently for the exercise.
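A rough sketch of the averaging step described above, with the peak bin read off for a coarse frequency estimate (averaging the per-segment power spectra is the FFT-domain equivalent of averaging the autocorrelations; all numbers here are made-up test values, and the zoom/downsample refinement is omitted):

```python
import numpy as np

fs, f0, phi = 1000.0, 37.0, 0.7            # made-up sample rate, frequency, phase
n_seg, seg_len = 64, 1024
rng = np.random.default_rng(0)
t = np.arange(n_seg * seg_len) / fs
r = np.sin(2 * np.pi * f0 * t + phi) + 0.5 * rng.standard_normal(t.size)

# Average the per-segment power spectra; by Wiener-Khinchin this is the
# same as averaging the segment autocorrelations and transforming once.
segs = r.reshape(n_seg, seg_len)
pxx = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
k = np.argmax(pxx[1:]) + 1                 # skip the DC bin
f_hat = k * fs / seg_len                   # coarse estimate, one-bin resolution
```

f_hat is only good to a bin width (fs/seg_len, about 1 Hz here); the mix-downsample-FFT cascade is what sharpens it, with the phase then read off the final bin’s complex coefficient.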

Bob


Ok, so I’ve been working on the digital modulations, and everything works
great except when I put in the AGC loop. It jumps up in gain when there is
no signal applied and then takes too long to converge when the signal comes
in. The result is that it almost always misses the first packet before
converging to receive the rest of a burst of packets. Taking the AGC loops
out will work when the transmitter amplitude and receiver gain are set
within a reasonable range, but we’re of course losing dynamic range this
way.

I’ve played with the rate of the AGC to no avail, and we have a limiter in
there to clamp the gain, but again, if we clamp too much, we’ll lose
dynamic range.

The implementation is a straightforward and simple AGC:

gain += (reference - sqrt(real(y)^2 + imag(y)^2)) * rate

where reference is the value to converge to (±1 usually) and rate is the
slew rate of the loop.

I’m looking at Frerking’s “Digital Signal Processing in Communications
Systems” for a possible solution. His AGC circuit is a similar approach to
the one above with a few more parts like a log(), integrator, blah blah
blah. I’m concerned about adding too much more to the AGC loop for
computational reasons, though, but will if it’s the best way to go.

Does anyone have any recommendations? Is it worth implementing a more
complicated circuit to improve the acquisition time of the AGC? Or am I
looking in the wrong direction?

Thanks,
Tom

Do you already have some extra “sacrificial bits” that you send out to
allow for the AGC to settle? How long is it taking for the AGC to
settle?

You could try adding some extra sacrificial bits that you ramp up when
you transmit to try to allow for steady tracking in the AGC loop
instead of having it get blasted immediately on transmit.

Brian

Tom:

A good agc has at least two time constants. One for attack and one for
decay. Your attack is much too slow.

tmp = (reference - sqrt(real(y)^2 + imag(y)^2));
rate = rate1;
if tmp > gain rate = rate2;

gain += tmp*rate;

rate2 is decay (where gain needs to be increased) and rate1 is attack
where gain needs to be decreased. rate1 > rate2 and rate1 must be fast
enough that you do not clip and destroy the first bits of information in
a packet. If the packet has some idles, or other announcement data,
then you want to recover fast enough to recover receiver parameters such
as clock and carrier in addition to gain.

Since you are doing DSP and have data in buffers, you can do noncausal
AGC as well. I sense the required signal level three time constants
ahead of the place where I apply the actual gain, where the time constant
is determined by the attack rate. This works spectacularly well in
DttSP. In addition to this, we actually complicate things a bit more,
much to our advantage, by having two tracks of AGC: one where rapid
peaks and impulses flatten the gain but release quickly, and the
other a slower AGC channel. The gain is the min of the two gains.
Again, from all indications of the users of the SDR-1000, this is a
spectacular AGC. We allow for thresholded AGC, which does decrease the
dynamic range, but this is subject to user preference. Where you are
doing data, and not listening to it, you probably don’t care about
thresholding.
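A minimal single-track sketch of the fast-attack/slow-decay idea. This is my own toy reading of the scheme, not DttSP code; the rate values are made up, and the attack branch is taken when the instantaneous gain error is negative (output too big):

```python
import numpy as np

def agc(y, reference=1.0, attack=0.05, decay=0.005):
    """Toy fast-attack / slow-decay AGC.  A negative instantaneous gain
    error (output too big) selects the fast attack rate; a positive one
    (output too small) selects the slow decay rate.  Rates are made up."""
    gain = 1.0
    out = np.empty(len(y), dtype=complex)
    for i, s in enumerate(y):
        out[i] = gain * s
        err = reference - abs(out[i])          # instantaneous gain-error estimate
        rate = attack if err < 0.0 else decay  # attack when too loud, else decay
        gain += err * rate
    return out
```

On a complex tone of amplitude 4 this loop pulls the envelope to the reference within a few hundred samples at these rates; the two-track min-of-gains version would run two such loops and apply the smaller gain each sample.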

Bob


Thanks, Bob -

I think this problem is a little trickier than previously assumed
because:

  1. I believe John wants to measure frequency drift over time (i.e.,
    drift of one clock with respect to a reference) So, the clock signals
    cannot be assumed to be 100% coherent. Otherwise, there would be no
    drift to measure. The problem here is that there is no underlying model
    for how this drift occurs. (It’s what he’s trying to find out!) The
    signals are probably very close to coherent over some finite window, but
    the problem becomes one of determining a window size that helps
    integrate out noise but doesn’t do too much averaging of the drifting
    signal to obtain parameter estimates.

  2. John wants to measure to the best possible accuracy. So, while we
    assume A1=A2=1, this might not be the best assumption. Even if he
    measures the amplitude at the output of his clocks, there is no
    guarantee that both digitizing channels are completely calibrated. They
    may be “good enough,” but that depends on his application.

  3. Relating to (1), I think John needs estimates of instantaneous
    frequency over time. A sliding window method similar to the one
    outlined below would work. Again though, how is the window size chosen
    if the nature of the drift isn’t known? Similarly, there are adaptive
    techniques that could be used. They should track to some degree, but
    again, at what point do they break lock, and how do we know?

I think this may become a “how do you test the test equipment” problem.

Lee

Bob,

Thanks, I’ll start playing around with these ideas. I was actually thinking
of this problem like a CVSD codec after I sent the message, which has
different approaches to gain and decay. Glad I wasn’t going too far astray
with that thought.

We could consider putting in a preamble, like Brian suggested, too, but as
it stands right now, the decay is far too slow for a reasonable preamble to
tackle.

Thanks!
Tom

Lee P. wrote:

signals are probably very close to coherent over some finite window, but
the problem becomes one of determining a window size that helps
integrate out noise but doesn’t do too much averaging of the drifting
signal to obtain parameter estimates.

The two signals are definitely not coherent. The offset is likely to be
small, but it’s there.

The real goal is not to measure the long-term drift, but rather the
short-term variation (i.e., if the delta-t between F1 and F2 is 1 us, how
much does it wiggle around that 1 us value from second to second).

  2. John wants to measure to the best possible accuracy. So, while we
    assume A1=A2=1, this might not be the best assumption. Even if he
    measures the amplitude at the output of his clocks, there is no
    guarantee that both digitizing channels are completely calibrated. They
    may be “good enough,” but that depends on his application.

Ideally, I’d like to avoid measuring the amplitude with any device other
than the ADC that’s being used for the sampling.

  3. Relating to (1), I think John needs estimates of instantaneous
    frequency over time. A sliding window method similar to the one
    outlined below would work. Again though, how is the window size chosen
    if the nature of the drift isn’t known? Similarly, there are adaptive
    techniques that could be used. They should track to some degree, but
    again, at what point do they break lock, and how do we know?

You got it.

As you all know, I’m no mathematician. On the airplane ride home from
the Digital Communications Conference Sunday night, my friend Tom (who’s
an RF engineer but not a DSP theory guy) and I roughed out a simple idea
like this (please feel free to laugh and throw things):

  1. determine when the slope of the waveform was positive.

  2. start grabbing samples at some point shortly before the nominal zero
    crossing.

  3. continue grabbing samples until some point shortly after the nominal
    zero crossing.

  4. use an appropriate set (e.g., the endpoints, or a least-squares fit)
    of those samples to determine the slope of the signal around the zero
    crossing.

  5. find the two samples that most closely bracket the zero crossing,
    and use the slope derived in step 4 to interpolate between those two
    samples to find the real zero crossing time.

  6. use any extra samples in some magical way to
    decrease the impact of any noise on the interpolation (told you, we
    aren’t statisticians or DSP experts!).

  7. timestamp the samples so we can determine the time between zero
    crossing of F1 and zero crossing of F2

  8. log that difference to later feed to the Allan Deviation program
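For what it’s worth, steps 1 through 6 above are only a few lines in numpy. A sketch under the assumption of an evenly sampled record (the function name is mine, and the “magical” noise reduction of step 6 is just a least-squares line fit over a few extra samples):

```python
import numpy as np

def zero_cross_time(t, x, extra=3):
    """Time of the first positive-going zero crossing: bracket it (steps
    1-3 and 5), then least-squares fit a line through the nearby samples
    and solve for its root (steps 4 and 6)."""
    idx = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0))[0]   # bracket the crossing
    lo, hi = max(idx - extra, 0), min(idx + 2 + extra, len(x))
    slope, intercept = np.polyfit(t[lo:hi], x[lo:hi], 1)   # line through the samples
    return -intercept / slope                              # root of the fitted line
```

Run once per channel on timestamped samples and difference the two results (steps 7-8); the fit averages the noise over all the bracketing samples instead of just the two nearest ones.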

One interesting question was whether we were better served in this
algorithm by having more bits in the ADC for better voltage
discrimination, or a higher sample rate for better frequency
discrimination.

I’m sure that this method has serious flaws, but perhaps the zero
crossing has the advantage of being amplitude-independent (so long as
you stay close enough to the middle of the waveform so that the slope is
approximately constant).

John

Eric B. wrote:

if tmp > gain rate = rate2;

Shouldn’t this read:

if tmp > reference rate = rate2;
         ^^^^^^^^^

I don’t think so. If your noisy current observation, tmp, which is
an instantaneous measurement on the needed gain to attain reference, is
below the current smoothed state, gain, then you are in decay mode.
If your instantaneous needed gain observation is less than the current
smoothed gain, you are in the attack condition and a different time
constant on the smoothing is needed. I am pretty sure what I told him
is right. Does it make more sense now?

Bob


Robert W. McGwier, Ph.D.
Center for Communications Research
805 Bunn Drive
Princeton, NJ 08540
(609)-924-4600
(sig required by employer)

On Tue, Sep 19, 2006 at 01:36:55PM -0400, Robert W McGwier wrote:

Tom:

A good agc has at least two time constants. One for attack and one for
decay. Your attack is much too slow.

tmp = (reference - sqrt(real(y)^2 + imag(y)^2));
rate = rate1;
if tmp > gain rate = rate2;

gain += tmp*rate;

Shouldn’t this read:

if tmp > reference rate = rate2;
         ^^^^^^^^^

Bob McGwier wrote:

tmp = (reference - sqrt(real(y)^2 + imag(y)^2));

mode. If your instantaneous needed gain observation is less than the
current smoothed gain, you are in the attack condition and a
different time constant on the smoothing is needed. I am pretty sure
what I told him is right. Does it make more sense now?

Bob

Arrrgh. If the instantaneous needed gain ERROR estimate, tmp, is
positive, you are in decay.
If the instantaneous needed gain ERROR estimate, tmp, is negative, you are
in attack.

I believe the equations are right even if I cannot answer an email
clearly.

:wink:
Bob


AMSAT VP Engineering. Member: ARRL, AMSAT-DL, TAPR, Packrats,
NJQRP/AMQRP, QRP ARCI, QCWA, FRC. ARRL SDR Wrk Grp Chairman
“You see, wire telegraph is a kind of a very, very long cat.
You pull his tail in New York and his head is meowing in Los
Angeles. Do you understand this? And radio operates exactly
the same way: you send signals here, they receive them there.
The only difference is that there is no cat.” - Einstein

On Thu, Sep 21, 2006 at 12:20:50PM -0400, Robert W McGwier wrote:

rate = rate1;

I don’t think so. If your noisy current observation, tmp, which is
an instantaneous measurement on the needed gain to attain reference, is
below the current smoothed state, gain, then you are in decay mode.

Thanks. Thinking about it as the instantaneous measurement of gain
makes sense.

If your instantaneous needed gain observation is less than the current
smoothed gain, you are in the attack condition and a different time
constant on the smoothing is needed. I am pretty sure what I told him
is right. Does it make more sense now?

Bob

Eric