Lee:
The marginal distributions for the parameters never contain as much
information as the joint probability distribution, except when the
underlying problem is truly separable. From an information-theoretic
and probabilistic point of view it is almost always better to use the
joint conditional distribution of the parameters given the observations
and do the estimation jointly.
In this case, however, we are actually making measurements on two
independent signals, and the observations are r1, r2 as you say. Since
the signals are truly separate, it is better here to measure and
estimate each one independently. The joint observation process would
arise if John measured the difference signal through a mixer, time
interval counter, etc.
Achilleas used the wrong parameters given John’s statement of the
problem: he took amplitude and phase as the unknowns, whereas John’s
original statement fixes the amplitude at a constant 1 V peak-to-peak
and leaves frequency and phase as the unknowns. Your formulation of the
problem is correct, but it is more general than John’s statement, since
you allow A1, A2 to be (possibly) different while John states that
A1 = A2 = 1.
Estimating w and phi in the presence of the noise n(t), given

    r(t) = sin(w t + phi) + n(t),

is just about the oldest problem in town. Let us consider John’s
original problem
given the system he claims he has. Since John’s statement is that he
is doing the measurements on each separately using a coherent system,
he can repeatedly estimate w and phi using FFT’s and downsampling.
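
To make that concrete, here is a minimal sketch of the single-FFT
estimate in Python/NumPy (the same few lines translate directly into
Octave). The function name and the test numbers are mine, purely
illustrative:

import numpy as np

def estimate_w_phi(r, fs):
    """Crude single-FFT estimate of angular frequency w and phase phi.

    No window is applied, so leakage biases the answer when the tone is
    off a bin center; that is exactly what the later mixing and
    downsampling is meant to clean up.
    """
    n = len(r)
    spectrum = np.fft.rfft(r)
    k = np.argmax(np.abs(spectrum[1:])) + 1      # peak bin, skipping DC
    w_hat = 2.0 * np.pi * k * fs / n             # rad/s of the peak bin
    # For a sine on bin k, X[k] ~ (n/2) * exp(j*(phi - pi/2)), so:
    phi_hat = np.angle(spectrum[k]) + np.pi / 2.0
    return w_hat, phi_hat

# Quick self-check with made-up numbers (1250 Hz sits exactly on a bin here).
fs = 10e3
t = np.arange(2**14) / fs
r = np.sin(2 * np.pi * 1250.0 * t + 0.7) + 0.1 * np.random.randn(t.size)
print(estimate_w_phi(r, fs))
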
One way to reduce the impact of the noise, given a fixed FFT size, is to
exploit the coherence as stated and build long-term autocorrelations,
where the autocorrelations are computed with FFTs and then simply added,
complex bin by bin. This coherent addition of the correlations produces
a very long-term autocorrelation, and the accuracy of the estimates from
this process goes up like N, where N is the number of FFTs added.
THIS ASSUMES THE SYSTEM REALLY IS COHERENT FOR THOSE N * FFTsize SAMPLES
and THE NOISE REALLY IS ZERO-MEAN GAUSSIAN. Phase noise, drift,
non-Gaussian noise, etc. will destroy the coherence assumption and the
Gaussian properties we use in the analysis.
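
For illustration, the bin-by-bin complex accumulation might look like
this (same NumPy style, names mine). It only buys anything if the tone
phase really is the same at the start of every block, i.e. the coherence
assumption above holds across all N * FFTsize samples; the identical
bookkeeping applies if FFT-computed autocorrelations are accumulated
instead of raw spectra.

import numpy as np

def coherent_fft_sum(r, fftsize):
    """Sum the complex FFTs of consecutive blocks of r, bin by bin."""
    nblocks = len(r) // fftsize
    acc = np.zeros(fftsize // 2 + 1, dtype=complex)
    for b in range(nblocks):
        acc += np.fft.rfft(r[b * fftsize:(b + 1) * fftsize])
    # A coherent tone grows like nblocks in the accumulator while zero-mean
    # Gaussian noise grows like sqrt(nblocks), so the peak stands further
    # and further out of the noise -- provided nothing drifts.
    return acc, nblocks
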
He can reduce the ultimate computational complexity by mixing,
downsampling and doing the FFT again, then mixing, downsampling and
doing the FFT again, and so on, until the final bin traps his w to
sufficient accuracy for his needs; phi is then simply read off from the
FFT coefficient. The mixing and downsampling would be a usable approach,
but careful bookkeeping must be kept of the phase (group) delay through
the downsampling filters, or phi will be in error by exactly the
neglected delay.
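
One stage of that mix/lowpass/decimate refinement could look like the
following (NumPy again, names and tap count illustrative). A symmetric
FIR is used deliberately, because then the delay to be booked is known
exactly: (ntaps - 1)/2 samples at the input rate.

import numpy as np

def windowed_sinc_lowpass(cutoff_hz, fs, ntaps=129):
    """Hamming-windowed sinc lowpass; symmetric taps, so linear phase."""
    m = np.arange(ntaps) - (ntaps - 1) / 2.0
    h = np.sinc(2.0 * cutoff_hz / fs * m) * np.hamming(ntaps)
    return h / h.sum()                       # unity gain at DC

def mix_and_decimate(r, fs, f_coarse, decim=8, ntaps=129):
    """Shift the coarse frequency estimate to (near) DC, filter, decimate."""
    n = np.arange(len(r))
    baseband = r * np.exp(-2j * np.pi * f_coarse / fs * n)   # complex mix
    h = windowed_sinc_lowpass(fs / (2.0 * decim), fs, ntaps)
    y = np.convolve(baseband, h)[:len(baseband)]
    # Bookkeeping: the filter delays everything by (ntaps - 1)/2 input
    # samples.  That delay, times the residual frequency, is a phase
    # offset that must be backed out or phi will be wrong by exactly it.
    group_delay_samples = (ntaps - 1) / 2.0
    return y[::decim], fs / decim, group_delay_samples

Re-estimating the residual frequency with a small FFT after each stage
traps w; phi is read from the final near-DC bin after the accumulated
delay is removed.
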
This is one approach that I believe John can take, and it is pretty
simple to put together even if it is not necessarily the most
computationally benign. He can grab tons of samples and do this in
Octave on his favorite Linux computer. If the signals are not actually
1 V pk-pk, the same FFTs also yield the amplitude, since the power of
the sinusoid in the peak bin gives it directly. If this is to be done
in real time, then a cascade of power-of-2 reductions in sample rate and
frequency offset can be applied until the parameters are trapped to
sufficient accuracy for the exercise.
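
For completeness, pulling the amplitude out of the same FFT is
essentially one line (assuming a single tone on or near a bin center;
the helper below is illustrative):

import numpy as np

def amplitude_from_fft(r):
    """Rough peak-bin amplitude estimate for a single real tone in r."""
    n = len(r)
    spectrum = np.fft.rfft(r)
    k = np.argmax(np.abs(spectrum[1:])) + 1     # skip the DC bin
    return 2.0 * np.abs(spectrum[k]) / n        # on-bin, |X[k]| ~ A*n/2

If a window is applied first, divide by the window's coherent gain (the
sum of the taps divided by n) as well.
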
Bob
Lee P. wrote:
> s(t) = s(t;A1,A2,t1,t2) = A1 sin(w (t-t1)) + A2 sin(w (t-t2))
> technique would converge faster if only three parameters needed to be
--
Robert W. McGwier, Ph.D.
Center for Communications Research
805 Bunn Drive
Princeton, NJ 08540
(609)-924-4600
(sig required by employer)