Hi all,
In my attempt to create a simple real-time telemetry link I have got stuck
on a peculiar and stubborn streaming problem that I hope someone can help
me resolve. I am using two USRPs with GNU Radio 3.3.0 and GRC (since I am
still quite new to GNU Radio).
Before going into the details (which unfortunately are necessary to
explain this issue, so I am really sorry for the length of this mail!),
let me suggest that the problem seems to be related to stream conversions
(rate and type) combined with simultaneous transmission and reception.
This results in overflow/underflow problems that I cannot get rid of.
Note that it is NOT computer-congestion related (> 99% sure…).
So first let me briefly explain my telemetry link, which I have put
together in five parts, and then pinpoint the actual problem:

1) At the transmitter side I first sample an incoming baseband signal via
the LFRX daughterboard at 1 MSa/s into the computer. My test input signal
is currently a square pulse (up to some 50 kHz) with a low 200 mV
amplitude, coming from a tone generator.
First part works well!
2) Since this sampled signal has too high a data rate for my over-the-air
link (part 3), I reduce its bit rate by slicing the amplitude, obtaining
1 bit/sample, so the stream now flows at 1 Mbit/s. The slicing is okay
because the telemetry source is an NRZ-type signal, so I am only
interested in its sign. To avoid redundant dummy bits after the binary
slicing (which produces bytes with "1 significant bit per byte") I pack
these "unpacked" bits into fully packed bytes using the "Unpacked to
Packed" block. I also decimate the stream by a factor of two (simply by
throwing away every 2nd sample) to obtain a 500 kbit/s bitstream. I want
to transmit this digital data stream, which represents the original NRZ
signal, over the air and reconstruct it at the receiver side.
Second part works well!
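To make the intended bit-level processing concrete, here is a pure-Python
sketch of what part 2 does (this is only an illustration of the
slice/decimate/pack arithmetic with a made-up helper name, not my actual
GRC flowgraph; I decimate before packing here, which amounts to the same
thing for this purpose):

```python
def slice_pack_decimate(samples):
    """Sketch of part 2: slice to sign bits, decimate by 2, pack bytes."""
    # Binary slicing: 1 bit/sample, keeping only the sign of the NRZ signal
    bits = [1 if s >= 0.0 else 0 for s in samples]
    # Decimate by 2 (throw away every 2nd bit): 1 Mbit/s -> 500 kbit/s
    bits = bits[::2]
    # "Unpacked to Packed": 8 significant bits per output byte, MSB first
    packed = []
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)
    return packed
```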
3) The over-the-air interface is a simple GMSK transmission link at 500
kbit/s (1 MHz bandwidth, 2 samples/symbol). I am currently using the
RFX2400 board at some 2.45 GHz. The GMSK transceiver is used with the
Packet Encoder and Packet Decoder blocks on the Tx and Rx sides,
respectively. Quite straightforward and nothing fancy.
Third part works well!
4) At the receiver side, after the GMSK demodulation, I "reconstruct" the
original NRZ signal by interpolating ("sample repeat") by a factor of 2,
unpacking with "Unpack K Bits" (K=8), and using "Chunks to Symbols" with
a "-1.0, 1.0" alphabet. At this point I have reconstructed the original
NRZ signal at the Rx side. This reconstruction works fine (although the
original pulse-timing accuracy is reduced due to the
decimation/interpolation needed to adapt the data rate to the air link).
Fourth part works well!
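For completeness, the receiver-side reconstruction in part 4 boils down
to the following (again a pure-Python illustration with a hypothetical
function name, not the actual GRC blocks):

```python
def unpack_and_repeat(packed_bytes):
    """Sketch of part 4: unpack 8 bits/byte (MSB first), map bits to
    -1.0/+1.0 symbols ("Chunks to Symbols"), and repeat each symbol
    twice ("sample repeat" by a factor of 2)."""
    out = []
    for byte in packed_bytes:
        for k in range(7, -1, -1):          # MSB-first bit order
            bit = (byte >> k) & 1
            sym = 1.0 if bit else -1.0      # alphabet (-1.0, 1.0)
            out.extend([sym, sym])          # interpolate by 2
    return out
```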
5) The just-reconstructed NRZ signal is output via the LFTX daughterboard
at 1 MSa/s to an oscilloscope (to verify the complete telemetry link).
Fifth part works well!
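For reference, the sample rates through the chain line up as follows
(just the numbers from the description above, written out as a quick
sanity check):

```python
# Rates through the five parts (values taken from the description above)
adc_rate  = 1_000_000         # 1) LFRX sampling rate, Sa/s
bit_rate  = adc_rate * 1      # 2) slicing: 1 bit/sample -> 1 Mbit/s
link_rate = bit_rate // 2     # 2) decimate by 2 -> 500 kbit/s over the air
sps       = 2                 # 3) GMSK samples per symbol
tx_rate   = link_rate * sps   #    -> USRP Tx rate for the GMSK signal
dac_rate  = link_rate * 2     # 4)+5) interpolate by 2 -> LFTX at 1 MSa/s

assert link_rate == 500_000
assert tx_rate == 1_000_000
assert dac_rate == 1_000_000
```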
All of the above parts work very well, but unfortunately not when running
the whole chain 1) to 5) together. For example, using 1) together with 5)
works excellently, as do 2) with 4), and the air-to-air link 3) on its
own, etc.
However, when connecting the whole link 1)-5) I run into
overflow/underflow problems ("uU" and "uO") at the transmitter and
receiver sides, which results in a non-continuous square waveform coming
out at the oscilloscope. Using real-time scheduling does not improve or
change the situation.
Now, the first legitimate suspicion is that the computers are overwhelmed
and cannot keep up with all this. But I am more than 99% convinced that
is NOT the case (I have done some quite extensive testing). For example,
the processors are only running at about 20% of full capacity when
running the full chain. Furthermore, when I replace 1) and 5) with a
"Signal Source" and a "Scope Sink" (that is, an artificial source and
sink) together with 2)-3)-4), it also works fine, even when I
simultaneously run the original 1) and 5) parts but leave their outputs
unused (e.g., connected to null sinks). So the computers do keep the
pace.
Instead, the problem seems to be related to my necessary conversion of
the signal streams (data rate and type) combined with receiving and
transmitting simultaneously with the USRP source and sink blocks. Any
ideas, or bells ringing, from experienced users?
One of my thoughts was that perhaps the streams between the "USRP source"
and "USRP sink" do not become perfectly synchronized when both are used
in tandem, even though their rates are perfectly adapted to each other
(as with my 500 kbit/s)?!
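To illustrate what I mean: if the source and sink clocks are nominally
equal but not locked to each other, even a tiny rate offset must
eventually over- or underflow any finite buffer between them. A toy model
(hypothetical numbers and function name, not a measurement of my setup):

```python
def buffer_drift(eps, seconds, rate=1_000_000, capacity=4096):
    """Producer at rate*(1+eps) Sa/s feeding a consumer at rate Sa/s
    through a finite buffer, simulated in 100 ms steps.  Returns the
    first time (s) the buffer over- or underflows, or None if it
    survives the whole run."""
    level = capacity // 2                     # start half full
    for t in range(seconds * 10):
        level += int(rate * (1 + eps) * 0.1)  # samples produced this step
        level -= int(rate * 0.1)              # samples consumed this step
        if level < 0 or level > capacity:
            return t / 10.0
    return None
```

With a 0.1 % clock offset the 4096-sample buffer lasts only a couple of
seconds; with perfectly locked clocks (eps = 0) it never drifts.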
For example, if I use a complex "Signal Source" at 1 MSa/s instead of the
"original" sampled signal in step 1), I can make the overflow problem at
the Tx side go away, and correspondingly at the Rx side. That is, any
time I disconnect the USRP source from the USRP sink (still using the
stream-conversion processes described above in between), or replace the
actual USRP source with an artificial "Signal Source" at the same sample
rate, the overflow/underflow problem goes away.
I also thought that perhaps some stringent timing issue combined with the
stream-conversion processes leads to the overflow and underflow problems
that I experience, like a "takt time" synchronization issue on a conveyor
belt. Therefore, in an attempt to alleviate such a problem, I tried to
use "Sample and Hold" blocks between the "USRP source" and "USRP sink"
pairs, e.g. at the transmitter between steps 1) and 2). This, however,
did not improve or change the situation.
So, is there anyone out there who has a qualified guess or hint on how to
overcome this peculiar overflow/underflow problem? I would be relieved!
Thanks,
Rickard