I’m having trouble implementing an energy detector in GNU Radio. I want
to find empty channels whose signal power is at or below the FCC’s
-114 dBm detection threshold. I’ve been modifying the
usrp_spectrum_sense.py code that ships with GNU Radio, and converting
the magnitude-squared values from the FFT block to proper dBm values has
been a headache. I’m using a USRP N200 with the WBX daughterboard.
I am simply trying to get the averaged values to match the values in
uhd_fft, but they seem to be off. I have tune delay and dwell delay both
set to 0.05 seconds, and changing these options, along with the sample
rate and FFT size, gives drastically different results.
How does uhd_fft get the dBm values it displays? Right now I compute
10 * log10(bin[i]) - 20 * log10(fft_size) - 10 * log10(tb.power /
fft_size) for each bin, but the results don’t match up.
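Concretely, here is that per-bin computation pulled out as a function (a
sketch of my code, not verbatim; bin_val is the mag-squared FFT output,
and window_power is tb.power, which as I understand it is the sum of the
squared window taps in usrp_spectrum_sense.py):

    import math

    def bin_power_db(bin_val, fft_size, window_power):
        # Mag-squared FFT bin -> dB, normalized by FFT size and the
        # window's power.  Note this is dB relative to the USRP's
        # digital full scale, not dBm.
        return (10 * math.log10(bin_val)
                - 20 * math.log10(fft_size)
                - 10 * math.log10(window_power / fft_size))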
Should I even bother using usrp_spectrum_sense? Its bin-statistics
approach is meant for scanning spectrum ranges wider than the USRP’s
maximum instantaneous bandwidth, whereas I only want to step through
consecutive 6 MHz TV channels, roughly as in the loop sketched below.
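For context, the scan I have in mind is basically this (a sketch only;
set_center_freq and read_avg_dbm are hypothetical stand-ins for the
retune and measure plumbing, not real API calls):

    import time

    CHANNEL_WIDTH = 6e6      # one US TV channel
    TUNE_DELAY = 0.05        # settle time after retuning, seconds
    DWELL_TIME = 0.05        # averaging time per channel, seconds
    THRESHOLD_DBM = -114.0   # FCC detection threshold

    def scan(first_freq, last_freq, set_center_freq, read_avg_dbm):
        # Step across consecutive 6 MHz channels and return the ones
        # whose averaged power is at or below the threshold.
        empty = []
        freq = first_freq
        while freq <= last_freq:
            set_center_freq(freq)
            time.sleep(TUNE_DELAY)  # let the front end settle
            if read_avg_dbm(DWELL_TIME) <= THRESHOLD_DBM:
                empty.append(freq)
            freq += CHANNEL_WIDTH
        return empty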
uhd_fft is not in any way showing real-world dBm values. It is merely
showing FFT magnitude values in dB, relative to a fairly arbitrary
scaling. Different sample rates come from different decimation values in
the FPGA, which means different amounts of filtering between the ADC and
the data stream sent to the host.
In order to get actual dBm values, you need to calibrate at various
frequencies, and if you’re going to run at various sample rates, you’ll
have to calibrate for each sample rate as well. The calibration will
also vary from card to card due to inherent variability in the analog
gain components.
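In practice that calibration boils down to a lookup table: feed a tone
of known power from a signal generator at each (frequency, sample rate)
operating point you care about, record the difference between what the
software reports and the true level, and subtract it later. A sketch
(the table values here are made up; you have to measure your own):

    # Offset = reported_db - true_dbm, measured with a signal generator
    # at each (center_freq_hz, sample_rate) point.  Example values only.
    CAL_OFFSETS_DB = {
        (500e6, 8e6): 47.3,
        (600e6, 8e6): 46.8,
    }

    def to_dbm(reported_db, center_freq_hz, sample_rate):
        # Convert a relative-dB reading to calibrated dBm.
        return reported_db - CAL_OFFSETS_DB[(center_freq_hz, sample_rate)]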
The reason your $10K spectrum analyser shows actual dBm values is that
it was calibrated at the factory before it shipped to you, and it has to
be recalibrated every couple of years.
uhd_fft, on the other hand, is simply processing a stream of normalized
complex voltage samples from whatever hardware happens to be sending
them. It has no idea what those normalized voltage samples correspond to
in the real world, which is why you have to calibrate for your
particular hardware setup.
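Put differently, the most the software can honestly report from those
samples is dB relative to full scale. Something like this (a sketch
using NumPy; "full scale" here means |sample| = 1.0 in the normalized
stream UHD delivers):

    import numpy as np

    def psd_dbfs(samples, fft_size):
        # Windowed FFT of normalized complex samples, scaled so that a
        # full-scale tone (|sample| == 1.0) lands at 0 dBFS.  Mapping
        # dBFS to dBm requires the external calibration described above.
        win = np.hanning(fft_size)
        spec = np.fft.fftshift(np.fft.fft(samples[:fft_size] * win))
        mag_sq = np.abs(spec) ** 2 / np.sum(win) ** 2
        return 10 * np.log10(mag_sq + 1e-20)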
Marcus, you’re way too helpful. One could think you’re Canadian.