usrp_spectrum_sense and average dBm power levels

I’m having trouble trying to implement an energy detector in GNU Radio.
I want to find empty channels that have a signal power less than or
equal to the FCC’s -114 dBm detection threshold. I’ve been modifying the
usrp_spectrum_sense.py code that came with GNU Radio, and it’s been a
headache trying to convert the magnitude squared values from the FFT
block to proper dBm values. I am using the USRP N200 along with the WBX
daughterboard.

I am simply trying to get the averaged values to match up with the
values in uhd_fft, but they seem to be off. I’ve got tune delay and
dwell delay both set to 0.05 seconds, and changing these options along
with the sampling rate and FFT size seems to give drastically different
results.

How is uhd_fft able to get the dBm values that it uses? Right now I
compute 10 * log10(bin[i]) - 20 * log10(fft_size) - 10 *
log10(tb.power / fft_size) for each bin, but the results don’t match up.
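For reference, the per-bin conversion I’m applying looks roughly like this sketch, where `bins` holds the averaged magnitude-squared FFT output and `window_power` is what I believe usrp_spectrum_sense stores in `tb.power` (the sum of the squared window coefficients):

```python
import numpy as np

def bins_to_db(bins, fft_size, window_power):
    # Convert averaged |FFT|^2 bins to dB. Note this is *relative* dB,
    # not calibrated dBm: 10*log10 undoes the magnitude-squaring, and the
    # remaining terms normalize out the FFT size and window power so the
    # numbers stay comparable across different FFT sizes and windows.
    bins = np.asarray(bins, dtype=float)
    return (10.0 * np.log10(bins)
            - 20.0 * np.log10(fft_size)
            - 10.0 * np.log10(window_power / fft_size))
```

With a rectangular window (`window_power == fft_size`), a full-scale DC bin of value `fft_size**2` comes out at 0 dB, which is one quick sanity check of the normalization.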

Should I even bother using usrp_spectrum_sense? Bin statistics are meant
for scanning spectrum wider than the USRP’s maximum instantaneous
bandwidth, but here I only want to look at consecutive 6 MHz TV
channels.

Here is the code I am working on:
http://uploaded.to/file/0gqabz13

On 06 Jul 2012 13:05, Eric Jaso wrote:

I’m having trouble trying to implement an energy detector in GNU Radio.
I want to find empty channels that have a signal power less than or
equal to the FCC’s -114 dBm detection threshold. [snip]

uhd_fft is not in any way showing real-world dBm values – it’s merely
showing FFT magnitude in dB, relative to a pretty arbitrary scaling.

Different sample rates result from different decimation values in the
FPGA, which means different amounts of filtering between the ADC and the
data stream sent to the host.

In order to get actual dBm values, you need to calibrate, at various
frequencies, and if you’re going to be running at various sample rates,
you’ll have to calibrate for that sample-rate as well. And the
calibration will vary from card-to-card due to inherent variability in
analog gain components.

The reason your $10K spectrum analyser shows actual dBm values is that
it has been calibrated in the factory prior to shipping to you, and it
has to be recalibrated every couple of years.

uhd_fft, on the other
hand, is simply processing a stream of normalized complex voltage
samples from whatever hardware happens to be sending said samples. It
has no idea what those normalized voltage samples correspond to in the
real-world, which is why you have to calibrate for your particular
hardware setup.
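In practice such a calibration often reduces to a stored offset per (frequency, sample rate, gain) combination: feed in a reference tone of known power from a signal generator, record the difference between what the SDR reports and the true level, and apply that offset later. A hypothetical sketch:

```python
def calibration_offset(measured_db, reference_dbm):
    # Offset mapping the SDR's relative-dB scale onto true dBm.
    # Only valid for the frequency, sample rate, and gain at which
    # the reference measurement was taken.
    return reference_dbm - measured_db

def to_dbm(relative_db, offset):
    # Apply a previously measured calibration offset.
    return relative_db + offset
```

You would typically keep a table of offsets keyed by (frequency, sample rate, gain) and interpolate between calibration points, since the analog response varies across all three.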

On Fri, Jul 06, 2012 at 01:24:12PM -0400, [email protected] wrote:

have to calibrate for that sample-rate as well. And the calibration will vary
from card-to-card due to inherent variability in analog gain components.

The reason your $10K spectrum analyser shows actual dBm values is that it has
been calibrated in the factory prior to shipping to you, and it has to be
recalibrated every couple of years.

Marcus, you’re way too helpful.
One could think you’re Canadian :-)

Eric, your question pops up a lot here and is thus on the gnuradio.org
FAQ:
http://gnuradio.org/redmine/projects/gnuradio/wiki/FAQ#How-do-I-know-the-exact-voltagepower-of-my-received-input-signal

The wiki gets a lot of flak for being crap, but there’s actually a good
amount of useful information on there. Please check it out.

MB

Karlsruhe Institute of Technology (KIT)
Communications Engineering Lab (CEL)

Dipl.-Ing. Martin B.
Research Associate

Kaiserstraße 12
Building 05.01
76131 Karlsruhe

Phone: +49 721 608-43790
Fax: +49 721 608-46071
www.cel.kit.edu

KIT – University of the State of Baden-Württemberg and
National Laboratory of the Helmholtz Association

On 06 Jul 2012 13:59, Martin B. (CEL) wrote:

Marcus, you’re
way too helpful.
One could think you’re Canadian :-)

It’s my most
serious personality flaw :-)