Spent some time tracking down a memory allocation issue. The SYSV shm allocator was getting errors on a request for 1.56GB. It turns out that SYSV shm uses a signed 32-bit int for the size of a segment, which means you can't allocate segments larger than 2^31 bytes. But why was the 1.56GB request being blown away? Because the SYSV shm allocator in GNU Radio multiplies the request size by 2 before asking the system for that much shared memory.
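
Here's the arithmetic as I read it (plain Python; the doubling and the signed 32-bit size limit are my interpretation of what the allocator is doing, and whether "GB" means 10^9 or 2^30 doesn't change the conclusion):

# Rough arithmetic for the failing allocation.
request = int(1.56e9)        # ~1.56GB, the size reported in the error
doubled = 2 * request        # what the SYSV shm allocator actually asks for
limit   = 2**31 - 1          # largest value a signed 32-bit int can hold

print("request fits in a signed 32-bit int:", request <= limit)   # True
print("doubled request fits:", doubled <= limit)                  # False, so the allocation fails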
So, why does it do that? And why is the memory allocation so incredibly piggish? We went through this question a couple of years ago, and I'm running into similar problems again: my application uses HUGE FFTs, 1Hz resolution at up to 16MHz (USRP1) or 25MHz (USRP2) of bandwidth. Not surprisingly, this leads to some large memory requirements, but GNU Radio's allocator seems to allocate significantly more than is really needed.
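
A quick back-of-the-envelope for those FFT sizes, assuming complex float samples (8 bytes each) and one bin per Hz of resolution:

# Approximate per-vector buffer sizes for 1Hz-resolution FFTs.
for rate in (16e6, 25e6):            # USRP1 / USRP2 bandwidths
    fft_size = int(rate)             # 1Hz resolution -> roughly one bin per Hz
    vector_bytes = fft_size * 8      # one vector of complex floats
    print("%.0f MHz: %d bins, %.0f MB per vector"
          % (rate / 1e6, fft_size, vector_bytes / 2**20))

So a single FFT-sized vector is on the order of 120-200 MB before the scheduler's buffering and the allocator's doubling get involved.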
In the case cited above, the FFT size was 6M bins, which, granted, is outside the "usual" range of most GNU Radio applications, but I was able to make this work last year with up to 16M bins.
The flowgraph involved is quite simple: a source, a short FFT-based filter, the main HUGE FFT, a complex-to-mag**2, and then a file sink.

Should this really require gigabytes of memory?
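
For reference, here's a rough sketch of that graph in Python (current-style GNU Radio namespaces; the null source, dummy filter taps, FFT size, and file name are stand-ins, not the actual application code):

#!/usr/bin/env python
from gnuradio import gr, blocks, fft
from gnuradio import filter as gr_filter

class huge_fft_graph(gr.top_block):
    def __init__(self, fft_size=6 * 1024 * 1024):
        gr.top_block.__init__(self, "huge_fft_graph")

        src    = blocks.null_source(gr.sizeof_gr_complex)        # stand-in for the real source
        flt    = gr_filter.fft_filter_ccc(1, [1.0 + 0j] * 64)    # short FFT-based filter, dummy taps
        s2v    = blocks.stream_to_vector(gr.sizeof_gr_complex, fft_size)
        bigfft = fft.fft_vcc(fft_size, True,
                             fft.window.blackmanharris(fft_size)) # the HUGE forward FFT
        mag2   = blocks.complex_to_mag_squared(fft_size)
        sink   = blocks.file_sink(gr.sizeof_float * fft_size, "spectrum.dat")

        self.connect(src, flt, s2v, bigfft, mag2, sink)

if __name__ == "__main__":
    huge_fft_graph().run()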
--
Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium