I have a simple flow-graph: a USRP source (single channel) and a large FFT block, followed by a “vector to stream” block and a file-output block.
The FFT is 8M points, which, for complex float samples, gives something like a 64M memory footprint.
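Roughly, the flow-graph looks like this (a minimal Python sketch, not my exact top block; the class name, device arguments, windowing, and output file name are just placeholders):

    from gnuradio import gr, blocks, fft, uhd
    from gnuradio.fft import window

    FFT_SIZE = 8 * 1024 * 1024   # 8M-point FFT

    class large_fft_flowgraph(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self, "large_fft_flowgraph")

            # Single-channel USRP source producing complex floats
            # ("" device address = first available device)
            self.src = uhd.usrp_source(
                "",
                uhd.stream_args(cpu_format="fc32", channels=[0]),
            )

            # Group the stream into 8M-point vectors, FFT them,
            # then flatten back to a stream and write to disk
            self.s2v  = blocks.stream_to_vector(gr.sizeof_gr_complex, FFT_SIZE)
            self.fft  = fft.fft_vcc(FFT_SIZE, True,
                                    window.blackmanharris(FFT_SIZE), True)
            self.v2s  = blocks.vector_to_stream(gr.sizeof_gr_complex, FFT_SIZE)
            self.sink = blocks.file_sink(gr.sizeof_gr_complex, "fft_out.dat")

            self.connect(self.src, self.s2v, self.fft, self.v2s, self.sink)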
But I’m finding that the virtual size of the resulting process is 4.5GB, with an RSS of about 2.3GB.
That’s larger than you’d expect by a fairly large factor!
For comparison, I have a C program that processes the samples created by the GNU Radio “front end”, and it has:

120 x 8M buffers (unsigned char)
2 x 8M buffers (float)

The virtual size of that process is only about 1G, with a similar RSS (it touches those buffers very regularly!).
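The rough arithmetic behind that 1G figure (taking 8M as 8 * 2^20 elements):

    MiB = 1024 * 1024
    N   = 8 * MiB                 # 8M elements per buffer

    uchar_bufs = 120 * N * 1      # 120 x 8M unsigned char -> 960 MiB
    float_bufs = 2 * N * 4        # 2 x 8M float           ->  64 MiB

    print((uchar_bufs + float_bufs) / MiB)   # ~1024 MiB, i.e. about 1G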
It seems to me that there only need to be perhaps a couple of buffers per block inside GNU Radio, which at worst gives you something like 8 complex buffers (about 64M per buffer for complex float) in my flow-graph. So why the enormous memory footprint?
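For concreteness, here’s the back-of-the-envelope estimate I have in mind, versus what I actually measure:

    MiB = 1024 * 1024
    GiB = 1024 * MiB

    fft_size       = 8 * MiB                    # 8M points
    bytes_per_item = 8                          # complex float = 2 x 4 bytes
    buf            = fft_size * bytes_per_item  # 64 MiB per buffer

    expected      = 8 * buf                     # ~8 complex buffers -> 512 MiB
    observed_virt = 4.5 * GiB                   # measured virtual size
    observed_rss  = 2.3 * GiB                   # measured RSS

    print(expected / MiB)                # 512 MiB expected
    print(observed_virt / expected)      # roughly 9x more virtual memory than expected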
Now, I’ll agree that memory is cheap these days, but scaling my application is going to be limited by what GNU Radio is capable of rather than by my back-end C program.
–
Marcus L.
Principal Investigator, Shirleys Bay Radio Astronomy Consortium