On 07/09/2013 05:06 PM, Tommy T. II wrote:
I am working on a GNU Radio Router block that will serve as a
communication block between multiple flow graphs. My router will receive
information via TCP, and then send it to several other blocks to be
processed. After those blocks have completed their processing, my
original idea was to take that data and return it to the router to be
sent back to a different node. This would introduce a cycle in the flow
graph. Is there any way to disable cycle prevention?
There is no way to disable cycle prevention; the GNU Radio scheduler
algorithm requires streaming ports to be a directed acyclic graph.
However, this applies to streaming ports only. It’s possible (though
probably lower in performance) for you to encapsulate data into async
messages and use message ports connected in an arbitrary topology.
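A minimal sketch of what that message-port approach looks like with the Python API (the block name and handler below are made up for illustration):

from gnuradio import gr
import pmt

class msg_relay(gr.basic_block):
    """Illustrative block: receives an async message on 'in',
    processes it, and republishes it on 'out'. Message ports are
    not subject to the acyclic-graph check."""
    def __init__(self):
        gr.basic_block.__init__(self, name="msg_relay",
                                in_sig=None, out_sig=None)
        self.message_port_register_in(pmt.intern("in"))
        self.message_port_register_out(pmt.intern("out"))
        # The scheduler calls handle_msg whenever a message arrives.
        self.set_msg_handler(pmt.intern("in"), self.handle_msg)

    def handle_msg(self, msg):
        # ... do something with the message here ...
        self.message_port_pub(pmt.intern("out"), msg)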
On 07/09/2013 08:25 PM, Johnathan C. wrote:
There is no way to disable cycle prevention; the GNU Radio
scheduler algorithm requires streaming ports to be a directed
acyclic graph. However, this applies to streaming ports only. It’s possible
(though probably lower in performance) for you to encapsulate data
into async messages and use message ports connected in an arbitrary
topology.
Check out the advanced scheduler. There is no problem with feedback
loops, and there is no penalty for passing buffers as messages instead
of streams: see the guruofquality/gras wiki on GitHub.
-josh
Perfect; thank you!
Tommy James Tracy II
Ph.D Student
High Performance Low Power Lab
University of Virginia
Phone: 913-775-2241
Another alternative would be to pass around shared pointers to a queue.
Does that seem like a reasonable, albeit hacky, approach?
Sincerely,
Tommy James Tracy II
Ph.D Student
High Performance Low Power Lab
University of Virginia
Phone: 913-775-2241
On Wed, Jul 10, 2013 at 1:25 AM, Johnathan C.
[email protected] wrote:
There is no way to disable cycle prevention; the GNU Radio scheduler
algorithm requires streaming ports to be a directed acyclic graph. However, this applies to streaming ports only. It’s possible (though
probably lower in performance) for you to encapsulate data into async
messages and use message ports connected in an arbitrary topology.
–
Johnathan C.
Corgan Labs - SDR Training and Development Services
http://corganlabs.com
The thing is, you don’t want your streaming ports to have cycles. It’s
not a fundamental limitation of GNU Radio; it’s just not the right
thing to do. The streaming ports are for streams of data, which tend
to have strong temporal relationships with each other.
Cycles in data streams are usually (I’m sure there are a few
exceptions) very time-specific. Think of a PLL: if you have
more than 1 sample delay in your loop, it falls apart as an algorithm
(I have a paper on this somewhere that shows the math behind how delay
affects the locking performance). We don’t do cycles because we
transfer (ideally) large chunks of data between blocks. If you’re
processing 8191 items in one work function and try to feed that back,
you’re now that many samples delayed in time. The next call could be
a different number. So not only do you have this delay, you have a
varying time delay. Doesn’t make sense for these kinds of streams. And
if we set N=1 for all calls to work, you’re going to maximize the
scheduler overhead, which is also bad.
What you’re talking about sounds like a job for the message passing
interface, as Johnathan recommended. You’re not time dependent from
what I gather from your email, so the async message interface will
work well. That’s basically what it’s meant for. You would post
messages when ready. The receiving blocks would simply
block until a message is posted to them and then wake up and process
it.
Tom
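For reference, message connections are allowed to form a loop; only stream connections must be acyclic. A rough, self-contained sketch of the wiring (the stub block and names below are made up):

from gnuradio import gr
import pmt

class relay(gr.basic_block):
    """Stub with one message input and one message output."""
    def __init__(self, label):
        gr.basic_block.__init__(self, name="relay_" + label,
                                in_sig=None, out_sig=None)
        self.message_port_register_in(pmt.intern("in"))
        self.message_port_register_out(pmt.intern("out"))
        self.set_msg_handler(pmt.intern("in"), self.handle)

    def handle(self, msg):
        # A real router would decide here whether to forward the
        # message onward or hand the finished result to its sink.
        self.message_port_pub(pmt.intern("out"), msg)

tb = gr.top_block()
root = relay("root")
child = relay("child")

# root -> child and child -> root: a cycle, but legal for message ports.
tb.msg_connect(root, "out", child, "in")
tb.msg_connect(child, "out", root, "in")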
On 07/09/2013 09:20 PM, Tommy T. II wrote:
Another alternative would be to pass around shared pointers to a queue. Does
that seem like a reasonable, albeit hacky, approach?
Of course that could work, but then you aren’t really letting the
framework work for you. The scheduler handles all the thread spawning,
thread safety, delivery of information through the topology…
At its core, the scheduler is just a collection of threads and shared
queues. So you don’t have to re-implement that.
-josh
The thing is, you don’t want your streaming ports to have cycles. It’s
not a fundamental limitation of GNU Radio; it’s just not the right
thing to do. The streaming ports are for streams of data, which tend
to have strong temporal relationships with each other.
I think Tommy needs to figure out what interface is more applicable for
his purposes: streaming or messaging. Since TCP is involved, it sounds
like blocks will be passing around chunks of memory with length
boundaries. Something message based would be preferable here. But if the
TCP packets are just transporting boundless samples, then a streaming
interface is more applicable. Whatever makes the processing logic most
natural to code in is probably the right answer.
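A sketch of what "message based" might look like for a length-delimited TCP payload, packaged PDU-style as a (metadata, u8 vector) pair; the metadata key here is made up:

import pmt

payload = b"\x01\x02\x03\x04"            # one TCP chunk, with a known length

meta = pmt.make_dict()
meta = pmt.dict_add(meta, pmt.intern("src_node"), pmt.from_long(3))
vec = pmt.init_u8vector(len(payload), list(bytearray(payload)))
pdu = pmt.cons(meta, vec)                # the message to publish

# Receiving side: recover the metadata and the original bytes.
meta_out = pmt.car(pdu)
data_out = bytes(bytearray(pmt.u8vector_elements(pmt.cdr(pdu))))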
But suppose that a streaming interface is most applicable. Then I
believe that this new router block will cause the stock scheduler to
detect a cycle or loop and fail, even though there technically is not
a cycle, since the inputs are unrelated to the outputs. Right?
One way to fix this might be to separate the input ports and the output
ports into two different blocks. This is the trick to keep the stock
scheduler from discovering a loop. However, now with two blocks, you
might face some thread synchronization issues between the two workers;
this can probably be addressed with simple mutexes.
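A minimal sketch of that shared state between the two half-blocks, with a lock and condition variable standing in for the "simple mutexes" (the class and names are made up):

import threading
from collections import deque

class shared_fifo(object):
    """Hypothetical object shared by the input-side block and the
    output-side block (the Python analogue of a shared_ptr)."""
    def __init__(self):
        self._q = deque()
        self._cond = threading.Condition()

    def push(self, item):
        # Called from the input block's work(): hand off a chunk.
        with self._cond:
            self._q.append(item)
            self._cond.notify()

    def pop(self, timeout=0.1):
        # Called from the output block's work(): wait briefly for data,
        # return None if nothing arrived so work() can produce 0 items.
        with self._cond:
            if not self._q:
                self._cond.wait(timeout)
            return self._q.popleft() if self._q else None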
Anywhoo… point being, if the framework is flexible enough to support all
these options, the user can choose what is best to implement their design.
if we set N=1 for all calls to work, you’re going to maximize the
scheduler overhead, which is also bad.
Yea, when most people are talking about cycles, they want to implement
something from a DSP textbook with feedback to an adder or multiplier.
This is a totally inefficient way to implement the design. The ratio of
scheduler overhead to work overhead is too-damn-high!
But feedback loops are nice to have for academic satisfaction and
simulation purposes. It’s really awesome to be able to bring concepts
from basic communications courses to life in GNU Radio Companion, with
test signals, sliders, and live FFTs. It was for me, anyway.
This morning, a few people trying out the advanced scheduler asked me
about using feedback in GRC. You see, getting a feedback loop working is
a matter of preloading the loop with something to set the initial delay,
but there isn’t a GRC-friendly way to set the preload on arbitrary
blocks. To keep it simple, I just added an option to the math blocks in
GrExtras, which I think covers most cases:
Cheers,
-Josh
What I’m trying to do is this:
- The Root Router receives data from an input flow graph, packages it,
  and sends it to its children in a balanced manner.
- The Child Routers receive data and, as a Source block, stream the
  data to the Child’s flow graph. The resulting output needs to be
  returned to the Root, so the Child Router serves as a sink as well!
  (here’s a cycle)
- The Child Router sends the data back to the Root Router, which
  re-orders it and streams the result to its sink.
[diagram: Root-to-Child path (no cycle); Child-to-Root return path (cycle)]
This won’t work with the existing GNU Radio framework because of that
cycle. One alternative is the following:
- The Root flow graph dumps data into a shared input_queue via an
  input queue sink block. The Router has a shared_ptr to the input_queue,
  reads the data, and distributes it to its children.
- The children receive the data and dump it into their input_queue via
  shared_ptr.
- The child also has a queue source block that also has a shared_ptr to
  the input_queue, and it reads the data to stream through its flow graph.
- The child then uses an output queue sink block to dump data into a
  shared output_queue.
- The child router reads from the output_queue (via shared_ptr) and
  sends data to the Root.
- The Root receives the data, reorders it, and dumps it into its output
  queue.
- A queue source reads from the Root’s output_queue and writes it to
  the Root’s sink.
[diagram: Root side: INPUT_QUEUE SINK -> ROUTER -> OUTPUT_QUEUE SOURCE; Child side: INPUT QUEUE SOURCE -> child flow graph -> OUTPUT QUEUE SINK -> ROUTER; all links held via shared_ptr]
This all seems a bit convoluted.
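Convoluted or not, the queue sink/source pair described above might look roughly like the following sketch in Python. The shared queue object plays the role of the shared_ptr, the block names are made up, and the blocking get() in the source is simplistic:

import numpy
from gnuradio import gr

class queue_sink(gr.sync_block):
    """Hypothetical 'queue sink': copies stream items into a shared queue."""
    def __init__(self, q):
        gr.sync_block.__init__(self, name="queue_sink",
                               in_sig=[numpy.float32], out_sig=None)
        self.q = q

    def work(self, input_items, output_items):
        self.q.put(input_items[0].copy())
        return len(input_items[0])

class queue_source(gr.sync_block):
    """Hypothetical 'queue source': pops chunks from the shared queue
    and streams them out."""
    def __init__(self, q):
        gr.sync_block.__init__(self, name="queue_source",
                               in_sig=None, out_sig=[numpy.float32])
        self.q = q
        self.pending = numpy.zeros(0, dtype=numpy.float32)

    def work(self, input_items, output_items):
        out = output_items[0]
        if len(self.pending) == 0:
            self.pending = self.q.get()   # blocks until the other side pushes
        n = min(len(out), len(self.pending))
        out[:n] = self.pending[:n]
        self.pending = self.pending[n:]
        return n

Both flow graphs would then be handed the same queue instance (e.g. Python's queue.Queue), which already provides the locking that the scheduler would otherwise handle for you.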
Sincerely,
Tommy James Tracy II
Ph.D Student
High Performance Low Power Lab
University of Virginia
Phone: 913-775-2241