Hi,
I wonder how the timestamps are generated for each Ethernet packet
sent from the USRP2 to the host. My initial idea of how it works was
that timestamps are generated at 100 MHz (the same rate as the samples)
and that the timestamp associated with the first sample in an Ethernet
data packet is put in the metadata, which can then be unpacked on the
host. I would then expect each packet after the first to have a
timestamp that increases by the number of samples per packet times the
decimation rate. However, the timestamps I see increase by much more
than that for each received packet, so I wonder if my idea of how the
timestamps are generated is wrong?
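To make that expectation concrete, here is a small sketch of the
arithmetic I had in mind (the 100 MHz tick rate is my assumption, and
the 371 complex samples per packet is what the 2968 bytes over 2
packets in the output further down would suggest):

/* expected_diff.cc -- sketch of the per-packet timestamp increment I
 * expected. Assumptions (mine): timestamps tick at the 100 MHz sample
 * clock, the metadata timestamp belongs to the first sample of the
 * packet, and each packet carries roughly 371 complex samples. */
#include <cstdio>
#include <stdint.h>

int main()
{
    const uint32_t samples_per_packet = 371; // inferred from 2968 bytes / 2 packets / 4 bytes per sample
    const uint32_t decimation         = 16;  // -d 16, as in my run below

    // Each output sample spans `decimation' ticks of the 100 MHz clock,
    // so consecutive first-sample timestamps should differ by:
    uint32_t expected_diff = samples_per_packet * decimation; // = 5936 ticks

    printf("expected diff: %u ticks (%.2f us at 100 MHz)\n",
           expected_diff, expected_diff / 100.0);
    return 0;
}

That gives about 5936 ticks per packet, which is nowhere near the
values I actually see.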
I run the stable 3.2 version of GNU Radio on Ubuntu 9.04 and I have a
USRP2 with an RFX2400. (I was also going to try the GNU Radio trunk,
but I ran into build problems; see my other post “Error on make from
git development trunk”.) I tried both an old version of the FPGA bin
file and one that I downloaded just recently, but both gave the same
result.
I added some printouts to the handle_data_packet function in
usrp2_impl.cc, and the output when I run rx_streaming_samples looks
like this:
./rx_streaming_samples -f 2457e6 -d 16 -N 400 -v
…
Daughterboard configuration:
baseband_freq=2456250000.000000
ddc_freq=-750000.000000
residual_freq=-0.016764
inverted=no
USRP2 using decimation rate of 16
Receiving 400 samples
ts_in = 1435221596, ts_last = 0, diff = 1435221596
ts_in = 2560802396, ts_last = 1435221596, diff = 1125580800
ts_in = 3367616092, ts_last = 2560802396, diff = 806813696
ts_in = 4174429788, ts_last = 3367616092, diff = 806813696
ts_in = 686341724, ts_last = 4174429788, diff = 806879232
ts_in = 1493155420, ts_last = 686341724, diff = 806813696
ts_in = 2283192156, ts_last = 1493155420, diff = 790036736
ts_in = 3090005852, ts_last = 2283192156, diff = 806813696
ts_in = 3896819548, ts_last = 3090005852, diff = 806813696
ts_in = 408731484, ts_last = 3896819548, diff = 806879232
Copy handler called 2 times.
Copy handler called with 2968 bytes.
Elapsed time was 0.000 seconds.
Packet rate was 100000 pkts/sec.
Approximate throughput was 148.40 MB/sec.
Total instances of overruns was 0.
Total missing frames was 0.
…
ts_in is the timestamp found in the metadata of the packet just
received, ts_last is the one from the previous packet, and diff is just
the difference between them. Since there seem to be no missing frames,
I’m guessing the large diff values can’t be caused by lost packets?
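For reference, what I added inside handle_data_packet is essentially
the following (the names, and how I read the timestamp out of the
packet metadata, are paraphrased from memory rather than the exact
code):

// Rough sketch of my printout in handle_data_packet() in usrp2_impl.cc.
// `md->timestamp' stands for whatever 32-bit timestamp field the packet
// metadata carries; ts_last just remembers the previous packet's value.
static uint32_t ts_last = 0;
uint32_t ts_in = md->timestamp;   // timestamp of the packet just received
uint32_t diff  = ts_in - ts_last; // unsigned subtraction, so the 32-bit
                                  // wrap-around (as at the
                                  // 4174429788 -> 686341724 step) is handled
printf("ts_in = %u, ts_last = %u, diff = %u\n", ts_in, ts_last, diff);
ts_last = ts_in;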
If I try different decimation rates, I see no obvious relation between
the decimation rate and the difference between two consecutive
timestamps…
Does anyone know why the difference in timestamp value between received
packets is so large? What am I missing here?
Thanks,
/Ulrika