Issue Description

I am using the C API to stream a binary data file. The file is saved as interleaved int16_t [I][Q] samples. I am running an N210r4 with the latest UHD and FPGA image; the MTU on the NIC is configured at 3000, and the buffer size is 10000 samples. The same file, when streamed with the included tx_samples_from_file example, works fine and the baseband is received correctly. When the equivalent code is written in C, the received baseband is not correct.
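For reference, this is the file layout I am assuming; the struct and its name iq_sample_t are purely illustrative:

    #include <stdint.h>

    /* One complex baseband sample as stored in the file: interleaved
     * int16_t pairs, i.e. I[0] Q[0] I[1] Q[1] ... - the raw byte image
     * of a std::complex<short> array. */
    typedef struct {
        int16_t i; /* in-phase */
        int16_t q; /* quadrature */
    } iq_sample_t;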
Setup Details
Implemented C code streaming loop:
    while (1) {
        if (stop_signal_called)
            break;

        /* Metadata: no time spec (so the 0.1 s fractional time is ignored),
         * not a start or end of burst */
        uhd_tx_metadata_make(&md, false, 0, 0.1, false, false);

        /* Read the next block of int16_t values from the file */
        size_t read = fread(buff, sizeof(int16_t), samps_per_buff, file);
        for (size_t i = 0; i < read; i++) {
            printf("%d\n", buff[i]);
        }

        if (read > 0) {
            /* Send the block; read is passed as the sample count */
            uhd_tx_streamer_send(tx_streamer, buffs_ptr, read, &md, 0.1, &num_samps_sent);
            total_num_samps += num_samps_sent;
        } else {
            break;
        }

        if (verbose)
            printf("\nSent %zu - read %zu from file\n", total_num_samps, read);
    }
buff contains the data block to stream and is defined as:

    buff = malloc(samps_per_buff * sizeof(int16_t));
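buffs_ptr is the per-channel pointer array that uhd_tx_streamer_send() expects; a minimal sketch of how I set it up for a single channel:

    /* The C send call takes an array of per-channel buffer pointers;
     * with one channel this is just the address of buff. */
    const void** buffs_ptr = (const void**)&buff;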
Expected Behavior

I would expect the two examples to perform exactly the same; the baseband should be identical.
Actual Behaviour
Viewed on a spectrum analyzer, the C version shows a much larger gain, and the baseband appears fragmented. I don't understand how the C API handles the buffer streaming; according to the source, the C function wraps exactly the same send call as the C++ API.
Steps to reproduce the problem
I can provide source code for the two examples and the binary file. Both examples are run at 2.5 Msps.
Question
What am I missing here to stream the baseband correctly? My understanding is that once the data type is fixed when the streamer is created, we call uhd_tx_streamer_send with the number of samples that we want to stream (which is what the C++ example does, using std::complex<short> as the sample type). In the C case, how do we achieve the same behavior?
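For discussion, here is a minimal sketch of what I assume the C equivalent should look like. The "sc16" CPU format string and the division by two are my assumptions, based on one std::complex<short> sample mapping to two int16_t values; the surrounding handles (usrp, file, md, tx_streamer) are the same ones used above:

    /* Assumed setup: request the sc16 (complex int16) CPU format, which
     * should match what the C++ example gets with std::complex<short>. */
    size_t channel = 0;
    uhd_stream_args_t stream_args = {
        .cpu_format   = "sc16",
        .otw_format   = "sc16",
        .args         = "",
        .channel_list = &channel,
        .n_channels   = 1
    };
    uhd_usrp_get_tx_stream(usrp, &stream_args, tx_streamer);

    /* Size the buffer in complex samples: two int16_t per sample. */
    int16_t* buff = malloc(samps_per_buff * 2 * sizeof(int16_t));
    const void** buffs_ptr = (const void**)&buff;

    while (!stop_signal_called) {
        /* fread counts int16_t items; two items make one complex sample */
        size_t items = fread(buff, sizeof(int16_t), 2 * samps_per_buff, file);
        size_t samps = items / 2;
        if (samps == 0)
            break;
        /* Pass the count in samples, not in int16_t items */
        uhd_tx_streamer_send(tx_streamer, buffs_ptr, samps, &md, 0.1, &num_samps_sent);
        total_num_samps += num_samps_sent;
    }

If that sample-versus-item distinction is right, my original loop would be telling the streamer to send twice as many samples as the buffer actually holds, which could explain the fragmented baseband.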