These were added in #805, as a workaround to prevent their parent
functions from returning before transfer cancellations had completed.
This has since been fixed properly in #1029.
There are now only two possible outcomes to this function: either we
successfully resubmitted a transfer, or the transfer is finished and we
end up calling transfer_finished().
So we can go ahead and simplify it accordingly.
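As a rough sketch, the simplified callback reduces to the following
(the callback name and the surrounding buffer handling are illustrative;
only transfer_finished() and the resubmit call are taken from the text
above):

    #include <libusb.h>

    void transfer_finished(struct libusb_transfer* usb_transfer);

    /* Illustrative shape of the simplified completion callback: either the
     * transfer is successfully resubmitted, or it is finished. */
    static void transfer_callback(struct libusb_transfer* usb_transfer)
    {
        if (usb_transfer->status == LIBUSB_TRANSFER_COMPLETED) {
            /* ...pass samples to or from the user callback here... */
            if (libusb_submit_transfer(usb_transfer) == 0) {
                return;                  /* outcome 1: transfer resubmitted */
            }
        }
        transfer_finished(usb_transfer); /* outcome 2: transfer is finished */
    }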
If a transfer was cancelled, we are on our way to shutdown.
If hackrf_stop_tx() or hackrf_stop_rx() were called, they will already
have cleared this flag, but it is not cleared in hackrf_close(), and
for consistency with other paths it makes sense to clear it here.
If result < 0 here, libusb_submit_transfer returned an error, so we
need to shut down.
If !resubmit, then cancel_transfers() was already called by one of the
stop or close functions, so streaming is already false.
In the case of a libusb error, we still need the transfer thread
running, in order to handle outstanding cancellations and to signal the
condition variable when that is done.
transfer_threadproc has a timeout of half a second, so
when kill_transfer_thread tries to pthread_join, it often
has to wait until the timeout kicks in.
With this fix, we ensure that a final request is made after
request_exit has been called, so that transfer_threadproc can exit its
loop quickly and cleanly.
This change avoids various possible races in which an autonomous mode
change by the M0 might clobber a mode change made from the M4, as well
as related races on other state fields that can be written by the M4.
The previous mode field is replaced by two separate ones:
- active_mode, which is written only by the M0, and indicates the
current operating mode.
- requested_mode, which is written by the M4 to request a change.
This field includes both the requested mode, and a flag bit. The M4
writes the field with the flag bit set, and must then wait for the
M0 to signal completion of the request by clearing the flag bit.
Whilst the M4 is blocked waiting for the flag bit to be cleared, the
M0 can safely make all the required changes to the state that are
needed for the transition to the requested mode. Once the transition
is complete, the M0 clears the flag bit and the M4 continues execution.
Request handling is implemented in the idle loop. To handle requests,
mode-specific loops simply need to check the request flag and branch to
idle if it is set.
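In C terms the handshake looks roughly like this (the struct layout,
flag bit and function names are illustrative, not the actual M0 state
definition):

    #include <stdint.h>

    #define REQUEST_FLAG 0x80000000u        /* illustrative flag bit */

    struct m0_state {
        volatile uint32_t active_mode;      /* written only by the M0 */
        volatile uint32_t requested_mode;   /* written by the M4 */
    };

    /* M4 side: request a mode change, then block until the M0 acknowledges. */
    static void m4_request_mode(struct m0_state* state, uint32_t mode)
    {
        state->requested_mode = mode | REQUEST_FLAG;
        while (state->requested_mode & REQUEST_FLAG) {
            /* wait for the M0 to clear the flag bit */
        }
    }

    /* M0 idle loop: carry out the transition, then clear the flag bit. */
    static void m0_handle_request(struct m0_state* state)
    {
        uint32_t request = state->requested_mode;
        if (request & REQUEST_FLAG) {
            state->active_mode = request & ~REQUEST_FLAG;
            /* ...reset counts and other per-mode state here... */
            state->requested_mode = request & ~REQUEST_FLAG;
        }
    }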
A request from the M4 to change modes will always require passing
through the idle loop, and is not subject to timing guarantees. Only
transitions made autonomously by the M0 have guaranteed timing
constraints.
The work previously done in reset_counts is now implemented as part of
the request handling, so the tx_start, rx_start and wait_start labels
are no longer required.
An extra two cycles are required in the TX shortfall path because we
must now load the active mode to check whether we are in TX_START.
Two cycles are saved in the normal TX path because updating the active
mode to TX_RUN can now be done without checking the previous value.
Previously, finding the M0 in IDLE mode was ambiguous; it could indicate
either a normal outcome, or a shortfall limit having been hit.
To disambiguate, we add an error field to the M0 state. The errors
currently possible are an RX timeout or a TX timeout, both of which
can be obtained efficiently from the current operating mode due to
the values used.
This adds 3 cycles to both shortfall paths, in order to shift down
the mode to obtain the error code, and store it to the M0 state.
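For illustration, assuming the RX and TX mode values happen to be laid
out so that a single right shift yields the matching error code (the
real encodings may differ):

    #include <stdint.h>

    /* Illustrative values only. */
    enum { MODE_RX_RUN = 2, MODE_TX_RUN = 4 };
    enum { ERROR_NONE = 0, ERROR_RX_TIMEOUT = 1, ERROR_TX_TIMEOUT = 2 };

    /* When the shortfall limit is hit, the error code is derived from the
     * current operating mode with one shift before being stored. */
    static uint32_t error_from_mode(uint32_t mode)
    {
        return mode >> 1;   /* RX_RUN -> RX_TIMEOUT, TX_RUN -> TX_TIMEOUT */
    }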
The previous implementation of sweep mode had the M0 continuing to
receive and buffer samples during retuning. To avoid using data affected
by retuning, the code discarded two 16K blocks of samples after
retuning, before transferring one 16K block to the host.
However, retuning has to be done with the USB IRQ masked. The M4 byte
count cannot be advanced by the bulk transfer completion callback whilst
retuning is ongoing. This makes an RX buffer overrun likely, and
overruns now stall the M0, causing sweep timing to become inconsistent.
It makes much more sense to stop the M0 receiving data during retuning.
Using scheduled M0 mode changes between the RX and WAIT modes, it's now
possible to do this whilst retaining consistent sweep timing. The
comment block added to the start of the `sweep_mode()` function explains
the new implementation.
The new scheme substantially reduces the timing constraints on the host
retrieving the data. Previously, the host had to retrieve each sample
block before the M0 overwrote it, which would occur midway through
retuning for the next sweep, with samples that were going to be
discarded anyway.
With the new scheme, buffer space is used efficiently. No data is
written to the buffer which will be discarded. The host does not need to
finish retrieving each 16K block until its buffer space is due to be
reused, which is not until two sweep steps later. A great deal more
jitter in the bulk transfer timing can therefore now be tolerated,
without affecting sweep timing.
If the host does delay the retrieval of a block enough that its buffer
space is about to be reused, the M0 now stalls. This in turn will stall
the M4 sweep loop, causing the sweep to be paused until there is enough
buffer space to continue. Previously, sweeping continued regardless, and
the host received corrupted data if it did not keep up.
This lays the groundwork for implementing timed operations (#86). The M0
can be configured to automatically change modes when its byte count
reaches a specific value.
Checking the counter against the threshold and dispatching to the next
mode is handled by a new `jump_next_mode` macro, which replaces the
unconditional branches back to the start of the TX and RX loops.
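The logic implemented by the macro is roughly the following, expressed
in C (the field names and threshold handling are illustrative):

    #include <stdint.h>

    struct m0_sched {
        volatile uint32_t byte_count;   /* bytes transferred so far */
        volatile uint32_t threshold;    /* byte count at which to switch */
        volatile uint32_t next_mode;    /* mode to enter at the threshold */
    };

    /* Decide whether to stay in the current loop or dispatch to the next mode. */
    static uint32_t jump_next_mode(struct m0_sched* s, uint32_t current_mode)
    {
        if (s->byte_count >= s->threshold) {
            return s->next_mode;        /* dispatch to the next mode's loop */
        }
        return current_mode;            /* otherwise branch back to the loop start */
    }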
Making this change work requires some rearrangement of the code, such
that the destinations of all conditional branch instructions are within
reach. These branch instructions (`b[cond] label`) have a range of -256
to +254 bytes from the current program counter.
For this reason, the TX shortfall handling is moved earlier in the file,
and branches in the idle loop are restructured to use an unconditional
branch to rx_start, which is furthest away.
The additional code for switching modes adds 9 cycles to the normal RX
path, and 10 to the TX path (the difference is because the dispatch in
`jump_next_mode` is optimised for the longer RX path).
During shutdown of TX or RX, the host may stop supplying or retrieving
sample data some time before a stop request causes the M0 to be set back
to idle mode.
This makes it common for a spurious shortfall to occur during shutdown,
giving the misleading impression that there has been a throughput
problem. In fact, the final shortfall is simply an artifact.
This commit detects when this happens, and excludes the spurious
shortfall from the stats.
To implement this, we back up the shortfall stats whenever a new
shortfall begins. If the new shortfall later turns out to be spurious,
as indicated by a transition to IDLE while it is ongoing, then we roll
back the stats to their previous values.
We actually only need to back up the previous longest shortfall length.
To get the previous shortfall count, we can simply subtract one from the
current shortfall count.
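Schematically (field and function names are illustrative):

    #include <stdint.h>

    struct stats {
        uint32_t shortfall_count;
        uint32_t longest_shortfall;
        uint32_t previous_longest;      /* backup taken when a shortfall begins */
    };

    /* When a new shortfall begins, back up the longest-shortfall value. */
    static void begin_shortfall(struct stats* s)
    {
        s->previous_longest = s->longest_shortfall;
        s->shortfall_count++;
    }

    /* If the M0 goes to IDLE while the shortfall is still ongoing, it was
     * spurious: roll the stats back to their previous values. */
    static void roll_back_spurious_shortfall(struct stats* s)
    {
        s->longest_shortfall = s->previous_longest;
        s->shortfall_count--;           /* previous count is current minus one */
    }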
This change adds four cycles to the two shortfall paths - a load and
store to back up the previous longest shortfall length.
This count is only written by the M0, so there's no need to reload it
when the current value is already retained in a register.
Removing this load saves two cycles in all code paths.
Using our newly-defined macros, it's now straightforward to write
separate loops for RX and TX, with the idle loop dispatching to them
when a new mode setting is written by the M4.
This saves some cycles by reducing branches needed within each loop, and
makes it simpler to add new modes.
For macros which use internal labels, a name parameter is added. This
parameter is prefixed to the labels used, so that each mode's use of
that macro produces its own label names.
Similarly, where branches were taken in the handle_shortfall macro to
the "loop" label, these are replaced with the appropriate tx_loop or
rx_loop label.
The syntax `\name\()_suffix` is necessary to perform concatenation in
the GNU assembler.
The M4 previously buffered 16K of zeroes for the M0 to transmit, whilst
waiting for the first USB bulk transfer from the host to complete. The
first bulk transfer was placed in the second 16K buffer.
This avoided the M0 transmitting uninitialised data, but was not a
reliable solution, and delayed the transmission of the first
host-supplied samples.
Now that the M0 is placed in TX_START mode, this trick is no longer
necessary, because the M0 can automatically send zeroes until the first
bulk transfer is completed.
As such, the first bulk transfer now goes to the first 16K buffer.
Once the M4 byte count is increased by the bulk transfer completion
callback, the M0 will start transmitting the samples immediately.
In TX_START mode, a lack of data to send is not treated as a shortfall.
Zeroes are written to SGPIO, but no shortfall is recorded in the stats.
Using this mode helps avoid spurious shortfalls at startup.
As soon as there is data to transmit, the M0 switches to TX_RUN mode.
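Expressed in C, one pass of the TX path behaves roughly as follows (the
mode values, block size and helper functions are illustrative stand-ins
for what the assembly code does with SGPIO):

    #include <stdint.h>

    enum { MODE_TX_START = 5, MODE_TX_RUN = 6 };    /* illustrative values */

    static void tx_block(volatile uint32_t* active_mode,
                         uint32_t bytes_available,
                         void (*write_samples_to_sgpio)(void),
                         void (*write_zeros_to_sgpio)(void),
                         void (*record_shortfall)(void))
    {
        if (bytes_available >= 0x20) {              /* a full block is ready */
            if (*active_mode == MODE_TX_START) {
                *active_mode = MODE_TX_RUN;         /* first real data: leave TX_START */
            }
            write_samples_to_sgpio();
        } else {
            write_zeros_to_sgpio();                 /* keep SGPIO fed with zeros */
            if (*active_mode != MODE_TX_START) {
                record_shortfall();                 /* not a shortfall in TX_START */
            }
        }
    }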
This change adds five cycles to the normal TX path, in order to check
for TX_START mode before sending data, and to switch to TX_RUN in that
case.
It also adds two cycles to the TX shortfall path, to check for TX_START
mode and skip shortfall processing in that mode.
Note the allocation of r3 to store the mode setting, such that this
value is still available after the tx_zeros routine.
This limit allows implementing a timeout: if a TX underrun or RX overrun
continues for the specified number of bytes, the M0 will revert to idle.
A setting of zero disables the limit.
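Schematically (names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns true when the M0 should give up and revert to idle.
     * A limit of zero disables the check. */
    static bool shortfall_limit_reached(uint32_t shortfall_limit,
                                        uint32_t shortfall_length)
    {
        return shortfall_limit != 0 && shortfall_length >= shortfall_limit;
    }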
This change adds 5 cycles to the TX & RX shortfall paths, to check if a
limit is set and to check the shortfall length against the limit.
To enable this, we track the length of the current shortfall. Each time
an SGPIO read/write cannot be completed due to a shortfall, we increase
this length. Each time an SGPIO read/write completes successfully, we
reset the shortfall length to zero.
When a shortfall occurs and the existing shortfall length is zero, this
indicates a new shortfall, and the shortfall count is incremented.
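In outline (names and the per-block size are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Called once per SGPIO block in the RX and TX loops. */
    static void account_block(uint32_t* shortfall_length,
                              uint32_t* shortfall_count,
                              bool block_completed,
                              uint32_t block_size)
    {
        if (block_completed) {
            *shortfall_length = 0;              /* back to normal operation */
        } else {
            if (*shortfall_length == 0) {
                (*shortfall_count)++;           /* a new shortfall has begun */
            }
            *shortfall_length += block_size;    /* the shortfall continues */
        }
    }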
This change adds one cycle to the normal RX & TX paths, to zero the
shortfall length. To enable this to be done in a single cycle, we keep a
zero handy in a high register.
The extra accounting adds 10 cycles to the TX and RX shortfall paths,
plus an additional 3 cycles to the RX shortfall path since there are
now two branches involved: one to the shortfall handler, and another
back to the main loop.
Previously, these counts were zeroed by the M4 when leaving the OFF
transceiver mode. Instead, do this on the M0 at the point where the M0
leaves IDLE mode.
This avoids a potential race in which the M4 zeroes the M0 count after
the M0 has already started incrementing it.
At this point, streaming has been stopped, so there will be no further
SGPIO interrupts. However, the M0 will still be spinning on the interrupt
flag, waiting to proceed.
To ensure that the M0 actually reaches its idle loop, we set the SGPIO
interrupt flag once. The M0 will then finish spinning on the flag, clear
the flag, see the new mode setting, and jump to the idle loop.
In the idle mode, the M0 simply waits for a different mode to be set.
No SGPIO access is done.
One extra cycle is added to both TX code paths, to check whether the
M0 should return to the idle loop based on the mode setting. The RX
paths are unaffected as the branch to RX is handled first.
In TX, check if there are sufficient bytes in the buffer to write a
block to SGPIO. If not, write zeros to SGPIO instead.
In RX, check if there is sufficient space in the buffer to store a
block read from SGPIO. If not, do nothing, which discards the data.
In both of these shortfall cases, the M0 count is not incremented.
This ensures that in TX, old data is never repeated. The M0 will not
resume writing TX samples to SGPIO until the M4 count advances,
indicating new data being ready in the buffer. This fixes bug #180.
Similarly, in RX, old data is never overwritten. The M0 will not resume
writing RX samples to the buffer until the M4 count advances, indicating
new space being available in the buffer.
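Schematically, with the two byte counts (buffer and block sizes are
illustrative; the counts are free-running, so unsigned wraparound works
out naturally):

    #include <stdbool.h>
    #include <stdint.h>

    #define BUFFER_SIZE 0x8000u     /* 32K USB bulk buffer */
    #define BLOCK_SIZE  0x20u       /* one SGPIO block, illustrative */

    /* TX: only write samples if the M4 count leads the M0 count by a block. */
    static bool tx_data_available(uint32_t m4_count, uint32_t m0_count)
    {
        return (m4_count - m0_count) >= BLOCK_SIZE;
    }

    /* RX: only store samples if a block still fits between the counts. */
    static bool rx_space_available(uint32_t m4_count, uint32_t m0_count)
    {
        return (m0_count - m4_count) <= (BUFFER_SIZE - BLOCK_SIZE);
    }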
This adds the `hackrf_transfer -B` option, which displays the number of
bytes currently in the buffer along with the existing per-second stats.
The number of bytes in the buffer is indicated by the difference between
the M0 and M4 byte counters. In TX, the M4 count should lead the M0 count.
In RX, the M0 count should lead the M4 count.
With both counters in place, the number of bytes in the buffer is now
indicated by the difference between the M0 and M4 counts.
The M4 count needs to be increased whenever the M4 produces or consumes
data in the USB bulk buffer, so that the two counts remain correctly
synchronised.
There are three places where this is done:
1. When a USB bulk transfer in or out of the buffer completes, the count
is increased by the number of bytes transferred. This is the most
common case.
2. At TX startup, the M4 effectively sends the M0 16K of zeroes to
transmit, before the first host-provided data.
This is done by zeroing the whole 32K buffer area, and then setting
up the first bulk transfer to write to the second 16K, whilst the M0
begins transmission of the first 16K.
The count is therefore increased by 16K during TX startup, to account
for the initial 16K of zeros.
3. In sweep mode, some data is discarded. When this is done, the count
is incremented by the size of the discarded data.
The USB IRQ is masked whilst doing this, since a read-modify-write is
required, and the bulk transfer completion callback may be called at
any point, which also increases the count.
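A sketch of the masked update (the IRQ mask helpers are hypothetical and
stand in for whatever the firmware actually uses to disable the USB
interrupt):

    #include <stdint.h>

    void usb_irq_disable(void);     /* hypothetical mask helper */
    void usb_irq_enable(void);      /* hypothetical unmask helper */

    /* Account for discarded sweep data without racing the USB bulk
     * transfer completion callback, which also updates this count. */
    static void account_discarded_data(volatile uint32_t* m4_count,
                                       uint32_t discarded_bytes)
    {
        usb_irq_disable();
        *m4_count += discarded_bytes;   /* read-modify-write */
        usb_irq_enable();
    }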