2578 Commits

Author SHA1 Message Date
Martin Ling
b109a31fd3 Merge hackrf_stop_tx_cmd and hackrf_start_tx_cmd.
These both do the same thing: set transceiver mode to OFF.
2022-03-18 10:56:58 +00:00
Martin Ling
958c742189 Remove delays from hackrf_stop_rx_cmd and hackrf_stop_tx_cmd.
These were added in #805, as a workaround to prevent their parent
functions from returning before transfer cancellations had completed.
This has since been fixed properly in #1029.
2022-03-18 10:42:40 +00:00
Martin Ling
503cd3316c Remove request_exit() function.
This just set the do_exit flag, and by this point was only called in one place.
2022-03-18 02:20:34 +00:00
Martin Ling
c4789df44c Set streaming flag in prepare_transfers().
This simplifies prepare_setup_transfers(), which was just setting
the flag if prepare_transfers() returned success, and passing on
its return value.
2022-03-18 02:20:34 +00:00
Martin Ling
5afd31e21c Set streaming flag in prepare_setup_transfers().
Avoids conditionally duplicating this across three other places.
2022-03-18 02:20:34 +00:00
Martin Ling
960d8015a4 Clear streaming flag in cancel_transfers().
Moving this into cancel_transfers() avoids duplicating it in the two
stop functions.
2022-03-18 02:20:34 +00:00
Martin Ling
c74c742391 Simplify hackrf_libusb_transfer_callback.
There are now only two possible outcomes to this function: either we
successfully resubmitted a transfer, or the transfer is finished and we
end up calling transfer_finished().

So we can go ahead and simplify it accordingly.
2022-03-18 02:20:34 +00:00
Martin Ling
54e00de167 Clear streaming flag in transfer_finished().
Since we always do these together, move it into the function.
2022-03-18 02:20:34 +00:00
Martin Ling
6bd9cb0553 Clear streaming flag if a transfer was cancelled.
If a transfer was cancelled, we are on our way to shutdown.

If hackrf_stop_tx() or hackrf_stop_rx() were called, they will already
have cleared this flag, but it is not cleared in hackrf_close(), and
for consistency with other paths it makes sense to clear it here.
2022-03-18 02:20:34 +00:00
Martin Ling
125bf9f7bb Don't call callback or submit new transfers once streaming stops.
This stops the RX callback from being called again with further data
once it has returned nonzero, or after a transfer had an error status.
2022-03-18 02:20:34 +00:00
Martin Ling
6720e56fc0 Clear streaming flag if we didn't resubmit a transfer.
If result < 0 here, libusb_submit_transfer returned an error, so we
need to shut down.

If !resubmit, then cancel_transfers() was already called by one of the
stop or close functions, so streaming is already false.
2022-03-18 02:20:34 +00:00
Martin Ling
9e1cb5c003 Don't exit transfer thread if an error occurs.
In the case of a libusb error, we still need the transfer thread
running, in order to handle outstanding cancellations and to signal the
condition variable when that is done.
2022-03-18 01:57:40 +00:00
Martin Ling
d3e4e9b6de Merge pull request #1067 from Matioupi/master
proposal to fix regression in 4c9fcf86651232c2104b57510a0ac86cf86123e4 #1057
2022-03-17 23:04:50 +00:00
Mathieu Peyréga
a8a6618728 fix regression in 4c9fcf86651232c2104b57510a0ac86cf86123e4 2022-03-17 21:06:21 +01:00
Martin Ling
8b9a33d440 Merge pull request #1063 from metayan/fast-exit
Ensure fast exit
2022-03-17 12:07:54 +00:00
Yan
790de7f47b Cleaner fast exit
Interrupt the event handling thread instead of waiting for timeout.
2022-03-16 11:13:00 +00:00
Yan
7ff92b3b05 Ensure fast exit
transfer_threadproc has a timeout of half a second, so
when kill_transfer_thread tries to pthread_join, it often
has to wait until the timeout kicks in.

With this fix, we ensure that a final request is made after
request_exit has been called, so that transfer_threadproc can exit its
loop in a fast and clean manner.
2022-03-15 12:43:45 +00:00
Michael Ossmann
3344ea8fce Merge pull request #982 from martinling/bug-180
Firmware sample buffer management overhaul, including safe handling of TX underruns
2022-03-01 19:07:07 -05:00
Martin Ling
ad3216435a Fix overlapping register allocations. 2022-02-28 23:02:34 +00:00
Martin Ling
d755f7a5c8 Correct order of requested mode and flag. 2022-02-28 17:12:45 +00:00
Michael Ossmann
de76404e60 Merge pull request #1045 from martinling/usb-isr
Move transceiver mode changes out of USB ISR.
2022-02-14 10:15:29 -07:00
Martin Ling
779483b9bd Make M0 state retrieval endian-safe. 2022-02-13 17:53:34 +00:00
Martin Ling
1fe06b425a Rename m0_state.{c,h} to usb_api_m0_state.{c,h} 2022-02-13 17:53:34 +00:00
Martin Ling
f3633e285f Replace direct setting of M0 mode with a request/ack mechanism.
This change avoids various possible races in which an autonomous mode
change by the M0 might clobber a mode change made from the M4, as well
as related races on other state fields that can be written by the M4.

The previous mode field is replaced by two separate ones:

- active_mode, which is written only by the M0, and indicates the
  current operating mode.

- requested_mode, which is written by the M4 to request a change.
  This field includes both the requested mode, and a flag bit. The M4
  writes the field with the flag bit set, and must then wait for the
  M0 to signal completion of the request by clearing the flag bit.

Whilst the M4 is blocked waiting for the flag bit to be cleared, the
M0 can safely make all the required changes to the state that are
needed for the transition to the requested mode. Once the transition
is complete, the M0 clears the flag bit and the M4 continues execution.

Request handling is implemented in the idle loop. To handle requests,
mode-specific loops simply need to check the request flag and branch to
idle if it is set.

A request from the M4 to change modes will always require passing
through the idle loop, and is not subject to timing guarantees. Only
transitions made autonomously by the M0 have guaranteed timing
constraints.

The work previously done in reset_counts is now implemented as part of
the request handling, so the tx_start, rx_start and wait_start labels
are no longer required.

An extra two cycles are required in the TX shortfall path because we
must now load the active mode to check whether we are in TX_START.

Two cycles are saved in the normal TX path because updating the active
mode to TX_RUN can now be done without checking the previous value.
2022-02-13 17:53:34 +00:00
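The request/ack handshake described above can be sketched in C. This is a hedged, single-threaded illustration only: the real implementation lives in the M0 assembly and the shared state block, and all names here (`REQUEST_FLAG`, `m0_state`, `m4_post_request`, `m0_service_request`, the mode values) are illustrative, not the firmware's actual identifiers.

```c
#include <stdint.h>

#define REQUEST_FLAG 0x80000000u

enum { MODE_IDLE = 0, MODE_RX = 1, MODE_TX_RUN = 2 };

struct m0_state {
	uint32_t active_mode;    /* written only by the M0 */
	uint32_t requested_mode; /* mode | flag bit, written by the M4 */
};

/* M4 side: post a request. In the firmware, the M4 then spins until the
 * M0 clears REQUEST_FLAG, signalling that the transition is complete. */
static void m4_post_request(struct m0_state *s, uint32_t mode)
{
	s->requested_mode = mode | REQUEST_FLAG;
}

/* M0 side, reached via the idle loop: perform the transition, then ack
 * by clearing the flag bit, which releases the waiting M4. */
static void m0_service_request(struct m0_state *s)
{
	uint32_t req = s->requested_mode;
	if (req & REQUEST_FLAG) {
		s->active_mode = req & ~REQUEST_FLAG;
		s->requested_mode = req & ~REQUEST_FLAG;
	}
}
```

Because only the M0 writes `active_mode` and only the M0 clears the flag, the M4 can never clobber an autonomous mode change in flight, which is the race the commit removes.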
Martin Ling
137f2481e5 Make an error code available when a shortfall limit is hit.
Previously, finding the M0 in IDLE mode was ambiguous; it could indicate
either a normal outcome, or a shortfall limit having been hit.

To disambiguate, we add an error field to the M0 state. The errors
currently possible are an RX timeout or a TX timeout, both of which
can be obtained efficiently from the current operating mode due to
the values used.

This adds 3 cycles to both shortfall paths, in order to shift down
the mode to obtain the error code, and store it to the M0 state.
2022-02-13 17:53:34 +00:00
Martin Ling
8bd3745253 Add some additional commentary. 2022-02-13 17:53:34 +00:00
Martin Ling
9f79a16b26 Rewrite sweep mode using timed operations.
The previous implementation of sweep mode had the M0 continuing to
receive and buffer samples during retuning. To avoid using data affected
by retuning, the code discarded two 16K blocks of samples after
retuning, before transferring one 16K block to the host.

However, retuning has to be done with the USB IRQ masked. The M4 byte
count cannot be advanced by the bulk transfer completion callback whilst
retuning is ongoing. This makes an RX buffer overrun likely, and
overruns now stall the M0, causing sweep timing to become inconsistent.

It makes much more sense to stop the M0 receiving data during retuning.
Using scheduled M0 mode changes between the RX and WAIT modes, it's now
possible to do this whilst retaining consistent sweep timing. The
comment block added to the start of the `sweep_mode()` function explains
the new implementation.

The new scheme substantially reduces the timing constraints on the host
retrieving the data. Previously, the host had to retrieve each sample
block before the M0 overwrote it, which would occur midway through
retuning for the next sweep, with samples that were going to be
discarded anyway.

With the new scheme, buffer space is used efficiently. No data is
written to the buffer which will be discarded. The host does not need to
finish retrieving each 16K block until its buffer space is due to be
reused, which is not until two sweep steps later. A great deal more
jitter in the bulk transfer timing can therefore now be tolerated,
without affecting sweep timing.

If the host does delay the retrieval of a block enough that its buffer
space is about to be reused, the M0 now stalls. This in turn will stall
the M4 sweep loop, causing the sweep to be paused until there is enough
buffer space to continue. Previously, sweeping continued regardless, and
the host received corrupted data if it did not keep up.
2022-02-13 17:53:34 +00:00
Martin Ling
cca7320fe4 Add a wait mode for the M0.
In wait mode, the byte counter is advanced, but no SGPIO read/writes are
done. This mode is intended to be used for implementing timed operations.
2022-02-13 16:46:12 +00:00
Martin Ling
3618a5352f Add a counter threshold at which the M0 will change to a new mode.
This lays the groundwork for implementing timed operations (#86). The M0
can be configured to automatically change modes when its byte count
reaches a specific value.

Checking the counter against the threshold and dispatching to the next
mode is handled by a new `jump_next_mode` macro, which replaces the
unconditional branches back to the start of the TX and RX loops.

Making this change work requires some rearrangement of the code, such
that the destinations of all conditional branch instructions are within
reach. These branch instructions (`b[cond] label`) have a range of -256
to +254 bytes from the current program counter.

For this reason, the TX shortfall handling is moved earlier in the file,
and branches in the idle loop are restructured to use an unconditional
branch to rx_start, which is furthest away.

The additional code for switching modes adds 9 cycles to the normal RX
path, and 10 to the TX path (the difference is because the dispatch in
`jump_next_mode` is optimised for the longer RX path).
2022-02-13 16:46:12 +00:00
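The counter-threshold dispatch performed by the `jump_next_mode` macro can be sketched in C as follows. This is an illustrative model under assumed names (`mode_sched`, `jump_next_mode` as a function), not the Thumb assembly itself:

```c
#include <stdint.h>

enum { MODE_RX = 0, MODE_WAIT = 1 };

struct mode_sched {
	uint32_t threshold; /* byte count at which to switch modes */
	uint32_t next_mode; /* mode to switch to at the threshold */
};

/* Evaluated once per block in place of the old unconditional branch
 * back to the loop start: stay in the current mode until the byte
 * count reaches the configured threshold, then dispatch to the next. */
static uint32_t jump_next_mode(uint32_t byte_count, uint32_t current_mode,
                               const struct mode_sched *sched)
{
	if (byte_count >= sched->threshold)
		return sched->next_mode;
	return current_mode;
}
```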
Martin Ling
7124b7192b Roll back shortfall stats if switched to idle in a shortfall.
During shutdown of TX or RX, the host may stop supplying or retrieving
sample data some time before a stop request causes the M0 to be set back
to idle mode.

This makes it common for a spurious shortfall to occur during shutdown,
giving the misleading impression that there has been a throughput
problem. In fact, the final shortfall is simply an artifact.

This commit detects when this happens, and excludes the spurious
shortfall from the stats.

To implement this, we back up the shortfall stats whenever a new
shortfall begins. If the new shortfall later turns out to be spurious,
as indicated by a transition to IDLE while it is ongoing, then we roll
back the stats to their previous values.

We actually only need to back up the previous longest shortfall length. To
get the previous shortfall count, we can simply subtract one from the
current shortfall count.

This change adds four cycles to the two shortfall paths - a load and
store to back up the previous longest shortfall length.
2022-02-13 16:46:12 +00:00
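The backup-and-rollback scheme above can be sketched in C. A minimal sketch with assumed names (`shortfall_stats`, `shortfall_begin`, `shortfall_rollback`); the firmware does the equivalent with a load and store in the shortfall paths:

```c
#include <stdint.h>

struct shortfall_stats {
	uint32_t count;          /* number of shortfalls so far */
	uint32_t longest;        /* longest shortfall seen, in bytes */
	uint32_t longest_backup; /* longest value before the current shortfall */
};

/* On entering a new shortfall: back up the longest-shortfall value and
 * count the new shortfall. */
static void shortfall_begin(struct shortfall_stats *s)
{
	s->longest_backup = s->longest;
	s->count++;
}

/* On a transition to IDLE while a shortfall is ongoing: the shortfall
 * was a shutdown artifact, so roll the stats back. The previous count
 * is simply the current count minus one. */
static void shortfall_rollback(struct shortfall_stats *s)
{
	s->longest = s->longest_backup;
	s->count--;
}
```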
Martin Ling
a5e1521535 Don't update buffer pointer until after checking for shortfall.
The buffer pointer is not needed in the shortfall paths. Moving this
update after the shortfall checks saves 3 cycles in each shortfall path.
2022-02-13 16:46:12 +00:00
Martin Ling
0e99419be2 Don't load M0 byte count from memory.
This count is only written by the M0, so there's no need to reload it
when the current value is already retained in a register.

Removing this load saves two cycles in all code paths.
2022-02-13 16:46:12 +00:00
Martin Ling
4e205994e3 Use separate loops for RX and TX modes.
Using our newly-defined macros, it's now straightforward to write
separate loops for RX and TX, with the idle loop dispatching to them
when a new mode setting is written by the M4.

This saves some cycles by reducing branches needed within each loop, and
makes it simpler to add new modes.

For macros which use internal labels, a name parameter is added. This
parameter is prefixed to the labels used, so that each mode's use of
that macro produces its own label names.

Similarly, where branches were taken in the handle_shortfall macro to
the "loop" label, these are replaced with the appropriate tx_loop or
rx_loop label.

The syntax `\name\()_suffix` is necessary to perform concatenation in
the GNU assembler.
2022-02-13 16:46:12 +00:00
Martin Ling
f08e0c17bf Use new macros in M0 code.
This commit is separate from the previous one which adds the macros, in
order to make the diffs easier to read.
2022-02-13 16:46:12 +00:00
Martin Ling
9d570cb558 Add macro versions of key parts of M0 code.
This commit is separate from the following one which uses the macros, in
order to make the diffs easier to read.
2022-02-13 16:46:12 +00:00
Martin Ling
68688e0ec4 Don't send 16K of zeroes to the M0 at TX startup.
The M4 previously buffered 16K of zeroes for the M0 to transmit, whilst
waiting for the first USB bulk transfer from the host to complete. The
first bulk transfer was placed in the second 16K buffer.

This avoided the M0 transmitting uninitialised data, but was not a
reliable solution, and delayed the transmission of the first
host-supplied samples.

Now that the M0 is placed in TX_START mode, this trick is no longer
necessary, because the M0 can automatically send zeroes until the first
bulk transfer is completed.

As such, the first bulk transfer now goes to the first 16K buffer.
Once the M4 byte count is increased by the bulk transfer completion
callback, the M0 will start transmitting the samples immediately.
2022-02-13 16:46:12 +00:00
Martin Ling
00b5ed7d62 Add an M0 TX_START mode, in which zeroes are sent until data is ready.
In TX_START mode, a lack of data to send is not treated as a shortfall.
Zeroes are written to SGPIO, but no shortfall is recorded in the stats.
Using this mode helps avoid spurious shortfalls at startup.

As soon as there is data to transmit, the M0 switches to TX_RUN mode.

This change adds five cycles to the normal TX path, in order to check
for TX_START mode before sending data, and to switch to TX_RUN in that
case.

It also adds two cycles to the TX shortfall path, to check for TX_START
mode and skip shortfall processing in that mode.

Note the allocation of r3 to store the mode setting, such that this
value is still available after the tx_zeros routine.
2022-02-13 16:46:12 +00:00
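The TX_START behaviour above can be sketched in C. This is an illustrative model, not the firmware's register-level logic; `tx_step` and its parameters are assumed names:

```c
#include <stdint.h>

enum { MODE_TX_START = 2, MODE_TX_RUN = 3 };

/* One TX step: returns the mode to continue in, and reports via
 * *record_shortfall whether a missing block should count in the stats.
 * Zeros are written to SGPIO either way when no data is ready. */
static uint32_t tx_step(uint32_t mode, int data_ready, int *record_shortfall)
{
	if (!data_ready) {
		/* Only TX_RUN treats a lack of data as a shortfall. */
		*record_shortfall = (mode == MODE_TX_RUN);
		return mode;
	}
	*record_shortfall = 0;
	/* The first real data promotes TX_START to TX_RUN. */
	return MODE_TX_RUN;
}
```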
Martin Ling
5abc39c53a Add USB requests and host support to set TX/RX shortfall limits.
This adds `-T` and `-R` options to `hackrf_debug`, which set the TX
underrun and RX overrun limits in bytes.
2022-02-13 16:46:12 +00:00
Martin Ling
2f79c03b2c hackrf_debug: allow parse_int() to handle 32-bit parameters. 2022-02-13 16:46:12 +00:00
Martin Ling
f0bc6eda30 Add a shortfall length limit.
This limit allows implementing a timeout: if a TX underrun or RX overrun
continues for the specified number of bytes, the M0 will revert to idle.

A setting of zero disables the limit.

This change adds 5 cycles to the TX & RX shortfall paths, to check if a
limit is set and to check the shortfall length against the limit.
2022-02-13 16:46:12 +00:00
Martin Ling
2c86f493d9 Keep track of longest shortfall.
This adds six cycles to the TX and RX shortfall paths.
2022-02-13 16:46:12 +00:00
Martin Ling
a7bd1e3ede Keep count of number of shortfalls.
To enable this, we keep a count of the current shortfall length. Each
time an SGPIO read/write cannot be completed due to a shortfall, we
increase this length. Each time an SGPIO read/write is completed
successfully, we reset the shortfall length to zero.

When a shortfall occurs and the existing shortfall length is zero, this
indicates a new shortfall, and the shortfall count is incremented.

This change adds one cycle to the normal RX & TX paths, to zero the
shortfall count. To enable this to be done in a single cycle, we keep a
zero handy in a high register.

The extra accounting adds 10 cycles to the TX and RX shortfall paths,
plus an additional 3 cycles to the RX shortfall path since there are
now two branches involved: one to the shortfall handler, and another
back to the main loop.
2022-02-13 16:46:12 +00:00
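The shortfall accounting built up across the three commits above (count, longest, limit) can be sketched together in C. The assembly keeps these values in registers and the M0 state block; all names here (`sf_state`, `account_block`, the 32-byte block size) are illustrative:

```c
#include <stdint.h>

struct sf_state {
	uint32_t len;     /* bytes of the current shortfall (0 = none) */
	uint32_t count;   /* number of distinct shortfalls */
	uint32_t longest; /* longest shortfall seen, in bytes */
	uint32_t limit;   /* revert to idle at this length; 0 = disabled */
};

/* Called once per 32-byte SGPIO block; returns 1 if the limit was hit
 * and the M0 should revert to idle mode. */
static int account_block(struct sf_state *s, int block_completed)
{
	if (block_completed) {
		s->len = 0; /* a completed transfer ends any shortfall */
		return 0;
	}
	if (s->len == 0)
		s->count++; /* zero length means a new shortfall starts */
	s->len += 32;
	if (s->len > s->longest)
		s->longest = s->len;
	return s->limit != 0 && s->len >= s->limit;
}
```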
Martin Ling
0f3069ee5e Move resetting of byte counts to the M0.
Previously, these counts were zeroed by the M4 when leaving the OFF
transceiver mode. Instead, do this on the M0 at the point where the M0
leaves IDLE mode.

This avoids a potential race in which the M4 zeroes the M0 count after
the M0 has already started incrementing it.
2022-02-13 16:46:12 +00:00
Martin Ling
3fd3c7786e Set M0 mode to IDLE when transceiver mode is OFF.
At this point, streaming has been stopped, so there will be no further
SGPIO interrupts. However, the M0 will still be spinning on the interrupt
flag, waiting to proceed.

To ensure that the M0 actually reaches its idle loop, we set the SGPIO
interrupt flag once. The M0 will then finish spinning on the flag, clear
the flag, see the new mode setting, and jump to the idle loop.
2022-02-13 16:46:12 +00:00
Martin Ling
32c725dd61 Add an idle mode for the M0.
In the idle mode, the M0 simply waits for a different mode to be set.
No SGPIO access is done.

One extra cycle is added to both TX code paths, to check whether the
M0 should return to the idle loop based on the mode setting. The RX
paths are unaffected as the branch to RX is handled first.
2022-02-13 16:46:12 +00:00
Martin Ling
5b50b2dfac Replace TX flag with a mode setting.
This is to let us start adding new operating modes for the M0.
2022-02-13 16:46:12 +00:00
Martin Ling
c0d0cd2a1d Check for sufficient bytes, or space in buffer, before proceeding.
In TX, check if there are sufficient bytes in the buffer to write a
block to SGPIO. If not, write zeros to SGPIO instead.

In RX, check if there is sufficient space in the buffer to store a block
read from SGPIO. If not, do nothing, which discards the data.

In both of these shortfall cases, the M0 count is not incremented.

This ensures that in TX, old data is never repeated. The M0 will not
resume writing TX samples to SGPIO until the M4 count advances,
indicating new data being ready in the buffer. This fixes bug #180.

Similarly, in RX, old data is never overwritten. The M0 will not resume
writing RX samples to the buffer until the M4 count advances, indicating
new space being available in the buffer.
2022-02-13 16:46:12 +00:00
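The TX and RX checks described above can be sketched in C. `m0_count` and `m4_count` are the free-running byte counters from the commits below; the buffer and block sizes match the 32K buffer and 32-byte SGPIO blocks mentioned elsewhere in this history, and the wrap-safe comparisons rely on unsigned subtraction. Function names are illustrative:

```c
#include <stdint.h>

#define BLOCK_SIZE  32u
#define BUFFER_SIZE 0x8000u /* 32K shared sample buffer */

/* TX: proceed only if the M4 has supplied at least one block beyond
 * what the M0 has transmitted; otherwise write zeros to SGPIO and do
 * not advance the M0 count. */
static int tx_block_ready(uint32_t m0_count, uint32_t m4_count)
{
	return (uint32_t)(m4_count - m0_count) >= BLOCK_SIZE;
}

/* RX: proceed only if at least one block of buffer space remains ahead
 * of the M4; otherwise discard the SGPIO data and do not advance the
 * M0 count. */
static int rx_space_available(uint32_t m0_count, uint32_t m4_count)
{
	return (uint32_t)(m0_count - m4_count) <= BUFFER_SIZE - BLOCK_SIZE;
}
```

Because the M0 count only advances when a block is actually transferred, TX can never repeat stale data and RX can never overwrite unread data, which is the substance of the bug #180 fix.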
Martin Ling
c8d120ff6c Display total M0 and M4 counts at end of hackrf_transfer.
Doing this requires keeping track of when the 32-bit counters wrap, and
updating 64-bit running totals.
2022-02-13 16:46:12 +00:00
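The wrap tracking described above can be sketched in C. A minimal sketch, assuming the host polls each 32-bit counter periodically and folds the delta into a 64-bit total; `update_total` is an assumed name, not the actual hackrf_transfer code:

```c
#include <stdint.h>

/* Fold the latest 32-bit counter reading into a 64-bit running total.
 * Unsigned subtraction yields the correct delta even if the 32-bit
 * counter wrapped since the previous reading, provided it is polled
 * more often than once per 4 GiB of data. */
static void update_total(uint64_t *total, uint32_t *last, uint32_t now)
{
	*total += (uint32_t)(now - *last);
	*last = now;
}
```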
Martin Ling
eb2be7995c Add hackrf_transfer option to display buffer stats.
This adds the `hackrf_transfer -B` option, which displays the number of
bytes currently in the buffer along with the existing per-second stats.

The number of bytes in the buffer is indicated by the difference between
the M0 and M4 byte counters. In TX, the M4 count should lead the M0 count.
In RX, the M0 count should lead the M4 count.
2022-02-13 16:46:12 +00:00
Martin Ling
79853d2b28 Add a second counter to keep track of bytes transferred by the M4.
With both counters in place, the number of bytes in the buffer is now
indicated by the difference between the M0 and M4 counts.

The M4 count needs to be increased whenever the M4 produces or consumes
data in the USB bulk buffer, so that the two counts remain correctly
synchronised.

There are three places where this is done:

1. When a USB bulk transfer in or out of the buffer completes, the count
   is increased by the number of bytes transferred. This is the most
   common case.

2. At TX startup, the M4 effectively sends the M0 16K of zeroes to
   transmit, before the first host-provided data.

   This is done by zeroing the whole 32K buffer area, and then setting
   up the first bulk transfer to write to the second 16K, whilst the M0
   begins transmission of the first 16K.

   The count is therefore increased by 16K during TX startup, to account
   for the initial 16K of zeros.

3. In sweep mode, some data is discarded. When this is done, the count
   is incremented by the size of the discarded data.

   The USB IRQ is masked whilst doing this, since a read-modify-write is
   required, and the bulk transfer completion callback may be called at
   any point, which also increases the count.
2022-02-13 16:46:12 +00:00