Currently, when using uasyncio.start_server(), the socket configuration is
done inside a uasyncio.create_task() background function. If the address
and port are already in use, however, this raises an OSError which cannot
be cleanly caught behind the create_task().
This commit moves the getaddrinfo and socket binding to the start_server()
function, and only creates the task if that succeeds. This means that any
OSError from the initial socket configuration is propagated directly up the
call stack, compatible with CPython behaviour.
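For example (the handler, host and port below are illustrative), a bind
failure can now be handled at the call site:

    import uasyncio as asyncio

    async def handler(reader, writer):
        writer.close()
        await writer.wait_closed()

    async def main():
        try:
            server = await asyncio.start_server(handler, "0.0.0.0", 8080)
        except OSError as exc:
            # e.g. EADDRINUSE now surfaces here, matching CPython.
            print("could not start server:", exc)
            return
        await asyncio.sleep(1)
        server.close()
        await server.wait_closed()

    asyncio.run(main())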
See #7444.
Signed-off-by: Damien George <damien@micropython.org>
Fixes the following (the line numbers match commit 0e87459e2b):
../../extmod/crypto-algorithms/sha256.c:49:19: runtime error: left shif...
../../extmod/moduasyncio.c:106:35: runtime error: member access within ...
../../py/binary.c:210:13: runtime error: left shift of negative value -...
../../py/mpz.c:744:16: runtime error: negation of -9223372036854775808 ...
../../py/objint.c:109:22: runtime error: left shift of 1 by 31 places c...
../../py/objint_mpz.c:374:9: runtime error: left shift of 4611686018427...
../../py/objint_mpz.c:374:9: runtime error: left shift of negative valu...
../../py/parsenum.c:106:14: runtime error: left shift of 46116860184273...
../../py/runtime.c:395:33: runtime error: left shift of negative value ...
../../py/showbc.c:177:28: runtime error: left shift of negative value -...
../../py/vm.c:321:36: runtime error: left shift of negative value -1
Testing was done on an amd64 Debian Buster system using gcc-8.3 and these
settings:
CFLAGS += -g3 -Og -fsanitize=undefined
LDFLAGS += -fsanitize=undefined
The conditional (x ? &x->i : NULL) in the introduced TASK_PAIRHEAP macro
assembles (under amd64 gcc 8.3 -Os) to the same code as plain &x->i, since
i is the initial field of the struct. However, for the purposes of
undefined behavior analysis the conditional is needed, because forming
&x->i when x is NULL is itself undefined behavior.
Signed-off-by: Jeff Epler <jepler@gmail.com>
This commit reschedules the BT HCI poll soft timer so that it is called exactly when
the next timer expires.
Signed-off-by: Damien George <damien@micropython.org>
The comments in NimBLE for ble_gattc_notify_custom() state that "This
function consumes the supplied mbuf regardless of the outcome." Inspection
of the NimBLE code shows that this is the case, so the comment can be
removed.
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes a problem with a race between cancellation of task A and
completion of task B, when A waits on B. If task B completes just before
task A is cancelled then the cancellation of A does not work. Instead,
the CancelledError meant to cancel A gets passed through to B (that's
expected behaviour) but B handles it as a "Task exception wasn't retrieved"
scenario, printing out such a message (this is because finished tasks point
their "coro" attribute to themselves to indicate they are done, and
implement the throw() method, but that method inadvertently catches the
CancelledError). The correct behaviour is for B to bounce that
CancelledError back out.
This bug is mainly seen when wait_for() is used, and in that context the
symptoms are:
- occurs when using wait_for(T, S), if the task T being waited on finishes
at exactly the same time as the wait-for timeout S expires
- task T will have run to completion
- the "Task exception wasn't retrieved message" is printed with
"<class 'CancelledError'>" as the error (ie no traceback)
- the wait_for(T, S) call never returns (it's never put back on the
uasyncio run queue) and all tasks waiting on this are blocked forever
from running
- uasyncio otherwise continues to function and other tasks continue to be
scheduled as normal
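The shape of the failing scenario is roughly the following (an illustrative
sketch only, not a reliable reproducer, since it depends on T and the
timeout finishing on the same scheduler pass):

    import uasyncio as asyncio

    async def t():
        # Runs for (almost exactly) as long as the wait_for timeout below.
        await asyncio.sleep(1)

    async def main():
        try:
            # If t() completes on the same pass through the scheduler as the
            # timeout expiring, the CancelledError used to be swallowed by
            # the already-finished task and this call would never return.
            await asyncio.wait_for(t(), 1)
        except asyncio.TimeoutError:
            print("timed out")

    asyncio.run(main())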
The fix here reworks the "waiting" attribute of Task to be called "state"
and uses it to indicate whether a task is: running and not awaited on,
running and awaited on, finished and not awaited on, or finished and
awaited on. This means the task does not need to point "coro" to itself to
indicate finished, and also allows removal of the throw() method.
A benefit of this is that "Task exception wasn't retrieved" messages can go
back to being able to print the name of the coroutine function.
Fixes issue #7386.
Signed-off-by: Damien George <damien@micropython.org>
With docs and a multi-test using TCP server/client.
This method is a MicroPython extension, although there is discussion of
adding it to CPython: https://bugs.python.org/issue41305
Signed-off-by: Mike Teachman <mike.teachman@gmail.com>
This fix prevents server.wait_closed() from raising an AttributeError when
trying to access server.task. This can happen if it is called immediately
after start_server().
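A sketch of the call pattern this protects (handler, host and port are
illustrative):

    import uasyncio as asyncio

    async def handler(reader, writer):
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handler, "0.0.0.0", 8080)
        server.close()
        # Previously this could raise AttributeError if called straight
        # after start_server(), before server.task was set.
        await server.wait_closed()

    asyncio.run(main())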
The random module's getrandbits() method didn't give a proper error message
when called with a value outside the range 1-32, which could lead to
confusion when using this function (which under CPython can accept numbers
larger than 32). Now, instead of simply giving a ValueError, it gives an
error message stating that the number of bits is constrained.
Also, since the random module's getrandbits() and randint() functions
differ from CPython, tests have been added to describe these differences.
For getrandbits() the relevant documentation is shown and added to the
docs, and the same is given for the randint() method so that the
information is more easily found.
Finally, since the int object lacks the bit_length() method, there is also
a test for that method to include within the docs, showing the difference
from CPython.
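A short sketch of the differences described above (the exact error text
varies between versions):

    import random  # "urandom" on some MicroPython ports

    print(random.getrandbits(16))   # works on CPython and MicroPython

    try:
        # Works on CPython; on MicroPython the number of bits is
        # constrained to 1-32, so this raises ValueError.
        print(random.getrandbits(64))
    except ValueError as exc:
        print("MicroPython:", exc)

    # int.bit_length() is a further difference: available on CPython but
    # not provided by MicroPython's int object.
    print(hasattr(1, "bit_length"))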
If digest() is called then the hash object is put in a "final" state, and
calling update() or digest() again will raise a ValueError (instead of
silently producing the wrong result).
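A sketch of the new behaviour (MicroPython-specific; CPython's hashlib
allows further calls after digest()):

    import uhashlib

    h = uhashlib.sha256(b"abc")
    print(h.digest())

    try:
        h.update(b"def")   # the object is now in its "final" state
    except ValueError:
        print("cannot update after digest()")

    try:
        h.digest()         # calling digest() a second time also raises
    except ValueError:
        print("cannot call digest() twice")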
See issue #4119.
Signed-off-by: Damien George <damien@micropython.org>
uctypes.FLOAT32 has a special value representation and
uctypes_struct_scalar_size() should be used instead of GET_SCALAR_SIZE().
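A rough user-level illustration (field names and offsets are arbitrary);
FLOAT32 fields take part in size and offset calculations like any other
scalar type:

    import uctypes

    DESC = {
        "flags": uctypes.UINT32 | 0,
        "value": uctypes.FLOAT32 | 4,
    }

    buf = bytearray(uctypes.sizeof(DESC))  # relies on the FLOAT32 scalar size
    s = uctypes.struct(uctypes.addressof(buf), DESC, uctypes.NATIVE)
    s.value = 1.5
    print(uctypes.sizeof(DESC), s.value)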
Signed-off-by: Damien George <damien@micropython.org>
The generated regex code is limited in the range of jumps and counts it
can encode. This commit checks all cases which can overflow given the
right kind of input regex, and returns an error in such cases.
This change assumes that the results that overflow an int8_t do not
overflow a platform int.
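From Python the effect is that a pathologically large pattern is rejected
with an exception instead of producing corrupted regex code, roughly as
follows (the pattern size needed to hit the limit and the exact exception
type depend on the port):

    import re  # the built-in "re"/"ure" module on MicroPython

    pattern = "(a|b)" * 200
    try:
        re.compile(pattern)
        print("pattern accepted")
    except Exception as exc:
        print("pattern rejected:", exc)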
Closes: #7078
Signed-off-by: Jeff Epler <jepler@gmail.com>
This helps to reduce memory fragmentation, by freeing the heap data as soon
as it is not needed. It also helps the compiler keep a reference to the
beginning of both arrays, which need to be traceable by the GC (otherwise
some compilers may optimise this reference to something else).
Signed-off-by: Damien George <damien@micropython.org>
Previously, the MICROPY_PY_BLUETOOTH_ENABLE_CENTRAL_MODE macro
controlled enabling both the central mode and the GATT client
functionality (because usually the two go together).
This commit adds a new MICROPY_PY_BLUETOOTH_ENABLE_GATT_CLIENT
macro that separately enables the GATT client functionality.
This defaults to MICROPY_PY_BLUETOOTH_ENABLE_CENTRAL_MODE.
This also fixes a bug in the NimBLE bindings where a notification
or indication would not be received by a peripheral (acting as client)
as gap_event_cb wasn't handling it. Now both central_gap_event_cb
and peripheral_gap_event_cb share the same common handler for these
events.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Zephyr controllers can be queried for a static address (computed from the
device ID). BlueKitchen already supports this; this commit makes them both
use the same macro to enable the feature.
This is a MicroPython extension that allows code running in IRQ (hard or
soft) or scheduler context to sequence asyncio code.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
On error, the handle is only available in err->att_handle rather than in
attr->handle, which is used in the non-error case.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
If the _IRQ_L2CAP_RECV handler does the actual consumption of the incoming
data (i.e. via l2cap_recvinto), rather than setting a flag for
non-scheduler-context to handle it later, then two things can happen:
- It can starve the VM (i.e. the scheduled task never terminates). This is
because calling l2cap_recvinto will empty the rx buffer, which will grant
more credits to the channel (an HCI command), meaning more data can
arrive. This means that the loop in hal_uart.c that keeps reading HCI
data from the uart and executing NimBLE events as they are created will
not terminate, preventing other VM code from running.
- There's no flow control (i.e. data will arrive too quickly). The channel
shouldn't be given credits until after we return from scheduler context.
It's preferable that no work is done in scheduler/IRQ context. But to
prevent this being a problem, this commit changes l2cap_recvinto so that,
if it is called in IRQ context and the Python handler empties the rx
buffer, credits are not granted until the Python handler is complete.
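For example, a handler along these lines (names and buffer size are
illustrative; the _IRQ_L2CAP_RECV value is as listed in the bluetooth
module documentation) can now safely consume the data directly:

    import bluetooth

    _IRQ_L2CAP_RECV = 25  # from the bluetooth module docs

    ble = bluetooth.BLE()
    ble.active(True)

    rx_buf = bytearray(512)

    def bt_irq(event, data):
        if event == _IRQ_L2CAP_RECV:
            conn_handle, cid = data
            # Emptying the rx buffer here, in IRQ/scheduler context, is what
            # this commit makes safe: channel credits are not granted back
            # to the peer until this handler returns.
            n = ble.l2cap_recvinto(conn_handle, cid, rx_buf)
            # ... process rx_buf[:n] ...

    ble.irq(bt_irq)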
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
These args are already bounds checked and clipped, and using unsigned ints
can be more efficient. It also eliminates possible issues and compiler
warnings with shifting of signed integers.
Signed-off-by: Damien George <damien@micropython.org>
The superblock for littlefs is in block 0 and 1, but block 0 may be erased
or partially written, so block 1 must be checked if block 0 does not have a
valid littlefs superblock in it.
Prior to this commit, the mount of a block device which auto-detected the
filesystem type would fail for littlefs if block 0 did not contain a valid
superblock. That is now fixed.
Signed-off-by: Damien George <damien@micropython.org>
This allows sending arbitrary HCI commands and getting the response. The
return value of the function is the status of the command.
This is intended for debugging and not to be a part of the public API, and
must be enabled via mpconfigboard.h. It's currently only implemented for
NimBLE bindings.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit prevents uos.mount() from raising an AttributeError.
vfs_autodetect() is supposed to return an object that has a "mount" method,
so if no filesystem is found it should raise an OSError(ENODEV) and not
return the bdev itself which has no "mount" method.
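A minimal sketch (the RAM block device is illustrative and deliberately
contains no filesystem):

    import uos

    class RAMBlockDev:
        # A tiny block device with no filesystem written to it.
        def __init__(self, block_size=512, num_blocks=50):
            self.block_size = block_size
            self.data = bytearray(block_size * num_blocks)

        def readblocks(self, block_num, buf):
            off = block_num * self.block_size
            buf[:] = self.data[off:off + len(buf)]

        def writeblocks(self, block_num, buf):
            off = block_num * self.block_size
            self.data[off:off + len(buf)] = buf

        def ioctl(self, op, arg):
            if op == 4:  # block count
                return len(self.data) // self.block_size
            if op == 5:  # block size
                return self.block_size

    try:
        uos.mount(RAMBlockDev(), "/ramdisk")
    except OSError as exc:
        # Now raises OSError(ENODEV) rather than AttributeError.
        print("no filesystem found:", exc)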
Since CPython 3.8 the optional "sep" argument to hexlify is officially
supported, so update comments in the code and the docs to reflect this.
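For example (module name is "ubinascii" on MicroPython):

    import binascii

    print(binascii.hexlify(b"\x12\x34\x56"))        # b'123456'
    print(binascii.hexlify(b"\x12\x34\x56", ":"))   # b'12:34:56'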
Signed-off-by: Damien George <damien@micropython.org>