Current users of fixed vstr buffers (building file paths) assume that there
is no overflow and do not check for overflow after building the vstr. This
has the potential to lead to NULL pointer dereferences
(when vstr_null_terminated_str returns NULL because it can't allocate RAM
for the terminating byte) and stat'ing and loading invalid path names (due
to the path being truncated). The safest and simplest thing to do in these
cases is to just raise an exception if a write goes beyond the end of a fixed
vstr buffer, which is what this patch does. It also simplifies the vstr
code.
The vstr arguments passed to the calls to vstr_add_len are dynamically
allocated (ie fixed_buf=false), so vstr_add_len will never return NULL and
there's no need to check for it. Any out-of-memory errors are raised by
the call to m_renew in vstr_ensure_extra.
The aim of this patch is to rewrite the functions that create exception
instances (mp_obj_exception_make_new and mp_obj_new_exception_msg_varg) so
that they do not call any functions that may raise an exception. Otherwise
it's possible to create infinite recursion with an exception being raised
while trying to create an exception object.
The three main things that are done to accomplish this are:
1. Change mp_obj_new_exception_msg_varg to just format the string, then
call mp_obj_exception_make_new to actually create the exception object.
2. In mp_obj_exception_make_new and mp_obj_new_exception_msg_varg, try to
allocate all memory first using functions that don't raise exceptions.
If any of the memory allocations fail (return NULL), then degrade
gracefully by trying other options for memory allocation, eg using the
emergency exception buffer.
3. Use a custom printer backend to conservatively format strings: if it
can't allocate memory then it just truncates the string.
As part of this rewrite, raising an exception without a message, like
KeyError(123), will now use the emergency buffer to store the arg and
traceback data if there is no heap memory available.
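As a point of clarification, "without a message" here just means an exception
whose argument is not a string, as in this standard-Python illustration (the
emergency-buffer handling itself happens inside the C runtime and is not
visible at this level):

    try:
        raise KeyError(123)
    except KeyError as exc:
        # the argument is kept in args; there is no message string to format
        print(exc.args)  # prints (123,)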
Memory use with this patch is unchanged. Code size is increased by:
   bare-arm:    +136
   minimal x86: +124
   unix x64:     +72
   unix nanbox:  +96
   stm32:        +88
   esp8266:      +92
   cc3200:       +80
This allows user classes to implement the __abs__ special method, and saves
code size (104 bytes for x86_64), even though during the refactor an issue
was fixed and a few optimizations were made:
* abs() of minimum (negative) small int value is calculated properly.
* objint_longlong and objint_mpz avoid allocating a new object if the
argument is already non-negative.
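As a rough illustration of the user-facing behaviour this enables (the Vec
class and its magnitude definition are invented for the example):

    class Vec:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __abs__(self):
            # the builtin abs() now dispatches to this special method
            return (self.x * self.x + self.y * self.y) ** 0.5

    print(abs(Vec(3, 4)))  # prints 5.0
    print(abs(-5))         # plain ints behave as before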
Prior to this patch calling pyb.Timer(id) would always create a new timer
instance, even if there was an existing one. This patch fixes this
behaviour to match other peripherals, like UART, such that constructing a
timer with just the id will retrieve any existing instance.
The patch also refactors the way timers are validated on construction to
simplify and reduce code size.
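A hypothetical REPL-style sketch of the new behaviour on a pyb-based board
(the timer number and frequency are arbitrary):

    import pyb

    tim = pyb.Timer(4, freq=10)  # create and configure timer 4
    ref = pyb.Timer(4)           # id only: fetches the existing instance
    print(tim is ref)            # with this patch this should print True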
This patch also removes the empty type "pinbase_type" (which crashes if
accessed) and uses "machine_pinbase_type" instead as the type of the
PinBase singleton.
Given that various ports now require submodules, rewrite the section
to be more generic.
Also, add the git submodule update command to the other sections so users
can get started easily.
If, for class X, X.__add__(Y) doesn't exist (or returns NotImplemented),
try Y.__radd__(X) instead.
This patch could be simpler, but it requires undoing the operand swap and
the operation switch in order to get a non-confusing error message in case
__radd__ doesn't exist.
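A small standard-Python example of the semantics being added (the Seconds
class is invented for the example):

    class Seconds:
        def __init__(self, value):
            self.value = value

        def __radd__(self, other):
            # reached for eg 1 + Seconds(2): int.__add__ returns
            # NotImplemented for a Seconds right operand, so the runtime
            # falls back to the right operand's __radd__
            return self.value + other

    print(1 + Seconds(2))  # prints 3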
connect, send, recv, sendto and recvfrom now release the GIL. accept
already releases the GIL because it calls mp_hal_delay_ms() within its
busy-wait loop.
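A rough sketch of what this means in practice, assuming a build with _thread
support and network access (the peer address is just a placeholder chosen so
that recv() will block for a while):

    import socket, _thread, time

    def reader(sock):
        # recv() now releases the GIL while it blocks, so the loop in the
        # main thread below keeps running in the meantime
        print("received:", sock.recv(64))

    s = socket.socket()
    s.connect(socket.getaddrinfo("example.com", 80)[0][-1])
    _thread.start_new_thread(reader, (s,))

    for i in range(5):
        print("main thread tick", i)
        time.sleep(0.5)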
Previous to this patch the i2c.scan() method would do up to 100 probes per
I2C address, to detect the devices on the bus. This repeated probing was a
relic from when the code was copied from the accelerometer initialisation,
which requires doing repeated probes while waiting for the accelerometer
chip to turn on.
But I2C devices shouldn't need more than 1 probe to detect their presence,
and the generic software I2C implementation uses 1 probe successfully. So
this patch changes the implementation to use 1 probe per address, which
significantly speeds up the scan operation.
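Typical usage that benefits from the speed-up, assuming a pyboard-style
pyb.I2C bus (the bus number is a placeholder):

    import pyb

    i2c = pyb.I2C(1, pyb.I2C.MASTER)  # bus number depends on the board
    print(i2c.scan())  # list of responding 7-bit addresses, eg [76]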
This is to allow the reverse ops to be placed immediately after the normal
ops, so they can be tested as one range (which is an optimization for the
introduction of the reverse ops in the next patch).
Originally, they were grouped in blocks of 5, to make it easier, e.g., to
assess the numeric code of each. But now it makes more sense to group
them by semantics/properties, and then still split them into chunks,
which usually leads to chunks of ~6 ops.
It starts a dichotomy of mp_binary_op_t values which can't appear in the
bytecode. Another reason to move it is to have the values of OP_* and
OP_INPLACE_* nicely adjacent. This will also be needed for OP_REVERSE_*,
to be introduced soon.
With MBEDTLS_DEBUG_C disabled the function mbedtls_debug_set_threshold()
doesn't exist. There's also no need to call mbedtls_ssl_conf_dbg(), so a
few bytes can be saved by disabling that call and no longer needing the
mbedtls_debug callback.