If the new name starts with '/', cur_dir is not prepended any more, so that
the current working directory is respected. The test cases for rename are
extended to cover this functionality.
This change scans for '.', '..' and multiple '/' and normalizes the new
path name. If the resulting path does not exist, an error is raised.
Non-existing interim path elements are ignored if they are removed during
normalization.
This fixes the bug that stat(filename) would not consider the current
working directory. So if e.g. the cwd is "lib", then stat("main.py") would
return the info for "/main.py" instead of "/lib/main.py".
On arm64 with CPython:
>>> _thread.stack_size(32*1024)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: size not valid: 32768 bytes
So increase the CPython value in the test to 512k so it runs on more
systems (on modern Linux the default stack size is usually 8MB).
Constant expressions like "2 ** 3" will now be folded, and the special form
"X = const(2 ** 3)" will now compile because the argument to the const is
now a constant.
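A minimal sketch of the form that now compiles (the name _SIZE is just
illustrative):

    from micropython import const

    _SIZE = const(2 ** 3)  # 2 ** 3 is folded to 8 at compile time, so const() accepts it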
Fixes issue #5865.
This commit adds several small items to improve the support for OTA
updates on an esp32:
- a partition table for 4MB flash modules that has two OTA partitions ready
to go to do updates
- a GENERIC_OTA board that uses that partition table and that enables
automatic roll-back in the bootloader
- a new esp32.Partition.mark_app_valid_cancel_rollback() class-method to
signal that the boot is successful and should not be rolled back at the
next reset (a usage sketch follows this list)
- an automated test for doing an OTA update
- documentation updates
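A sketch of calling the new class-method once the updated firmware has
booted successfully:

    import esp32

    # tell the bootloader this boot is good, so it won't roll back at the next reset
    esp32.Partition.mark_app_valid_cancel_rollback()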
For ports that have a system malloc which is not garbage collected (eg
unix, esp32), the stream object for the DB must be retained separately to
prevent it from being reclaimed by the MicroPython GC (because the
berkeley-db library uses malloc to allocate the DB structure which stores
the only reference to the stream).
Although in some cases the user code will explicitly retain a reference to
the underlying stream because it needs to call close() on it, this is not
always the case, eg in cases where the DB is intended to live forever.
Fixes issue #5940.
One can now use `-i micropython` and `-i cpython` to add instances using
the `MICROPYTHON` and `CPYTHON3` variables (which can be overridden by env
vars).
This commit consolidates a number of check_esp_err functions that check
whether an ESP-IDF return code is OK and raises an exception if not. The
exception raised is an OSError with the error code as the first argument
(negative if it's ESP-IDF specific) and the ESP-IDF error string as the
second argument.
This commit also fixes esp32.Partition.set_boot to use check_esp_err, and
uses that function for a unit test.
This commit adds an idf_heap_info(capabilities) method to the esp32 module
which returns info about the ESP-IDF heaps. It's useful to get a bit of a
picture of what's going on when code fails because ESP-IDF can't allocate
memory anymore. Includes documentation and a test.
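A short usage sketch (assuming the esp32.HEAP_DATA capability constant;
each entry describes one ESP-IDF heap region):

    import esp32

    # each 4-tuple: (total bytes, free bytes, largest free block, minimum free seen)
    for region in esp32.idf_heap_info(esp32.HEAP_DATA):
        print(region)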
For combinations of certain versions of glibc and gcc the definition of
fpclassify always takes float as argument instead of adapting itself to
float/double/long double as required by the C99 standard. At the time of
writing this happens for instance for glibc 2.27 with gcc 7.5.0 when
compiled with -Os and glibc 3.0.7 with gcc 9.3.0. When calling fpclassify
with double as argument, as in objint.c, this results in an implicit
narrowing conversion which is not really correct plus results in a warning
when compiled with -Wfloat-conversion. So fix this by spelling out the
logic manually.
When the unix and windows ports use MICROPY_FLOAT_IMPL_FLOAT instead of
MICROPY_FLOAT_IMPL_DOUBLE, the test output has for example
complex(-0.15052, 0.34109) instead of the expected
complex(-0.15051, 0.34109).
Use one decimal place less for the output printing to fix this.
This commit adds Loop.new_event_loop() which is used to reset the singleton
event loop. This functionality is put here instead of in Loop.close() to
make it possible to write code that is compatible with CPython.
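A minimal sketch of resetting the loop between runs (assuming the module is
imported as uasyncio):

    import uasyncio as asyncio

    async def main():
        await asyncio.sleep(0)

    asyncio.run(main())
    asyncio.new_event_loop()  # reset the singleton so the next run starts fresh
    asyncio.run(main())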
In this part of the code there is no way to get the ** operator, so no need
to check for it.
This commit also adds tests for this, and other related, invalid const
operations.
The decompression of error-strings is only done if the string is accessed
via printing or via er.args. Tests are added for this feature to ensure
the decompression works.
This adds a couple of commands to the run-tests script to print the diffs
of failed tests and also to clean up the .exp and .out files after failed
tests. (And a spelling error is fixed while we are touching nearby code.)
Travis is also updated to use these new commands, including using it for
more builds.
Since automatically formatting tests with black, we have lost one line of
code coverage. This adds an explicit test to ensure we are testing the
case that is no longer covered implicitly.
This adds the Python files in the tests/ directory to be formatted with
./tools/codeformat.py. The basics/ subdirectory is excluded for now so we
aren't changing too much at once.
In a few places `# fmt: off`/`# fmt: on` was used where the code had
special formatting for readability or where the test was actually testing
the specific formatting.
Includes a test where the (non-uasyncio) client does a RST on the
connection, as well as a simple TCP server/client test where both sides are
using uasyncio, and a test for TCP stream close then write.
Fixes UDP non-blocking recv so it returns EAGAIN instead of ETIMEDOUT.
Timeout waiting for incoming data is also improved by replacing 100ms delay
with poll_sockets(), as is done in other parts of this module.
Fixes issue #5759.
This commit adds micropython.heap_locked() which returns the current
lock-depth of the heap, and can be used by Python code to check if the heap
is locked or not. This new function is configured via
MICROPY_PY_MICROPYTHON_HEAP_LOCKED and is disabled by default.
This commit also changes the return value of micropython.heap_unlock() so
it returns the current lock-depth as well.
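A sketch of the intended use, with MICROPY_PY_MICROPYTHON_HEAP_LOCKED
enabled:

    import micropython

    micropython.heap_lock()
    print(micropython.heap_locked())  # 1: heap is locked, depth 1
    print(micropython.heap_unlock())  # 0: the new lock-depth after unlocking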
This commit changes the BLE _IRQ_SCAN_RESULT data from:
addr_type, addr, connectable, rssi, adv_data
to:
addr_type, addr, adv_type, rssi, adv_data
This allows _IRQ_SCAN_RESULT to handle all scan result types (not just
connectable and non-connectable passive scans), and to distinguish between
them using adv_type which is an integer taking values 0x00-0x04 per the BT
specification.
This is a breaking change to the API, albeit a very minor one: the existing
connectable value was a boolean and True now becomes 0x00, False becomes
0x02.
Documentation is updated and a test added.
Fixes #5738.
This commit adds a test runner and initial test scripts which run multiple
Python/MicroPython instances (eg executables, target boards) in parallel.
This is useful for testing, eg, network and Bluetooth functionality.
Each test file has a set of functions called instanceX(), where X ranges
from 0 up to the maximum number of instances that are needed, N-1. Then
run-multitests.py will execute this script on N separate instances (eg
micropython executables, or attached boards via pyboard.py) at the same
time, synchronising their start in the right order, possibly passing IP
address (or other address like bluetooth MAC) from the "server" instance to
the "client" instances so they can connect to each other. It then runs
them to completion, collects the output, and then tests against what
CPython gives (or what's in a provided .py.exp file).
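A hypothetical sketch of such a test file (the multitest helper is injected
by the runner; the function bodies here are illustrative):

    def instance0():
        multitest.next()  # synchronise: tell the runner this instance is ready
        print("server")

    def instance1():
        multitest.next()
        print("client")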
The tests will be run using the standard unix executable for all instances
by default, eg:
$ ./run-multitests.py multi_net/*.py
Or they can be run with a board and unix executable via:
$ ./run-multitests.py --instance pyb:/dev/ttyACM0 --instance exec:micropython multi_net/*.py
Only the "==" operator was tested by the test suite for such arguments.
Other comparison operators like "<" take a different path in the code so
need to be tested separately.
When this variable is set to a non-empty string it triggers the REPL after a
command/module/file finishes running.
The Python file is referenced without the file extension because the
cmdline: parser in run-tests splits on spaces, so we can't use the -c
option since `import os` can't be written without a space.
This commit implements a more complete replication of CPython's behaviour
for equality and inequality testing of objects. This addresses the issues
discussed in #5382 and a few other inconsistencies. Improvements over the
old code include:
- Support for returning non-boolean results from comparisons (as used by
numpy and others).
- Support for non-reflexive equality tests.
- Preferential use of __ne__ methods and MP_BINARY_OP_NOT_EQUAL binary
operators for inequality tests, when available.
- Fallback to op2 == op1 or op2 != op1 when op1 does not implement the
(in)equality operators.
The scheme here makes use of a new flag, MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST,
in the flags word of mp_obj_type_t to indicate if various shortcuts can or
cannot be used when performing equality and inequality tests. Currently
four built-in classes have the flag set: float and complex are
non-reflexive (since nan != nan) while bytearray and frozenset instances
can equal other builtin class instances (bytes and set respectively). The
flag is also set for any new class defined by the user.
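The non-reflexive case can be seen directly:

    nan = float("nan")
    print(nan == nan)  # False: nan compares unequal to itself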
This commit also includes a more comprehensive set of tests for the
behaviour of (in)equality operators implemented in special methods.
This commit adds a generator test for throwing into a nested exception, and
one when using yield-from with a pending exception cleanup. Both these
tests currently fail on the native emitter, and are simplified versions of
native test failures from uasyncio in #5332.
This commit adds backward-word, backward-kill-word, forward-word,
forward-kill-word sequences for the REPL, with bindings to Alt+F, Alt+B,
Alt+D and Alt+Backspace respectively. It is disabled by default and can be
enabled via MICROPY_REPL_EMACS_WORDS_MOVE.
Further enabling MICROPY_REPL_EMACS_EXTRA_WORDS_MOVE adds extra bindings
for these new sequences: Ctrl+Right, Ctrl+Left and Ctrl+W.
The features are enabled on unix micropython-coverage and micropython-dev.
As the mktime documentation for CPython states: "The earliest date for
which it can generate a time is platform-dependent". In particular on
Windows this depends on the timezone so e.g. for UTC+2 the earliest is 2
hours past midnight January 1970. So change the reference to the earliest
possible, for UTC+14.
It is possible for `run_feature_check(pyb, args, base_path, 'float.py')` to
return `b'CRASH'`. This causes an unhandled exception in `int()`.
This commit fixes the problem by first testing for `b'CRASH'` before trying
to convert the return value to an integer.
Instances of the slice class are passed to __getitem__() on objects when
the user indexes them with a slice. In practice, the main use (other than
passing it on untouched) is to work out what the slice means in the context
of an array dimension of a particular length. Since Python 2.3
there has been a method on the slice class, indices(), that takes a
dimension length and returns the real start, stop and step, accounting for
missing or negative values in the slice spec. This commit implements such
an indices() method on the slice class.
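For example:

    s = slice(None, None, -1)
    print(s.indices(10))  # (9, -1, -1): concrete start, stop, step for length 10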
It is configurable at compile-time via MICROPY_PY_BUILTINS_SLICE_INDICES,
disabled by default, enabled on unix, stm32 and esp32 ports.
This commit also adds new tests for slice indices and for slicing unicode
strings.
Allows assigning attributes on class instances that implement their own
__setattr__. Both object.__setattr__ and super(A, b).__setattr__ will work
with this commit.
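A minimal example of the now-working pattern:

    class A:
        def __setattr__(self, name, value):
            print("setting", name)
            object.__setattr__(self, name, value)

    a = A()
    a.x = 1  # prints "setting x", then stores the attribute normally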
Because CPython 3.8.0 now produces different output:
- basics/parser.py: CPython does not allow '\\\n' as input.
- import/import_override: CPython imports _io.
This commit adds a sys.implementation.mpy entry when the system supports
importing .mpy files. This entry is a 16-bit integer which encodes two
bytes of information from the header of .mpy files that are supported by
the system being run: the second and third bytes, .mpy version, and flags
and native architecture. This allows determining the supported .mpy file
dynamically by code, and also for the user to find it out by inspecting
this value. It's further possible to dynamically detect if the system
supports importing .mpy files by `hasattr(sys.implementation, 'mpy')`.
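A sketch of inspecting the entry (assuming the version sits in the low
byte, per the header layout described above):

    import sys

    if hasattr(sys.implementation, "mpy"):
        sys_mpy = sys.implementation.mpy
        print("mpy version:", sys_mpy & 0xFF)    # second header byte
        print("flags/arch byte:", sys_mpy >> 8)  # third header byte
    else:
        print("this system cannot import .mpy files")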
Replace the is_running field with a tri-state variable to indicate
running/not-running/pending-exception.
Update tests to cover the various cases.
This allows cancellation in uasyncio even if the coroutine hasn't been
executed yet. Fixes #5242.
POSIX poll should always return POLLERR and POLLHUP in revents, regardless
of whether they were requested in the input events flags.
See issues #4290 and #5172.
Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, 4x 58 functions). And resulting code size is slightly reduced on
ports that use this feature.
Prior to this commit, when unwinding through an active finally the stack
was not being correctly popped/folded, which resulted in the VM crashing
for complicated unwinding of nested finallys.
This should be fixed with this commit, and more tests for return/break/
continue within a finally have been added to exercise this.
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.
Since it's safe to do this (doesn't lead to a crash or undefined behaviour)
the check is only enabled for MICROPY_CPYTHON_COMPAT.
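For example, the following is now rejected, matching CPython:

    def f():
        from math import *  # SyntaxError: import * is only allowed at module level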
Fixes issue #5121.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:
raise # 0 args (re-raise previous exception)
raise exc # 1 arg
raise exc from exc2 # 2 args (chained raise)
In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.
This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise. Every byte
counts! It also has the benefit of reducing code size (on all ports except
nanbox).
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, being a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general purpose user-defined operator, it
doesn't have to be just for numpy use.
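A user-level sketch of the new operator:

    class M:
        def __matmul__(self, other):
            return "M @ M"

    print(M() @ M())  # user-defined behaviour for the @ operator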
Based on work done by Jim Mussared.
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
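For reference, a small example of the native-alignment padding that this
offset computation affects (on typical platforms 'I' aligns to 4 bytes):

    import struct

    # 3 pad bytes are inserted after the 'B' so the 'I' is aligned
    # relative to the start of the buffer
    print(struct.calcsize("BI"))  # typically 8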
Fixes issue #3314.
With this patch exceptions that are re-raised have improved tracebacks
(less confusing, match CPython), and it makes re-raise slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Also general VM performance is not measurably affected.
Partially fixes issue #2928.
With this patch exception tracebacks that go through a finally are improved
(less confusing, match CPython), and it makes finally's slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Partially fixes issue #2928.
- Split 'qemu-arm' from 'unix' for generating tests.
- Add frozen module to the qemu-arm test build.
- Add test that reproduces the requirement to half-word align native
function data.
Enabled via MICROPY_PY_URE_DEBUG, disabled by default (but enabled on unix
coverage build). This is a rarely used feature that costs a lot of code
(500-800 bytes flash). Debugging of regular expressions can be done
offline with other tools.
As per PEP 485, this function appeared in Python 3.5. Configured via
MICROPY_PY_MATH_ISCLOSE which is disabled by default, but enabled for the
ports which already have MICROPY_PY_MATH_SPECIAL_FUNCTIONS enabled.
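For example:

    import math

    print(math.isclose(1.0, 1.0 + 1e-10))  # True: within the default rel_tol of 1e-09
    print(math.isclose(0.0, 1e-10))        # False: abs_tol defaults to 0.0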
Prior to this patch the amount of free space in an array (including
bytearray) was not being maintained correctly for the case of slice
assignment which changed the size of the array. Under certain cases (as
encoded in the new test) it was possible that the array could grow beyond
its allocated memory block and corrupt the heap.
Fixes issue #4127.
JSON requires that keys of objects be strings. CPython will therefore
automatically quote simple types (NoneType, bool, int, float) when they are
used directly as keys in JSON output. To prevent subtle bugs and emit
compliant JSON, MicroPython should at least test for such keys so they
aren't silently let through. Then doing the actual quoting is a similar
cost to raising an exception, so that's what is implemented by this patch.
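For example, CPython produces:

    import json

    print(json.dumps({1: "a", None: "b"}))  # {"1": "a", "null": "b"}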
Fixes issue #4790.
misc_aes.py and misc_mandel.py are adapted from sources in this repository.
misc_pystone.py is the standard Python pystone test. misc_raytrace.py is
written from scratch.
This benchmarking test suite is intended to be run on any MicroPython
target. As such all tests are parameterised with N and M: N is the
approximate CPU frequency (in MHz) of the target and M is the approximate
amount of heap memory (in kbytes) available on the target. When running
the benchmark suite these parameters must be specified and then each test
is tuned to run on that target in a reasonable time (<1 second).
The test scripts are not standalone: they require adding some extra code at
the end to run the test with the appropriate parameters. This is done
automatically by the run-perfbench.py script, in such a way that imports
are minimised (so the tests can be run on targets without filesystem
support).
To interface with the benchmarking framework, each test provides a
bm_params dict and a bm_setup function, with the latter taking a set of
parameters (chosen based on N, M) and returning a pair of functions, one to
run the test and one to get the results.
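A hypothetical sketch of that interface (names and parameters are
illustrative only):

    bm_params = {
        (32, 10): (500,),       # (N, M) -> parameters tuned for that target
        (1000, 1000): (5000,),
    }

    def bm_setup(params):
        (niter,) = params
        state = {"last": 0}

        def run():
            for i in range(niter):
                state["last"] = i * i

        def result():
            # (normalisation factor, output to verify against CPython)
            return niter, state["last"]

        return run, result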
When running the test the number of microseconds taken by the test is
recorded. Then this is converted into a benchmark score by inverting it
(so higher number is faster) and normalising it with an appropriate factor
(based roughly on the amount of work done by the test, eg number of
iterations).
Test outputs are also compared against a "truth" value, computed by running
the test with CPython. This provides a basic way of making sure the test
actually ran correctly.
Each test is run multiple times and the results averaged and standard
deviation computed. This is output as a summary of the test.
To make comparisons of performance across different runs the
run-perfbench.py script also includes a diff mode that reads in the output
of two previous runs and computes the difference in performance. Reports
are given as a percentage change in performance with a combined standard
deviation to give an indication if the noise in the benchmarking is less
than the thing that is being measured.
Example invocations for PC, pyboard and esp8266 targets respectively:
$ ./run-perfbench.py 1000 1000
$ ./run-perfbench.py --pyboard 100 100
$ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
Reuse the implementation for bytes since it works the same way regardless
of the underlying type. This method gets added for CPython compatibility
of bytearray, but to keep the code simple and small array.array now also
has a working decode method, which is non-standard but doesn't hurt.
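For example:

    import array

    print(bytearray(b"abc").decode())         # abc, as in CPython
    print(array.array("b", b"abc").decode())  # abc too (MicroPython-only)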
This allows figuring out the number of bytes in the memoryview object as
len(memview) * memview.itemsize.
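For example, with the feature enabled:

    import array

    m = memoryview(array.array("i", [1, 2, 3]))
    print(len(m) * m.itemsize)  # total number of bytes in the view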
The feature is enabled via MICROPY_PY_BUILTINS_MEMORYVIEW_ITEMSIZE and is
disabled by default.
It consists of:
1. "do_handshake" param (default True) to wrap_socket(). If it's False,
handshake won't be performed by wrap_socket(), as it would be done in
blocking way normally. Instead, SSL socket can be set to non-blocking mode,
and handshake would be performed before the first read/write request (by
just returning EAGAIN to these requests, while instead reading/writing/
processing handshake over the connection). Unfortunately, axTLS doesn't
really support non-blocking handshake correctly. So, while the framework for
this is implemented on MicroPython's module side, in case of axTLS, it
won't work reliably.
2. Implementation of .setblocking() method. It must be called on SSL socket
for blocking vs non-blocking operation to be handled correctly (for
example, it's not enough to wrap non-blocking socket with wrap_socket()
call - resulting SSL socket won't be itself non-blocking). Note that
.setblocking() propagates call to the underlying socket object, as
expected.
When running Linux on WSL, Popen.kill() can raise a ProcessLookupError if
the process does not exist anymore, which can happen here since the
previous statement already tries to close the process by sending Ctrl-D to
the running repl. This doesn't seem to be a problem on other OSes, so just
swallow the exception silently since it indicates the process has been
closed already, which after all is what we want.
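A minimal sketch of the pattern (kill_quietly is just an illustrative name;
p is a subprocess.Popen):

    def kill_quietly(p):
        try:
            p.kill()
        except ProcessLookupError:
            pass  # process already exited, eg after Ctrl-D closed the repl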
This is an implementation of a sliding qstr window used to reduce the
number of qstrs stored in a .mpy file. The window size is configured to 32
entries which takes a fixed 64 bytes (16-bits each) on the C stack when
loading/saving a .mpy file. It allows the most recent 32 qstrs to be
remembered so they don't need to be stored again in the .mpy file. The
qstr window
uses a simple least-recently-used mechanism to discard the least recently
used qstr when the window overflows (similar to dictionary compression).
This scheme only needs a single pass to save/load the .mpy file.
Reduces mpy file size by about 25% with a window size of 32.
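A hypothetical Python sketch of the window behaviour (the real
implementation is in C):

    class QstrWindow:
        def __init__(self, size=32):
            self.entries = []
            self.size = size

        def access(self, qstr):
            # returns True if qstr was already in the window (a hit)
            hit = qstr in self.entries
            if hit:
                self.entries.remove(qstr)  # move to most-recently-used position
            elif len(self.entries) == self.size:
                self.entries.pop()         # discard the least recently used entry
            self.entries.insert(0, qstr)
            return hit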
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
This patch fixes a bug in the VM when breaking within a try-finally. The
bug has to do with executing a break within the finally block of a
try-finally statement. For example:
    def f():
        for x in (1,):
            print('a', x)
            try:
                raise Exception
            finally:
                print(1)
                break
            print('b', x)
    f()
Currently in uPy the above code will print:
    a 1
    1
    1
    segmentation fault (core dumped)  micropython
Not only is there a seg fault, but the "1" in the finally block is printed
twice. This is because when the VM executes a finally block it doesn't
really know if that block was executed due to a fall-through of the try (no
exception raised), or because an exception is active. In particular, for
nested finallys the VM has no idea which of the nested ones have active
exceptions and which are just fall-throughs. So when a break (or continue)
is executed it tries to unwind all of the finallys, when in fact only some
may be active.
It's questionable whether break (or return or continue) should be allowed
within a finally block, because they implicitly swallow any active
exception, but nevertheless it's allowed by CPython (although almost never
used in the standard library). And uPy should at least not crash in such a
case.
The solution here relies on the fact that exception and finally handlers
always appear in the bytecode after the try body.
Note: there was a similar bug with a return in a finally block, but that
was previously fixed in b735208403
All exceptions that unwind through the async-with must be caught and
BaseException is the top-level class, which includes Exception and others.
Fixes issue #4552.
As mentioned in #4450, `websocket` was experimental with a single intended
user, `webrepl`. Therefore, we'll make this change without a weak
link `websocket` -> `uwebsocket`.
Instead of assuming that the method is a bytecode object, and only
supporting load of __name__, make the operation generic by delegating the
load to the method object itself. Saves a bit of code size and fixes the
case of attempting to load __name__ on a native method, see issue #4028.
As per the machine.UART documentation, this is used to set the length of
the RX buffer. The legacy read_buf_len argument is retained for backwards
compatibility, with rxbuf overriding it if provided.
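A sketch, assuming a pyb.UART on an stm32 board:

    from pyb import UART

    uart = UART(4, 115200, rxbuf=512)  # rxbuf sets the RX buffer length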
Also change the order of printing of flow so it is after stop (so bits,
parity, stop are one after the other), and reduce code size by using
mp_print_str instead of mp_printf where possible.
See issue #1981.
CPython does not have an implementation of select.poll() on some
operating systems (Windows, OSX depending on version) so skip the
test in those cases instead of failing it.
This ensures that implicit variables are only converted to implicit
closed-over variables (nonlocals) at the very end of the function scope.
If variables are closed-over when first used (read from, as was done prior
to this commit) then this can be incorrect because the variable may be
assigned to later on in the function, which means it is just a plain
local, not closed over.
Fixes issue #4272.
The way it was written previously the variable x was not an implicit
nonlocal, it was just a normal local (but the compiler has a bug which
incorrectly makes it a nonlocal).
Configurable via MICROPY_MODULE_GETATTR, disabled by default. Among other
things __getattr__ for modules can help to build lazy loading / code
unloading at runtime.
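A hypothetical lazy-loading sketch, placed at the top level of a module
(module and attribute names are illustrative):

    # in mymod.py
    def __getattr__(name):
        if name == "heavy":
            import heavy  # hypothetical submodule, imported on first access
            globals()["heavy"] = heavy
            return heavy
        raise AttributeError(name)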
Part of this test was trying to test some functionality of __getattribute__
but this method name was misspelt so it wasn't doing anything useful.
Fixing the typo in this name makes the test fail because MicroPython
doesn't support user defined __getattribute__ methods. So this part of the
test is removed. The remaining tests are modified slightly to make it
clearer what they are testing.
This test doesn't check the actual I/O behavior, just "static" invariants
like behavior on duplicate calls, or calls when the I/O object is not
registered with the poller.
This makes these special methods have the same calling behaviour as other
methods in a class instance (mp_convert_member_lookup() is already called
by mp_obj_class_lookup()).
mp_make_raise_obj must be used to convert a possible exception type to an
instance object, otherwise the VM may raise a non-exception object.
An existing test is adjusted to test this case, with the original test
already moved to generator_throw.py.
NaN and inf (signed and unsigned) are also handled correctly by using
signbit (they were also handled correctly with "val<0", but that didn't
handle -0.0 correctly). A test case is added for this behaviour.
This commit adds the math.factorial function in two variants:
- squared difference, which is faster than the naive version, relatively
compact, and non-recursive;
- a mildly optimised recursive version, faster than the above one.
There are some more optimisations that could be done, but they tend to take
more code, and more storage space. The recursive version seems like a
sensible compromise.
The new function is disabled by default, and uses the non-optimised version
by default if it is enabled. The options are MICROPY_PY_MATH_FACTORIAL
and MICROPY_OPT_MATH_FACTORIAL.
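For example, with the option enabled:

    import math

    print(math.factorial(10))  # 3628800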
This commit implements PEP479 which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
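For example:

    def gen():
        yield 1
        # raising StopIteration here would now become a RuntimeError;
        # simply returning is the correct way to finish
        return

    print(list(gen()))  # [1]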
Prior to this commit a function compiled with the native decorator
@micropython.native would not work correctly when accessing global
variables, because the globals dict was not being set upon function entry.
This commit fixes this problem by, upon function entry, setting as the
current globals dict the globals dict context the function was defined
within, as per normal Python semantics, and as bytecode does. Upon
function exit the original globals dict is restored.
In order to restore the globals dict when an exception is raised the native
function must guard its internals with an nlr_push/nlr_pop pair. Because
this push/pop is relatively expensive, in both C stack usage for the
nlr_buf_t and CPU execution time, the implementation here optimises things
as much as possible. First, the compiler keeps track of whether a function
even needs to access global variables. Using this information the native
emitter then generates three different kinds of code:
1. no globals used, no exception handlers: no nlr handling code and no
setting of the globals dict.
2. globals used, no exception handlers: an nlr_buf_t is allocated on the
C stack but it is not used if the globals dict is unchanged, saving
execution time because nlr_push/nlr_pop don't need to run.
3. function has exception handlers, may use globals: an nlr_buf_t is
allocated and nlr_push/nlr_pop are always called.
In the end, native functions that don't access globals and don't have
exception handlers will run more efficiently than those that do.
Fixes issue #1573.
If bytearray is constructed from str, a second argument of encoding is
required (in CPython), and third arg of Unicode error handling is allowed,
e.g.:
bytearray("str", "utf-8", "strict")
This is similar to bytes:
bytes("str", "utf-8", "strict")
This patch just allows passing the 2nd/3rd arguments to bytearray, but
doesn't try to validate them to not impact code size. (This is also
similar to how the bytes constructor is handled, though it does a bit
more validation, e.g. checking that an encoding argument is passed when the
arg is a str.)
The native emitter keeps the current exception in a slot in its C stack
(instead of on its Python value stack), so when it catches an exception it
must explicitly clear that slot so the same exception is not reraised later
on.
Back in 8047340d75 basic support was added in
the VM to handle return statements within a finally block. But it didn't
cover all cases, in particular when some finally's were active and others
inactive when the "return" was executed.
This patch adds further support for return-within-finally by correctly
managing the currently_in_except_block flag, and should fix all cases. The
main point is that finally handlers remain on the exception stack even if
they are active (currently being executed), and the unwind return code
should only execute those finally's which are inactive.
New tests are added for the cases which now pass.
PEP479 (see https://www.python.org/dev/peps/pep-0479/) prohibited raising
StopIteration from within a generator (it is turned into a RuntimeError).
This behaviour was introduced in Python 3.5 and in 3.7 was made compulsory.
Until uPy implements PEP479, this patch adds .py.exp files for the relevant
tests so they can be run under Python 3.7.
In Python 3.7 the behaviour of repr() of an exception with one argument
changed: it no longer prints a trailing comma in the argument list. See
https://bugs.python.org/issue30399
This patch modifies tests that rely on this behaviour to not rely on it.
And the python34.py test is updated to include a test for this behaviour
with a .exp file.
Input files like basics/string_format.py and float/string_format.py have
the same basename so using that name for writing the output (.exp and .out
files) when both tests fail, results in the output of the first one being
overwritten.
Avoid this by using unique names for the output, replacing path characters
with underscores.
With the recent change b488a4a848, a
generating function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
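A combined sketch with these options enabled:

    import ure

    print(ure.sub(r"\d", "#", "a1b2"))  # a#b#

    m = ure.match(r"(\d)(\d)", "42")
    print(m.groups())  # ('4', '2')
    print(m.span(1))   # (0, 1)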
Before this patch the context manager's __aexit__() method would not be
executed if a return/break/continue statement was used to exit an async
with block. async with now has the same semantics as normal with.
The fix here applies purely to the compiler, and does not modify the
runtime at all. It might (eventually) be better to define new bytecode(s)
to handle async with (and maybe other async constructs) in a cleaner, more
efficient way.
One minor drawback with addressing this issue purely in the compiler is
that it wasn't possible to get 100% CPython semantics. The thing that is
different here to CPython is that the __aexit__ method is not looked up in
the context manager until it is needed, which is after the body of the
async with statement has executed. So if a context manager doesn't have
__aexit__ then CPython raises an exception before the async with is
executed, whereas uPy will raise it after it is executed. Note that
__aenter__ is looked up at the beginning in uPy because it needs to be
called straightaway, so if the context manager isn't a context manager then
it'll still raise an exception at the same location as CPython. The only
difference is if the context manager has the __aenter__ method but not the
__aexit__ method, then in that case uPy has different behaviour. But this
is a very minor, and acceptable, difference.
This behaviour of a NULL write C method on a stream that uses the write
adaptor objects is no longer supported. It was only ever used by the
coverage build for testing the fail path of mp_get_stream_raise().
For i2c.py: the accelerometer now uses the new I2C driver, so the test
needs to explicitly init the legacy i2c object to work.
For pyb1.py: the legacy pyb.hid() call will crash if the USB_HID object is
not initialised.
This patch is a code optimisation, trading text bytes for speed. On
pyboard it's an increase of 0.06% in code size for a gain (in pystone
performance) of roughly 6.5%.
The patch optimises load/store/delete of attributes in user defined classes
by not looking up special accessors (@property, __get__, __delete__,
__set__, __setattr__ and __getattr__) if they are guaranteed not to exist in
the class.
Currently, if you do my_obj.foo() then the runtime has to do a few checks
to see if foo is a property or has __get__, and if so delegate the call.
And for stores, something like my_obj.foo = 1 has to first check if foo is
a property or has __set__ defined on it.
Doing all those checks each and every time the attribute is accessed has a
performance penalty. This patch eliminates all those checks for cases when
it's guaranteed that the checks will always fail, ie no attributes are
properties nor have any special accessor methods defined on them.
To make this guarantee it checks all attributes of a user-defined class
when it is first created. If any of the attributes of the user class are
properties or have special accessors, or any of the base classes of the
user class have them, then it sets a flag in the class to indicate that
special accessors must be checked for. Then in the load/store/delete code
it checks this flag to see if it can take the shortcut and optimise the
lookup.
It's an optimisation that's pretty widely applicable because it improves
lookup performance for all methods of user defined classes, and stores of
attributes, at least for those that don't have special accessors. And, it
allows descriptors to be enabled with minimal additional runtime overhead if
they are not used for a particular user class.
There is one restriction on dynamic class creation that has been introduced
by this patch: a user-defined class cannot go from zero special accessors
to one special accessor (or more) after that class has been subclassed. If
the script attempts this an AttributeError is raised (see addition to
tests/misc/non_compliant.py for an example of this case).
The cost in code space bytes for the optimisation in this patch is:
unix x64: +528
unix nanbox: +508
stm32: +192
cc3200: +200
esp8266: +332
esp32: +244
Performance tests that were done:
- on unix x86-64, pystone improved by about 5%
- on pyboard, pystone improved by about 6.5%, from 1683 up to 1794
- on pyboard, bm_chaos (from CPython benchmark suite) improved by about 5%
- on esp32, pystone improved by about 30% (but there are caching effects)
- on esp32, bm_chaos improved by about 11%
This conditional import was only used to get the tests working on the unix
coverage build, which has now switched to use VFS by default so the uos
module alone has the required functionality.
Printing of uPy floats can differ by the floating-point precision on
different architectures (eg 64-bit vs 32-bit x86), so it's not possible to
use printing of floats in some parts of this test. Instead we can just
check for equivalence with what is known to be the correct answer.
Commit e269cabe3e added a check that the
first argument to the to_bytes() method is an integer, and now uPy
follows CPython behaviour and raises a TypeError for this test.
Note: CPython checks the argument types before checking the number of
arguments, but uPy does it the other way around, so they give different
exception messages for this test, but still the same type, a TypeError.
In adcall.py the pyb module may not be imported, so use ADCAll directly.
In dac.py the DAC object now prints more info, so update .exp file.
In spi.py the SPI should be deinitialised upon exit, so the test can run a
second time correctly.
If MICROPY_USE_INTERNAL_ERRNO is disabled, MP_EINVAL is not guaranteed
to have the value 22, so we cannot depend on OSError(22,).
Instead, to support any given port's errno values, without relying
on uerrno, we just check that the args[0] is positive.
This can be used to select the output buffer behaviour of the DAC. The
default values are chosen to retain backwards compatibility with existing
behaviour.
Thanks to @peterhinch for the initial idea to add this feature.
Reading into a bytearray will truncate values to 0xff so the assertions
checking read_timed() would previously always succeed.
Thanks to @peterhinch for finding this problem and providing the solution.
Keeping all the stress related tests in one place makes it easier to
stress-test a given port, and to also not run such tests on ports that
can't handle them.