The new option is MICROPY_ENABLE_EXTERNAL_IMPORT and is enabled by default
so that the default behaviour is the same as before. With it disabled,
import is only supported for built-in modules, not for external files or
frozen modules. This makes it possible to support targets that have no
filesystem of any kind and only have access to pre-supplied built-in
modules implemented natively.
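For example, a port that ships only native built-in modules might add
something like the following to its mpconfigport.h (a hypothetical
fragment):

    // no filesystem and no frozen modules: only built-in (native)
    // modules can be imported
    #define MICROPY_ENABLE_EXTERNAL_IMPORT (0)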
This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
default) which enables a "Python stack" that allows memory to be
allocated and freed in a scoped, or Last-In-First-Out (LIFO), way,
similar to alloca().
A new memory allocation API is introduced along with this Py-stack. It
includes both "local" and "nonlocal" LIFO allocation. Local allocation is
intended to be equivalent to using alloca(), whereby the same function must
free the memory. Nonlocal allocation is where another function may free
the memory, so long as it's still LIFO.
Follow-up patches will convert all uses of alloca() and VLA to the new
scoped allocation API. The old behaviour (using alloca()) will still be
available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
longer required or used.
The benefits of enabling this option are (or will be once subsequent
patches are made to convert alloca()/VLA):
- Toolchains without alloca() can use this feature to obtain correct and
efficient scoped memory allocation (compared to using the heap instead
of alloca(), which is slower).
- Even if alloca() is available, enabling the Py-stack gives slightly more
efficient use of stack space when calling nested Python functions, due to
the way that compilers implement alloca().
- Enabling the Py-stack with the stackless mode allows for even more
efficient stack usage, as well as retaining high performance (because the
heap is no longer used to build and destroy stackless code states).
- With Py-stack and stackless enabled, Python-calling-Python is no longer
recursive in the C mp_execute_bytecode function.
The micropython.pystack_use() function is included to measure usage of the
Python stack.
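As a rough sketch (buffer name and size are hypothetical, and the
mp_pystack_init() helper from py/pystack.h is assumed), a port enabling
this option reserves a buffer and registers it before calling mp_init():

    #include "py/runtime.h"
    #include "py/pystack.h"

    // backing store for the Python stack; the size is a port-specific choice
    static mp_obj_t pystack_buf[1024];

    void port_start_micropython(void) {
        mp_pystack_init(pystack_buf, &pystack_buf[MP_ARRAY_SIZE(pystack_buf)]);
        mp_init();
        // ... run scripts; micropython.pystack_use() then reports usage ...
    }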
Before this patch MP_BINARY_OP_IN had two meanings: coming from bytecode it
meant that the args needed to be swapped, but coming from within the
runtime meant that the args were already in the correct order. This led
to some confusion in the code and comments stating how args were reversed.
It also led to 2 bugs: 1) containment for a subclass of a native type
didn't work; 2) the expression "{True} in True" would illegally succeed and
return True. In both of these cases it was because the args to
MP_BINARY_OP_IN ended up being reversed twice.
To fix these things this patch introduces MP_BINARY_OP_CONTAINS which
corresponds exactly to the __contains__ special method, and this is the
operator that built-in types should implement. MP_BINARY_OP_IN is now only
emitted by the compiler and is converted to MP_BINARY_OP_CONTAINS by
swapping the arguments.
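As an illustration, a user-defined C type now implements containment via
MP_BINARY_OP_CONTAINS in its binary_op slot; the single-item type below
is a hypothetical sketch:

    #include "py/runtime.h"

    typedef struct _mypair_obj_t {
        mp_obj_base_t base;
        mp_obj_t item;
    } mypair_obj_t;

    STATIC mp_obj_t mypair_binary_op(mp_binary_op_t op, mp_obj_t lhs_in, mp_obj_t rhs_in) {
        mypair_obj_t *self = MP_OBJ_TO_PTR(lhs_in);
        if (op == MP_BINARY_OP_CONTAINS) {
            // "x in obj" arrives here with lhs=the container and rhs=x, matching
            // __contains__(self, x); the compiler-emitted MP_BINARY_OP_IN has
            // already been converted by swapping the arguments
            return mp_obj_new_bool(mp_obj_equal(self->item, rhs_in));
        }
        return MP_OBJ_NULL; // other binary ops not supported
    }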
In mp_binary_op, there is no need to explicitly check for type->getiter
being non-null and raising an exception because this is handled exactly by
mp_getiter(). So just call the latter unconditionally.
The uos.dupterm() signature and behaviour are updated to reflect the
latest enhancements in the docs. There is a minor backwards
incompatibility in that it no longer accepts zero arguments.
The dupterm_rx helper function is moved from esp8266 to extmod and
generalised to support multiple dupterm slots.
A port can specify multiple slots by defining the MICROPY_PY_OS_DUPTERM
config macro to an integer, being the number of slots it wants to have;
0 means to disable the dupterm feature altogether.
The unix and esp8266 ports are updated to work with the new interface and
are otherwise unchanged with respect to functionality.
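A port opting in might configure, say, two slots in its mpconfigport.h
(a hypothetical fragment):

    // number of uos.dupterm() slots; 0 disables the dupterm feature
    #define MICROPY_PY_OS_DUPTERM (2)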
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
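For example, a typical extension source file only needs the top-level
header (a minimal sketch; the function name is hypothetical):

    // pulls in mpstate.h, nlr.h, obj.h and runtime0.h
    #include "py/runtime.h"

    STATIC mp_obj_t example_double(mp_obj_t x_in) {
        return mp_obj_new_int(2 * mp_obj_get_int(x_in));
    }
    STATIC MP_DEFINE_CONST_FUN_OBJ_1(example_double_obj, example_double);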
This allows user classes to implement the __abs__ special method, and it
saves code size (104 bytes for x86_64), even though during the refactor
an issue was fixed and a few optimizations were made:
* abs() of the minimum (negative) small int value is calculated properly.
* objint_longlong and objint_mpz avoid allocating a new object if the
argument is already non-negative.
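A user-defined C type can now hook abs() through its unary_op slot; the
fixed-value type below is a hypothetical sketch:

    #include "py/runtime.h"

    typedef struct _mynum_obj_t {
        mp_obj_base_t base;
        mp_int_t value;
    } mynum_obj_t;

    STATIC mp_obj_t mynum_unary_op(mp_unary_op_t op, mp_obj_t self_in) {
        mynum_obj_t *self = MP_OBJ_TO_PTR(self_in);
        if (op == MP_UNARY_OP_ABS) {
            // abs(x) dispatches here (Python classes implement __abs__ instead)
            return mp_obj_new_int(self->value < 0 ? -self->value : self->value);
        }
        return MP_OBJ_NULL; // operation not supported
    }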
If, for class X, X.__add__(Y) doesn't exist (or returns NotImplemented),
try Y.__radd__(X) instead.
This patch could be simpler, but it needs to undo the operand swap and
operation switch in order to produce a non-confusing error message in
case __radd__ doesn't exist.
The unary-op/binary-op enums are already defined, and there are no
arithmetic tricks used with these types, so it makes sense to use the
correct enum type for arguments that take these values. It also reduces
code size quite a bit for nan-boxing builds.
- Changed: ValueError, TypeError, NotImplementedError
- OSError invocations unchanged, because the corresponding utility
function takes ints, not strings like the long form invocation.
- OverflowError, IndexError and RuntimeError etc. not changed for now
until we decide whether to add new utility functions.
This patch changes mp_uint_t to size_t for the len argument of the
following public facing C functions:
mp_obj_tuple_get
mp_obj_list_get
mp_obj_get_array
These functions take a pointer to the len argument (to be filled in by the
function) and callers of these functions should update their code so the
type of len is changed to size_t. For ports that don't use nan-boxing
there should be no change in generated code because the size of the type
remains the same (word sized), and in a lot of cases there won't even be a
compiler warning if the type remains as mp_uint_t.
The reason for this change is to standardise on the use of size_t for
variables that count memory (or memory related) sizes/lengths. It helps
builds that use nan-boxing.
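A caller updated for the new signature looks roughly like this sketch
(the helper name is hypothetical):

    #include "py/runtime.h"

    STATIC mp_obj_t sum_of_tuple(mp_obj_t tuple_in) {
        size_t len; // was mp_uint_t before this change
        mp_obj_t *items;
        mp_obj_tuple_get(tuple_in, &len, &items);
        mp_int_t total = 0;
        for (size_t i = 0; i < len; i++) {
            total += mp_obj_get_int(items[i]);
        }
        return mp_obj_new_int(total);
    }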
In this case, raise an exception without a message.
This allows a few code bytes to be shaved off compared to the currently
used mp_raise_msg(..., "") pattern. (The actual savings depend on the
function code alignment used by a particular platform.)
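A call site then looks like this hedged sketch (the function name is
hypothetical):

    #include "py/runtime.h"

    STATIC void check_non_negative(mp_int_t x) {
        if (x < 0) {
            // NULL message: raises a bare ValueError instead of one carrying ""
            mp_raise_msg(&mp_type_ValueError, NULL);
        }
    }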
Allows the following to be iterated over without allocating on the heap:
- tuple
- list
- string, bytes
- bytearray, array
- dict (not dict.keys, dict.values, dict.items)
- set, frozenset
Allows the following to be called without heap memory:
- all, any, min, max, sum
TODO: still need to allocate stack memory in bytecode for iter_buf.
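From C, the pattern is to place the iterator buffer on the C stack and
pass it to mp_getiter(); a minimal sketch (the function name is
hypothetical):

    #include "py/runtime.h"

    STATIC mp_int_t count_items(mp_obj_t iterable) {
        mp_obj_iter_buf_t iter_buf; // lives on the C stack, no heap needed
        mp_obj_t iter = mp_getiter(iterable, &iter_buf);
        mp_int_t n = 0;
        while (mp_iternext(iter) != MP_OBJ_STOP_ITERATION) {
            n++;
        }
        return n;
    }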
This provides mp_vfs_XXX functions (eg mount, open, listdir) which are
agnostic to the underlying filesystem type, and just require an object with
the relevant filesystem-like methods (eg .mount, .open, .listdir) which can
then be mounted.
These mp_vfs_XXX functions would typically be used by a port to implement
the "uos" module, and mp_vfs_open would be the builtin open function.
This feature is controlled by MICROPY_VFS, disabled by default.
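A port's "uos" module can then be little more than a table pointing at
the function objects declared in extmod/vfs.h; the module below is a
hypothetical minimal sketch (registration with the port's builtin module
list is omitted):

    #include "py/runtime.h"
    #include "extmod/vfs.h"

    STATIC const mp_rom_map_elem_t uos_module_globals_table[] = {
        { MP_ROM_QSTR(MP_QSTR___name__), MP_ROM_QSTR(MP_QSTR_uos) },
        { MP_ROM_QSTR(MP_QSTR_mount), MP_ROM_PTR(&mp_vfs_mount_obj) },
        { MP_ROM_QSTR(MP_QSTR_umount), MP_ROM_PTR(&mp_vfs_umount_obj) },
        { MP_ROM_QSTR(MP_QSTR_listdir), MP_ROM_PTR(&mp_vfs_listdir_obj) },
        // mp_vfs_open is also what would back the builtin open()
        { MP_ROM_QSTR(MP_QSTR_open), MP_ROM_PTR(&mp_vfs_open_obj) },
    };
    STATIC MP_DEFINE_CONST_DICT(uos_module_globals, uos_module_globals_table);

    const mp_obj_module_t uos_module = {
        .base = { &mp_type_module },
        .globals = (mp_obj_dict_t *)&uos_module_globals,
    };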
They are one-line functions and having them inline in mp_init/mp_deinit
eliminates the overhead of a function call, and matches how other state
is initialised in mp_init.
If GeneratorExit is injected as a throw-value then that should lead to
the close() method being called, if it exists. If close() does not exist
then throw() should not be called, and this patch fixes that behaviour.
Defining and initialising mp_kbd_exception is boiler-plate code and so the
core runtime can provide it, instead of each port needing to do it
themselves.
The exception object is placed in the VM state rather than on the heap.
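A port's ctrl-C handler can then simply schedule the shared exception
object; a hedged sketch assuming MICROPY_KBD_EXCEPTION is enabled (the
handler name is hypothetical):

    #include "py/runtime.h"

    void port_on_ctrl_c(void) {
        // clear any stale traceback, then mark the KeyboardInterrupt instance
        // held in the VM state as pending; the VM raises it when it next
        // checks for pending exceptions
        mp_obj_exception_clear_traceback(MP_OBJ_FROM_PTR(&MP_STATE_VM(mp_kbd_exception)));
        MP_STATE_VM(mp_pending_exception) = MP_OBJ_FROM_PTR(&MP_STATE_VM(mp_kbd_exception));
    }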
This includes StopIteration, and is thus important for making
Python-coded iterables work with yield from/await.
Exceptions in Python send() are still not handled and left for future
consideration and optimization.
Builtin functions with a fixed number of arguments (0, 1, 2 or 3) are
quite common. Before this patch the wrapper for such a function cost
3 machine words. After this patch it only takes 2, which can reduce the
code size by quite a bit (and pays off even more, the more functions are
added). It also makes function dispatch slightly more efficient in CPU
usage, and furthermore reduces stack usage for these cases. On x86 and
Thumb archs the dispatch functions are now tail-call optimised by the
compiler.
The bare-arm port's code size increases by 76 bytes, but stmhal's drops
by 904 bytes. Stack usage by these builtin functions is decreased by 48
bytes on Thumb2 archs.
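For reference, such a fixed-arity builtin and its wrapper object (the
object this patch shrinks) are defined like this; the names are
hypothetical:

    #include "py/runtime.h"

    STATIC mp_obj_t example_add(mp_obj_t a_in, mp_obj_t b_in) {
        return mp_obj_new_int(mp_obj_get_int(a_in) + mp_obj_get_int(b_in));
    }
    // the constant object created here now takes 2 machine words instead of 3
    STATIC MP_DEFINE_CONST_FUN_OBJ_2(example_add_obj, example_add);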
Introduce mp_raise_msg(), mp_raise_ValueError(), mp_raise_TypeError()
instead of previous pattern nlr_raise(mp_obj_new_exception_msg(...)).
This saves a few bytes on each call site, of which there are many.
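A typical call site changes roughly as follows (a sketch using the API
as introduced here, taking a plain C string):

    #include "py/runtime.h"

    STATIC void require_nonzero(mp_int_t x) {
        if (x == 0) {
            // old pattern:
            // nlr_raise(mp_obj_new_exception_msg(&mp_type_ValueError, "arg is zero"));
            // new, shorter call site:
            mp_raise_ValueError("arg is zero");
        }
    }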
Passing an mp_uint_t to a %d printf format is incorrect for builds where
mp_uint_t is larger than word size (eg a nanboxing build). This patch
adds some simple casting to int in these cases.
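A hedged example of the pattern being fixed (the function name is
hypothetical):

    #include "py/runtime.h"
    #include "py/mpprint.h"

    void debug_print_len(mp_uint_t len) {
        // mp_uint_t may be wider than int on nan-boxing builds, so cast explicitly
        mp_printf(&mp_plat_print, "len=%d\n", (int)len);
    }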
Calling it from mp_init() is too late for some ports (like Unix), and
leads to an incomplete stack frame being captured, with ensuing GC issues.
So now each port should call mp_stack_ctrl_init() on its own, as soon as
possible after startup, taking special care that it really is called
before any stack variables are allocated (because if such a variable
holding a pointer is missed, it may lead to over-collecting; the typical
symptom is segfaulting).
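A sketch of a port's startup sequence under this rule (heap size and
stack limit are hypothetical values):

    #include "py/gc.h"
    #include "py/runtime.h"
    #include "py/stackctrl.h"

    static char heap[16 * 1024]; // assuming MICROPY_ENABLE_GC

    int main(void) {
        // record the stack top first, before any locals that may hold heap pointers
        mp_stack_ctrl_init();
        mp_stack_set_limit(8 * 1024); // assuming MICROPY_STACK_CHECK
        gc_init(heap, heap + sizeof(heap));
        mp_init();
        // ... run the REPL or scripts ...
        mp_deinit();
        return 0;
    }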
This patch changes the type signature of the .make_new and .call object
method slots to use size_t for n_args and n_kw (they were mp_uint_t).
This makes the code more efficient when mp_uint_t is larger than a
machine word, and doesn't affect ports where size_t and mp_uint_t have
the same size.
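A make_new implementation under the new signature looks like this sketch
(the type and names are hypothetical):

    #include "py/runtime.h"

    typedef struct _counter_obj_t {
        mp_obj_base_t base;
        mp_int_t start;
    } counter_obj_t;

    STATIC mp_obj_t counter_make_new(const mp_obj_type_t *type, size_t n_args,
        size_t n_kw, const mp_obj_t *args) {
        // n_args and n_kw are now size_t (previously mp_uint_t)
        mp_arg_check_num(n_args, n_kw, 0, 1, false);
        counter_obj_t *o = m_new_obj(counter_obj_t);
        o->base.type = type;
        o->start = (n_args == 0) ? 0 : mp_obj_get_int(args[0]);
        return MP_OBJ_FROM_PTR(o);
    }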
When looking up and extracting an attribute of an instance, some
attributes must bind self as the first argument to make a working method
call. Previous to this patch, any attribute that was callable had self
bound as the first argument. But the Python specs require the check to be
more restrictive: only functions, closures and generators should have
self bound as the first argument.
Addresses issue #1675.
MICROPY_ENABLE_COMPILER can be used to enable/disable the entire compiler,
which is useful when only loading of pre-compiled bytecode is supported.
It is enabled by default.
MICROPY_PY_BUILTINS_EVAL_EXEC controls support of eval and exec builtin
functions. By default they are only included if MICROPY_ENABLE_COMPILER
is enabled.
Disabling both options saves about 40k of code size on 32-bit x86.
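A hypothetical mpconfigport.h fragment for such a target:

    // only pre-compiled bytecode is loaded; no runtime compiler
    #define MICROPY_ENABLE_COMPILER (0)
    // eval/exec follow MICROPY_ENABLE_COMPILER by default, but can be set explicitly
    #define MICROPY_PY_BUILTINS_EVAL_EXEC (0)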
Fixes #1684 and makes "not" match Python semantics. The code is also
simplified (the separate MP_BC_NOT opcode is removed) and the patch saves
68 bytes for bare-arm/ and 52 bytes for minimal/.
Previously "not x" was implemented as !mp_unary_op(x, MP_UNARY_OP_BOOL),
so any given object only needs to implement MP_UNARY_OP_BOOL (and the VM
had a special opcode to do the ! bit).
With this patch "not x" is implemented as mp_unary_op(x, MP_UNARY_OP_NOT),
but this operation is caught at the start of mp_unary_op and dispatched as
!mp_obj_is_true(x). mp_obj_is_true has special logic to test for
truthness, and is the correct way to handle the not operation.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
With this patch parse nodes are allocated sequentially in chunks. This
reduces fragmentation of the heap and prevents waste at the end of
individually allocated parse nodes.
Saves roughly 20% of RAM during parse stage.
Previous to this patch each time a bytes object was referenced a new
instance (with the same data) was created. With this patch a single
bytes object is created in the compiler and is loaded directly at execute
time as a true constant (similar to loading bignum and float objects).
This saves on allocating RAM and means that bytes objects can now be
used when the memory manager is locked (eg in interrupts).
The MP_BC_LOAD_CONST_BYTES bytecode was removed as part of this.
Generated bytecode is slightly larger due to storing a pointer to the
bytes object instead of the qstr identifier.
Code size is reduced by about 60 bytes on Thumb2 architectures.
Previous to this patch a call such as list.append(1, 2) would lead to a
seg fault. This is because list.append is a builtin method and the first
argument to such methods is always assumed to have the correct type.
Now, when a builtin method is extracted like this it is wrapped in a
checker object which checks the type of the first argument before
calling the builtin function.
This feature is controlled by MICROPY_BUILTIN_METHOD_CHECK_SELF_ARG and
is enabled by default.
See issue #1216.
Hashing is now done using mp_unary_op function with MP_UNARY_OP_HASH as
the operator argument. Hashing for int, str and bytes still goes via a
fast path in mp_unary_op since they are the most common objects which
need to be hashed.
This led to quite a bit of code cleanup, and should be more efficient
if anything. It saves 176 bytes code space on Thumb2, and 360 bytes on
x86.
The only loss is that the error message "unhashable type" is now the
more generic "unsupported type for __hash__".
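From C, obtaining a hash now goes through the same dispatcher; a minimal
sketch (the helper name is hypothetical):

    #include "py/runtime.h"

    STATIC mp_int_t hash_of(mp_obj_t obj) {
        // raises the "unsupported type for __hash__" TypeError if obj is unhashable
        return MP_OBJ_SMALL_INT_VALUE(mp_unary_op(MP_UNARY_OP_HASH, obj));
    }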
Exceptions in .close() should be ignored (dumped to sys.stderr, not
propagated), but in uPy they are propagated. A fix would require
nlr-wrapping the .close() call, which is expensive. But on the other
hand, .close() is not called often, so maybe that's not too bad
(although if it is finally called and that causes a stack overflow,
there's nothing good in that). And yet on another hand, .close() can be
implemented to catch exceptions on its side, and that should be the
right choice.
This simplifies the API for objects and reduces code size (by around 400
bytes on Thumb2, and around 2k on x86). The performance impact was
measured with the Pystone benchmark, and any change was barely noticeable.
Despite initial guess, this code factoring does not hamper performance.
In fact it seems to improve speed by a little: running pystone(1.2) on
pyboard (which gives a very stable result) this patch takes pystones
from 1729.51 up to 1742.16. Also, pystones on x64 increase by around
the same proportion (but it's much noisier).
Taking a look at the generated machine code, stack usage with this patch
is unchanged, and the call is tail-call optimised with all arguments in
registers. Code size decreases by about 50 bytes on Thumb2 archs.
"Base" should rather refer to "base type"."Base object for attribute
lookup" should rather be just "object".
Also, a case of common subexpression elimination.
Previous to this patch, a big-int, float or imag constant was interned
(made into a qstr) and then parsed at runtime to create an object each
time it was needed. This is wasteful in RAM and not efficient. Now,
these constants are parsed straight away in the parser and turned into
objects. This allows constants with large numbers of digits (so
addresses issue #1103) and takes us a step closer to #722.
To enable parsing constants more efficiently, mp_parse should be allowed
to raise an exception, and mp_compile can already raise a MemoryError.
So these functions need to be protected by an nlr push/pop block.
This patch adds that feature in all places. This makes it possible to
simplify how mp_parse and mp_compile are called: they now raise an
exception if they encounter an error, so explicit checking is no longer
needed.
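An embedder's call sequence then looks roughly like this (a sketch using
the signatures as they were at this point, with MP_EMIT_OPT_NONE and the
"<stdin>" qstr assumed to be available in the build):

    #include <string.h>
    #include "py/compile.h"
    #include "py/runtime.h"

    void exec_str(const char *src) {
        nlr_buf_t nlr;
        if (nlr_push(&nlr) == 0) {
            mp_lexer_t *lex = mp_lexer_new_from_str_len(MP_QSTR__lt_stdin_gt_, src, strlen(src), 0);
            qstr source_name = lex->source_name; // the lexer is freed by mp_parse
            mp_parse_tree_t parse_tree = mp_parse(lex, MP_PARSE_FILE_INPUT);
            mp_obj_t module_fun = mp_compile(&parse_tree, source_name, MP_EMIT_OPT_NONE, false);
            mp_call_function_0(module_fun);
            nlr_pop();
        } else {
            // parse, compile or execution raised an exception
            mp_obj_print_exception(&mp_plat_print, MP_OBJ_FROM_PTR(nlr.ret_val));
        }
    }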
Eg, "() + 1" now tells you that __add__ is not supported for tuple and
int types (before it just said the generic "binary operator"). We reuse
the table of names for slot lookup because it would be a waste of code
space to store the pretty name for each operator.
This patch consolidates all global variables in py/ core into one place,
in a global structure. Root pointers are all located together to make
GC tracing easier and more efficient.
This patch adds a configuration option (MICROPY_CAN_OVERRIDE_BUILTINS)
which, when enabled, allows all names within the builtins module to be
overridden. A builtins override dict is created the first time the user
assigns to a name in the builtins module, and then that dict is searched
first on subsequent lookups. Note that this implementation doesn't
allow names to be deleted.
This patch also does some refactoring of builtins code, creating the
modbuiltins.c file.
Addresses issue #959.
mp_lexer_t type is exposed, mp_token_t type is removed, and simple lexer
functions (like checking current token kind) are now inlined.
This saves 784 bytes ROM on 32-bit unix, 348 bytes on stmhal, and 460
bytes on bare-arm. It also saves a tiny bit of RAM since mp_lexer_t
is a bit smaller, and it will run a bit more efficiently.
Going from MICROPY_ERROR_REPORTING_NORMAL to
MICROPY_ERROR_REPORTING_TERSE now saves 2020 bytes ROM for ARM Thumb2,
and 2200 bytes ROM for 32-bit x86.
This is about a 2.5% code size reduction for bare-arm.
This allows KeyboardInterrupt to be implemented on unix, and a much
safer ctrl-C in the stmhal port. The first ctrl-C is a soft one, with
the hope that the VM will notice it; the second ctrl-C is a hard one
that kills anything (for both unix and stmhal).
A pending exception needs to be checked for in the VM only on jump
opcodes; other opcodes can't produce an infinite loop (infinite
recursion is caught by the stack check).