Despite the initial expectation, this code factoring does not hamper
performance. In fact it seems to improve speed slightly: running
pystone(1.2) on the pyboard (which gives a very stable result), this
patch takes pystones from 1729.51 up to 1742.16. Pystones on x64
increase by roughly the same proportion (though that measurement is
much noisier).
Looking at the generated machine code, stack usage with this patch is
unchanged, and the call is tail-call optimised with all arguments
passed in registers. Code size decreases by about 50 bytes on Thumb2
archs.
Eg, "() + 1" now tells you that __add__ is not supported for tuple and
int types (before it just said the generic "binary operator"). We reuse
the table of names for slot lookup because it would be a waste of code
space to store the pretty name for each operator.
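A minimal self-contained sketch of the idea (the names below are
illustrative stand-ins, not MicroPython's actual identifiers): one
table of special-method names serves both slot lookup and error
reporting.

    #include <stdio.h>

    /* Illustrative sketch, not MicroPython's actual API: a single table
     * names the special method for each binary op. */
    enum { OP_ADD, OP_SUB, OP_NUM };

    static const char *const op_method_name[OP_NUM] = {
        [OP_ADD] = "__add__",
        [OP_SUB] = "__sub__",
    };

    static void unsupported_op_error(int op, const char *lhs, const char *rhs) {
        printf("unsupported types for %s: '%s', '%s'\n",
               op_method_name[op], lhs, rhs);
    }

    int main(void) {
        unsupported_op_error(OP_ADD, "tuple", "int"); /* models "() + 1" */
        return 0;
    }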
The previous patch, c38dc3ccc7, allowed any object to be compared with
any other, using pointer comparison as a fallback. As such, existing
code that checked for this case is no longer needed.
This is a simple optimisation inspired by JITing technology: we cache
in the bytecode (using 1 byte) the offset of the last successful lookup
in a map. This allows us, the next time round, to check that location
in the hash table (mp_map_t) for the desired entry and, if it's there,
use it straight away; otherwise we fall back to a normal map lookup.
This works for the LOAD_NAME, LOAD_GLOBAL, LOAD_ATTR and STORE_ATTR
opcodes. On a few tests it gives a >90% cache hit rate and greatly
improves the speed of the code.
Disabled by default. Enabled for unix and stmhal ports.
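A rough sketch of the mechanism (the helper below is hypothetical and
simplified; mp_map_t, mp_map_elem_t and mp_map_lookup() are
MicroPython's real map types/functions, and the necessary headers are
assumed):

    /* Hypothetical helper illustrating the cached-lookup idea. */
    static mp_map_elem_t *cached_map_lookup(mp_map_t *map, mp_obj_t key, byte *cache) {
        size_t offset = *cache;
        if (offset < map->alloc && map->table[offset].key == key) {
            /* cache hit: the entry is where we last found it */
            return &map->table[offset];
        }
        /* cache miss: do a normal lookup, then remember the offset */
        mp_map_elem_t *elem = mp_map_lookup(map, key, MP_MAP_LOOKUP);
        if (elem != NULL && (size_t)(elem - map->table) < 256) {
            *cache = (byte)(elem - map->table); /* must fit in the 1-byte cache */
        }
        return elem;
    }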
Going from MICROPY_ERROR_REPORTING_NORMAL to
MICROPY_ERROR_REPORTING_TERSE now saves 2020 bytes of ROM for ARM
Thumb2, and 2200 bytes of ROM for 32-bit x86.
This is about a 2.5% code size reduction for bare-arm.
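For reference, the level is selected via the MICROPY_ERROR_REPORTING
config option, typically in a port's mpconfigport.h (the override shown
is an example, not a change made by this patch):

    /* Example: select terse error reporting in a port's mpconfigport.h. */
    #define MICROPY_ERROR_REPORTING (MICROPY_ERROR_REPORTING_TERSE)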
Uninitialised struct members get a default value of 0/false, so this is
not strictly needed. But it actually decreases code size, because when
all members are initialised the compiler doesn't need to insert a call
to memset to clear everything. In other words, setting one extra member
to 0 uses less code than calling memset.
ROM savings in bytes: 32-bit unix: 100; bare-arm: 44; stmhal: 52.
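A small illustrative example of the effect (hypothetical struct;
whether the compiler actually emits a memset call depends on the
compiler and the struct's size):

    typedef struct _example_t {
        int a;
        int b;
        int c;
    } example_t;

    /* .c omitted: the compiler must zero the remainder, possibly via memset */
    example_t e1 = { .a = 1, .b = 2 };

    /* all members explicit: the compiler can emit plain stores */
    example_t e2 = { .a = 1, .b = 2, .c = 0 };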
Because a Thumb function pointer has its LSB set, pointers to dynamic
functions in RAM (e.g. native, viper or asm functions) were not being
traced by the GC. This patch is a comprehensive fix for that.
Addresses issue #820.
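The essence of the problem can be sketched as follows (hypothetical
snippet; the real fix lives in the GC's pointer-validation logic):

    #include <stdint.h>

    /* On Thumb, a function pointer has bit 0 set to indicate Thumb mode,
     * so the raw value fails a naive "does this point into the heap?"
     * test. Clearing the LSB first lets the GC recognise and trace
     * function code allocated in RAM. */
    static void *strip_thumb_bit(void *fun_ptr) {
        return (void *)((uintptr_t)fun_ptr & ~(uintptr_t)1);
    }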
The schedule is now: for native types, we call the ->make_new() C-level
method, which should perform the actions of both __new__ and __init__
(note that this is not strictly compliant, but it is efficient); for
user types, __new__ and __init__ are called as expected.
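A self-contained model of this dispatch (the types and names below are
simplified stand-ins, not MicroPython's actual declarations):

    #include <stdio.h>

    typedef struct _type_model_t {
        const char *name;
        /* non-NULL for native types: does the work of __new__ + __init__ */
        void *(*make_new)(const struct _type_model_t *type);
    } type_model_t;

    static void *instantiate(const type_model_t *type) {
        if (type->make_new != NULL) {
            /* native type: one C-level call, efficient but not compliant */
            return type->make_new(type);
        }
        /* user type: __new__ then __init__ would be looked up and called */
        printf("calling %s.__new__ then %s.__init__\n", type->name, type->name);
        return NULL;
    }

    int main(void) {
        type_model_t user_type = { "MyClass", NULL };
        instantiate(&user_type);
        return 0;
    }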
Also, make sure we convert a scalar attribute value to a bound pair
right in the mp_obj_class_lookup() function, which avoids converting it
again and again in its callers.
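The bound-pair conversion can be sketched like this (hypothetical
helper; MicroPython's real convention returns the method and its self
object through a two-element destination array):

    /* Hypothetical sketch: produce the (method, self) pair once, at
     * lookup time, so each caller doesn't repeat the conversion. */
    static void bind_attribute(void *attr, void *self, void *dest[2]) {
        dest[0] = attr; /* the callable found on the class */
        dest[1] = self; /* the instance to bind it to */
    }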