Commit Graph

3132 Commits

Author SHA1 Message Date
Damien George 6c3faf6c17 py/objdeque: Allow to compile without warnings by disabling deque_clear. 2018-02-21 22:52:58 +11:00
Paul Sokolovsky 970eedce8f py/objdeque: Implement ucollections.deque type with fixed size.
So far, implements just append() and popleft() methods, required for
a normal queue. The constructor doesn't accept an arbitrary sequence to
initialize from (an empty deque is always created), so an empty tuple
must be passed as such. Only fixed-size deques are supported, so 2nd
argument (size) is required.

There's also an extension to CPython - if True is passed as the 3rd argument,
append(), instead of silently overwriting the oldest item on queue
overflow, will throw IndexError. This behavior is desired in many
cases, where queues should store information reliably, instead of
silently losing some items.
2018-02-21 22:39:25 +11:00
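A minimal usage sketch of the behaviour described above (an illustrative session, assuming a build with ucollections.deque enabled):

    from ucollections import deque

    d = deque((), 2)                  # empty tuple plus the required fixed size
    d.append(1)
    d.append(2)
    d.append(3)                       # overflow: silently overwrites the oldest item
    print(d.popleft(), d.popleft())   # 2 3

    strict = deque((), 2, True)       # extension: raise on overflow instead
    strict.append(1)
    strict.append(2)
    try:
        strict.append(3)
    except IndexError:
        print("queue full")
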
Damien George fe3e17b026 py/objint: Use MP_OBJ_IS_STR_OR_BYTES macro instead of 2 separate ones. 2018-02-21 00:20:46 +11:00
Damien George 8769049e93 py/objstr: Remove unnecessary check for positive splits variable.
At this point in the code the variable "splits" is guaranteed to be
positive due to the check for "splits == 0" above it.
2018-02-20 19:19:02 +11:00
Damien George 7e2a48858c py/modmicropython: Allow to have stack_use() func without mem_info().
The micropython.stack_use() function is useful to query the current C stack
usage, and its inclusion in the micropython module doesn't need to be tied
to the inclusion of mem_info()/qstr_info() because it doesn't rely on any
of the code from these functions.  So this patch introduces the config
option MICROPY_PY_MICROPYTHON_STACK_USE which can be used to independently
control the inclusion of stack_use().  By default it is enabled if
MICROPY_PY_MICROPYTHON_MEM_INFO is enabled (thus not changing any of the
existing ports).
2018-02-20 18:30:22 +11:00
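A short usage sketch (assumes a port built with MICROPY_PY_MICROPYTHON_STACK_USE, or simply MICROPY_PY_MICROPYTHON_MEM_INFO, enabled):

    import micropython

    def f(n):
        # stack_use() returns the current C stack usage in bytes, so the
        # value grows as the Python call chain gets deeper
        print(n, micropython.stack_use())
        if n:
            f(n - 1)

    f(3)
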
Damien George 209936880d py/builtinimport: Add compile-time option to disable external imports.
The new option is MICROPY_ENABLE_EXTERNAL_IMPORT and is enabled by default
so that the default behaviour is the same as before.  With it disabled,
import is only supported for built-in modules, not for external files or
frozen modules.  This makes it possible to support targets that have no filesystem of
any kind and that only have access to pre-supplied built-in modules
implemented natively.
2018-02-20 18:00:44 +11:00
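A rough illustration of the effect of disabling the option (hypothetical; the module name below is made up and MICROPY_ENABLE_EXTERNAL_IMPORT is assumed to be 0):

    import micropython               # built-in modules still import normally

    try:
        import my_board_driver       # hypothetical module on a filesystem
    except ImportError:
        # with external imports disabled there is no filesystem or frozen
        # module lookup, so only built-in modules can be imported
        print("external imports are disabled")
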
Damien George 6e7819ee2e py/objmodule: Factor common code for calling __init__ on builtin module. 2018-02-20 17:56:58 +11:00
Damien George 4e469085c1 py/objstr: Protect against creating bytes(n) with n negative.
Prior to this patch uPy (on a 32-bit arch) would have severe issues when
calling bytes(-1): such a call would call vstr_init_len(vstr, -1) which
would then +1 on the len and call vstr_init(vstr, 0), which would then
round this up and allocate a small amount of memory for the vstr.  The
bytes constructor would then attempt to zero out all this memory, thinking
it had allocated 2^32-1 bytes.
2018-02-19 16:25:30 +11:00
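A small check of the guarded behaviour (the exact exception type is an assumption here, following CPython's ValueError for a negative count):

    try:
        bytes(-1)            # must not allocate a tiny vstr and then zero ~2^32 bytes
    except ValueError:       # assumed, matching CPython's "negative count" error
        print("negative length rejected")
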
Damien George 165aab12a3 py/repl: Generalise REPL autocomplete to use qstr probing.
This patch changes the way REPL autocomplete finds matches.  It now probes
the target object for all qstrs via mp_load_method_maybe to look for a
match with the given input string.  Similar to how the builtin dir()
function works, this new algorithm now finds all methods and attributes of
instances of user-defined classes, including attributes of their parent
classes.  This helps a lot at the REPL prompt for user discovery and for
autocompleting names even for derived classes.

The downside is that this new algorithm is slower than the previous one,
and in particular will be slower the more qstrs there are in the system.
But because REPL autocomplete is primarily used in an interactive way it is
not that important to make it fast, as long as it is "fast enough" compared
to human reaction.

On a slow microcontroller (CPU running at 16MHz) the autocomplete time for
a list of 35 names in the outer namespace (pressing tab at a bare prompt)
takes about 160ms with this algorithm, compared to about 40ms for the
previous implementation (this time includes the actual printing of the
names as well).  This time of 160ms is very reasonable especially given the
new functionality of listing all the names.

This patch also decreases code size by:

   bare-arm:    +0
minimal x86:  -128
   unix x64:  -128
unix nanbox:  -224
      stm32:   -88
     cc3200:   -80
    esp8266:   -92
      esp32:   -84
2018-02-19 16:12:44 +11:00
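The probing idea can be sketched at the Python level (a conceptual toy, not the C implementation, which walks the interned qstr pool via mp_load_method_maybe; candidate_names below is a stand-in for that pool):

    def complete(obj, prefix, candidate_names):
        # probe each candidate against the object, analogous to calling
        # mp_load_method_maybe for every qstr and keeping the hits
        return sorted(n for n in candidate_names
                      if n.startswith(prefix) and hasattr(obj, n))

    class Base:
        def base_method(self): pass

    class Derived(Base):
        def derived_method(self): pass

    print(complete(Derived(), "b", ["base_method", "bar", "derived_method"]))
    # ['base_method'] -- inherited attributes are found as well
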
Damien George 98647e83c7 py/modbuiltins: Simplify and generalise dir() by probing qstrs.
This patch improves the builtin dir() function by probing the target object
with all possible qstrs via mp_load_method_maybe.  This is very simple (in
terms of implementation), doesn't require recursion, and makes it possible to list all
methods of user-defined classes (without duplicates) even if they have
multiple inheritance with a common parent.  The downside is that it can be
slow because it has to iterate through all the qstrs in the system, but
the "dir()" function is anyway mostly used for testing frameworks and user
introspection of types, so speed is not considered a priority.

In addition to providing a more complete implementation of dir(), this
patch is simpler than the previous implementation and saves some code
space:

   bare-arm:   -80
minimal x86:   -80
   unix x64:   -56
unix nanbox:   -48
      stm32:   -80
     cc3200:   -80
    esp8266:  -104
      esp32:   -64
2018-02-19 16:12:44 +11:00
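The multiple-inheritance claim can be checked directly with a small example (ordering of dir() output may differ between ports):

    class A:
        def from_a(self): pass

    class B(A):
        def from_b(self): pass

    class C(B, A):      # A is reachable via two paths
        pass

    names = dir(C())
    print("from_a" in names, "from_b" in names)            # True True
    print(names.count("from_a"), names.count("from_b"))    # 1 1 -- no duplicates
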
Damien George a8775aaeb0 py/qstr: Add QSTR_TOTAL() macro to get number of qstrs. 2018-02-19 16:12:44 +11:00
Damien George 2a0cbc0d38 py/gc: Update comment now that gc_drain_stack is called gc_mark_subtree. 2018-02-19 16:08:20 +11:00
Ayke van Laethem 736faef223 py/gc: Make GC stack pointer a local variable.
This saves a bit in code size, and saves some precious .bss RAM:

                 .text  .bss
minimal CROSS=1: -28    -4
unix (64-bit):   -64    -8
2018-02-19 16:05:46 +11:00
Ayke van Laethem 5c9e5618e0 py/gc: Rename gc_drain_stack to gc_mark_subtree and pass it first block.
This saves a bit in code size:

minimal CROSS=1: -44
unix:            -96
2018-02-19 16:00:59 +11:00
Ayke van Laethem ea7cf2b738 py/gc: Reduce code size by specialising VERIFY_MARK_AND_PUSH macro.
This macro is written out explicitly in the two locations where it is used
and then the code is optimised, opening possibilities for further
optimisations and reducing code size:

unix:            -48
minimal CROSS=1: -32
stm32:           -32
2018-02-19 15:58:49 +11:00
Mike Wadsten a3e01d3642 py/objdict: Disallow possible modifications to fixed dicts. 2018-02-18 21:51:04 -06:00
Damien George 7b2a9b059a py/pystack: Use "pystack exhausted" as error msg for out of pystack mem.
Using the message "maximum recursion depth exceeded" for when the pystack
runs out of memory can be misleading because the pystack can run out for
reasons other than deep recursion (although in most cases pystack
exhaustion is probably indirectly related to deep recursion).  And it's
important to give the user more precise feedback as to the reason for the
error: if they know precisely that the pystack was exhausted then they have
a chance to increase the amount of memory available to the pystack (as
opposed to not knowing if it was the C stack or pystack that ran out).

Also, C stack exhaustion is more serious than pystack exhaustion because it
could have been that the C stack overflowed and overwrote/corrupted some
data and so the system must be restarted.  The pystack can never corrupt
data in this way so pystack exhaustion does not require a system restart.
Knowing the difference between these two cases is therefore important.

The actual exception type for pystack exhaustion remains as RuntimeError so
that programmatically it behaves the same as a C stack exhaustion.
2018-02-19 00:26:14 +11:00
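A sketch of how the two messages can now be told apart (assumes a build with MICROPY_ENABLE_PYSTACK enabled; the exception type is RuntimeError in both cases):

    def recurse():
        return recurse()

    try:
        recurse()
    except RuntimeError as exc:
        # "pystack exhausted" means the Python value stack ran out, which can
        # be fixed by configuring more pystack memory; "maximum recursion
        # depth exceeded" still indicates the C stack limit was reached
        print(exc)
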
Ayke van Laethem 5591bd237a py/nlrthumb: Do not mark nlr_push as not returning anything.
By adding __builtin_unreachable() at the end of nlr_push, we're
essentially telling the compiler that this function will never return.
When GCC LTO is in use, this means that any time nlr_push() is called
(which is often), the compiler thinks this function will never return
and thus eliminates all code following the call.

Note: I've added a 'return 0' for older GCC versions like 4.6 which
complain about not returning anything (which doesn't make sense in a
naked function). Newer GCC versions (tested 4.8, 5.4 and some others)
don't complain about this.
2018-02-18 01:35:27 +01:00
Damien George 73d1d20b46 py/objexcept: Remove long-obsolete mp_const_MemoryError_obj.
This constant exception instance was once used by m_malloc_fail() to raise
a MemoryError without allocating memory, but it was made obsolete long ago
by 3556e45711.  The functionality is now
replaced by the use of mp_emergency_exception_obj which lives in the global
uPy state, and which can handle any exception type, not just MemoryError.
2018-02-15 16:50:02 +11:00
Damien George d77da83d55 py/objrange: Implement (in)equality comparison between range objects.
This feature is not often used, so it is guarded by the config option
MICROPY_PY_BUILTINS_RANGE_BINOP which is disabled by default.  With this
option disabled MicroPython will always return false when comparing two
range objects for equality (unless they are exactly the same object
instance).  This does not match CPython so if (in)equality between range
objects is needed then this option should be enabled.

Enabling this option costs between 100 and 200 bytes of code space
depending on the machine architecture.
2018-02-14 23:17:06 +11:00
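A quick sketch of the difference (the first comparison is True only with MICROPY_PY_BUILTINS_RANGE_BINOP enabled, or on CPython):

    r = range(0, 3)
    print(range(0, 3) == range(0, 3))   # True with the option enabled, else False
    print(r == r)                       # True either way: same object instance
    print(range(0, 3) == range(0, 4))   # False
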
Damien George 5604b710c2 py/emitglue: When assigning bytecode only pass bytecode len if needed.
Most embedded targets will have this bit of the code disabled, saving a
small amount of code space.
2018-02-14 18:41:17 +11:00
Damien George e98ff40604 py/modbuiltins: Simplify casts from char to byte ptr in builtin ord. 2018-02-14 18:27:14 +11:00
Damien George 19aee9438a py/unicode: Clean up utf8 funcs and provide non-utf8 inline versions.
This patch provides inline versions of the utf8 helper functions for the
case when unicode is disabled (MICROPY_PY_BUILTINS_STR_UNICODE set to 0).
This saves code size.

The unichar_charlen function is also renamed to utf8_charlen to match the
other utf8 helper functions, and the signature of this function is adjusted
for consistency (const char* -> const byte*, mp_uint_t -> size_t).
2018-02-14 18:19:22 +11:00
Damien George bbb08431f3 py/objfloat: Fix case of raising 0 to -infinity.
It was raising an exception but it should return infinity.
2018-02-08 14:35:43 +11:00
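The corrected behaviour, matching CPython:

    print(0.0 ** float("-inf"))   # inf, rather than raising an exception
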
Damien George b75cb8392b py/parsenum: Fix parsing of floats that are close to subnormal.
Prior to this patch, a float literal that was close to subnormal would
have a loss of precision when parsed.  The worst case was something like
float('10000000000000000000e-326') which returned 0.0.
2018-02-08 14:02:50 +11:00
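The worst case quoted above, as a quick check:

    print(float('10000000000000000000e-326'))   # 1e-307, previously parsed as 0.0
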
Damien George 0c650d4276 py/vm: Simplify stack sentinel values for unwind return and jump.
This patch simplifies how sentinel values are stored on the stack when
doing an unwind return or jump.  Instead of storing two values on the stack
for an unwind jump it now stores only one: a negative small integer means
unwind-return and a non-negative small integer means unwind-jump with the
value being the number of exceptions to unwind.  The savings in code size
are:

   bare-arm:   -56
minimal x86:   -68
   unix x64:   -80
unix nanbox:    -4
      stm32:   -56
     cc3200:   -64
    esp8266:   -76
      esp32:  -156
2018-02-08 13:30:33 +11:00
Damien George 771dfb0826 py/modbuiltins: For builtin_chr, use uint8_t instead of char for array.
The array should be of type unsigned byte because that is the type of the
values being stored.  And changing to uint8_t helps to prevent warnings
from some static analysers.
2018-02-07 16:13:02 +11:00
Damien George b45c8c17f0 py/objtype: Check and prevent delete/store on a fixed locals map.
Note that the check for elem!=NULL is removed for the
MP_MAP_LOOKUP_ADD_IF_NOT_FOUND case because mp_map_lookup will always
return non-NULL for such a case.
2018-02-07 15:44:29 +11:00
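A sketch of what is now rejected (the exact exception type is an assumption; CPython raises TypeError for built-in types, while a fixed locals map here is expected to raise AttributeError):

    try:
        int.spam = 1        # int's locals dict is fixed (typically in ROM)
    except (AttributeError, TypeError):
        print("cannot store on a fixed class dict")
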
Damien George 253f2bd7be py/compile: Combine compiler-opt of 2 and 3 tuple-to-tuple assignment.
This patch combines the compiler optimisation code for double and triple
tuple-to-tuple assignment, taking it from two separate if-blocks to one
combined if-block.  This can be done because the code for both of these
optimisations has a lot in common.  Combining them together reduces code
size for ports that have the triple-tuple optimisation enabled (and doesn't
change code size for ports that have it disabled).
2018-02-04 13:35:21 +11:00
stijn 42c4dd09a1 py/nlr: Fix missing trailing characters in comments in nlr.c 2017-12-29 22:24:53 +11:00
stijn b184b6ae53 py/nlr: Fix nlr functions for 64bit ports built with gcc on Windows
The number of registers used should be 10, not 12, to match the assembly
code in nlrx64.c. With this change the 64bit mingw builds don't need to
use the setjmp implementation, and this fixes miscellaneous crashes and
assertion failures as reported in #1751 for instance.

To avoid mistakes in the future where something gcc-related for Windows
only gets fixed for one particular compiler/environment combination,
make use of a MICROPY_NLR_OS_WINDOWS macro.

To make sure everything nlr-related is now ok when built with gcc this
has been verified with:
- unix port built with gcc on Cygwin (i686-pc-cygwin-gcc and
  x86_64-pc-cygwin-gcc, version 6.4.0)
- windows port built with mingw-w64's gcc from Cygwin
  (i686-w64-mingw32-gcc and x86_64-w64-mingw32-gcc, version 6.4.0)
  and MSYS2 (like the ones on Cygwin but version 7.2.0)
2017-12-29 22:24:46 +11:00
Damien George e784274430 py/mpz: In mpz_as_str_inpl, convert always-false checks to assertions.
There are two checks that are always false, so they can be converted to (negated)
assertions to save code space and execution time.  They are:

1. The check of the str parameter, which is required to be non-NULL as per
   the original comment that it has enough space in it as calculated by
   mp_int_format_size.  And for all uses of this function str is indeed
   non-NULL.

2. The check of the base parameter, which is already required to be between
   2 and 16 (inclusive) via the assertion in mp_int_format_size.
2017-12-29 14:17:55 +11:00
Damien George 9766fddcdc py/mpz: Simplify handling of borrow and quo adjustment in mpn_div.
The motivation behind this patch is to remove unreachable code in mpn_div.
This unreachable code was added some time ago in
9a21d2e070, when a loop in mpn_div was copied
and adjusted to work when mpz_dig_t was exactly half of the size of
mpz_dbl_dig_t (a common case).  The loop was copied correctly but it wasn't
noticed at the time that the final part of the calculation of num-quo*den
could be optimised, and hence unreachable code was left for a case that
never occurred.

The observation for the optimisation is that the initial value of quo in
mpn_div is either exact or too large (never too small), and therefore the
subtraction of quo*den from num may subtract exactly enough or too much
(but never too little).  Using this observation the part of the algorithm
that handles the borrow value can be simplified, and most importantly this
eliminates the unreachable code.

The new code has been tested with DIG_SIZE=3 and DIG_SIZE=4 by dividing all
possible combinations of non-negative integers with between 0 and 3
(inclusive) mpz digits.
2017-12-29 14:05:48 +11:00
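The key observation can be illustrated with plain Python integers (a conceptual toy, not the mpz code itself): the quotient estimate is exact or too large, so only a downward correction is ever needed.

    def correct_estimate(num, den, quo_estimate):
        # quo_estimate is assumed never to be too small, mirroring the
        # property of the initial quotient digit in mpn_div
        rem = num - quo_estimate * den
        while rem < 0:                # borrow: the estimate was too large
            quo_estimate -= 1
            rem += den
        return quo_estimate, rem

    print(correct_estimate(100, 7, 16))   # (14, 2) -- over-estimate corrected
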
Damien George c7cb1dfcb9 py/parse: Fix macro evaluation by avoiding empty __VA_ARGS__.
Empty __VA_ARGS__ are not allowed in the C preprocessor so adjust the rule
arg offset calculation to not use them.  Also, some compilers (eg MSVC)
require an extra layer of macro expansion.
2017-12-29 13:44:26 +11:00
Damien George d3fbfa491f py/parse: Update debugging code to compile on 64-bit arch. 2017-12-29 00:13:36 +11:00
Damien George 0016a45368 py/parse: Compress rule pointer table to table of offsets.
This is the sixth and final patch in a series of patches to the parser that
aims to reduce code size by compressing the data corresponding to the rules
of the grammar.

Prior to this set of patches the rules were stored as rule_t structs with
rule_id, act and arg members.  And then there was a big table of pointers
which allowed the address of a rule_t struct to be looked up given the id
of that rule.

The changes that have been made are:
- Breaking up of the rule_t struct into individual components, with each
  component in a separate array.
- Removal of the rule_id part of the struct because it's not needed.
- Put all the rule arg data in a big array.
- Change the table of pointers to rules to a table of offsets within the
  array of rule arg data.

The last point is what is done in this patch here and brings about the
biggest decreases in code size, because an array of pointers is now an
array of bytes.

Code size changes for the six patches combined is:

   bare-arm:  -644
minimal x86: -1856
   unix x64: -5408
unix nanbox: -2080
      stm32:  -720
    esp8266:  -812
     cc3200:  -712

For the change in parser performance: it was measured on pyboard that these
six patches combined gave an increase in script parse time of about 0.4%.
This is due to the slightly more complicated way of looking up the data for
a rule (since the 9th bit of the offset into the rule arg data table is
calculated with an if statement).  This is an acceptable increase in parse
time considering that parsing is only done once per script (if compiled on
the target).
2017-12-29 00:13:36 +11:00
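A conceptual sketch of the offset lookup described above (the names and values are illustrative, not the actual parse.c tables):

    # one byte of offset per rule; rules whose true offset is 256 or more are
    # grouped together so the 9th bit can be recovered with a single comparison
    rule_arg_offset_low = bytes([0, 3, 7, 250, 2, 9])   # illustrative values
    FIRST_RULE_WITH_HIGH_BIT = 4                        # illustrative split point

    def rule_arg_offset(rule_id):
        off = rule_arg_offset_low[rule_id]
        if rule_id >= FIRST_RULE_WITH_HIGH_BIT:
            off |= 0x100        # the "9th bit", computed with an if statement
        return off

    print(rule_arg_offset(1))   # 3
    print(rule_arg_offset(4))   # 258
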
Damien George c2c92ceefc py/parse: Remove rule_t struct because it's no longer needed. 2017-12-28 23:15:36 +11:00
Damien George 66d8885d85 py/parse: Pass rule_id to push_result_token, instead of passing rule_t*. 2017-12-28 23:12:10 +11:00
Damien George 815a8cd1ae py/parse: Pass rule_id to push_result_rule, instead of passing rule_t*.
Reduces code size by eliminating quite a few pointer dereferences.
2017-12-28 23:11:43 +11:00
Damien George 845511af25 py/parse: Break rule data into separate act and arg arrays.
Instead of each rule being stored in ROM as a struct with rule_id, act and
arg, the act and arg parts are now in separate arrays and the rule_id part
is removed because it's not needed.  This reduces code size by roughly one
byte per grammar rule, around 150 bytes in total.
2017-12-28 23:09:49 +11:00
Damien George 1039c5e699 py/parse: Split out rule name from rule struct into separate array.
The rule name is only used for debugging, and this patch makes things a bit
cleaner by completely separating out the rule name from the rest of the
rule data.
2017-12-28 23:08:00 +11:00
Damien George b25f92160b py/nlr: Factor out common NLR code to macro and generic funcs in nlr.c.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers.  This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c, along with a helper macro in nlr.h.  This eliminates duplicated
code.
2017-12-28 16:46:30 +11:00
Damien George 5bf8e85fc8 py/nlr: Clean up selection and config of NLR implementation.
If MICROPY_NLR_SETJMP is not enabled and the machine is auto-detected then
nlr.h now defines some convenience macros for the individual NLR
implementations to use (eg MICROPY_NLR_THUMB).  This keeps nlr.h and the
implementation in sync, and also makes the nlr_buf_t struct easier to read.
2017-12-28 16:18:39 +11:00
Damien George 97cc485538 py/nlrthumb: Fix use of naked funcs, must only contain basic asm code.
A function with a naked attribute must only contain basic inline asm
statements and no C code.

For nlr_push this means removing the "return 0" statement.  But for some
gcc versions this induces a compiler warning so the __builtin_unreachable()
line needs to be added.

For nlr_jump, this function contains a combination of C code and inline asm
so cannot be naked.
2017-12-28 15:59:09 +11:00
Paul Sokolovsky 096e967aad Revert "py/nlr: Factor out common NLR code to generic functions."
This reverts commit 6a3a742a6c.

The above commit has a number of faults, ranging from the motivation down
to the actual implementation.

1. Faulty implementation.

The original code contained functions like:

NORETURN void nlr_jump(void *val) {
    nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
    nlr_buf_t *top = *top_ptr;
...
     __asm volatile (
    "mov    %0, %%edx           \n" // %edx points to nlr_buf
    "mov    28(%%edx), %%esi    \n" // load saved %esi
    "mov    24(%%edx), %%edi    \n" // load saved %edi
    "mov    20(%%edx), %%ebx    \n" // load saved %ebx
    "mov    16(%%edx), %%esp    \n" // load saved %esp
    "mov    12(%%edx), %%ebp    \n" // load saved %ebp
    "mov    8(%%edx), %%eax     \n" // load saved %eip
    "mov    %%eax, (%%esp)      \n" // store saved %eip to stack
    "xor    %%eax, %%eax        \n" // clear return register
    "inc    %%al                \n" // increase to make 1, non-local return
     "ret                        \n" // return
    :                               // output operands
    : "r"(top)                      // input operands
    :                               // clobbered registers
     );
}

The original code clearly stated that the C-level variable should be a
parameter of the assembly, which then moved it into the correct register.

Whereas now it's:

NORETURN void nlr_jump_tail(nlr_buf_t *top) {
    (void)top;

    __asm volatile (
    "mov    28(%edx), %esi      \n" // load saved %esi
    "mov    24(%edx), %edi      \n" // load saved %edi
    "mov    20(%edx), %ebx      \n" // load saved %ebx
    "mov    16(%edx), %esp      \n" // load saved %esp
    "mov    12(%edx), %ebp      \n" // load saved %ebp
    "mov    8(%edx), %eax       \n" // load saved %eip
    "mov    %eax, (%esp)        \n" // store saved %eip to stack
    "xor    %eax, %eax          \n" // clear return register
    "inc    %al                 \n" // increase to make 1, non-local return
    "ret                        \n" // return
    );

    for (;;); // needed to silence compiler warning
}

The new code just tries to perform operations on a completely random register
(edx in this case). The outcome is as expected: barring the pure random luck
of the compiler putting the right value in that register, there's a crash.

2. Non-critical assessment.

The original commit message says "There is a small overhead introduced
(typically 1 machine instruction)". That machine instruction is a call
if the compiler doesn't perform tail-call optimization (which happens
regularly), and it's only 1 instruction with the broken code shown above;
fixing it requires adding more. With the inefficiencies already present in the NLR
code, the overhead becomes "considerable" (several times more than 1%),
not "small".

The commit message also says "This eliminates duplicated code.". An
obvious way to eliminate duplication would be to factor out common code
to macros, not introduce overhead and breakage like above.

3. Faulty motivation.

All this started with a report of warnings/errors happening for a niche
compiler. It could have been solved in one of the direct ways: a) fixing it
just for the affected compiler(s); b) rewriting it in proper assembly (like
it was before, BTW); c) by not doing anything at all: MICROPY_NLR_SETJMP
exists exactly to address minor-impact cases like that (where a) or b) are
not applicable). Instead, a backwards "solution" was put forward, leading
to all the issues above.

The best action thus appears to be revert and rework, not trying to work
around what went haywire in the first place.
2017-12-26 19:27:58 +02:00
Damien George 26d4a6fa45 py/malloc: Remove unneeded code checking m_malloc return value.
m_malloc already checks for a failed allocation so there's no need to check
for it in m_malloc0.
2017-12-20 16:55:42 +11:00
Damien George 6a3a742a6c py/nlr: Factor out common NLR code to generic functions.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers.  This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c.  This eliminates duplicated code.

The factoring also makes it possible to keep the machine-specific NLR code pure
assembler code, thus allowing nlrthumb.c to use naked function attributes
in the correct way (naked functions can only have basic inline assembler
code in them).

There is a small overhead introduced (typically 1 machine instruction)
because now the generic nlr_jump() must call nlr_jump_tail() rather than
them being one combined function.
2017-12-20 15:42:06 +11:00
Damien George 304a3bcc1c py/modio: Use correct config macro to enable resource_stream function. 2017-12-19 16:59:08 +11:00
Damien George ae1be76d40 py/mpz: Apply a small code-size optimisation. 2017-12-19 15:45:56 +11:00
Damien George 374eaf5271 py/mpz: Fix pow3 function so it handles the case when 3rd arg is 1.
In this case the result should always be 0, even if 2nd arg is 0.
2017-12-19 15:42:58 +11:00
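The fixed behaviour, matching CPython:

    print(pow(7, 3, 1))   # 0: any result modulo 1 is 0
    print(pow(7, 0, 1))   # 0 as well, even though the exponent is 0
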