micropython/py/emitnx86.c

// x86 specific stuff

#include "py/mpconfig.h"
#include "py/nativeglue.h"

#if MICROPY_EMIT_X86

// This is defined so that the assembler exports generic assembler API macros
#define GENERIC_ASM_API (1)
#include "py/asmx86.h"

// Word indices of REG_LOCAL_x in nlr_buf_t
#define NLR_BUF_IDX_LOCAL_1 (5) // ebx
#define NLR_BUF_IDX_LOCAL_2 (7) // esi
#define NLR_BUF_IDX_LOCAL_3 (6) // edi
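// The native emitter uses these indices to access the copies of REG_LOCAL_x
// saved in a function's nlr_buf_t, so that locals held in those registers
// keep their current values across an nlr_jump (which otherwise restores
// registers to their values at the time of nlr_push).
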
// x86 needs a table to know how many args a given function has
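// (On x86 all arguments are passed on the stack, rather than in registers as
// on the other native targets, so the call macros in asmx86.h consult this
// table to handle the right number of argument words for each helper.)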
STATIC byte mp_f_n_args[MP_F_NUMBER_OF] = {
    [MP_F_CONVERT_OBJ_TO_NATIVE] = 2,
    [MP_F_CONVERT_NATIVE_TO_OBJ] = 2,
    [MP_F_NATIVE_SWAP_GLOBALS] = 1,
    [MP_F_LOAD_NAME] = 1,
    [MP_F_LOAD_GLOBAL] = 1,
    [MP_F_LOAD_BUILD_CLASS] = 0,
    [MP_F_LOAD_ATTR] = 2,
    [MP_F_LOAD_METHOD] = 3,
    [MP_F_LOAD_SUPER_METHOD] = 2,
    [MP_F_STORE_NAME] = 2,
    [MP_F_STORE_GLOBAL] = 2,
    [MP_F_STORE_ATTR] = 3,
    [MP_F_OBJ_SUBSCR] = 3,
    [MP_F_OBJ_IS_TRUE] = 1,
    [MP_F_UNARY_OP] = 2,
    [MP_F_BINARY_OP] = 3,
    [MP_F_BUILD_TUPLE] = 2,
    [MP_F_BUILD_LIST] = 2,
    [MP_F_BUILD_MAP] = 1,
    [MP_F_BUILD_SET] = 2,
    [MP_F_STORE_SET] = 2,
    [MP_F_LIST_APPEND] = 2,
    [MP_F_STORE_MAP] = 3,
    [MP_F_MAKE_FUNCTION_FROM_RAW_CODE] = 3,
    [MP_F_NATIVE_CALL_FUNCTION_N_KW] = 3,
    [MP_F_CALL_METHOD_N_KW] = 3,
    [MP_F_CALL_METHOD_N_KW_VAR] = 3,
    [MP_F_NATIVE_GETITER] = 2,
    [MP_F_NATIVE_ITERNEXT] = 1,
    [MP_F_NLR_PUSH] = 1,
    [MP_F_NLR_POP] = 0,
    [MP_F_NATIVE_RAISE] = 1,
    [MP_F_IMPORT_NAME] = 3,
    [MP_F_IMPORT_FROM] = 2,
    [MP_F_IMPORT_ALL] = 1,
    [MP_F_NEW_SLICE] = 3,
    [MP_F_UNPACK_SEQUENCE] = 3,
    [MP_F_UNPACK_EX] = 3,
    [MP_F_DELETE_NAME] = 1,
    [MP_F_DELETE_GLOBAL] = 1,
    [MP_F_NEW_CLOSURE] = 3,
    [MP_F_ARG_CHECK_NUM_SIG] = 3,
    [MP_F_SETUP_CODE_STATE] = 4,
    [MP_F_SMALL_INT_FLOOR_DIVIDE] = 2,
    [MP_F_SMALL_INT_MODULO] = 2,
    [MP_F_NATIVE_YIELD_FROM] = 3,
    [MP_F_SETJMP] = 1,
};
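
// emitnative.c is a generic emitter template: N_X86 selects the x86-specific
// code paths within it, and EXPORT_FUN gives the exported entry points their
// x86-specific names, e.g. EXPORT_FUN(new) expands to emit_native_x86_new.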
#define N_X86 (1)
#define EXPORT_FUN(name) emit_native_x86_##name
#include "py/emitnative.c"
#endif