/*
 * This file is part of the MicroPython project, http://micropython.org/
 *
 * The MIT License (MIT)
 *
 * Copyright (c) 2013-2018 Damien P. George
 * Copyright (c) 2014-2016 Paul Sokolovsky
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

#include "py/objtype.h"
#include "py/runtime.h"

#if MICROPY_DEBUG_VERBOSE // print debugging info
#define DEBUG_PRINT (1)
#define DEBUG_printf DEBUG_printf
#else // don't print debugging info
#define DEBUG_PRINT (0)
#define DEBUG_printf(...) (void)0
#endif
#define ENABLE_SPECIAL_ACCESSORS \
    (MICROPY_PY_DESCRIPTORS || MICROPY_PY_DELATTR_SETATTR || MICROPY_PY_BUILTINS_PROPERTY)

#define TYPE_FLAG_IS_SUBCLASSED (0x0001)
#define TYPE_FLAG_HAS_SPECIAL_ACCESSORS (0x0002)
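// For illustration only (hypothetical Python, not part of this file): when a
// user class is created its attributes are scanned for special accessors, and
// TYPE_FLAG_HAS_SPECIAL_ACCESSORS is set only if any are found, e.g.
//     class Plain:                 # no special accessors: the flag stays clear
//         def foo(self): ...       # and attribute access takes the fast path
//     class WithProperty:
//         @property                # a property (or __get__/__set__/__delete__/
//         def bar(self): ...       # __setattr__/__delattr__) sets the flag, so
//                                  # the slower accessor checks are kept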
STATIC mp_obj_t static_class_method_make_new(const mp_obj_type_t *self_in, size_t n_args, size_t n_kw, const mp_obj_t *args);

/******************************************************************************/
// instance object
STATIC int instance_count_native_bases(const mp_obj_type_t *type, const mp_obj_type_t **last_native_base) {
    int count = 0;
    for (;;) {
        if (type == &mp_type_object) {
            // Not a "real" type, end search here.
            return count;
        } else if (mp_obj_is_native_type(type)) {
            // Native types don't have parents (at least not from our perspective) so end.
            *last_native_base = type;
            return count + 1;
        } else if (type->parent == NULL) {
            // No parents so end search here.
            return count;
        #if MICROPY_MULTIPLE_INHERITANCE
        } else if (((mp_obj_base_t*)type->parent)->type == &mp_type_tuple) {
            // Multiple parents, search through them all recursively.
            const mp_obj_tuple_t *parent_tuple = type->parent;
            const mp_obj_t *item = parent_tuple->items;
            const mp_obj_t *top = item + parent_tuple->len;
            for (; item < top; ++item) {
                assert(MP_OBJ_IS_TYPE(*item, &mp_type_type));
                const mp_obj_type_t *bt = (const mp_obj_type_t *)MP_OBJ_TO_PTR(*item);
                count += instance_count_native_bases(bt, last_native_base);
            }
            return count;
        #endif
        } else {
            // A single parent, use iteration to continue the search.
            type = type->parent;
        }
    }
}
// This wrapper function allows a subclass of a native type to call the
// __init__() method (corresponding to type->make_new) of the native type.
STATIC mp_obj_t native_base_init_wrapper(size_t n_args, const mp_obj_t *args) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(args[0]);
    const mp_obj_type_t *native_base = NULL;
    instance_count_native_bases(self->base.type, &native_base);
    self->subobj[0] = native_base->make_new(native_base, n_args - 1, 0, args + 1);
    return mp_const_none;
}
STATIC MP_DEFINE_CONST_FUN_OBJ_VAR_BETWEEN(native_base_init_wrapper_obj, 1, MP_OBJ_FUN_ARGS_MAX, native_base_init_wrapper);

#if !MICROPY_CPYTHON_COMPAT
STATIC
#endif
mp_obj_instance_t *mp_obj_new_instance(const mp_obj_type_t *class, const mp_obj_type_t **native_base) {
    size_t num_native_bases = instance_count_native_bases(class, native_base);
    assert(num_native_bases < 2);
    mp_obj_instance_t *o = m_new_obj_var(mp_obj_instance_t, mp_obj_t, num_native_bases);
    o->base.type = class;
    mp_map_init(&o->members, 0);
    // Initialise the native base-class slot (should be 1 at most) with a valid
    // object. It doesn't matter which object, so long as it can be uniquely
    // distinguished from a native class that is initialised.
    if (num_native_bases != 0) {
        o->subobj[0] = MP_OBJ_FROM_PTR(&native_base_init_wrapper_obj);
    }
    return o;
}
// TODO
// This implements depth-first left-to-right MRO, which is not compliant with Python3 MRO
// http://python-history.blogspot.com/2010/06/method-resolution-order.html
// https://www.python.org/download/releases/2.3/mro/
//
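// For illustration only (hypothetical Python, not part of this file): for the
// classic diamond
//     class A: pass
//     class B(A): pass
//     class C(A): pass
//     class D(B, C): pass
// the depth-first left-to-right search used here tries D, B, A, C (A is
// visited before C), whereas CPython's C3 linearisation is D, B, C, A, object.
//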
// Will keep lookup->dest[0]'s value (should be MP_OBJ_NULL on invocation) if the
// attribute is not found.
// Will set lookup->dest[0] to MP_OBJ_SENTINEL if a special method was found in a
// native type base via its slot id (as specified by lookup->meth_offset). As there
// can be only one native base, it's known that it applies to instance->subobj[0].
// In most cases we also don't need to know which type it was, because
// instance->subobj[0] is of that type. The only exception is when the object is not
// yet constructed; then we need to know the base native type to construct its
// instance->subobj[0] from. But that case is handled via
// instance_count_native_bases(), which returns the native base which it saw.
struct class_lookup_data {
    mp_obj_instance_t *obj;
    qstr attr;
    size_t meth_offset;
    mp_obj_t *dest;
    bool is_type;
};
STATIC void mp_obj_class_lookup(struct class_lookup_data *lookup, const mp_obj_type_t *type) {
    assert(lookup->dest[0] == MP_OBJ_NULL);
    assert(lookup->dest[1] == MP_OBJ_NULL);
    for (;;) {
        DEBUG_printf("mp_obj_class_lookup: Looking up %s in %s\n", qstr_str(lookup->attr), qstr_str(type->name));
        // Optimise special method lookup for native types.
        // This avoids an extra method_name => slot lookup. On the other hand, it
        // should not be applied to class types, as that would result in an extra
        // lookup instead.
        if (lookup->meth_offset != 0 && mp_obj_is_native_type(type)) {
            if (*(void**)((char*)type + lookup->meth_offset) != NULL) {
                DEBUG_printf("mp_obj_class_lookup: Matched special meth slot (off=%d) for %s\n",
                    lookup->meth_offset, qstr_str(lookup->attr));
                lookup->dest[0] = MP_OBJ_SENTINEL;
                return;
            }
        }

        if (type->locals_dict != NULL) {
            // search locals_dict (the set of methods/attributes)
            assert(type->locals_dict->base.type == &mp_type_dict); // MicroPython restriction, for now
            mp_map_t *locals_map = &type->locals_dict->map;
            mp_map_elem_t *elem = mp_map_lookup(locals_map, MP_OBJ_NEW_QSTR(lookup->attr), MP_MAP_LOOKUP);
            if (elem != NULL) {
                if (lookup->is_type) {
                    // If we look up a class method, we need to return the original type for
                    // which the lookup was done, not the (base) type in which the class
                    // method was found.
                    const mp_obj_type_t *org_type = (const mp_obj_type_t*)lookup->obj;
                    mp_convert_member_lookup(MP_OBJ_NULL, org_type, elem->value, lookup->dest);
                } else {
                    mp_obj_instance_t *obj = lookup->obj;
                    mp_obj_t obj_obj;
                    if (obj != NULL && mp_obj_is_native_type(type) && type != &mp_type_object /* object is not a real type */) {
                        // If we're dealing with a native base class, then it applies to the native sub-object
                        obj_obj = obj->subobj[0];
                    } else {
                        obj_obj = MP_OBJ_FROM_PTR(obj);
                    }
                    mp_convert_member_lookup(obj_obj, type, elem->value, lookup->dest);
                }
                #if DEBUG_PRINT
                DEBUG_printf("mp_obj_class_lookup: Returning: ");
                mp_obj_print_helper(MICROPY_DEBUG_PRINTER, lookup->dest[0], PRINT_REPR);
                if (lookup->dest[1] != MP_OBJ_NULL) {
                    // Don't try to repr() lookup->dest[1], as we can be called recursively
                    DEBUG_printf(" <%s @%p>", mp_obj_get_type_str(lookup->dest[1]), MP_OBJ_TO_PTR(lookup->dest[1]));
                }
                DEBUG_printf("\n");
                #endif
                return;
            }
        }

        // The previous code block takes care of attributes defined in .locals_dict,
        // but some attributes of native types may be handled using the .load_attr
        // method, so make sure we try to look those up too.
        if (lookup->obj != NULL && !lookup->is_type && mp_obj_is_native_type(type) && type != &mp_type_object /* object is not a real type */) {
            mp_load_method_maybe(lookup->obj->subobj[0], lookup->attr, lookup->dest);
            if (lookup->dest[0] != MP_OBJ_NULL) {
                return;
            }
        }

        // attribute not found, keep searching base classes

        if (type->parent == NULL) {
            DEBUG_printf("mp_obj_class_lookup: No more parents\n");
            return;
        #if MICROPY_MULTIPLE_INHERITANCE
        } else if (((mp_obj_base_t*)type->parent)->type == &mp_type_tuple) {
            const mp_obj_tuple_t *parent_tuple = type->parent;
            const mp_obj_t *item = parent_tuple->items;
            const mp_obj_t *top = item + parent_tuple->len - 1;
            for (; item < top; ++item) {
                assert(MP_OBJ_IS_TYPE(*item, &mp_type_type));
                mp_obj_type_t *bt = (mp_obj_type_t*)MP_OBJ_TO_PTR(*item);
                if (bt == &mp_type_object) {
                    // Not a "real" type
                    continue;
                }
                mp_obj_class_lookup(lookup, bt);
                if (lookup->dest[0] != MP_OBJ_NULL) {
                    return;
                }
            }

            // search the last base (simple tail recursion elimination)
            assert(MP_OBJ_IS_TYPE(*item, &mp_type_type));
            type = (mp_obj_type_t*)MP_OBJ_TO_PTR(*item);
        #endif
        } else {
            type = type->parent;
        }
        if (type == &mp_type_object) {
            // Not a "real" type
            return;
        }
    }
}
STATIC void instance_print(const mp_print_t *print, mp_obj_t self_in, mp_print_kind_t kind) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);
    qstr meth = (kind == PRINT_STR) ? MP_QSTR___str__ : MP_QSTR___repr__;
    mp_obj_t member[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = meth,
        .meth_offset = offsetof(mp_obj_type_t, print),
        .dest = member,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);
    if (member[0] == MP_OBJ_NULL && kind == PRINT_STR) {
        // If there's no __str__, fall back to __repr__
        lookup.attr = MP_QSTR___repr__;
        lookup.meth_offset = 0;
        mp_obj_class_lookup(&lookup, self->base.type);
    }

    if (member[0] == MP_OBJ_SENTINEL) {
        // Handle Exception subclasses specially
        if (mp_obj_is_native_exception_instance(self->subobj[0])) {
            if (kind != PRINT_STR) {
                mp_print_str(print, qstr_str(self->base.type->name));
            }
            mp_obj_print_helper(print, self->subobj[0], kind | PRINT_EXC_SUBCLASS);
        } else {
            mp_obj_print_helper(print, self->subobj[0], kind);
        }
        return;
    }

    if (member[0] != MP_OBJ_NULL) {
        mp_obj_t r = mp_call_function_1(member[0], self_in);
        mp_obj_print_helper(print, r, PRINT_STR);
        return;
    }

    // TODO: CPython prints fully-qualified type name
    mp_printf(print, "<%s object at %p>", mp_obj_get_type_str(self_in), self);
}
mp_obj_t mp_obj_instance_make_new(const mp_obj_type_t *self, size_t n_args, size_t n_kw, const mp_obj_t *args) {
    assert(mp_obj_is_instance_type(self));

    // look for __new__ function
    mp_obj_t init_fn[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = NULL,
        .attr = MP_QSTR___new__,
        .meth_offset = offsetof(mp_obj_type_t, make_new),
        .dest = init_fn,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self);

    const mp_obj_type_t *native_base = NULL;
    mp_obj_instance_t *o;
    if (init_fn[0] == MP_OBJ_NULL || init_fn[0] == MP_OBJ_SENTINEL) {
        // Either there is no __new__() method defined or there is a native
        // constructor. In both cases create a blank instance.
        o = mp_obj_new_instance(self, &native_base);

        // Since type->make_new() implements both __new__() and __init__() in
        // one go, of which the latter may be overridden by the Python subclass,
        // we defer (see the end of this function) the call of the native
        // constructor to give a chance for the Python __init__() method to call
        // said native constructor.

    } else {
        // Call Python class __new__ function with all args to create an instance
        mp_obj_t new_ret;
        if (n_args == 0 && n_kw == 0) {
            mp_obj_t args2[1] = {MP_OBJ_FROM_PTR(self)};
            new_ret = mp_call_function_n_kw(init_fn[0], 1, 0, args2);
        } else {
            mp_obj_t *args2 = m_new(mp_obj_t, 1 + n_args + 2 * n_kw);
            args2[0] = MP_OBJ_FROM_PTR(self);
            memcpy(args2 + 1, args, (n_args + 2 * n_kw) * sizeof(mp_obj_t));
            new_ret = mp_call_function_n_kw(init_fn[0], n_args + 1, n_kw, args2);
            m_del(mp_obj_t, args2, 1 + n_args + 2 * n_kw);
        }

        // https://docs.python.org/3.4/reference/datamodel.html#object.__new__
        // "If __new__() does not return an instance of cls, then the new
        // instance's __init__() method will not be invoked."
        if (mp_obj_get_type(new_ret) != self) {
            return new_ret;
        }

        // The instance returned by __new__() becomes the new object
        o = MP_OBJ_TO_PTR(new_ret);
    }

    // now call Python class __init__ function with all args
    // This method has a chance to call super().__init__() to construct a
    // possible native base class.
    init_fn[0] = init_fn[1] = MP_OBJ_NULL;
    lookup.obj = o;
    lookup.attr = MP_QSTR___init__;
    lookup.meth_offset = 0;
    mp_obj_class_lookup(&lookup, self);
    if (init_fn[0] != MP_OBJ_NULL) {
        mp_obj_t init_ret;
        if (n_args == 0 && n_kw == 0) {
            init_ret = mp_call_method_n_kw(0, 0, init_fn);
        } else {
            mp_obj_t *args2 = m_new(mp_obj_t, 2 + n_args + 2 * n_kw);
            args2[0] = init_fn[0];
            args2[1] = init_fn[1];
            memcpy(args2 + 2, args, (n_args + 2 * n_kw) * sizeof(mp_obj_t));
            init_ret = mp_call_method_n_kw(n_args, n_kw, args2);
            m_del(mp_obj_t, args2, 2 + n_args + 2 * n_kw);
        }
        if (init_ret != mp_const_none) {
            if (MICROPY_ERROR_REPORTING == MICROPY_ERROR_REPORTING_TERSE) {
                mp_raise_TypeError("__init__() should return None");
            } else {
                nlr_raise(mp_obj_new_exception_msg_varg(&mp_type_TypeError,
                    "__init__() should return None, not '%s'", mp_obj_get_type_str(init_ret)));
            }
        }
    }

    // If the type had a native base that was not explicitly initialised
    // (constructed) by the Python __init__() method then construct it now.
    if (native_base != NULL && o->subobj[0] == MP_OBJ_FROM_PTR(&native_base_init_wrapper_obj)) {
        o->subobj[0] = native_base->make_new(native_base, n_args, n_kw, args);
    }

    return MP_OBJ_FROM_PTR(o);
}
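// For illustration only (hypothetical Python, not part of this file): a Python
// subclass of a native type may construct its native base explicitly from
// __init__(), e.g.
//     class MyList(list):
//         def __init__(self, n):
//             super().__init__(range(n))   # runs the native constructor here
// If __init__() never calls super().__init__(), the native base is instead
// constructed at the end of mp_obj_instance_make_new() with the original args.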
// Qstrs for special methods are guaranteed to have a small value, so we use byte
// type to represent them.
const byte mp_unary_op_method_name[MP_UNARY_OP_NUM_RUNTIME] = {
    [MP_UNARY_OP_BOOL] = MP_QSTR___bool__,
    [MP_UNARY_OP_LEN] = MP_QSTR___len__,
    [MP_UNARY_OP_HASH] = MP_QSTR___hash__,
    #if MICROPY_PY_ALL_SPECIAL_METHODS
    [MP_UNARY_OP_POSITIVE] = MP_QSTR___pos__,
    [MP_UNARY_OP_NEGATIVE] = MP_QSTR___neg__,
    [MP_UNARY_OP_INVERT] = MP_QSTR___invert__,
    [MP_UNARY_OP_ABS] = MP_QSTR___abs__,
    #endif
    #if MICROPY_PY_SYS_GETSIZEOF
    [MP_UNARY_OP_SIZEOF] = MP_QSTR___sizeof__,
    #endif
};
STATIC mp_obj_t instance_unary_op(mp_unary_op_t op, mp_obj_t self_in) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);

    #if MICROPY_PY_SYS_GETSIZEOF
    if (MP_UNLIKELY(op == MP_UNARY_OP_SIZEOF)) {
        // TODO: This doesn't count inherited objects (self->subobj)
        const mp_obj_type_t *native_base;
        size_t num_native_bases = instance_count_native_bases(mp_obj_get_type(self_in), &native_base);

        size_t sz = sizeof(*self) + sizeof(*self->subobj) * num_native_bases
            + sizeof(*self->members.table) * self->members.alloc;
        return MP_OBJ_NEW_SMALL_INT(sz);
    }
    #endif

    qstr op_name = mp_unary_op_method_name[op];
    /* Still try to lookup native slot
    if (op_name == 0) {
        return MP_OBJ_NULL;
    }
    */
    mp_obj_t member[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = op_name,
        .meth_offset = offsetof(mp_obj_type_t, unary_op),
        .dest = member,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);
    if (member[0] == MP_OBJ_SENTINEL) {
        return mp_unary_op(op, self->subobj[0]);
    } else if (member[0] != MP_OBJ_NULL) {
        mp_obj_t val = mp_call_function_1(member[0], self_in);
        // __hash__ must return a small int
        if (op == MP_UNARY_OP_HASH) {
            val = MP_OBJ_NEW_SMALL_INT(mp_obj_get_int_truncated(val));
        }
        return val;
    } else {
        if (op == MP_UNARY_OP_HASH) {
            lookup.attr = MP_QSTR___eq__;
            mp_obj_class_lookup(&lookup, self->base.type);
            if (member[0] == MP_OBJ_NULL) {
                // https://docs.python.org/3/reference/datamodel.html#object.__hash__
                // "User-defined classes have __eq__() and __hash__() methods by default;
                // with them, all objects compare unequal (except with themselves) and
                // x.__hash__() returns an appropriate value such that x == y implies
                // both that x is y and hash(x) == hash(y)."
                return MP_OBJ_NEW_SMALL_INT((mp_uint_t)self_in);
            }
            // "A class that overrides __eq__() and does not define __hash__() will have its __hash__() implicitly set to None.
            // When the __hash__() method of a class is None, instances of the class will raise an appropriate TypeError"
        }

        return MP_OBJ_NULL; // op not supported
    }
}
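// For illustration only (hypothetical Python, not part of this file), the
// default-hash logic above behaves roughly as follows:
//     class A: pass                      # no __hash__ and no __eq__:
//     hash(A())                          # falls back to the object identity
//     class B:
//         def __eq__(self, other): ...   # __eq__ without __hash__: no method is
//     hash(B())                          # found, the op is reported unsupported,
//                                        # and a TypeError is the expected outcome
//                                        # (per the CPython docs quoted above)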
// Binary-op enum values not listed here will have the default value of 0 in the
// table, corresponding to MP_QSTR_NULL, and are therefore unsupported (a lookup will
// fail). They can be added at the expense of code size for the qstr.
// Qstrs for special methods are guaranteed to have a small value, so we use byte
// type to represent them.
const byte mp_binary_op_method_name[MP_BINARY_OP_NUM_RUNTIME] = {
    [MP_BINARY_OP_LESS] = MP_QSTR___lt__,
    [MP_BINARY_OP_MORE] = MP_QSTR___gt__,
    [MP_BINARY_OP_EQUAL] = MP_QSTR___eq__,
    [MP_BINARY_OP_LESS_EQUAL] = MP_QSTR___le__,
    [MP_BINARY_OP_MORE_EQUAL] = MP_QSTR___ge__,
    // MP_BINARY_OP_NOT_EQUAL, // a != b calls a == b and inverts result
    [MP_BINARY_OP_CONTAINS] = MP_QSTR___contains__,

    // All inplace methods are optional, and normal methods will be used
    // as a fallback.
    [MP_BINARY_OP_INPLACE_ADD] = MP_QSTR___iadd__,
    [MP_BINARY_OP_INPLACE_SUBTRACT] = MP_QSTR___isub__,
    #if MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS
    [MP_BINARY_OP_INPLACE_MULTIPLY] = MP_QSTR___imul__,
    [MP_BINARY_OP_INPLACE_FLOOR_DIVIDE] = MP_QSTR___ifloordiv__,
    [MP_BINARY_OP_INPLACE_TRUE_DIVIDE] = MP_QSTR___itruediv__,
    [MP_BINARY_OP_INPLACE_MODULO] = MP_QSTR___imod__,
    [MP_BINARY_OP_INPLACE_POWER] = MP_QSTR___ipow__,
    [MP_BINARY_OP_INPLACE_OR] = MP_QSTR___ior__,
    [MP_BINARY_OP_INPLACE_XOR] = MP_QSTR___ixor__,
    [MP_BINARY_OP_INPLACE_AND] = MP_QSTR___iand__,
    [MP_BINARY_OP_INPLACE_LSHIFT] = MP_QSTR___ilshift__,
    [MP_BINARY_OP_INPLACE_RSHIFT] = MP_QSTR___irshift__,
    #endif

    [MP_BINARY_OP_ADD] = MP_QSTR___add__,
    [MP_BINARY_OP_SUBTRACT] = MP_QSTR___sub__,
    #if MICROPY_PY_ALL_SPECIAL_METHODS
    [MP_BINARY_OP_MULTIPLY] = MP_QSTR___mul__,
    [MP_BINARY_OP_FLOOR_DIVIDE] = MP_QSTR___floordiv__,
    [MP_BINARY_OP_TRUE_DIVIDE] = MP_QSTR___truediv__,
    [MP_BINARY_OP_MODULO] = MP_QSTR___mod__,
    [MP_BINARY_OP_DIVMOD] = MP_QSTR___divmod__,
    [MP_BINARY_OP_POWER] = MP_QSTR___pow__,
    [MP_BINARY_OP_OR] = MP_QSTR___or__,
    [MP_BINARY_OP_XOR] = MP_QSTR___xor__,
    [MP_BINARY_OP_AND] = MP_QSTR___and__,
    [MP_BINARY_OP_LSHIFT] = MP_QSTR___lshift__,
    [MP_BINARY_OP_RSHIFT] = MP_QSTR___rshift__,
    #endif

    #if MICROPY_PY_REVERSE_SPECIAL_METHODS
    [MP_BINARY_OP_REVERSE_ADD] = MP_QSTR___radd__,
    [MP_BINARY_OP_REVERSE_SUBTRACT] = MP_QSTR___rsub__,
    #if MICROPY_PY_ALL_SPECIAL_METHODS
    [MP_BINARY_OP_REVERSE_MULTIPLY] = MP_QSTR___rmul__,
    [MP_BINARY_OP_REVERSE_FLOOR_DIVIDE] = MP_QSTR___rfloordiv__,
    [MP_BINARY_OP_REVERSE_TRUE_DIVIDE] = MP_QSTR___rtruediv__,
    [MP_BINARY_OP_REVERSE_MODULO] = MP_QSTR___rmod__,
    [MP_BINARY_OP_REVERSE_POWER] = MP_QSTR___rpow__,
    [MP_BINARY_OP_REVERSE_OR] = MP_QSTR___ror__,
    [MP_BINARY_OP_REVERSE_XOR] = MP_QSTR___rxor__,
    [MP_BINARY_OP_REVERSE_AND] = MP_QSTR___rand__,
    [MP_BINARY_OP_REVERSE_LSHIFT] = MP_QSTR___rlshift__,
    [MP_BINARY_OP_REVERSE_RSHIFT] = MP_QSTR___rrshift__,
    #endif
    #endif
};
STATIC mp_obj_t instance_binary_op(mp_binary_op_t op, mp_obj_t lhs_in, mp_obj_t rhs_in) {
    // Note: For ducktyping, CPython does not look in the instance members or use
    // __getattr__ or __getattribute__. It only looks in the class dictionary.
    mp_obj_instance_t *lhs = MP_OBJ_TO_PTR(lhs_in);
retry:;
    qstr op_name = mp_binary_op_method_name[op];
    /* Still try to lookup native slot
    if (op_name == 0) {
        return MP_OBJ_NULL;
    }
    */
    mp_obj_t dest[3] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = lhs,
        .attr = op_name,
        .meth_offset = offsetof(mp_obj_type_t, binary_op),
        .dest = dest,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, lhs->base.type);

    mp_obj_t res;
    if (dest[0] == MP_OBJ_SENTINEL) {
        res = mp_binary_op(op, lhs->subobj[0], rhs_in);
    } else if (dest[0] != MP_OBJ_NULL) {
        dest[2] = rhs_in;
        res = mp_call_method_n_kw(1, 0, dest);
    } else {
        // If this was an inplace method, fall back to the normal method
        // https://docs.python.org/3/reference/datamodel.html#object.__iadd__ :
        // "If a specific method is not defined, the augmented assignment
        // falls back to the normal methods."
        if (op >= MP_BINARY_OP_INPLACE_OR && op <= MP_BINARY_OP_INPLACE_POWER) {
            op -= MP_BINARY_OP_INPLACE_OR - MP_BINARY_OP_OR;
            goto retry;
        }
        return MP_OBJ_NULL; // op not supported
    }

    #if MICROPY_PY_BUILTINS_NOTIMPLEMENTED
    // NotImplemented means "try other fallbacks (like calling __rop__
    // instead of __op__) and if nothing works, raise TypeError". As
    // MicroPython doesn't implement any fallbacks, signal to raise
    // TypeError right away.
    if (res == mp_const_notimplemented) {
        return MP_OBJ_NULL; // op not supported
    }
    #endif

    return res;
}
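// For illustration only (hypothetical Python, not part of this file), the
// in-place fallback handled above means that for
//     class Acc:
//         def __add__(self, other):
//             return Acc()
//     a = Acc()
//     a += 1    # no __iadd__ defined, so the in-place op retries as the
//               # normal op and __add__ is called instead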
STATIC void mp_obj_instance_load_attr(mp_obj_t self_in, qstr attr, mp_obj_t *dest) {
    // logic: look in instance members then class locals
    assert(mp_obj_is_instance_type(mp_obj_get_type(self_in)));
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);

    mp_map_elem_t *elem = mp_map_lookup(&self->members, MP_OBJ_NEW_QSTR(attr), MP_MAP_LOOKUP);
    if (elem != NULL) {
        // object member, always treated as a value
        dest[0] = elem->value;
        return;
    }
    #if MICROPY_CPYTHON_COMPAT
    if (attr == MP_QSTR___dict__) {
        // Create a new dict with a copy of the instance's map items.
        // This creates, unlike CPython, a 'read-only' __dict__: modifying
        // it will not result in modifications to the actual instance members.
        mp_map_t *map = &self->members;
        mp_obj_t attr_dict = mp_obj_new_dict(map->used);
        for (size_t i = 0; i < map->alloc; ++i) {
            if (MP_MAP_SLOT_IS_FILLED(map, i)) {
                mp_obj_dict_store(attr_dict, map->table[i].key, map->table[i].value);
            }
        }
        dest[0] = attr_dict;
        return;
    }
    #endif
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = attr,
        .meth_offset = 0,
        .dest = dest,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);
    mp_obj_t member = dest[0];
    if (member != MP_OBJ_NULL) {
        if (!(self->base.type->flags & TYPE_FLAG_HAS_SPECIAL_ACCESSORS)) {
            // Class doesn't have any special accessors to check so return straightaway
            return;
        }

        #if MICROPY_PY_BUILTINS_PROPERTY
        if (MP_OBJ_IS_TYPE(member, &mp_type_property)) {
            // object member is a property; delegate the load to the property
            // Note: This is an optimisation for code size and execution time.
            // The proper way to do it is have the functionality just below
            // in a __get__ method of the property object, and then it would
            // be called by the descriptor code down below. But that way
            // requires overhead for the nested mp_call's and overhead for
            // the code.
            const mp_obj_t *proxy = mp_obj_property_get(member);
            if (proxy[0] == mp_const_none) {
                mp_raise_msg(&mp_type_AttributeError, "unreadable attribute");
            } else {
                dest[0] = mp_call_function_n_kw(proxy[0], 1, 0, &self_in);
            }
            return;
        }
        #endif

        #if MICROPY_PY_DESCRIPTORS
        // found a class attribute; if it has a __get__ method then call it with the
        // class instance and class as arguments and return the result
        // Note that this is functionally correct but very slow: each load_attr
        // requires an extra mp_load_method_maybe to check for the __get__.
        mp_obj_t attr_get_method[4];
        mp_load_method_maybe(member, MP_QSTR___get__, attr_get_method);
        if (attr_get_method[0] != MP_OBJ_NULL) {
            attr_get_method[2] = self_in;
            attr_get_method[3] = MP_OBJ_FROM_PTR(mp_obj_get_type(self_in));
            dest[0] = mp_call_method_n_kw(2, 0, attr_get_method);
        }
        #endif
        return;
    }

    // try __getattr__
    if (attr != MP_QSTR___getattr__) {
        #if MICROPY_PY_DELATTR_SETATTR
        // If the requested attr is __setattr__/__delattr__ then don't delegate the lookup
        // to __getattr__. If we followed CPython's behaviour then __setattr__/__delattr__
        // would have already been found in the "object" base class.
        if (attr == MP_QSTR___setattr__ || attr == MP_QSTR___delattr__) {
            return;
        }
        #endif

        mp_obj_t dest2[3];
        mp_load_method_maybe(self_in, MP_QSTR___getattr__, dest2);
        if (dest2[0] != MP_OBJ_NULL) {
            // __getattr__ exists, call it and return its result
            // XXX if this fails to load the requested attr, should we catch the attribute error and return silently?
            dest2[2] = MP_OBJ_NEW_QSTR(attr);
            dest[0] = mp_call_method_n_kw(1, 0, dest2);
            return;
        }
    }
}
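// For illustration only (hypothetical Python, not part of this file), the load
// order implemented above is: instance members first, then class attributes
// (with property/__get__ delegation only when the class has special accessors),
// and finally __getattr__ as a fallback, e.g.
//     class C:
//         def __getattr__(self, name):
//             return "fallback"
//     c = C()
//     c.x = 1
//     c.x   # 1: found in the instance member map
//     c.y   # "fallback": not found anywhere else, so __getattr__ is called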
STATIC bool mp_obj_instance_store_attr(mp_obj_t self_in, qstr attr, mp_obj_t value) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);

    if (!(self->base.type->flags & TYPE_FLAG_HAS_SPECIAL_ACCESSORS)) {
        // Class doesn't have any special accessors so skip their checks
        goto skip_special_accessors;
    }

    #if MICROPY_PY_BUILTINS_PROPERTY || MICROPY_PY_DESCRIPTORS
    // With property and/or descriptors enabled we need to do a lookup
    // first in the class dict for the attribute to see if the store should
    // be delegated.
    mp_obj_t member[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = attr,
        .meth_offset = 0,
        .dest = member,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);

    if (member[0] != MP_OBJ_NULL) {
        #if MICROPY_PY_BUILTINS_PROPERTY
        if (MP_OBJ_IS_TYPE(member[0], &mp_type_property)) {
            // attribute exists and is a property; delegate the store/delete
            // Note: This is an optimisation for code size and execution time.
            // The proper way to do it is have the functionality just below in
            // a __set__/__delete__ method of the property object, and then it
            // would be called by the descriptor code down below. But that way
            // requires overhead for the nested mp_call's and overhead for
            // the code.
            const mp_obj_t *proxy = mp_obj_property_get(member[0]);
            mp_obj_t dest[2] = {self_in, value};
            if (value == MP_OBJ_NULL) {
                // delete attribute
                if (proxy[2] == mp_const_none) {
                    // TODO better error message?
                    return false;
                } else {
                    mp_call_function_n_kw(proxy[2], 1, 0, dest);
                    return true;
                }
            } else {
                // store attribute
                if (proxy[1] == mp_const_none) {
                    // TODO better error message?
                    return false;
                } else {
                    mp_call_function_n_kw(proxy[1], 2, 0, dest);
                    return true;
                }
            }
        }
        #endif

        #if MICROPY_PY_DESCRIPTORS
        // found a class attribute; if it has a __set__/__delete__ method then
        // call it with the class instance (and value) as arguments
        if (value == MP_OBJ_NULL) {
            // delete attribute
            mp_obj_t attr_delete_method[3];
            mp_load_method_maybe(member[0], MP_QSTR___delete__, attr_delete_method);
            if (attr_delete_method[0] != MP_OBJ_NULL) {
                attr_delete_method[2] = self_in;
                mp_call_method_n_kw(1, 0, attr_delete_method);
                return true;
            }
        } else {
            // store attribute
            mp_obj_t attr_set_method[4];
            mp_load_method_maybe(member[0], MP_QSTR___set__, attr_set_method);
            if (attr_set_method[0] != MP_OBJ_NULL) {
                attr_set_method[2] = self_in;
                attr_set_method[3] = value;
                mp_call_method_n_kw(2, 0, attr_set_method);
                return true;
            }
        }
        #endif
    }
    #endif

    #if MICROPY_PY_DELATTR_SETATTR
    if (value == MP_OBJ_NULL) {
        // delete attribute
        // try __delattr__ first
        mp_obj_t attr_delattr_method[3];
        mp_load_method_maybe(self_in, MP_QSTR___delattr__, attr_delattr_method);
        if (attr_delattr_method[0] != MP_OBJ_NULL) {
            // __delattr__ exists, so call it
            attr_delattr_method[2] = MP_OBJ_NEW_QSTR(attr);
            mp_call_method_n_kw(1, 0, attr_delattr_method);
            return true;
        }
    } else {
        // store attribute
        // try __setattr__ first
        mp_obj_t attr_setattr_method[4];
        mp_load_method_maybe(self_in, MP_QSTR___setattr__, attr_setattr_method);
        if (attr_setattr_method[0] != MP_OBJ_NULL) {
            // __setattr__ exists, so call it
            attr_setattr_method[2] = MP_OBJ_NEW_QSTR(attr);
            attr_setattr_method[3] = value;
            mp_call_method_n_kw(2, 0, attr_setattr_method);
            return true;
        }
    }
    #endif

skip_special_accessors:
    if (value == MP_OBJ_NULL) {
        // delete attribute
        mp_map_elem_t *elem = mp_map_lookup(&self->members, MP_OBJ_NEW_QSTR(attr), MP_MAP_LOOKUP_REMOVE_IF_FOUND);
        return elem != NULL;
|
|
|
|
} else {
|
|
|
|
// store attribute
|
2014-04-08 21:11:49 +01:00
|
|
|
mp_map_lookup(&self->members, MP_OBJ_NEW_QSTR(attr), MP_MAP_LOOKUP_ADD_IF_NOT_FOUND)->value = value;
|
|
|
|
return true;
|
|
|
|
}
|
2014-01-09 20:57:50 +00:00
|
|
|
}
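
// Illustrative note (not part of the original source): with
// MICROPY_PY_DELATTR_SETATTR enabled, the code above routes instance attribute
// stores/deletes through user-defined hooks before falling back to the plain
// members map, roughly like this Python sketch:
//
//     class Loud:
//         def __setattr__(self, name, value):
//             print("set", name, value)
//         def __delattr__(self, name):
//             print("del", name)
//
//     obj = Loud()
//     obj.x = 1   # prints "set x 1"; nothing is stored by this __setattr__
//     del obj.x   # prints "del x"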

STATIC void mp_obj_instance_attr(mp_obj_t self_in, qstr attr, mp_obj_t *dest) {
    if (dest[0] == MP_OBJ_NULL) {
        mp_obj_instance_load_attr(self_in, attr, dest);
    } else {
        if (mp_obj_instance_store_attr(self_in, attr, dest[1])) {
            dest[0] = MP_OBJ_NULL; // indicate success
        }
    }
}

STATIC mp_obj_t instance_subscr(mp_obj_t self_in, mp_obj_t index, mp_obj_t value) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);
    mp_obj_t member[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = self,
        .meth_offset = offsetof(mp_obj_type_t, subscr),
        .dest = member,
        .is_type = false,
    };
    size_t meth_args;
    if (value == MP_OBJ_NULL) {
        // delete item
        lookup.attr = MP_QSTR___delitem__;
        mp_obj_class_lookup(&lookup, self->base.type);
        meth_args = 2;
    } else if (value == MP_OBJ_SENTINEL) {
        // load item
        lookup.attr = MP_QSTR___getitem__;
        mp_obj_class_lookup(&lookup, self->base.type);
        meth_args = 2;
    } else {
        // store item
        lookup.attr = MP_QSTR___setitem__;
        mp_obj_class_lookup(&lookup, self->base.type);
        meth_args = 3;
    }
    if (member[0] == MP_OBJ_SENTINEL) {
        return mp_obj_subscr(self->subobj[0], index, value);
    } else if (member[0] != MP_OBJ_NULL) {
        mp_obj_t args[3] = {self_in, index, value};
        // TODO probably need to call mp_convert_member_lookup, and use mp_call_method_n_kw
        mp_obj_t ret = mp_call_function_n_kw(member[0], meth_args, 0, args);
        if (value == MP_OBJ_SENTINEL) {
            return ret;
        } else {
            return mp_const_none;
        }
    } else {
        return MP_OBJ_NULL; // op not supported
    }
}
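
// Illustrative note (not part of the original source): the dispatch above is what
// makes user-defined item access work, roughly:
//
//     class Grid:
//         def __init__(self):
//             self._data = {}
//         def __getitem__(self, key):          # load item
//             return self._data.get(key, 0)
//         def __setitem__(self, key, value):   # store item
//             self._data[key] = value
//         def __delitem__(self, key):          # delete item
//             del self._data[key]
//
//     g = Grid()
//     g["a"] = 5
//     print(g["a"])   # 5
//     del g["a"]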

STATIC mp_obj_t mp_obj_instance_get_call(mp_obj_t self_in, mp_obj_t *member) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = MP_QSTR___call__,
        .meth_offset = offsetof(mp_obj_type_t, call),
        .dest = member,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);
    return member[0];
}

bool mp_obj_instance_is_callable(mp_obj_t self_in) {
    mp_obj_t member[2] = {MP_OBJ_NULL, MP_OBJ_NULL};
    return mp_obj_instance_get_call(self_in, member) != MP_OBJ_NULL;
}

mp_obj_t mp_obj_instance_call(mp_obj_t self_in, size_t n_args, size_t n_kw, const mp_obj_t *args) {
    mp_obj_t member[2] = {MP_OBJ_NULL, MP_OBJ_NULL};
    mp_obj_t call = mp_obj_instance_get_call(self_in, member);
    if (call == MP_OBJ_NULL) {
        if (MICROPY_ERROR_REPORTING == MICROPY_ERROR_REPORTING_TERSE) {
            mp_raise_TypeError("object not callable");
        } else {
            nlr_raise(mp_obj_new_exception_msg_varg(&mp_type_TypeError,
                "'%s' object is not callable", mp_obj_get_type_str(self_in)));
        }
    }
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);
    if (call == MP_OBJ_SENTINEL) {
        return mp_call_function_n_kw(self->subobj[0], n_args, n_kw, args);
    }

    return mp_call_method_self_n_kw(member[0], member[1], n_args, n_kw, args);
}
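
// Illustrative note (not part of the original source): instances become callable
// by defining __call__, which is what the lookup above resolves, e.g.:
//
//     class Adder:
//         def __init__(self, n):
//             self.n = n
//         def __call__(self, x):
//             return x + self.n
//
//     add3 = Adder(3)
//     print(add3(4))   # 7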

STATIC mp_obj_t instance_getiter(mp_obj_t self_in, mp_obj_iter_buf_t *iter_buf) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);
    mp_obj_t member[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = MP_QSTR___iter__,
        .meth_offset = offsetof(mp_obj_type_t, getiter),
        .dest = member,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);
    if (member[0] == MP_OBJ_NULL) {
        return MP_OBJ_NULL;
    } else if (member[0] == MP_OBJ_SENTINEL) {
        mp_obj_type_t *type = mp_obj_get_type(self->subobj[0]);
        return type->getiter(self->subobj[0], iter_buf);
    } else {
        return mp_call_method_n_kw(0, 0, member);
    }
}
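
// Illustrative note (not part of the original source): __iter__ defined on a user
// class is resolved by the lookup above, e.g.:
//
//     class Countdown:
//         def __init__(self, n):
//             self.n = n
//         def __iter__(self):
//             return iter(range(self.n, 0, -1))
//
//     print(list(Countdown(3)))   # [3, 2, 1]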

STATIC mp_int_t instance_get_buffer(mp_obj_t self_in, mp_buffer_info_t *bufinfo, mp_uint_t flags) {
    mp_obj_instance_t *self = MP_OBJ_TO_PTR(self_in);
    mp_obj_t member[2] = {MP_OBJ_NULL};
    struct class_lookup_data lookup = {
        .obj = self,
        .attr = MP_QSTR_, // don't actually look for a method
        .meth_offset = offsetof(mp_obj_type_t, buffer_p.get_buffer),
        .dest = member,
        .is_type = false,
    };
    mp_obj_class_lookup(&lookup, self->base.type);
    if (member[0] == MP_OBJ_SENTINEL) {
        mp_obj_type_t *type = mp_obj_get_type(self->subobj[0]);
        return type->buffer_p.get_buffer(self->subobj[0], bufinfo, flags);
    } else {
        return 1; // object does not support buffer protocol
    }
}

/******************************************************************************/
// type object
//  - the struct is mp_obj_type_t and is defined in obj.h so const types can be made
//  - there is a constant mp_obj_type_t (called mp_type_type) for the 'type' object
//  - creating a new class (a new type) creates a new mp_obj_type_t

#if ENABLE_SPECIAL_ACCESSORS
STATIC bool check_for_special_accessors(mp_obj_t key, mp_obj_t value) {
    #if MICROPY_PY_DELATTR_SETATTR
    if (key == MP_OBJ_NEW_QSTR(MP_QSTR___setattr__) || key == MP_OBJ_NEW_QSTR(MP_QSTR___delattr__)) {
        return true;
    }
    #endif
    #if MICROPY_PY_BUILTINS_PROPERTY
    if (MP_OBJ_IS_TYPE(value, &mp_type_property)) {
        return true;
    }
    #endif
    #if MICROPY_PY_DESCRIPTORS
    static const uint8_t to_check[] = {
        MP_QSTR___get__, MP_QSTR___set__, MP_QSTR___delete__,
    };
    for (size_t i = 0; i < MP_ARRAY_SIZE(to_check); ++i) {
        mp_obj_t dest_temp[2];
        mp_load_method_protected(value, to_check[i], dest_temp, true);
        if (dest_temp[0] != MP_OBJ_NULL) {
            return true;
        }
    }
    #endif
    return false;
}
#endif
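
// Illustrative note (not part of the original source): assuming the relevant
// MICROPY_PY_* options are enabled, any of the following class attributes would
// make check_for_special_accessors() return true and so mark the class with
// TYPE_FLAG_HAS_SPECIAL_ACCESSORS:
//
//     class C:
//         x = property(lambda self: 42)        # a property object
//         def __setattr__(self, name, value):  # a __setattr__/__delattr__ hook
//             pass
//         y = SomeDescriptor()                 # hypothetical object defining
//                                              # __get__/__set__/__delete__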

STATIC void type_print(const mp_print_t *print, mp_obj_t self_in, mp_print_kind_t kind) {
    (void)kind;
    mp_obj_type_t *self = MP_OBJ_TO_PTR(self_in);
    mp_printf(print, "<class '%q'>", self->name);
}

STATIC mp_obj_t type_make_new(const mp_obj_type_t *type_in, size_t n_args, size_t n_kw, const mp_obj_t *args) {
    (void)type_in;

    mp_arg_check_num(n_args, n_kw, 1, 3, false);

    switch (n_args) {
        case 1:
            return MP_OBJ_FROM_PTR(mp_obj_get_type(args[0]));

        case 3:
            // args[0] = name
            // args[1] = bases tuple
            // args[2] = locals dict
            return mp_obj_new_type(mp_obj_str_get_qstr(args[0]), args[1], args[2]);

        default:
            mp_raise_TypeError("type takes 1 or 3 arguments");
    }
}
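
// Illustrative note (not part of the original source): the two supported forms of
// the type() builtin handled above are:
//
//     print(type(123))             # 1 argument: returns the type, <class 'int'>
//     C = type("C", (), {"x": 1})  # 3 arguments: creates a new class dynamically
//     print(C().x)                 # 1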

STATIC mp_obj_t type_call(mp_obj_t self_in, size_t n_args, size_t n_kw, const mp_obj_t *args) {
    // instantiate an instance of a class

    mp_obj_type_t *self = MP_OBJ_TO_PTR(self_in);

    if (self->make_new == NULL) {
        if (MICROPY_ERROR_REPORTING == MICROPY_ERROR_REPORTING_TERSE) {
            mp_raise_TypeError("cannot create instance");
        } else {
            nlr_raise(mp_obj_new_exception_msg_varg(&mp_type_TypeError,
                "cannot create '%q' instances", self->name));
        }
    }

    // make new instance
    mp_obj_t o = self->make_new(self, n_args, n_kw, args);

    // return new instance
    return o;
}

STATIC void type_attr(mp_obj_t self_in, qstr attr, mp_obj_t *dest) {
    assert(MP_OBJ_IS_TYPE(self_in, &mp_type_type));
    mp_obj_type_t *self = MP_OBJ_TO_PTR(self_in);

    if (dest[0] == MP_OBJ_NULL) {
        // load attribute
        #if MICROPY_CPYTHON_COMPAT
        if (attr == MP_QSTR___name__) {
            dest[0] = MP_OBJ_NEW_QSTR(self->name);
            return;
        }
        #endif
        struct class_lookup_data lookup = {
            .obj = (mp_obj_instance_t*)self,
            .attr = attr,
            .meth_offset = 0,
            .dest = dest,
            .is_type = true,
        };
        mp_obj_class_lookup(&lookup, self);
    } else {
        // delete/store attribute

        // TODO CPython allows STORE_ATTR to a class, but is this the correct implementation?

        if (self->locals_dict != NULL) {
            assert(self->locals_dict->base.type == &mp_type_dict); // MicroPython restriction, for now
            mp_map_t *locals_map = &self->locals_dict->map;
            if (locals_map->is_fixed) {
                // can't apply delete/store to a fixed map
                return;
            }
            if (dest[1] == MP_OBJ_NULL) {
                // delete attribute
                mp_map_elem_t *elem = mp_map_lookup(locals_map, MP_OBJ_NEW_QSTR(attr), MP_MAP_LOOKUP_REMOVE_IF_FOUND);
                if (elem != NULL) {
                    dest[0] = MP_OBJ_NULL; // indicate success
                }
            } else {
                #if ENABLE_SPECIAL_ACCESSORS
                // Check if we add any special accessor methods with this store
                if (!(self->flags & TYPE_FLAG_HAS_SPECIAL_ACCESSORS)) {
                    if (check_for_special_accessors(MP_OBJ_NEW_QSTR(attr), dest[1])) {
                        if (self->flags & TYPE_FLAG_IS_SUBCLASSED) {
                            // This class is already subclassed so can't have special accessors added
                            mp_raise_msg(&mp_type_AttributeError, "can't add special method to already-subclassed class");
                        }
                        self->flags |= TYPE_FLAG_HAS_SPECIAL_ACCESSORS;
                    }
                }
                #endif

                // store attribute
                mp_map_elem_t *elem = mp_map_lookup(locals_map, MP_OBJ_NEW_QSTR(attr), MP_MAP_LOOKUP_ADD_IF_NOT_FOUND);
                elem->value = dest[1];
                dest[0] = MP_OBJ_NULL; // indicate success
            }
        }
    }
}
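
// Illustrative note (not part of the original source): the restriction enforced
// above means a class that already has subclasses, and had no special accessors
// when they were created, cannot gain one later, e.g.:
//
//     class A:
//         pass
//     class B(A):                         # A is now flagged as subclassed
//         pass
//
//     A.x = 1                             # ordinary class attribute: fine
//     A.p = property(lambda self: 2)      # AttributeError: can't add special
//                                         #   method to already-subclassed class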

const mp_obj_type_t mp_type_type = {
    { &mp_type_type },
    .name = MP_QSTR_type,
    .print = type_print,
    .make_new = type_make_new,
    .call = type_call,
    .unary_op = mp_generic_unary_op,
    .attr = type_attr,
};

mp_obj_t mp_obj_new_type(qstr name, mp_obj_t bases_tuple, mp_obj_t locals_dict) {
    // Verify input objects have expected type
    if (!MP_OBJ_IS_TYPE(bases_tuple, &mp_type_tuple)) {
        mp_raise_TypeError(NULL);
    }
    if (!MP_OBJ_IS_TYPE(locals_dict, &mp_type_dict)) {
        mp_raise_TypeError(NULL);
    }

    // TODO might need to make a copy of locals_dict; at least that's how CPython does it

    // Basic validation of base classes
    uint16_t base_flags = 0;
    size_t bases_len;
    mp_obj_t *bases_items;
    mp_obj_tuple_get(bases_tuple, &bases_len, &bases_items);
    for (size_t i = 0; i < bases_len; i++) {
        if (!MP_OBJ_IS_TYPE(bases_items[i], &mp_type_type)) {
            mp_raise_TypeError(NULL);
        }
        mp_obj_type_t *t = MP_OBJ_TO_PTR(bases_items[i]);
        // TODO: Verify with CPy, tested on function type
        if (t->make_new == NULL) {
            if (MICROPY_ERROR_REPORTING == MICROPY_ERROR_REPORTING_TERSE) {
                mp_raise_TypeError("type is not an acceptable base type");
            } else {
                nlr_raise(mp_obj_new_exception_msg_varg(&mp_type_TypeError,
                    "type '%q' is not an acceptable base type", t->name));
            }
        }
        #if ENABLE_SPECIAL_ACCESSORS
        if (mp_obj_is_instance_type(t)) {
            t->flags |= TYPE_FLAG_IS_SUBCLASSED;
            base_flags |= t->flags & TYPE_FLAG_HAS_SPECIAL_ACCESSORS;
        }
        #endif
    }

    mp_obj_type_t *o = m_new0(mp_obj_type_t, 1);
    o->base.type = &mp_type_type;
    o->flags = base_flags;
    o->name = name;
    o->print = instance_print;
    o->make_new = mp_obj_instance_make_new;
    o->call = mp_obj_instance_call;
    o->unary_op = instance_unary_op;
    o->binary_op = instance_binary_op;
    o->attr = mp_obj_instance_attr;
    o->subscr = instance_subscr;
    o->getiter = instance_getiter;
    //o->iternext = ; not implemented
    o->buffer_p.get_buffer = instance_get_buffer;

    if (bases_len > 0) {
        // Inherit protocol from a base class. This makes it possible to define
        // an abstract base class that translates a C-level protocol into Python
        // method calls, and any subclass inheriting from it will support this
        // feature.
        o->protocol = ((mp_obj_type_t*)MP_OBJ_TO_PTR(bases_items[0]))->protocol;

        if (bases_len >= 2) {
            #if MICROPY_MULTIPLE_INHERITANCE
            o->parent = MP_OBJ_TO_PTR(bases_tuple);
            #else
            mp_raise_NotImplementedError("multiple inheritance not supported");
            #endif
        } else {
            o->parent = MP_OBJ_TO_PTR(bases_items[0]);
        }
    }

    o->locals_dict = MP_OBJ_TO_PTR(locals_dict);

    #if ENABLE_SPECIAL_ACCESSORS
    // Check if the class has any special accessor methods
    if (!(o->flags & TYPE_FLAG_HAS_SPECIAL_ACCESSORS)) {
        for (size_t i = 0; i < o->locals_dict->map.alloc; i++) {
            if (MP_MAP_SLOT_IS_FILLED(&o->locals_dict->map, i)) {
                const mp_map_elem_t *elem = &o->locals_dict->map.table[i];
                if (check_for_special_accessors(elem->key, elem->value)) {
                    o->flags |= TYPE_FLAG_HAS_SPECIAL_ACCESSORS;
                    break;
                }
            }
        }
    }
    #endif

    const mp_obj_type_t *native_base;
    size_t num_native_bases = instance_count_native_bases(o, &native_base);
    if (num_native_bases > 1) {
        mp_raise_TypeError("multiple bases have instance lay-out conflict");
    }

    mp_map_t *locals_map = &o->locals_dict->map;
    mp_map_elem_t *elem = mp_map_lookup(locals_map, MP_OBJ_NEW_QSTR(MP_QSTR___new__), MP_MAP_LOOKUP);
    if (elem != NULL) {
        // __new__ slot exists; check if it is a function
        if (MP_OBJ_IS_FUN(elem->value)) {
            // __new__ is a function, wrap it in a staticmethod decorator
            elem->value = static_class_method_make_new(&mp_type_staticmethod, 1, 0, &elem->value);
        }
    }

    return MP_OBJ_FROM_PTR(o);
}
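
// Illustrative note (not part of the original source): this constructor runs for
// both "class" statements and 3-argument type() calls.  For example, assuming
// MICROPY_MULTIPLE_INHERITANCE is enabled, deriving from two native types with
// different instance layouts fails:
//
//     class C(int, str):   # TypeError: multiple bases have instance lay-out conflict
//         pass
//
// Also, a plain function stored under __new__ in the class body is wrapped in
// staticmethod above, so it receives the class (not an instance) as its first
// argument when the class is instantiated.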

/******************************************************************************/
// super object

typedef struct _mp_obj_super_t {
    mp_obj_base_t base;
    mp_obj_t type;
    mp_obj_t obj;
} mp_obj_super_t;

STATIC void super_print(const mp_print_t *print, mp_obj_t self_in, mp_print_kind_t kind) {
    (void)kind;
    mp_obj_super_t *self = MP_OBJ_TO_PTR(self_in);
    mp_print_str(print, "<super: ");
    mp_obj_print_helper(print, self->type, PRINT_STR);
    mp_print_str(print, ", ");
    mp_obj_print_helper(print, self->obj, PRINT_STR);
    mp_print_str(print, ">");
}

STATIC mp_obj_t super_make_new(const mp_obj_type_t *type_in, size_t n_args, size_t n_kw, const mp_obj_t *args) {
    (void)type_in;
    // 0 arguments are turned into 2 in the compiler
    // 1 argument is not yet implemented
    mp_arg_check_num(n_args, n_kw, 2, 2, false);
    if (!MP_OBJ_IS_TYPE(args[0], &mp_type_type)) {
        mp_raise_TypeError(NULL);
    }
    mp_obj_super_t *o = m_new_obj(mp_obj_super_t);
    *o = (mp_obj_super_t){{type_in}, args[0], args[1]};
    return MP_OBJ_FROM_PTR(o);
}

STATIC void super_attr(mp_obj_t self_in, qstr attr, mp_obj_t *dest) {
    if (dest[0] != MP_OBJ_NULL) {
        // not load attribute
        return;
    }

    assert(MP_OBJ_IS_TYPE(self_in, &mp_type_super));
    mp_obj_super_t *self = MP_OBJ_TO_PTR(self_in);

    assert(MP_OBJ_IS_TYPE(self->type, &mp_type_type));

    mp_obj_type_t *type = MP_OBJ_TO_PTR(self->type);

    struct class_lookup_data lookup = {
        .obj = MP_OBJ_TO_PTR(self->obj),
        .attr = attr,
        .meth_offset = 0,
        .dest = dest,
        .is_type = false,
    };

    // Allow a call super().__init__() to reach any native base classes
    if (attr == MP_QSTR___init__) {
        lookup.meth_offset = offsetof(mp_obj_type_t, make_new);
    }

    if (type->parent == NULL) {
        // no parents, do nothing
    #if MICROPY_MULTIPLE_INHERITANCE
    } else if (((mp_obj_base_t*)type->parent)->type == &mp_type_tuple) {
        const mp_obj_tuple_t *parent_tuple = type->parent;
        size_t len = parent_tuple->len;
        const mp_obj_t *items = parent_tuple->items;
        for (size_t i = 0; i < len; i++) {
            assert(MP_OBJ_IS_TYPE(items[i], &mp_type_type));
            if (MP_OBJ_TO_PTR(items[i]) == &mp_type_object) {
                // The "object" type will be searched at the end of this function,
                // and we don't want to lookup native methods in object.
                continue;
            }
            mp_obj_class_lookup(&lookup, (mp_obj_type_t*)MP_OBJ_TO_PTR(items[i]));
            if (dest[0] != MP_OBJ_NULL) {
                break;
            }
        }
    #endif
    } else if (type->parent != &mp_type_object) {
        mp_obj_class_lookup(&lookup, type->parent);
    }

    if (dest[0] != MP_OBJ_NULL) {
        if (dest[0] == MP_OBJ_SENTINEL) {
            // Looked up native __init__ so defer to it
            dest[0] = MP_OBJ_FROM_PTR(&native_base_init_wrapper_obj);
            dest[1] = self->obj;
        }
        return;
    }

    // Reset meth_offset so we don't look up any native methods in object,
    // because object never takes up the native base-class slot.
    lookup.meth_offset = 0;

    mp_obj_class_lookup(&lookup, &mp_type_object);
}

const mp_obj_type_t mp_type_super = {
    { &mp_type_type },
    .name = MP_QSTR_super,
    .print = super_print,
    .make_new = super_make_new,
    .attr = super_attr,
};

void mp_load_super_method(qstr attr, mp_obj_t *dest) {
    mp_obj_super_t super = {{&mp_type_super}, dest[1], dest[2]};
    mp_load_method(MP_OBJ_FROM_PTR(&super), attr, dest);
}
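
// Illustrative note (not part of the original source): the lookup above is what a
// zero-argument super() call resolves against (the compiler supplies the type and
// instance), e.g.:
//
//     class Base:
//         def __init__(self):
//             self.ready = True
//
//     class Child(Base):
//         def __init__(self):
//             super().__init__()   # resolved via super_attr(); also reaches a
//             self.extra = 1       # native base class such as list or bytearray
//
//     print(Child().ready)   # True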

/******************************************************************************/
// subclassing and built-ins specific to types

// object and classinfo should be type objects
// (but the function will fail gracefully if they are not)
bool mp_obj_is_subclass_fast(mp_const_obj_t object, mp_const_obj_t classinfo) {
    for (;;) {
        if (object == classinfo) {
            return true;
        }

        // not equivalent classes, keep searching base classes

        // object should always be a type object, but just return false if it's not
        if (!MP_OBJ_IS_TYPE(object, &mp_type_type)) {
            return false;
        }

        const mp_obj_type_t *self = MP_OBJ_TO_PTR(object);

        if (self->parent == NULL) {
            // type has no parents
            return false;
        #if MICROPY_MULTIPLE_INHERITANCE
        } else if (((mp_obj_base_t*)self->parent)->type == &mp_type_tuple) {
            // get the base objects (they should be type objects)
            const mp_obj_tuple_t *parent_tuple = self->parent;
            const mp_obj_t *item = parent_tuple->items;
            const mp_obj_t *top = item + parent_tuple->len - 1;

            // iterate through the base objects
            for (; item < top; ++item) {
                if (mp_obj_is_subclass_fast(*item, classinfo)) {
                    return true;
                }
            }

            // search last base (simple tail recursion elimination)
            object = *item;
        #endif
        } else {
            // type has 1 parent
            object = MP_OBJ_FROM_PTR(self->parent);
        }
    }
}

STATIC mp_obj_t mp_obj_is_subclass(mp_obj_t object, mp_obj_t classinfo) {
    size_t len;
    mp_obj_t *items;
    if (MP_OBJ_IS_TYPE(classinfo, &mp_type_type)) {
        len = 1;
        items = &classinfo;
    } else if (MP_OBJ_IS_TYPE(classinfo, &mp_type_tuple)) {
        mp_obj_tuple_get(classinfo, &len, &items);
    } else {
        mp_raise_TypeError("issubclass() arg 2 must be a class or a tuple of classes");
    }

    for (size_t i = 0; i < len; i++) {
        // We explicitly check for 'object' here since no-one explicitly derives from it
        if (items[i] == MP_OBJ_FROM_PTR(&mp_type_object) || mp_obj_is_subclass_fast(object, items[i])) {
            return mp_const_true;
        }
    }
    return mp_const_false;
}

STATIC mp_obj_t mp_builtin_issubclass(mp_obj_t object, mp_obj_t classinfo) {
    if (!MP_OBJ_IS_TYPE(object, &mp_type_type)) {
        mp_raise_TypeError("issubclass() arg 1 must be a class");
    }
    return mp_obj_is_subclass(object, classinfo);
}

MP_DEFINE_CONST_FUN_OBJ_2(mp_builtin_issubclass_obj, mp_builtin_issubclass);

STATIC mp_obj_t mp_builtin_isinstance(mp_obj_t object, mp_obj_t classinfo) {
    return mp_obj_is_subclass(MP_OBJ_FROM_PTR(mp_obj_get_type(object)), classinfo);
}

MP_DEFINE_CONST_FUN_OBJ_2(mp_builtin_isinstance_obj, mp_builtin_isinstance);
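
// Illustrative note (not part of the original source): the behaviour implemented
// above, seen at the Python level:
//
//     print(issubclass(int, object))     # True ('object' is special-cased)
//     print(isinstance(5, (str, int)))   # True: a tuple of classes is accepted
//     issubclass(1, int)                 # TypeError: issubclass() arg 1 must be a class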

mp_obj_t mp_instance_cast_to_native_base(mp_const_obj_t self_in, mp_const_obj_t native_type) {
    mp_obj_type_t *self_type = mp_obj_get_type(self_in);
    if (!mp_obj_is_subclass_fast(MP_OBJ_FROM_PTR(self_type), native_type)) {
        return MP_OBJ_NULL;
    }
    mp_obj_instance_t *self = (mp_obj_instance_t*)MP_OBJ_TO_PTR(self_in);
    return self->subobj[0];
}

/******************************************************************************/
// staticmethod and classmethod types (probably should go in a different file)

STATIC mp_obj_t static_class_method_make_new(const mp_obj_type_t *self, size_t n_args, size_t n_kw, const mp_obj_t *args) {
    assert(self == &mp_type_staticmethod || self == &mp_type_classmethod);

    mp_arg_check_num(n_args, n_kw, 1, 1, false);

    mp_obj_static_class_method_t *o = m_new_obj(mp_obj_static_class_method_t);
    *o = (mp_obj_static_class_method_t){{self}, args[0]};
    return MP_OBJ_FROM_PTR(o);
}

const mp_obj_type_t mp_type_staticmethod = {
    { &mp_type_type },
    .name = MP_QSTR_staticmethod,
    .make_new = static_class_method_make_new,
};

const mp_obj_type_t mp_type_classmethod = {
    { &mp_type_type },
    .name = MP_QSTR_classmethod,
    .make_new = static_class_method_make_new,
};
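
// Illustrative note (not part of the original source): these wrapper types back
// the @staticmethod and @classmethod decorators, e.g.:
//
//     class Math:
//         @staticmethod
//         def add(a, b):      # called without an implicit first argument
//             return a + b
//
//         @classmethod
//         def make(cls):      # receives the class as the first argument
//             return cls()
//
//     print(Math.add(2, 3))   # 5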