This commit implements fairly complete support for the DMA controller in
the rp2 series of microcontrollers. It provides a class for accessing the
DMA channels through a high-level, Pythonic interface, and functions for
setting and manipulating the DMA channel configurations.
Creating an instance of the rp2.DMA class claims one of the processor's DMA
channels. A sensible, per-channel default value for the ctrl register can
be fetched from the DMA.pack_ctrl() function, and the components of this
register can be set via keyword arguments to pack_ctrl().
The read, write, count and ctrl attributes of the DMA class provide
read/write access to the respective registers of the DMA controller. The
config() method allows any or all of these values to be set simultaneously
and adds a trigger keyword argument to allow the setup to immediately be
triggered. The read and write attributes (or keywords in config()) accept
either actual addresses or any object that supports the buffer interface.
The active() method provides read/write control of the channel's activity,
allowing the user to start and stop the channel and test if it is running.
Standard MicroPython interrupt handlers are supported through the irq()
method and the channel can be released either by deleting it and allowing
it to be garbage-collected or with the explicit close() method.
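For example, a completion interrupt might be hooked up like this (a
minimal sketch; the handler body is illustrative):

import rp2

dma = rp2.DMA()

def on_complete(d):
    # d is the DMA instance that raised the interrupt
    print("DMA transfer complete")

dma.irq(handler=on_complete)
# ... configure and trigger the transfer ...
dma.close()  # explicitly release the channel when finished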
Direct, unfettered access to the DMA controller's registers is provided
through a proxy memoryview() object returned by the DMA.registers attribute
that maps directly onto the memory-mapped registers. This is necessary for
more fine-grained control and is helpful for allowing chaining of DMA
channels.
As a simple example, using DMA to do a fast memory copy just needs:
import rp2

src = bytearray(32*1024)
dest = bytearray(32*1024)
dma = rp2.DMA()
dma.config(read=src, write=dest, count=len(src) // 4,
           ctrl=dma.pack_ctrl(), trigger=True)
# Wait for completion
while dma.active():
    pass
This API aims to strike a balance between simplicity and comprehensiveness.
Signed-off-by: Nicko van Someren <nicko@nicko.org>
Signed-off-by: Damien George <damien@micropython.org>
This allows following good practice and having libraries live in the lib
folder, which means they will be found by the runtime without the path
having to be added manually.
Signed-off-by: Sebastian Romero <s.romero@arduino.cc>
When compiling with distcc, distcc does not understand the -MD flag on its
own. This fixes the interaction by explicitly adding the -MF option.
The error in distcc is described here under "Problems with gcc -MD":
https://www.distcc.org/faq.html
Signed-off-by: Peter Züger <zueger.peter@icloud.com>
The calculation of the lfs2 cache_size was incorrect; the maximum allowed
size is block_size.
The cache size must be: "a multiple of the read and program sizes, and a
factor of the block size".
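In other words, a candidate cache size has to satisfy the following (a
minimal sketch of the constraint; the function and parameter names are
illustrative):

def cache_size_ok(cache_size, read_size, prog_size, block_size):
    # "a multiple of the read and program sizes, and a factor of the
    # block size"
    return (cache_size % read_size == 0
            and cache_size % prog_size == 0
            and block_size % cache_size == 0)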
Signed-off-by: Peter Züger <zueger.peter@icloud.com>
MicroPython code may rely on the return value of sys.stdout.buffer.write()
to reflect the number of bytes actually written. While in most scenarios a
write() operation is successful, there are cases where it fails, leading to
data loss. This problem arises because, currently, write() merely returns
the number of bytes it was supposed to write, without indication of
failure.
One scenario where write() might fail is when USB is used and the
receiving end doesn't read quickly enough to empty the receive buffer. In
that case, write() on the MicroPython side can timeout, resulting in the
loss of data without any indication, a behavior observed notably in
communication between a Pi Pico as a client and a Linux host using the ACM
driver.
A complex issue arises with mp_hal_stdout_tx_strn() when it involves
multiple outputs, such as USB, dupterm and hardware UART. The challenge is
in handling cases where writing to one output is successful, but another
fails, either fully or partially. This patch implements the following
solution:
mp_hal_stdout_tx_strn() attempts to write len bytes to all of the possible
destinations for that data, and returns the minimum successful write
length.
The implementation of this is complicated by several factors:
- multiple outputs may be enabled or disabled at compile time
- multiple outputs may be enabled or disabled at runtime
- mp_os_dupterm_tx_strn() is one such output, optionally containing
multiple additional outputs
- each of these outputs may or may not be able to report success
- each of these outputs may or may not be able to report partial writes
As a result, there's no single strategy that fits all ports, necessitating
unique logic for each instance of mp_hal_stdout_tx_strn().
Note that addressing sys.stdout.write() is more complex due to its data
modification process ("cooked" output), and it remains unchanged in this
patch. Developers who are concerned about accurate return values from
write operations should use sys.stdout.buffer.write().
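With this change, callers can detect short writes and react; a minimal
sketch (the data and recovery policy are illustrative):

import sys

data = b"important bytes"
n = sys.stdout.buffer.write(data)
if n != len(data):
    # short write: only data[:n] was accepted by all outputs, so
    # handle or retry data[n:] as appropriate
    pass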
This patch might disrupt some existing code, but it's also expected to
resolve issues, considering that the peculiar return value behavior of
sys.stdout.buffer.write() is not well-documented and likely not widely
known. Therefore, it's improbable that much existing code relies on the
previous behavior.
Signed-off-by: Maarten van der Schrieck <maarten@thingsconnected.nl>
In case of multiple outputs, the minimum successful write length is
returned. In line with this, in case any output has a write error, zero is
returned.
In case of no outputs, -1 is returned.
The return value can be used to assess whether writes were attempted, and
if so, whether they succeeded.
Signed-off-by: Maarten van der Schrieck <maarten@thingsconnected.nl>
This adds an `add_library(name, path)` method for use in manifest.py that
allows registering an external path (e.g. to another repo) by name.
This name can then be passed to `require("package", library="name")` to
reference packages in that repo/library rather than micropython-lib.
Within the external library, `require()` continues to work as normal
(referencing micropython-lib) by default, but they can also specify the
library name to require another package from that repo/library.
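A manifest.py using this might look like the following (a sketch; the
library name, path and package name are illustrative):

# Register an external repo/library under a name...
add_library("my-lib", "/path/to/my-lib")
# ...then pull a package from it instead of micropython-lib.
require("some-package", library="my-lib")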
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
The poll_obj_t instances have their pollfd field point into this
allocation. So if re-allocating results in a move, we need to update the
existing poll_obj_t's.
Update the test to cover this case.
Fixes issue #12887.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This adds support to stm32's mboot for the Microsoft WCID USB 0xee string
and Compatible ID Feature Descriptor. This allows the USB device to
automatically set the default USB driver, so that when the device is
plugged in Windows will assign the winusb driver to it. This means that
USB DFU mode can be used without installing any drivers.
For example this page will work (allow the board to be updated over DFU)
with zero install: https://devanlai.github.io/webdfu/dfu-util/
Tested on Windows 10, Windows can read the 0xee string correctly, and
requests the second special descriptor, which then configures the USB
device to use the winusb driver.
Signed-off-by: Damien George <damien@micropython.org>
It looks like these never worked and there are no tests for this
functionality. Furthermore, CPython doesn't support this.
Fixes #12995.
Signed-off-by: Damien George <damien@micropython.org>
This gets back the old heap-size behaviour on ESP32, before auto-split-heap
was introduced: after the heap is grown one time the size is 111936 bytes,
with about 40k left for the IDF. That's enough to start WiFi and do an
HTTPS request.
Signed-off-by: Damien George <damien@micropython.org>
There are two main changes here to improve the calculation of the size of
the next heap area when automatically expanding the heap:
- Compute the existing total size by counting the total number of GC
blocks, and then using that to compute the corresponding number of bytes.
- Round the bytes value up to the nearest multiple of BYTES_PER_BLOCK.
This makes the calculation slightly simpler and more accurate, and makes
sure that, in the case of growing from one area to two areas, the number
of bytes allocated from the system for the second area is the same as the
first. For example on esp32 with an initial area size of 65536 bytes, the
subsequent allocation is also 65536 bytes. Previously it was a number that
was not even a multiple of 2.
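The round-up step is the usual integer trick (a sketch in Python; the
names are illustrative, not the actual C code):

def round_up(n, bytes_per_block):
    # Round n up to the nearest multiple of bytes_per_block.
    return (n + bytes_per_block - 1) // bytes_per_block * bytes_per_block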
Signed-off-by: Damien George <damien@micropython.org>
Add the new board Silicognition RP2040-Shim: an RP2040 with 4 MB of flash
and the W5500 drivers included, configured by default for use with the
Silicognition PoE-FeatherWing.
Co-authored-by: Matt Trentini <matt.trentini@gmail.com>
Signed-off-by: Patrick Van Oosterwijck <patrick@silicognition.com>
Some boards have multiple options for these pins, and they don't want to
allow users to initialize a port without explicitly specifying pin numbers.
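On such a board, users would construct the peripheral with explicit pins,
along the lines of the following sketch (the peripheral, pin numbers and
baud rate are illustrative):

from machine import UART, Pin

# Pins must be given explicitly; UART(0) alone would be rejected.
uart = UART(0, baudrate=115200, tx=Pin(0), rx=Pin(1))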
Signed-off-by: Paul Grayson <paul@pololu.com>
The esp8266 doesn't need the ets task because the notify is now scheduled
(see commits 7d57037906 and c60caf1995 for relevant history).
Signed-off-by: Damien George <damien@micropython.org>
In "cat" mode, output was written to a file named "out", then moved to the
location of the real output file. There was no reason for this.
While makeqstrdefs.py does make an effort to not update the timestamp on an
existing output file that has not changed, the intermediate "out" file
isn't part of that process.
Signed-off-by: Trent Piepho <tpiepho@gmail.com>
In "cat" mode a "$output_file.hash" file is checked to see if the hash of
the new output is the same as the existing, and if so the output file isn't
updated.
However, it's possible that the output file has been deleted but the hash
file has not. In this case the output file is not created.
Change the logic so that a hash file is considered stale when there is no
output file, ensuring the output still gets created.
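The corrected check might be sketched as follows (illustrative names, not
the actual makeqstrdefs.py code):

import os

def output_is_current(output_file, hash_file, new_hash):
    if not os.path.exists(output_file):
        return False  # a hash without its output file is stale
    try:
        with open(hash_file) as f:
            return f.read().strip() == new_hash
    except OSError:
        return False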
Signed-off-by: Trent Piepho <tpiepho@gmail.com>
These are produced by the "cat" command to makeqstrdefs.py, to allow it to
not update unchanged files. cmake doesn't know about them and so they are
not removed on a "clean".
This triggered a bug in makeqstrdefs.py where it would not recreate a
deleted output file (which is removed by clean) if a stale hash file with a
valid hash still existed.
Listing them as byproducts will cause them to be deleted on clean.
Signed-off-by: Trent Piepho <tpiepho@gmail.com>
Add `load_cert_chain`, `load_verify_locations`, `get_ciphers` and
`set_ciphers` SSLContext methods in the ssl library, and update the
asyncio `open_connection` and `start_server` methods with ssl support.
Signed-off-by: Carlos Gil <carlosgilglez@gmail.com>
This adds asyncio ssl support with SSLContext and the corresponding
tests in `tests/net_inet` and `tests/multi_net`.
Note that not doing the handshake on connect will delegate the handshake to
the following `mbedtls_ssl_read/write` calls. However, if the handshake
fails because a required client certificate is not presented by the peer,
the peer needs to be notified of the handshake error (otherwise it would
hang until its timeout, if any). Finally, the MicroPython side raises the
proper mbedtls error code and message.
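Client-side usage might look like this (a sketch; the host, port and
certificate handling are illustrative, and verification is disabled here
only for brevity):

import ssl
import asyncio

async def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # example only; verify in production
    reader, writer = await asyncio.open_connection("example.com", 443, ssl=ctx)
    writer.write(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    await writer.drain()
    print(await reader.read(64))
    writer.close()
    await writer.wait_closed()

asyncio.run(main())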
Signed-off-by: Carlos Gil <carlosgilglez@gmail.com>
Fixes two issues:
- None should not be allowed in the list, otherwise the corresponding entry
in ciphersuites[i] will have an undefined value.
- The terminating 0 needs to be put in ciphersuites[len].
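Expressed as Python pseudocode for the corrected C logic (names are
illustrative):

def pack_ciphersuites(names, lookup_id):
    ids = []
    for name in names:
        cs_id = lookup_id(name)
        if cs_id is None:
            # Reject unknown names instead of leaving an undefined entry.
            raise ValueError("unknown ciphersuite: " + name)
        ids.append(cs_id)
    ids.append(0)  # mbedtls expects the list to be 0-terminated
    return ids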
Signed-off-by: Damien George <damien@micropython.org>
Changes are:
- use ssl.SSLContext.wrap_socket instead of ssl.wrap_socket
- disable check_hostname and call load_default_certs() where appropriate,
to get CPython to run the tests correctly
- pass socket.AF_INET to getaddrinfo and socket.socket(), to force IPv4
- change tests to use github.com instead of google.com, because certificate
validation was failing with google.com
Signed-off-by: Damien George <damien@micropython.org>
And only enable this method when the relevant feature is available in
mbedtls. Otherwise, if mbedtls doesn't support getting the peer
certificate, this method always returns None and it's confusing why it does
that. It's better to remove the method altogether, so the error trying to
use it is more obvious.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds:
1) Methods to SSLContext class that match CPython signature:
- `SSLContext.load_cert_chain(certfile, keyfile)`
- `SSLContext.load_verify_locations(cafile=, cadata=)`
- `SSLContext.get_ciphers()` --> ["CIPHERSUITE"]
- `SSLContext.set_ciphers(["CIPHERSUITE"])`
2) `sslsocket.cipher()` to get the current ciphersuite and protocol
version.
3) `ssl.MBEDTLS_VERSION` string constant.
4) Certificate verification error information, instead of just
`MBEDTLS_ERR_X509_CERT_VERIFY_FAILED`.
5) Tests in `net_inet` and `multi_net` to test these new methods.
The `SSLContext.load_cert_chain` method allows loading a key and cert from
disk, passing a file path in the `certfile` or `keyfile` options.
`SSLContext.load_verify_locations`'s `cafile` option enables the same
functionality for CA files.
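Putting these together (a sketch; the file paths and ciphersuite name are
illustrative):

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("certs/server.crt", "certs/server.key")
ctx.load_verify_locations(cafile="certs/ca.crt")
print(ctx.get_ciphers())  # e.g. a list of ciphersuite name strings
ctx.set_ciphers(["TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256"])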
Signed-off-by: Carlos Gil <carlosgilglez@gmail.com>
Also, IDF v5.1.2 is now supported, just not used by default.
IDF v5.0.2 still builds but we cannot guarantee continued support for this
version moving forward.
Signed-off-by: IhorNehrutsa <IhorNehrutsa@gmail.com>