pyRTOS

Introduction

pyRTOS is a real-time operating system (RTOS), written in Python. The primary goal of pyRTOS is to provide a pure Python RTOS that will work in CircuitPython. The secondary goal is to provide an educational tool for advanced CircuitPython users who want to learn to use an RTOS. pyRTOS should also work in MicroPython, and it can be used in standard Python as well.

pyRTOS was modeled after FreeRTOS, with some critical differences. The biggest difference is that it uses a voluntary task preemption model, where FreeRTOS generally enforces preemption through timer interrupts. This means there is a greater onus on the user to ensure that all tasks are well behaved. pyRTOS also uses different naming conventions, and tasks have built in message passing.

To the best of my knowledge, aside from voluntary preemption, the task scheduling is identical to that found in FreeRTOS. Tasks are assigned numerical priorities, the lower the number the higher the priority, and the highest priority ready task is given CPU time, where ties favor the currently running task. Alternative scheduling algorithms may be added in the future.

Basic Usage

pyRTOS separates functionality into tasks. A task is similar to a thread in a desktop operating system, except that in pyRTOS tasks cannot be migrated to other processors or cores. This is due to limitations with CircuitPython. In theory, though, it should be possible to write a scheduler with thread migration, for MicroPython, which does support hardware multithreading.

A simple pyRTOS program will define some task functions, wrap them in Task objects, and then register them with the OS using the add_task() API function. Once all tasks are added, the start() function is used to start the RTOS.

Once started, the RTOS will schedule time for tasks, giving tasks CPU time based on a priority scheduling algorithm. When the tasks are well behaved, designed to work together, and given the right priorities, the operating system will orchestrate them so they work together to accomplish whatever goal the program was designed for.

See sample.py for an example task and usage.
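
For orientation, here is also a minimal sketch of that structure (a made-up "hello" task, not taken from sample.py):

import pyRTOS

def hello_task(self):
	# Setup code runs once, before the first yield.
	count = 0
	yield	# Hand control back to the scheduler.

	# Main task loop
	while True:
		count += 1
		print("hello", count)
		yield [pyRTOS.timeout(1)]	# Block for a second so lower priority tasks can run.

pyRTOS.add_task(pyRTOS.Task(hello_task, priority=1, name="hello"))
pyRTOS.start()	# Only returns if every task terminates.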

Tasks

A pyRTOS task is composed of a Task object combined with a function containing the task code. A task function takes a single argument, a reference to the Task object containing it. Task functions are Python generators. Any code before the first yield is setup code. Anything returned by this yield will be ignored. The main task loop should follow this yield. This is the code that will be executed when the scheduler gives the task CPU time.

The main task loop is typically an infinite loop. If the task needs to terminate, a return call should be used, and any teardown that is necessary should be done directly before returning. Typically though, tasks never return.

Preemption in pyRTOS is completely voluntary. This means that all tasks must periodically yield control back to the OS, or no other task will get CPU time, messages cannot be passed between tasks, and other administrative duties of the OS will never get done. Yields have two functions in pyRTOS. One is merely to pass control back to the OS. This allows the OS to reevaluate task priorities and pass control to a higher priority ready task, and it allows the OS to take care of administration like message passing, lock handling, and such. Yields should be fairly frequent but not so frequent that the program spends more time in the OS than in tasks. For small tasks, once per main loop may be sufficient. For larger tasks, yields should be placed between significant subsections. If a task has a section of timing dependent code though, do not place yields in places where they could interrupt timing critical processes. There is no guarantee a yield will return within the required time.

Yields are also used to make certain blocking API calls. The most common will likely be delays. Higher priority tasks need to be especially well behaved, because even frequent yields will not give lower priority tasks CPU time. The default scheduler always gives CPU time to the highest priority ready task. The only way lower priority tasks ever get time is if higher priority tasks block when they do not need the CPU. Typically this means blocking delays, which are accomplished in pyRTOS by yielding with a timeout generator. When the timeout generator expires, the task will become ready again, but until then, lower priority tasks will be allowed to have CPU time. Tasks can also block when waiting for messages or mutual exclusion locks. In the future, more forgiving non-real-time schedulers may be available.

There are also some places tasks should always yield. Whenever a message is passed, it is placed on a local queue. Messages in the local task outgoing queue are delivered when that task yields. Other places where yielding is necessary for an action to resolve will be noted with the documentation on those actions.

Messages

Message passing mechanics are built directly into tasks in pyRTOS. Each task has its own incoming and outgoing mailbox. Messages are delivered when the currently running task yields. This message passing system is fairly simple. Each message has a single sender and a single recipient. Messages also have a type, which can be pyRTOS.QUIT or a user defined type (see sample.py). User defined types start with integer values of 128 and higher. Types below 128 are reserved for future use by the pyRTOS API. Messages can also contain a message, but this is not required. If the type field is sufficient to convey the necessary information, it is better to leave the message field empty, to save memory. The message field can contain anything, including objects and lists. If you need to pass arguments into a new task, one way to do this is to call deliver() on the newly created task object, with a list or tuple of arguments. This will add the arguments to the task's message queue, allowing it to access the arguments during initialization.

Checking messages is a critical part of any task that may receive messages. Unchecked message queues can accumulate so many messages that your system runs out of memory. If your task may receive messages, it is important to check the messages every loop. Also be careful not to send low priority tasks too many messages without periodically blocking all higher priority tasks, so they can have time to process their message queues. If a task that is receiving messages never gets CPU time, that is another way to run out of memory.

Messages can be addressed with a reference to the target task object or with the name of the object. Names can be any sort of comparable data, but numbers are the most efficient, while strings are the most readable. Object reference addressing must target an object that actually exists, otherwise the OS will crash. Also note that keeping references of terminated tasks will prevent those tasks from being garbage collected, creating a potential memory leak. Object references are the fastest message addressing method, and they may provide some benefits when debugging, but it's up to the user to understand and avoid the associated hazards. Name addressing is much safer; however, messages addressed to names that are not among the existing tasks will silently fail to be delivered, making certain bugs harder to find. In addition, because name addresses require finding the associated object, name addressed messages will consume significantly more CPU time to deliver.

sample.py has several examples of message passing.

pyRTOS API

Main API

add_task(task)

    This adds a task to the scheduler. Tasks that have been created but not added will never run. This can be useful, if you want to create a task and then add it at some time in the future, but in general, tasks are created and then added to the scheduler before the scheduler is started.

    task - a Task object

    Note that add_task() will automatically initialize any task that has not previously been initialized. This is important to keep in mind, because initializing a task manually after adding it to the scheduler may cause serious problems, if the initialization code cannot safely be run more than once.

start(scheduler=None)

    This begins execution. This function will only return when all tasks have terminated. In most cases, tasks will not terminate and this will never return.

    scheduler - When this argument is left with its default value, the default scheduler is used. Since no other schedulers currently exist, this is really only useful if you want to write your own scheduler. Otherwise, just call start() without an argument. This should be called only after you have added all tasks. Additional tasks can be added while the scheduler is running (within running tasks), but this should generally be avoided. (A better option, if you need to have a task that is only activated once some condition is met, is to create the task and then immediately suspend it. This will not prevent the initialization code from running though. If you need to prevent initialization code from running until the task is unsuspended, you can place the first yield in the task before initialization instead of after.)
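
    A sketch of the create-then-suspend pattern just described (worker_task here is a hypothetical task function, resumed later by some other task holding a reference to it):

    worker = pyRTOS.Task(worker_task, priority=2, name="worker")
    pyRTOS.add_task(worker)   # Runs the task's setup code (see Task.initialize() below).
    worker.suspend()          # Parked until something calls worker.resume().
    pyRTOS.start()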

class Mutex()

    This is a simple mutex with priority inheritance.

    Mutex.lock(task)

      This will attempt to acquire the lock on the mutex, with a blocking call. Note that because this is a blocking call, the returned generator must be passed to a yield in a list, e.g. yield [mutex.lock(self)].

      task - The task requesting the lock.

    Mutex.nb_lock(task)

      This nonblocking lock will attempt to acquire the lock on the mutex. It will return True if the lock is successfully acquired, otherwise it will immediately return False.

      task - The task requesting the lock.

    Mutex.unlock()

      Use this to release the lock on the mutex. If the mutex is not locked, this will have no effect. Note that there is no guard to prevent a mutex from being unlocked by some task other than the one that acquired it, so it is up to the user to make sure a mutex locked in one task is not accidentally unlocked in some other task.

class BinarySemaphore()

    This is another simple mutex, but unlike Mutex(), it uses request order priority. Essentially, this is a first-come-first-served mutex.

    BinarySemaphore.lock(task)

      This will attempt to acquire the lock on the mutex, with a blocking call. Note that because this is a blocking call, the returned generator must be passed to a yield in a list, e.g. yield [mutex.lock(task)].

      task - The task requesting the lock.

    BinarySemaphore.nb_lock(task)

      This nonblocking lock will attempt to acquire the lock on the mutex. It will return True if the lock is successfully acquired, otherwise it will immediately return False.

      task - The task requesting the lock.

    BinarySemaphore.unlock()

      Use this to release the lock on the mutex. If the mutex is not locked, this will have no effect. Note that there is no guard to prevent a BinarySemaphore() from being unlocked by some task other than the one that acquired it, so it is up to the user to make sure a binary semaphore locked in one task is not accidentally unlocked in some other task. When this is called, if there are other tasks waiting for this lock, the first of those to have requested it will acquire the lock.
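
    A hedged usage sketch inside a task function (io_lock is a hypothetical BinarySemaphore shared between tasks, for example passed in with Task.deliver()):

    yield [io_lock.lock(self)]   # Block until this task is first in the request queue.
    ### Code using the shared resource here
    io_lock.unlock()             # The earliest waiting requester, if any, acquires the lock.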

Task API

class Task(func, priority=255, name=None)

    Task functions must be wrapped in Task objects that hold some context data. This object keeps track of task state, priority, name, blocking conditions, and incoming and outgoing message queues. It also handles initialization, transition to blocking state, and message queues. The Task object also provides some utility functions for tasks.

    func - This is the actual task function. This function must have the signature func_name(self), and the function must be a generator. The self argument is a reference to the Task object wrapping the function, and it will be passed in when the task is initialized. See sample.py for an example task function.

    priority - This is the task priority. The lower the value, the higher the priority of the task. The range of possible values depends on the system, but priority values are typically kept between 0 and somewhere in the range of 8 to 32, depending on the number of tasks. The default of 255 is assumed to be far lower priority than any sane developer would ever use, effectively making the default the lowest possible priority. Normally, each task should have a unique priority. If multiple tasks have the same priority, and no higher priority task is ready, whichever is already running will be treated as the higher priority task so long as it remains the running task. Tasks may be given the same priority, if this behavior is useful.

    name - Naming tasks can make message passing easier. See Basic Usage > Messages above for the pros and cons of using names. If you do need to use names, using integer values will use less memory and give better performance than strings, but strings can be used for readability, if memory and performance are not an issue.

    Task.initialize()

      This will initialize the task function, to obtain the generator and run any setup code (code before the first yield). Note that this passes self into the task function, to make the following methods of Task available to the task. This can be run explicitly. If it is not, it will be run when the task is added to the scheduler using add_task(). In most cases, it is not necessary to manually initialize tasks, but if there are strict ordering and timing constraints between several tasks, manual initialization can be used to guarantee that these constraints are met. If a task is manually initialized, add_task() will not attempt to initialize it again.
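
      For example, if one task's setup code must complete before another's, both can be initialized explicitly, in order, before being added (producer_task and consumer_task are hypothetical task functions):

      producer = pyRTOS.Task(producer_task, priority=1, name="producer")
      consumer = pyRTOS.Task(consumer_task, priority=2, name="consumer")

      producer.initialize()       # Producer's setup code runs first,
      consumer.initialize()       # then consumer's.

      pyRTOS.add_task(producer)   # Already initialized, so add_task() will not rerun setup.
      pyRTOS.add_task(consumer)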

    Task.send(msg)

      Put a Message object in the outgoing message queue. Note that while it is possible to call this with any kind of data without an immediate exception, the message passing code in the OS will throw an exception if it cannot find a target member within the data, and well behaved tasks will throw an exception if there is no type member. Also note that sent messages will remain in the outgoing message queue until the next yield. Unless there is some good reason not to, it is probably a good idea to yield immediately after any message is sent. (The exception is, if the task needs to send out messages to multiple targets before giving up the CPU, send all of the messages, then yield.)

    Task.recv()

      This returns the incoming message queue and clears it. This should be called regularly by any task that messages may be sent to, to prevent the queue from accumulating so many messages that the device runs out of memory. Note that because messages are distributed by the OS, once a task has called this, no new messages will be added to the incoming queue until a yield has allowed some other task to run. (This means that if this is the highest priority task, and it issues a non-blocking yield, no other task will have a chance to send a message. Thus high priority tasks should issue blocking yields, typically timeouts, periodically, to allow lower priority tasks some CPU time.)

    Task.message_count()

      This returns the number of messages in the incoming queue.

    Task.deliver(msg)

      This adds a message to the incoming queue. This should almost never be called directly. The one exception is that this can be used to pass arguments into a task, in the main thread, before the scheduler is started. Once the scheduler is started, messages should be passed exclusively through the OS, and this should never be called directly. Note also that a message passed this way does not need to be a message object. If you are using this to pass in arguments, use whatever sort of data structure you want, but make sure that the task expects it. (If you deliver your arguments to the task before initialization, you can use self.recv() in the initialization code to retrieve them.)
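
      A sketch of argument passing with deliver(), assuming the task's setup code retrieves the tuple with args = self.recv()[0] (as in the Task Template below); the pin and address values are hypothetical:

      sensor = pyRTOS.Task(sensor_task, priority=2, name="sensor")
      sensor.deliver((sda_pin, scl_pin, 0x48))   # Queue the arguments before initialization.
      pyRTOS.add_task(sensor)                    # Setup code can now read them with self.recv().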

    Task.suspend()

      Puts the task into the suspended state. Suspended tasks do not run while they are suspended. Unlike blocked tasks, there are no conditions for resuming a suspended task. Suspended tasks are only returned to a ready state when they are explicitly resumed. Note that suspension is cheaper than blocking, because suspended tasks do not have conditions that need to be evaluated regularly. Also note that suspending a blocked task will clear all blocking conditions.

    Task.resume()

      Resumes the task from a suspended state. This can also be used to resume a blocked task. Note that using this on a blocked task will clear all blocking conditions. resume() should not be used on the running task. Doing so will change the state to ready, telling the OS that the task is not running when it is running. Under the default scheduler, this is unlikely to cause serious problems, but the behavior of a running task that is in the ready state is undefined and may cause issues with other schedulers.
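
      For example, a controlling task holding a reference to another task (logger here is a hypothetical Task object, kept from task creation or received via deliver()) could wake it like this:

      if error_detected:        # Hypothetical condition set elsewhere in this task.
          logger.resume()       # Return the suspended logger task to the ready state.
      yield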

Task Block Conditions

Task block conditions are generators that yield True if their conditions are met or False if they are not. When a block condition yields True, the task blocked by it is unblocked and put into the ready state.

A task is blocked when a yield returns a list of block conditions. When any condition in that list returns True, the task is unblocked. This allows any blocking condition to be paired with a timeout() condition, to unblock it when the timeout expires, even if the main condition is not met. For example, yield [wait_for_message(self), timeout(5)] will block until there is a message in the incoming message queue, but it will timeout after 5 seconds and return to ready state, even if no message arrives.

Note that blocking conditions must be returned as lists, even if there is only one condition. Thus, for a one second blocking delay, use yield [timeout(1)].

timeout(seconds)

    By itself, this blocks the current task for the specified amount of time. This does not guarantee that the task will begin execution as soon as the time has elapsed, but it does guarantee that it will not resume until that time has passed. If this task is higher priority than the running task and all other ready tasks, then this task will resume as soon as control is passed back to the scheduler and the OS has completed its maintenance.

    When combined with other blocking conditions, this will act as a timeout. Because only one condition must be met to unblock, when this evaluates to true, the task will unblock even if other blocking conditions are not met.

    seconds - The number of seconds, as a floating point value, to delay.

timeout_ns(nanoseconds)

    This is exactly like timeout(), except the argument specifies the delay in nanoseconds. Note that the precision of this condition is dependent on the clock speed of your CPU, in addition to the limitations affecting timeout().

    nanoseconds - The number of nanoseconds, as an integer value, to delay.

delay(cycles)

    This delay is based on OS cycles rather than time. This allows for delays that are guaranteed to allow a specific number of cycles for other tasks to run. This can be especially useful in cases where it is known that a specific task will take priority during the delay and that task is doing something that will require a known number of cycles to complete. (Note that a cycle lasts from one yield to the next, rather than going through the full loop of a task.)

wait_for_message(self)

    This blocks until a message is added to the incoming message queue for this task. self should be the Task object of the calling task.

UFunction

    It is also possible to create your own blocking conditions. User defined blocking conditions must follow the same pattern as API defined conditions. Blocking conditions are generator functions that yield True or False. They must be infinite loops, so they never throw a StopIteration exception. The initial call to the function can take one or more arguments. Subsequent calls to the generator may take arguments (using the generator send() function) but must not require arguments. The scheduler will never pass arguments when testing blocking conditions. In general, it is probably better to use global variables or passed in objects for tracking and controlling state than it is to create conditions that can take arguments in the generator calls.

    User defined blocking conditions are used exactly like API blocking conditions. They are passed into a yield, in a list.
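
    As an illustration, a hedged sketch of a user defined condition (not part of the pyRTOS API) that unblocks when a shared flag object is set:

    def wait_for_flag(flag):
        # flag is assumed to be any object with a boolean "set" attribute.
        while True:
            yield flag.set

    A task would block on it like any other condition, optionally paired with a timeout, e.g. yield [wait_for_flag(shared_flag), pyRTOS.timeout(2)].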

Message API

class Message(type, source, target, message=None)

    The Message object is merely a container with some header data and a message. The message element is optional, as in many cases the type can be used to convey everything necessary.

    type - Currently only one built in type exists: QUIT. Types are used to convey what the message is about. In many cases, type may convey sufficient information to make the message element unnecessary. Type values from 0 to 127 are reserved for future use, while higher values are available for user defined types. Note that type can also be used to communicate the format of the data passed in the message element.

    source - This is the sender of the message. It is essentially a "from" field. This is critical in messages requesting data from another task, so that task will know where to send that data. When no response is expected, and the target task does not need to know the source, this is less important, but it is probably good practice to be honest about the source anyway, just in case it is eventually needed. This can be set to self or self.name.

    target - This specifies the target task. This is essentially the "to" field for the message. This can be a direct object reference or the name of the target object. See Basic Usage > Messages above for the pros and cons of using names versus objects.

    message - This is the message to be passed. By default this is None, because in many cases type is sufficient to convey the desired information. message can be any kind of data or data structure. If type is not empty, type may be used to communicate the structure or format of the data contained in message.

class MessageQueue(capacity=10)

    The MessageQueue object is a FIFO queue for tasks to communicate with each other. Any task with a reference to a MessageQueue can add messages to the queue and take messages from it. Both blocking and nonblocking calls are provided for these.

    capacity - By default, the maximum number of messages allowed on the queue is 10. If the queue is full and a task attempts to push another message onto it, the task will block if the blocking call is used, otherwise the call will just fail. This can be used to limit how much memory is spent keeping track of messages.

    MessageQueue.send(msg)

      This is a blocking send. If the queue is full, this will block until the message can be added.

      msg - The message can be any kind of data. No destination or source needs to be specified, but messages can contain that information if necessary.

      Keep in mind that blocking functions return generators that must be passed into a yield in a list, thus a message would be sent with yield [queue.send(msg)].

    MessageQueue.nb_send(msg)

      This is nonblocking send. If the queue is full, this will return False. Otherwise the message will be added to the queue and this will return True.

      msg - The data to be put on the queue.

    MessageQueue.recv(out_buffer)

      This is a blocking receive. If the queue is empty it will block until a message is added. When a message is available, it will append that message to out_buffer.

      out_buffer - This should be a list or some list-like data container with an append() method. When this method unblocks, the message will be deposited in this buffer.

    MessageQueue.nb_recv()

      This is the nonblocking receive. It will return a message, if there is one in the queue, or it will return None otherwise.
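
      A sketch of the nonblocking calls inside a task loop (temp_queue is a hypothetical shared MessageQueue, handle_reading a hypothetical helper):

      msg = temp_queue.nb_recv()          # Returns None when the queue is empty.
      while msg is not None:
          handle_reading(msg)
          msg = temp_queue.nb_recv()

      if not temp_queue.nb_send(new_reading):
          pass                            # Queue full; drop the reading or retry on a later pass.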

Templates & Examples

Task Template

def task(self):

	# Uncomment this to get argument list passed in with Task.deliver()
	# (If you do this, it will crash if no arguments are passed in
	# prior to initialization.)
	# args = self.recv()[0]

	### Setup code here



	### End Setup code

	# Pass control back to RTOS
	yield

	# Main Task Loop
	while True:
		### Work code here



		### End Work code
		yield # (Do this at least once per loop)

Message Handling Example Template

msgs = self.recv()
for msg in msgs:
	if msg.type == pyRTOS.QUIT:
		# If your task should never return, remove this section
		### Tear Down code here



		### End Tear Down Code
		return
	elif msg.type == TEMP:
		# TEMP is a user defined integer constant larger than 127
		# Temperature data will be in msg.message
		### Code here



		### End Code

# This will silently throw away messages that are not
# one of the specified types, unless you add an else.

Timeout & Delay Examples

Delay for 0.5 seconds

yield [pyRTOS.timeout(0.5)]

Delay for 100 nanoseconds

yield [pyRTOS.timeout_ns(100)]

Delay for 10 OS cycles (other tasks must yield 10 times, unless all other tasks are suspended or blocked)

yield [pyRTOS.delay(10)]

Message Passing Examples

Send Message

Send temperature of 45 degrees to display task (TEMP constant is set to some value > 127)

self.send(pyRTOS.Message(TEMP, self, "display", 45))

This message will be delivered at the next yield.

Read Message

Instruct the hum_read task to read the humidity sensor and send back the result, then wait for a message to arrive (READ_HUM constant is set to some value > 127)

self.send(pyRTOS.Message(READ_HUM, self, "hum_read"))
yield [pyRTOS.wait_for_message(self)]

Message Queue Examples

Create MessageQueue

Create a MessageQueue and pass it into some newly created tasks, so it can be retrieved during initialization of the tasks

display = pyRTOS.Task(display_task, priority=1, name="display")
tsensor = pyRTOS.Task(tsensor_task, priority=2, name="tsensor")

temp_queue = pyRTOS.MessageQueue(capacity=4)

display.deliver(temp_queue)
tsensor.deliver(temp_queue)

pyRTOS.add_task(display)
pyRTOS.add_task(tsensor)

Write MessageQueue

Write the temperature to a MessageQueue (if the queue is full, this will block until it has room)

yield [temp_queue.send(current_temp)]

Read MessageQueue

Read the temperature from a MessageQueue (if the queue is empty, this will block until a message is added)

temp_buffer = []
yield [temp_queue.recv(temp_buffer)]

temp = temp_buffer.pop()

Mutex Examples

Create Mutex

Create a Mutex and pass it into some newly created tasks

temp_printer = pyRTOS.Task(temp_task, priority=3, name="temp_printer")
hum_printer = pyRTOS.Task(hum_task, priority=3, name="hum_printer")

print_mutex = pyRTOS.Mutex()

temp_printer.deliver(print_mutex)
hum_printer.deliver(print_mutex)

Use Mutex

Use a mutex to avoid collisions when printing multiple lines of data (Note that this is only necessary when at least one task yields within the code that needs lock protection; since no preemption occurs without a yield, code that does not yield cannot be interrupted by another task.)

yield [print_mutex.lock(self)]

print("The last five temperature readings were:")

for temp in temps:
	print(temp, "C")

print_mutex.unlock()

Future Additions

Mutual Exclusion

We currently have a Mutex object (with priority inheritance), but this isn't really a complete set of mutual exclusion tools. FreeRTOS has Binary Semaphores, Counting Semaphores, and Recursive Mutexes. Because this uses voluntary preemption, these are not terribly high priority, as tasks can just not yield during critical sections, rather than needing to use mutual exclusion. There are still cases where mutual exclusion is necessary though. This includes things like locking external hardware that has time consuming I/O, where we might want to yield for some time to allow the I/O to complete, without allowing other tasks to tamper with that hardware while we are waiting. In addition, some processors have vector processing and/or floating point units that are slow enough to warrant yielding while waiting, without giving up exclusive access to those units. The relevance of these is not clear in the context of Python, but we definitely want some kind of mutual exclusion.

In FreeRTOS, Mutexes have a priority inheritance mechanic. By default, this is also true in pyRTOS, because blocking conditions are checked in task priority order. Binary semaphores are effectively mutexes without priority inheritance. How would we handle request order based locks? I suppose we could have a queue in the semaphore that keeps track of who asked first and prioritizes in that order. This would be significantly more expensive than priority inheritance, but it shouldn't be too hard to do.

Would spinlocks be relevant/useful in a single threaded, voluntary preemption system?

FreeRTOS

We need to look through the FreeRTOS documentation, to see what other things a fully featured RTOS could have.

Size

Because this is intended for use on microcontrollers, size is a serious concern. The code is very well commented, but this means that comments take up a very significant fraction of the space. We are releasing in .mpy format for CircuitPython now, which cuts the size down to around 5KB. Maybe we should include a source version with comments stripped out in future releases.

Notes

This needs more extensive testing. The Mutex class has not been tested. We also need more testing on block conditions. sample.py uses wait_for_message() twice, successfully. timeout() is also tested in sample.py.

What we really need is a handful of example problems, including some for actual CircuitPython devices. When the Trinkey RP2040 comes out, there will be plenty of room for some solid CircuitPython RTOS example programs. I have a NeoKey Trinkey and a Rotary Trinkey. Neither of these has much going on, so they are really only suitable for very simple examples.

Comments
  • Using PyRTOS on windows. High CPU level

    Hi, like I've written in the title I'm using PyRTOS on windows. I've been successful in this, I've created a basic task that prints hello world every 5 seconds.

    But even if the task is lightweight, the CPU consumption of the Python process is about 7% in Task Manager, and the overall CPU consumption goes from 2-3% (when the program is not running) to 17-20% (when PyRTOS is running).

    I know that PyRTOS is an embedded system, and its main use is for embedded platforms, but I'd like to use it to create a collection of scripts that are always running on my systems and, at a certain event (for example at a certain hour), trigger a certain action. Basically I'd like to use it to automate a few things on my PC. So I'd keep PyRTOS always running, executing it at system startup, and create a few tasks that run with a certain frequency.

    I'd like to know if there's a way to decrease the impact of PyRTOS on the CPU, also because I guess that this drains a lot of battery over time. Can we do something, or is that high CPU percentage just due to the RTOS doing heavy work under the hood?

    opened by FraH90 6
  • Task Notifications

    FreeRTOS has task notifications: https://www.freertos.org/RTOS-task-notifications.html

    These are essentially triggers a task can block on, that another task can trigger to unblock the task. Task notifications in FreeRTOS have an associated 32 bit value that the triggering task can set or increment. A task can only block on one notification at a time, but tasks can have multiple notifications. (The pyRTOS blocking mechanism allows for blocking on multiple conditions, but you won't know which one you unblocked on without checking them all.) In FreeRTOS the number of notifications per task is set by a constant, but in Python we could set this globally or task-by-task, at runtime.

    What are the advantages of notifications over messages? I've been wanting to make light tasks that don't have a built in incoming message queue. Notifications could provide a lighter message passing mechanic for light tasks. It might even be worth making light, notification-only tasks the default, and have tasks with built in mailboxes as a heavier alternative. The big difference is that messages have a return address, while notifications do not.

    We might even be able to use an inheritance mechanism to allow users to create tasks with only the features they need.

    Anyhow, the question here is, is there really a need for task notifications? They shouldn't be hard to implement, but if there isn't much interest, I will prioritize other features.

    (Also, since this is intended for microcontrollers with limited resources, I don't want to add a lot of features that will take up space but rarely see use. FreeRTOS can get away with it, because it is written in a compiled language, where the compiler/linker can skip parts that are not used in the immediate application. We can't do that with CircuitPython.)

    Next Major Version 
    opened by Rybec 5
  • Add CHANGELOG.md

    CHANGELOG.md will be for documenting changes between major versions. Specifically, it will explain how to update code written in one major version to work in the next.

    The big things it needs for the v0 to v1 transition are the removal of anonymous Mutex locking, the demotion of Task Mailboxes to an opt-in feature, and the addition of the Task Notification opt-in feature. (Tasks are now lightweight by default.)

    Necessary changes to user code are passing in self when attempting to lock a Mutex and setting the mailbox keyword argument to True when creating Tasks that need to receive messages.

    Next Major Version 
    opened by Rybec 2
  • Service Routines

    A major limitation of a voluntary preemption OS is the inability to handle interrupts in a reliable manner. If you have multiple top priority tasks, you have to carefully manage delays on them to make sure they all get time, and ideally you want a strict task hierarchy, with only one task per priority level. So how do we manage urgent responses, when we have these limitations?

    Service routines are micro-tasks that run every time the OS gets control. This means any time a task yields, all service routines will be run before the CPU is turned over to the next task. Like ISRs in a traditional RTOS, service routines should be very small and very fast. They can be used to check inputs or timers, to serve a similar role to ISRs, or they can handle I/O tasks that you don't want to block but that take some CPU time to complete. An example use might be handling network I/O in a system where multiple tasks need to generate and receive network traffic. A service routine could be used to route incoming traffic to the right tasks and handle things like task priority in outgoing traffic. Service routines are intended to be used to extend the OS, rather than provide a different kind of task.

    There is one major difference between tasks and service routines: Service routines are intended to run to completion then return, rather than yielding, so that context does not need to be retained. (There may be some exceptions...read on...)

    Design questions that need consideration: Should service routines be functions or objects? Functions are lighter weight, but service routines that need to retain context might be better as objects, since functions would need external data storage.

    How about this: Service routines must be directly callable. The default is functions to minimize weight, but if you really need a service routine that retains context, rather than using global variables, you are encouraged to use a generator that runs all of the loop code before yielding. I don't think Python will care about the difference. A callable is a callable. (But, service routine generators should never return, or a StopIteration exception will be thrown, and I am not going to add exception handling for stuff like this, because space is a premium.)

    opened by Rybec 2
  • Mutex() cannot be used reliably with other block conditions

    When blocking on Mutex.lock() combined with other conditions, it is impossible to know which condition unblocked the task. This means that blocking mutex cannot be used reliably with other conditions.

    This can be fixed by changing Mutex() so that it knows who currently uses the lock. Unfortunately, the fix for this would break the API, so it must wait for a new major version.

    We should be able to add behavior allowing safe use of Mutex(), without breaking the API, but the old behavior will still be broken, as it provides no way of knowing who got the lock.

    Don't close this issue until the broken behavior is completely removed.

    Next Major Version 
    opened by Rybec 2
  • trying pyRTOS

    Not an issue, just here to say great project! I currently use https://github.com/cognitivegears/CircuitPython_uschedule and would like to make my blocking code (batches of slow I2C communications mostly) more granular. I will give pyRTOS a try.

    opened by durapensa 2
  • Documentation does not clearly explain that pyRTOS.start() does not normally return

        You're welcome, thanks to you too for helping me reduce CPU usage!
    

    I've tested it using the new version, but it still does the same. You need to start the scheduler after you've defined the service routine, otherwise the s.r. won't start. To me it's not a big issue; maybe you just need to document this behaviour, that service routines should be defined before starting the scheduler. This is the code I'm executing, if you want to take a look:

    import pyRTOS
    import time
    
    
    def setup():
        pass
    
    
    def thread_loop():
    	print("Hello world")
    
    
    
    # self is the thread object this runs in
    def task(self):
    
    	### Setup code here
    
    	setup()
    
    	### End Setup code
    
    	# Pass control back to RTOS
    	yield
    
    	# Thread loop
    	while True:
    
    
    		# Remember to yield once in a while (to give control back to the OS)
    
    		### Start Work code
    		thread_loop()
    		### End Work code
    
    		# Adjust the timing here (in seconds) to fix the interval between each
        	# re-wake of the thread (the os will automatically wake it every tot time)
    		# THIS IS A BLOCKING DELAY! TASK EXECUTION IS BLOCKED FOR THIS TIME
    		yield [pyRTOS.timeout(5)]
    
    
    
    # Now we create the task
    # OSS: This is the entry point of the file. Execution starts here.
    # The name of the task you need to pass as first parameter is the name
    # of the function that implements the task. In this case, it's 
    # the "task()" function implemented above.
    # Mailboxes (for messages) are disabled
    pyRTOS.add_task(pyRTOS.Task(task, name="task1"))
    
    # Let's add a service routine that implements a 1ms delay every time the scheduler
    # is called, in order to slow down the execution, and having a minor impact on CPU
    pyRTOS.start()
    pyRTOS.add_service_routine(lambda: time.sleep(0.1))
    
    

    PS: Another hint I give you is that you should publish the code on PyPI, so we can install it with pip. It's true that this is an embedded platform, but it could become useful on Windows/Mac/Linux too. I've put the pyRTOS folder into the site-packages folder of my main Python installation, so I can import pyRTOS whenever I want.

    Originally posted by @FraH90 in https://github.com/Rybec/pyRTOS/issues/11#issuecomment-1264666757

    opened by Rybec 1
  • Update Rotary Trinkey Test/Example Code

    The Rotary Trinkey code is now broken due to changes to the default settings for Task objects. It needs to be updated to work with the new mechanics.

    Doing this will also provide some testing for the new mechanics, especially for Task Notifications. This is the final step before building the 1.0.0 release! Once this is done, we can build the 1.0.0 .mpy for pyRTOS, test it, and then hopefully release it.

    Should perhaps add a CHANGELOG.md document with a section on how to update code written for one major version to work for the next...

    Next Major Version 
    opened by Rybec 1
  • question: How to introduce non-blocking delays on nested methods within main loop

    Hi @Rybec !

    I apologize if I missed this in your documentation (and for asking this in an issue), but I've been running around in circles without finding any solution for this specific situation. If we are nesting code in methods and calling those methods in the main loop of a pyRTOS task, how can we yield to the RTOS within those nested methods?

    For eg:

    def test():
        print('test 1')
    
        # Do some delay here within this method but without blocking the whole CPU
        yield [pyRTOS.timeout(5)] # This doesn't work obviously, but just for the example
    
    print('test 2 after x seconds')
    
    def task(self):
        # Pass control back to RTOS
        yield
    
        # Thread loop
        while True:
            test()
    
        yield [pyRTOS.timeout(0.5)]
    opened by fred-cardoso 6
  • MicroPython doesn't have time.monotonic() or time.monotonic_ns()

    Hi!

    Not sure if you are supporting MicroPython or just targeting CircuitPython with this library. I found out that MicroPython does not have the monotonic() method in the time module. I changed the monotonic call to time and monotonic_ns to time_ns, and I was able to make it run. Not sure if these timers are good enough for this purpose; I haven't checked that.

    Any tip?

    documentation 
    opened by fred-cardoso 3
  • Roadmap to pyRTOS 2.0

    I've looked through most of the FreeRTOS documentation, and it is pretty light. Most of the remaining FreeRTOS features that pyRTOS lacks either can already be achieved easily using Python built-ins or cannot be achieved at all in CircuitPython, due to lack of support for interrupts and parallelism. Python is an incredibly powerful language that quite frankly already covers much of the peripheral functionality found in FreeRTOS. In fact, some pyRTOS functionality might be redundant and should perhaps be removed in the next major version.

    There are a few places where there is potential for improvement. These are task blocking conditions, documentation, sample code, and synchronization/mutual exclusion.

    Task blocking for I2C and SPI isn't going to work. Adafruit's libraries don't seem to provide any way to check if an I2C or SPI command has been fully sent. Further, most I2C and SPI communication is abstracted at least one additional level, with device driver libraries that also provide no way to tell if a command has finished sending. This is probably not terribly important though, because I2C and SPI are pretty fast, and if you are using Python, you probably don't care about performance at this resolution. There are other places where task blocking is valuable. Many are device or application dependent and thus should be implemented by the user, but if there are general cases with broad application, I would be happy to add them. To be clear, I don't have any right now, so this won't be going on the roadmap. Please open an issue if you have a general blocking condition that you feel should be part of the OS, but note that anything highly dependent on device or application will not be considered.

    There is significant potential for improvement in the documentation. Some places could benefit from suggested use cases for elements, and there is a ton of room for example code. It would probably also be good to add a section explaining how to use Python built-ins to achieve functionality normally provided by the RTOS, especially if existing functionality is going to be removed in favor of using built-ins.

    We need more sample code. The one piece of official sample code is little more than a start. It has no practical value. The one upside it has is that it runs in Python 3 on normal personal computers, which makes it good for testing during development. We need a bit more generic Python 3 sample code, for more comprehensive testing. We also need a selection of sample code designed for Adafruit devices using CircuitPython. Currently, all we have is a sample program for the Rotary Trinkey, which merely changes LED colors, based on two touch inputs (one of which is a through hole pad for a rotary encoder). We can do better. I've got NeoKey Trinkeys and Neo Trinkeys, as well as a Trinkey QT2040. These are all valid targets for pyRTOS, so maybe a selection of a few programs for each would provide a solid suite of sample programs.

    Lastly, I haven't completed implementing all of the mutual exclusion devices found in FreeRTOS. Specifically, I have not implemented Counting Semaphores or Recursive Mutexes. I am not sure Recursive Mutexes are sufficiently valuable to be worth adding, but I'll look into applications for them and add them if I feel they are worth the cost in space and memory. It might also be worth adding Reader/Writer locks. I've implemented them before using Python's built-in semaphores, and if they are sufficiently cheap, it might be worth adding them.

    Also, Python has built-in mutual exclusion devices. I'm not sure if CircuitPython includes these, however if it does, it might be a good idea to replace the current pyRTOS mechanics with Python's built-in mechanics, wrapped to provide blocking conditions.

    So here is the current roadmap:

    • [ ] Add Counting Semaphores and maybe Recursive Mutexes*
    • [ ] Look into adding Reader/Writer Locks*
    • [ ] Improve documentation with example use cases, example code, and a section on Python built-ins
    • [ ] Write more sample programs (which can also be used for testing)
    • [ ] Look for elements of pyRTOS that are redundant due to Python built-in elements, and remove them (pyRTOS 2.0)

    (* Possibly using Python built-in mutual exclusion devices and changing existing pyRTOS mutual exclusion to use Python built-ins.)

    I don't have any sort of time schedule for this. pyRTOS 1.0 is already pretty solid. Hopefully I can get all of this done in around a month, but I make no promises. If you would like to accelerate this process, consider donating, DONATIONS.md.

    opened by Rybec 0
  • Add more I/O blocking conditions

    Currently users who want slow, hardware managed I/O to block, so other tasks can run, have to write their own blocking conditions. Very common types of I/O should have blocking conditions built into pyRTOS.

    I need to look into how CircuitPython handles things like I2C and SPI. If they use similar APIs, it might be possible to make a single condition that works for both.

    What other kinds of I/O would benefit from blocking conditions? LCD and Matrix display rendering? Do the APIs for those provide any way of checking whether a rendering operation has finished? If not, how hard would it be to add that?

    These are the kinds of I/O I am aware of that might benefit from this:

    • SPI
    • I2C
    • UART
    • I2S
    • SDIO

    Would higher level communication, like rendering, be included in the lower level stuff above, or would it require its own blocking conditions? If it requires its own blocking conditions, perhaps this issue should be limited to low level I/O and we should have a separate issue for higher level I/O.

    opened by Rybec 0