Overview
Multithreading is a technique that allows multiple threads to execute concurrently in software or hardware. Within a single program, multiple threads can run at the same time, each performing an independent task. This lets the program continue with other work while one thread is blocked (e.g., on an I/O operation), improving overall efficiency.
The QuecPython _thread module provides functions for creating and deleting threads, as well as interfaces for mutexes, semaphores, and related operations. QuecPython also provides component modules such as queue, sysbus, and EventMesh to facilitate multithreaded business logic.
Implementation of Multithreading
QuecPython does not implement threads itself. Each QuecPython thread corresponds to one thread in the underlying RTOS and relies on the RTOS for scheduling. So how does the underlying system schedule and execute multiple tasks?
Thread Creation
Python provides a convenient way to create threads, which ignores the configuration of underlying parameters such as stack size and priority, simplifying usage as much as possible. When creating a thread in Python, a task control block (TCB) is generated in the underlying RTOS system for task scheduling and thread resource control.
The default thread stack size is 8 KB. QuecPython also allows the stack size to be configured; it can be queried and set with the _thread.stack_size() interface.
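A minimal sketch of creating a thread and adjusting the stack size, using the `_thread` API (available in both QuecPython and CPython). Note that CPython requires `stack_size` values of 0 or at least 32 KiB, so 64 KiB is used here for portability; real limits are platform-dependent.

```python
import _thread
import time

results = []

def worker(name):
    # Runs in its own thread; the underlying RTOS (or OS) schedules it.
    for i in range(3):
        results.append((name, i))
        time.sleep(0.01)

# Set the stack size applied to subsequently created threads.
# (Platform-dependent; CPython requires 0 or at least 32 KiB.)
_thread.stack_size(64 * 1024)

# start_new_thread() returns the new thread's identifier.
tid = _thread.start_new_thread(worker, ("worker-1",))
time.sleep(0.5)  # crude join: give the thread time to finish
```

The function name `worker` and its arguments are illustrative; only `_thread.stack_size()` and `_thread.start_new_thread()` are the module's own interfaces.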
Thread States
A thread has its own lifecycle, from creation to termination, and is always in one of the following five states: creation, runnable, running, blocked, and dead.
• Creation: The thread is created and initialized to the runnable state.
• Runnable: The thread is in the runnable pool, waiting for CPU usage.
• Running: The thread is running when it obtains CPU execution resources from the runnable state.
• Blocked: The running thread gives up CPU usage for some reason and enters the blocked state. The thread is suspended and does not execute until it enters the runnable state again. This blocking state can be caused by various reasons, such as sleep, semaphore, lock, etc.
• Dead: The thread enters the dead state when it completes execution or terminates abnormally.
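The runnable/running/blocked transitions above can be observed with a lock, a common cause of the blocked state. This is a hypothetical sketch using `_thread.allocate_lock()`; the sleeps are only there to make the interleaving deterministic.

```python
import _thread
import time

lock = _thread.allocate_lock()
states = []

def worker():
    states.append("running")
    lock.acquire()               # blocks: the main thread holds the lock
    states.append("resumed")     # back to running after being unblocked
    lock.release()               # thread ends here -> dead state

lock.acquire()                   # take the lock before starting the worker
_thread.start_new_thread(worker, ())
time.sleep(0.1)                  # worker is now in the blocked state
states.append("worker blocked")
lock.release()                   # worker becomes runnable, then runs again
time.sleep(0.1)
```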
Thread Scheduling Mechanism
Truly simultaneous execution of all threads is an illusion. No matter how powerful the hardware, not every thread can run continuously and exclusively occupy the CPU. With hundreds or thousands of threads, a scheduling algorithm is needed to share the CPU among them and create the effect of multithreading. So how is this scheduling mechanism implemented?
In an RTOS system, common scheduling mechanisms include time-slice round-robin scheduling, priority-based cooperative scheduling, and preemptive scheduling. Generally, multiple scheduling algorithms are used in an RTOS system.
Time-Slice Round-Robin Scheduling
The round-robin scheduling strategy in RTOS allows multiple tasks to be assigned the same priority. The scheduler monitors task time based on the CPU clock. Tasks with the same priority are executed in the order they are assigned. When the time is up, even if the current task is not completed, the CPU time is passed to the next task. In the next allocated time slot, the task continues to execute from where it stopped.
As shown in the following figure, time is divided into time slices based on the CPU tick. After each time slice, the scheduler switches to the next task in the runnable state, and then executes tasks A, B, and C in order.
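The slicing behavior described above can be modeled with a toy scheduler. This is purely an illustration of the algorithm, not an RTOS implementation; the task names and tick counts are made up.

```python
from collections import deque

def round_robin(tasks, slice_ticks):
    # tasks: {name: remaining_ticks}, in ready-queue order.
    # Returns the execution timeline as (name, ticks_run) per time slice.
    ready = deque(tasks.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        run = min(slice_ticks, remaining)   # run until the slice expires
        timeline.append((name, run))
        if remaining > run:                 # not finished: back of the queue
            ready.append((name, remaining - run))
    return timeline

timeline = round_robin({"A": 3, "B": 5, "C": 2}, slice_ticks=2)
# A, B, C each get a 2-tick slice in turn; unfinished tasks rejoin the queue
# and resume from where they stopped.
```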
Priority-Based Cooperative Scheduling
Priority-based cooperative scheduling in RTOS is a non-preemptive scheduling method based on priorities. Tasks are sorted by priority and are event-driven. Once a running task completes or voluntarily gives up CPU usage, the highest priority task in the runnable state can obtain CPU usage.
As shown in the following figure, under priority-based cooperative scheduling, while task A is executing, an interrupt event occurs and its high-priority handler task runs immediately. After the interrupt task completes or yields the CPU, the scheduler switches back to task A, the highest-priority runnable task. After task A completes or yields the CPU, the scheduler switches to task B.
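A toy model of the non-preemptive, priority-ordered selection described above (priorities and task names are invented for illustration): once a task starts, it runs to completion; the scheduler only picks the next-highest-priority runnable task at completion points.

```python
import heapq

def cooperative_priority(ready):
    # ready: list of (priority, name); a lower number means higher priority.
    # Non-preemptive: each task runs until it completes or yields, then the
    # highest-priority task still in the runnable state gets the CPU.
    heap = list(ready)
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)      # task runs to completion here
    return order

order = cooperative_priority([(2, "A"), (3, "B"), (1, "C")])
# C has the highest priority (lowest number), so it runs first.
```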
Preemptive Scheduling
RTOS ensures real-time performance through preemptive scheduling. In order to ensure task responsiveness, in preemptive scheduling, as long as a higher priority task becomes runnable, the currently running lower priority task will be switched out. Through preemption, the currently running task is forced to give up the CPU, even if the task is not completed.
As shown in the following figure, under preemptive scheduling, while task A is executing, a higher-priority task C becomes runnable and the scheduler immediately switches to it. After task C completes or yields the CPU, the scheduler switches to task B, the highest-priority task in the runnable state. After task B completes or yields the CPU, task A resumes execution.
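The preemption sequence described above (A preempted by C, then B, then A resumes) can be reproduced with a tick-by-tick toy scheduler. Arrival times, priorities, and durations are assumptions chosen to match the figure's scenario.

```python
import heapq

def preemptive_schedule(arrivals, horizon):
    # arrivals: {tick: (priority, name, duration)}; lower number = higher
    # priority. At every tick the highest-priority runnable task runs,
    # preempting whatever was running before.
    ready, timeline = [], []
    for tick in range(horizon):
        if tick in arrivals:
            heapq.heappush(ready, arrivals[tick])
        if not ready:
            continue
        prio, name, remaining = heapq.heappop(ready)
        timeline.append(name)            # this task owns the CPU for one tick
        if remaining > 1:
            heapq.heappush(ready, (prio, name, remaining - 1))
    return timeline

# A starts first; C (highest priority) arrives and preempts it, then B runs,
# and finally A resumes and finishes.
timeline = preemptive_schedule(
    {0: (3, "A", 4), 1: (1, "C", 2), 2: (2, "B", 2)}, horizon=8)
```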
Thread Context Switching
In actual multithreaded execution, multiple threads are kept running by constantly switching between them. So how does the thread scheduling and switching process work? How is the execution state restored after switching?
When the operating system needs to run other tasks, it first saves the contents of the registers related to the current task to the stack of the current task. Then, it retrieves the previously saved contents of all registers from the stack of the task to be loaded and loads them into the corresponding registers, so that the execution of the loaded task can continue. This process is called thread context switching.
Thread context switching incurs additional overhead, including the overhead of saving and restoring thread context information, the CPU time overhead for thread scheduling, and the CPU cache invalidation overhead.
Thread Cleanup
A thread is the smallest scheduling unit of the system, and the creation and release of system threads require corresponding resource creation and cleanup.
To simplify usage, QuecPython automatically releases thread resources after a thread finishes running, so you do not need to clean them up yourself. If one thread needs to terminate another, you can use the _thread.stop_thread(thread_id) interface to stop the specified thread by its thread ID.
When using _thread.stop_thread(thread_id) to forcefully terminate a thread and release its resources, check whether that thread holds locks, owns memory allocations, or has other resources that must be released by the user; otherwise, deadlocks or memory leaks may occur.
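Because forceful termination can strand a held lock, a common alternative is cooperative shutdown: the thread checks a stop flag at a point where it holds no locks and returns normally, letting the automatic cleanup run. This is a hypothetical pattern sketch; the flag and log names are invented.

```python
import _thread
import time

stop_flag = False
lock = _thread.allocate_lock()
log = []

def worker():
    while True:
        with lock:              # lock is taken and released each iteration
            log.append("work")
        if stop_flag:           # cooperative exit point: no lock held here
            log.append("clean exit")
            return              # normal return -> resources reclaimed safely
        time.sleep(0.01)

_thread.start_new_thread(worker, ())
time.sleep(0.05)
stop_flag = True                # ask the thread to finish instead of killing it
time.sleep(0.05)
```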
QuecPython Multithreading
QuecPython multithreading relies on the underlying system's scheduling and adds a GIL (global interpreter lock) on top of it.
Python is an interpreted language: to keep interpreter state consistent, only one thread in a process may execute Python bytecode at any given moment. The GIL enforces this serialization and prevents shared resources from being corrupted under multithreaded conditions.
QuecPython defines the main thread, Python sub-threads, and interrupt/callback threads on top of the system threads and fixes their priorities. The priority of the main thread (REPL interactive thread) is lower than that of Python sub-threads, and the priority of Python sub-threads is lower than that of interrupt/callback threads.
The QuecPython thread-switching process is shown in the figure below. While task A is executing, once it releases the GIL, execution switches to the higher-priority interrupt task C. After task C releases the GIL, the higher-priority task B acquires it and executes. After task B releases the GIL, task A continues. To prevent the GIL from blocking high-priority tasks and to keep scheduling flexible, QuecPython automatically releases the GIL after a thread has executed for a certain number of steps, allowing the system to reschedule.
Inter-thread Communication & Resource Sharing
Inter-thread communication refers to the techniques used to transmit data or signals between two or more threads. In multithreaded programs, inter-thread communication is essential for controlling thread execution, coordinating access to shared resources, and passing messages.
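A common message-passing pattern is a producer/consumer queue. QuecPython ships its own queue component; this sketch uses the CPython-style `queue.Queue`, whose blocking `get()` behaves similarly, and the message names are invented.

```python
import _thread
import queue
import time

q = queue.Queue()
received = []

def consumer():
    while True:
        msg = q.get()        # blocks until a message arrives
        if msg is None:      # sentinel value: producer asks us to exit
            return
        received.append(msg)

_thread.start_new_thread(consumer, ())
for i in range(3):
    q.put("msg-%d" % i)      # each put wakes the blocked consumer
q.put(None)                  # tell the consumer to stop
time.sleep(0.2)              # give the consumer time to drain the queue
```

Blocking on the queue also serves as a clean way to park a thread until work arrives, rather than polling with sleep.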
Common Issues
1. Thread Creation Failure
In most cases, thread creation fails due to insufficient memory. You can use _thread.get_heap_size() to check the available memory and reduce the thread stack size to lower memory consumption. However, the stack must still be large enough for the thread's workload, or the program may crash.
2. How to Release Thread Resources to Prevent Memory Leaks
Threads can be terminated in two ways. You can use the _thread.stop_thread(thread_id) interface to stop a thread from outside. Alternatively, a thread exits automatically when it finishes running, and the system reclaims its resources.
3. Thread Deadlock Issues
Deadlock occurs when multiple threads each hold a resource and wait for the others to release theirs; since neither side gives way, none of them can proceed.
To prevent deadlocks, make sure that lock acquire and release calls are used in pairs and avoid nested locking where possible. Use _thread.stop_thread(thread_id) with caution to avoid terminating a thread while it holds a lock.
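When nested locking is unavoidable, one standard remedy is a global lock ordering: every thread acquires locks in the same order, so circular waits cannot form. A minimal sketch (the lock and function names are hypothetical):

```python
import _thread

lock_a = _thread.allocate_lock()
lock_b = _thread.allocate_lock()

def update_shared_state():
    # Every thread acquires A before B. If another thread took B first and
    # then waited for A, the two could block forever -- the classic deadlock.
    with lock_a:            # "with" guarantees release even on exceptions
        with lock_b:
            return "done"

result = update_shared_state()
```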
4. How to Wake Up Blocked Threads
For threads that need to be blocked, you can use inter-thread communication to wake them up. Avoid using sleep or other methods that cannot be interrupted.
5. Thread Priority
QuecPython uses fixed priorities: the main thread (REPL interactive thread) has the lowest priority, followed by Python sub-threads, and then interrupt/callback threads. All user-created Python sub-threads share the same priority.
6. How to Keep Threads Alive
QuecPython currently does not provide daemon threads or thread status query interfaces. If you need to keep a thread alive, you can implement a custom mechanism. For example, you can count the usage of the thread that needs to be kept alive and ensure that the thread performs a certain action within a certain period of time. If the action is not completed, the thread is cleared and recreated.
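The feed-and-check mechanism described above can be sketched as a small software watchdog. The class and method names here are hypothetical; a supervisor thread would poll `expired()` and recreate the monitored thread when it fires.

```python
import time

class SoftWatchdog:
    # Keep-alive helper: the monitored thread calls feed() periodically;
    # a supervisor calls expired() and recreates the thread if the
    # deadline has passed.
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_fed = time.time()

    def feed(self):
        self.last_fed = time.time()

    def expired(self):
        return time.time() - self.last_fed > self.timeout

wd = SoftWatchdog(timeout=0.1)
wd.feed()                        # monitored thread checks in
alive_before = not wd.expired()
time.sleep(0.2)                  # thread failed to feed within the deadline
alive_after = not wd.expired()   # supervisor would now recreate the thread
```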
7. Thread Infinite Loop
QuecPython is compatible with multiple platforms, and an infinite loop in a thread that never sleeps or yields may prevent other threads from running or cause a system watchdog timeout and crash.
8. Thread Stack Exhaustion
QuecPython has different default thread stack sizes on different platforms, typically 2 KB or 8 KB. When the thread workload is large, stack overflow may occur, leading to unpredictable crashes. Therefore, consider whether the default stack size meets your requirements and use the _thread.stack_size() interface to check and set the stack size.
9. Thread Safety
Thread safety is a concept in multithreaded programming. In a program where multiple threads share data, thread-safe code uses synchronization mechanisms to ensure that each thread executes correctly, without data corruption or other unexpected issues. QuecPython supports mutex locks, semaphores, and other mechanisms for data protection; use them as needed when sharing data across threads.
10. Comparison of Interrupts, Callbacks, Python Sub-threads, and the Main Thread (REPL Interactive Thread) Priority
Interrupts/callbacks depend on their triggering threads, and different interrupts/callbacks have different trigger objects, so their priorities cannot be determined.
Main thread (REPL interactive thread) priority < Python sub-thread priority < Interrupt/callback thread priority.
11. Is Time-Slice Round-Robin Scheduling Supported for Threads with the Same Priority?
Time-slice round-robin scheduling is supported, but with limitations. Due to the Python global interpreter lock (GIL) mechanism, a scheduled thread must first acquire the GIL before it can execute. Other threads can only run after the current Python thread finishes or releases the GIL.
12. Is Thread Priority Configuration Supported?
Thread priority configuration is not supported.
This article was originally published by DEV Community and written by QuecPython.