But is an RTOS always necessary? The answer is application-specific, so understanding what one will deliver is key to determining whether it becomes a requirement or an extravagance.
In general, an RTOS can be used anywhere a non-RTOS is employed. However, it’s rare to find an operating system and a matching RTOS that share exactly the same application programming interface (API). Some approaches, though, combine an RTOS with a conventional operating system. For example, LynuxWorks’ LynxOS and BlueCat Linux share a Linux API: LynxOS is a hard RTOS, while BlueCat is built from a Linux base.
Linux continues to improve its real-time performance, but its worst-case interrupt latency still doesn’t meet what would be considered hard real time for an RTOS. It all comes down to quality of service (QoS). Platforms like RTLinux Free augment Linux, providing hard real-time class QoS.
It’s important to note that this type of addition often brings an RTOS programming environment that’s distinct from the original operating system’s. An RTOS is typically small compared to a conventional desktop or server OS, and RTOS products often target smaller, resource-constrained microcontrollers. For instance, CMX’s CMX-RTX and CMX-Tiny+ can run on platforms from 8-bit MCUs up through 64-bit processors.
The increased power and memory capacity of 8-bit processors are making an RTOS more desirable on those platforms. On 16-bit platforms and up, however, an OS or RTOS is usually a requirement, with products like Express Logic’s ThreadX, Wind River’s VxWorks, Micrium’s µC/OS-II, and Green Hills Software’s velOSity being common selections. Depending on requirements, MontaVista Linux can also serve 16- and 32-bit platforms with response times in the low-microsecond range.
THE RTOS CORE: SCHEDULING AND PARTITIONING
Most programmers aren’t familiar with RTOS constraints and requirements, and they usually opt for an RTOS because of its performance. Most RTOS products are indeed small and fast, but an RTOS also adds consistency: beyond getting the job done quickly, it can guarantee that a job will get done.
In many applications, a late result can be catastrophic, so a less precise result delivered within the proper timeframe is preferable to a perfect one delivered late. These applications are generally called hard real-time systems. Hard real time doesn’t indicate how fast the system is or how quickly it responds. Rather, it refers to how reliably the system meets its specified timing requirements.
A hard real-time system may have a fixed cycle time of one minute with a response time of one second. In theory, that’s something almost any operating system could handle. In practice it isn’t always the case, as anyone who has waited longer than a minute for a desktop application to respond can attest.
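To make the idea concrete, here is a minimal sketch of such a fixed-cycle task written against the POSIX clock API. The one-second cycle, the do_work() routine, and the error reporting are illustrative assumptions, not something prescribed by any particular RTOS.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define CYCLE_NS 1000000000L            /* illustrative 1-s cycle time */

static void do_work(void) { /* application work for one cycle */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        do_work();

        /* Advance the deadline to the end of this cycle. */
        next.tv_nsec += CYCLE_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec += next.tv_nsec / 1000000000L;
            next.tv_nsec %= 1000000000L;
        }

        /* Check whether the work finished before the cycle boundary. */
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > next.tv_sec ||
            (now.tv_sec == next.tv_sec && now.tv_nsec > next.tv_nsec))
            fprintf(stderr, "deadline missed\n");   /* a failure in hard real time */

        /* Sleep until the absolute start of the next cycle, not for a
         * relative interval, so timing error does not accumulate. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}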
Hard real-time systems typically have shorter cycle times and tighter response requirements. Faster processors always help, and multicore platforms can improve response time, too. The trick for developers is to match system requirements to the hardware and software, hence the importance of an RTOS in embedded applications.
An RTOS can implement a range of scheduling policies, and the application will often restrict a programmer’s choices (see the table). Non-preemptive scheduling is trivial to implement and still useful in some applications. On the other hand, non-preemptive scheduling within a task can be implemented on top of a preemptive system.
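As one concrete example of the mainstream choice, the following POSIX sketch requests a preemptive, fixed-priority policy (SCHED_FIFO) for a thread. The priority value and the worker() routine are illustrative assumptions, and real-time privileges are typically required for the call to succeed.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* time-critical work runs here at real-time priority */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 50 };   /* illustrative priority */
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);     /* preemptive, fixed priority */
    pthread_attr_setschedparam(&attr, &sp);

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed (real-time privileges?)\n");
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}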
Non-preemptive scheduling should not be overlooked, especially in light of new multicore processors. Here, hardware may be tuned for event-based operation in which a thread waits for an external event to occur. This approach is usually unsuitable for a single-core processor handling multiple threads. On systems with many cores, though, it’s often practical to dedicate one core to a single peripheral, in which case it makes sense to let that core idle while waiting for an event to occur.
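The sketch below illustrates that pattern with POSIX threads on GNU/Linux (pthread_setaffinity_np is a GNU extension). The event semaphore, the handle_event() routine, and the core number are assumptions for illustration; on a real system the event would come from a driver or interrupt handler.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <semaphore.h>

static sem_t event_sem;                 /* posted when the peripheral has data */

static void handle_event(void) { /* service the peripheral */ }

static void *peripheral_server(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(&event_sem);           /* the core idles here until an event occurs */
        handle_event();
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    cpu_set_t set;

    sem_init(&event_sem, 0, 0);
    pthread_create(&tid, NULL, peripheral_server, NULL);

    /* Dedicate CPU 1 to this thread (illustrative core number). */
    CPU_ZERO(&set);
    CPU_SET(1, &set);
    pthread_setaffinity_np(tid, sizeof(set), &set);

    pthread_join(tid, NULL);
    return 0;
}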
Nevertheless, preemptive, interrupt-driven RTOS architectures make up the majority of deployed platforms. These platforms have a range of requirements, issues, and solutions (see the figure). Interrupt latency is always an issue, although hardware features (multiple register sets, hardware scheduling and task switching, and hierarchical priority interrupt systems) can significantly reduce this overhead.
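The usual software contribution to low interrupt latency is to keep the interrupt service routine short and defer the real work to a high-priority task. The sketch below uses FreeRTOS APIs purely as a representative preemptive RTOS (it is not one of the products named above); the ISR name, task priority, and stack size are illustrative.

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xDeviceSem;

/* Interrupt service routine: signal the task and return as quickly as
 * possible so other interrupt sources see minimal latency. */
void vDeviceISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    xSemaphoreGiveFromISR(xDeviceSem, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

/* High-priority task: preempts lower-priority work as soon as the ISR
 * signals and performs the time-consuming processing outside interrupt
 * context. */
static void vDeviceTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        xSemaphoreTake(xDeviceSem, portMAX_DELAY);
        /* process the device data here */
    }
}

void vStartDeviceHandling(void)
{
    xDeviceSem = xSemaphoreCreateBinary();
    xTaskCreate(vDeviceTask, "dev", configMINIMAL_STACK_SIZE,
                NULL, configMAX_PRIORITIES - 1, NULL);
}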
Several issues coincide with preemption. Most are timing-related, like race conditions, deadlock, starvation, and priority inversion. Priority inversion occurs when a low-priority task A holds a synchronization resource needed by a higher-priority task B while a task C, with a priority between A’s and B’s, is ready to run.
Without a feature like priority ceilings, task C can prevent both task A and task B from running. A priority-ceiling feature raises task A’s priority to the ceiling of the resource, which is at least task B’s priority, so A can run without being preempted by C and eventually release the resource needed by B. At this point, task A’s priority returns to normal and task B can run.
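On a POSIX system that supports the Priority Protect protocol, this behavior can be requested with a priority-ceiling mutex, as the minimal sketch below shows. The ceiling value passed in is an assumption of the example and would normally be at least task B’s priority.

#include <pthread.h>

static pthread_mutex_t resource_lock;

int init_resource_lock(int ceiling_priority)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* PTHREAD_PRIO_PROTECT: the owner runs at the ceiling for as long as it
     * holds the mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, ceiling_priority);

    return pthread_mutex_init(&resource_lock, &attr);
}

/* Task A brackets its use of the shared resource with: */
void use_resource(void)
{
    pthread_mutex_lock(&resource_lock);    /* A is raised to the ceiling */
    /* ... access the shared resource ... */
    pthread_mutex_unlock(&resource_lock);  /* A returns to normal; B can run */
}

Priority inheritance (PTHREAD_PRIO_INHERIT) is a common alternative that boosts the holder’s priority only while a higher-priority task is actually blocked on the resource.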
The other timing-related issues, which the programmer must address, are often the sources of bugs that are difficult to locate and correct. Trace tools become valuable assets in locating these kinds of bugs, since symptoms such as blocked tasks are often the only indication of the problem.