Similar documents
10 similar documents found (search time: 96 ms)
1.
SystemJ is a programming language based on the Globally Asynchronous Locally Synchronous (GALS) Model of Computation (MoC) used to design safety-critical hard real-time systems. SystemJ uses the Java programming language as the "host" language for carrying out data computations, because Java provides clearly defined operational semantics as well as type and memory safety in the form of the Garbage Collector (GC), which help with formal functional verification. The same GC, which helps in functional verification, makes Worst Case Reaction Time (WCRT) analysis challenging. Any WCRT analysis framework for GALS programs needs to consider the operations performed by the host language. It has been shown that the worst-case time estimates for garbage collection cycles are in seconds, whereas the program's WCRT itself is in microseconds. These pessimistic estimates render the WCRT analysis framework ineffective. In order to overcome this problem, we develop a compiler-assisted memory management technique for applications written in SystemJ. The SystemJ MoC plays the central role in the proposed technique: it allows the state boundaries of the program to be clearly demarcated, which in turn allows us to partition the heap, at compile time, into two distinct areas: (1) the permanent heap, which holds objects that are alive throughout the lifetime of the application, and (2) the transient heap, which holds all other objects. The sizes of these memory areas are bounded statically. Furthermore, the memory allocation and reclamation procedures are simple load and pointer-reset operations, respectively, which are guaranteed to complete within a bounded number of clock cycles, thereby avoiding the large, pessimistic WCRT bounds caused by the GC. Experimental results also show that the proposed approach is approximately three times faster, in terms of memory allocation time, than standard real-time GC approaches.
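A minimal sketch, in plain Java, of the two-area idea described above: per-tick objects come from a statically sized transient region whose allocation is a bounded index bump and whose reclamation at a state (tick) boundary is a single pointer reset. The class and method names are illustrative assumptions, not those of the SystemJ runtime.

```java
/** Illustrative transient-heap region: allocation is a bounded bump, reclamation
 *  at the end of a tick is a single pointer reset (both O(1)). */
final class TransientHeap {
    private final byte[] region;   // statically bounded backing store
    private int top = 0;           // next free offset

    TransientHeap(int sizeBytes) { this.region = new byte[sizeBytes]; }

    /** Bump allocation: completes within a bounded number of instructions. */
    int allocate(int bytes) {
        if (top + bytes > region.length) {
            throw new OutOfMemoryError("transient heap exhausted");
        }
        int offset = top;
        top += bytes;
        return offset;
    }

    /** Reclamation at a state (tick) boundary: reset the allocation pointer. */
    void resetAtTickBoundary() { top = 0; }
}

public class TransientHeapDemo {
    public static void main(String[] args) {
        TransientHeap heap = new TransientHeap(1024);
        int a = heap.allocate(64);      // per-tick scratch data
        int b = heap.allocate(128);
        System.out.println("allocated at offsets " + a + " and " + b);
        heap.resetAtTickBoundary();     // all transient objects reclaimed at once
    }
}
```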

2.
Many time-critical applications require predictable performance, and tasks in these applications have deadlines to be met. For tasks with hard deadlines, a deadline miss can be catastrophic, while for Quality of Service (QoS) degradable tasks (soft real-time tasks), timely approximate results of poorer quality or occasional deadline misses are acceptable. Imprecise computation and the (m,k)-firm guarantee are two workload models that quantify the trade-off between schedulability and result quality. In this paper, we propose dynamic scheduling algorithms for the integrated scheduling of real-time tasks, represented by these workload models, in multiprocessor systems. The algorithms aim at improving the schedulability of tasks by exploiting the properties of these models in QoS degradation. We also show how the proposed algorithms can be adapted for integrated scheduling of multimedia streams and hard real-time tasks, and demonstrate their effectiveness in quantifying QoS degradation. Through simulation, we evaluate the performance of these algorithms using two metrics: success ratio (a measure of schedulability) and quality. Our simulation results show that one of the proposed algorithms, the multilevel degradation algorithm, outperforms the others on both metrics.
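As a hedged illustration of the (m,k)-firm workload model mentioned above (not the paper's scheduling algorithm), the following Java sketch keeps the recent outcome history of one task and answers whether its next job may be degraded or skipped without violating the requirement that at least m of any k consecutive jobs meet their deadlines. The class name and bookkeeping are assumptions made for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative (m,k)-firm bookkeeping for one task: at most k - m misses may occur
 *  in any window of k consecutive jobs. */
final class MkFirmMonitor {
    private final int m, k;
    private final Deque<Boolean> lastOutcomes = new ArrayDeque<>(); // true = deadline met

    MkFirmMonitor(int m, int k) { this.m = m; this.k = k; }

    /** May the scheduler degrade (miss/skip) the next job without violating (m,k)? */
    boolean canDegradeNextJob() {
        long missesInLastKMinus1 = lastOutcomes.stream()
                .skip(Math.max(0, lastOutcomes.size() - (k - 1)))
                .filter(met -> !met).count();
        return missesInLastKMinus1 + 1 <= k - m;
    }

    void recordOutcome(boolean deadlineMet) {
        lastOutcomes.addLast(deadlineMet);
        if (lastOutcomes.size() > k) lastOutcomes.removeFirst();
    }
}

public class MkFirmDemo {
    public static void main(String[] args) {
        MkFirmMonitor monitor = new MkFirmMonitor(3, 5);   // 3 of any 5 jobs must be met
        boolean[] outcomes = {true, false, true, false, true, true};
        for (boolean met : outcomes) {
            System.out.println("can degrade next? " + monitor.canDegradeNextJob());
            monitor.recordOutcome(met);
        }
    }
}
```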

3.
Aggressive scaling in technology size has dramatically increased the power density and degraded the reliability of real-time embedded systems. In this paper, we study the problem of reliability-conscious energy minimization for scheduling fixed-priority real-time embedded systems with weakly hard QoS constraints. The weakly hard QoS constraint is modeled with an (m, k)-constraint, which requires that at least m out of any k consecutive jobs of a task meet their deadlines. We first propose a technique that balances the static and dynamic energy consumption of real-time jobs, with better speed determination during their feasible intervals than the classical strategies. Based on it, we then propose an adaptive fixed-priority scheduling scheme that reduces the energy consumption of the system while preserving its reliability. Through extensive simulations, our experimental results demonstrate that the proposed techniques significantly outperform previous work in energy performance while satisfying the weakly hard QoS constraint under the reliability requirement.
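A common way to make an (m, k)-constraint concrete is a static mandatory/optional job pattern; the sketch below uses the evenly distributed pattern often cited in (m, k)-firm scheduling work. This pattern is an assumption for illustration and is not necessarily the job partitioning or speed-assignment strategy used in the paper.

```java
/** Illustrative sketch: mark jobs as mandatory or optional under an (m,k)-constraint
 *  using the evenly distributed pattern (an assumption for this example). */
public class MkPatternDemo {
    /** Job index j (0-based) is mandatory iff j == floor(ceil(j*m/k) * k / m). */
    static boolean isMandatory(int j, int m, int k) {
        int up = (int) Math.ceil((double) j * m / k);
        return j == (int) Math.floor((double) up * k / m);
    }

    public static void main(String[] args) {
        int m = 2, k = 3;   // at least 2 of any 3 consecutive jobs must meet deadlines
        for (int j = 0; j < 9; j++) {
            System.out.printf("job %d: %s%n", j, isMandatory(j, m, k) ? "mandatory" : "optional");
        }
    }
}
```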

4.
Reliability-aware power management (RAPM) has been a recent research focus due to the negative effects of the popular power management technique, dynamic voltage and frequency scaling (DVFS), on system reliability. As a result, several RAPM schemes have been studied for uniprocessor real-time systems. In this paper, for a set of frame-based independent real-time tasks running on multiprocessor systems, we study global scheduling based RAPM (G-RAPM) schemes. Depending on how recovery blocks are scheduled and utilized, both individual-recovery and shared-recovery based G-RAPM schemes are investigated. An important dimension of the G-RAPM problem is how to select the appropriate subset of tasks for energy and reliability management (i.e., scale down their executions while ensuring that they can still be recovered from transient faults). We show that making such decisions optimally (i.e., the static G-RAPM problem) is NP-hard. Then, for the individual-recovery based approach, we study two efficient heuristics, which rely on local and global task selection, respectively. For the shared-recovery based approach, a linear search based scheme is proposed. The schemes are shown to guarantee the timing constraints. Moreover, to reclaim the dynamic slack generated at runtime from early completion of tasks and unused recoveries, we also propose online G-RAPM schemes that exploit the slack-sharing idea studied in previous work. The proposed schemes are evaluated through extensive simulations. The results show the effectiveness of the proposed schemes in yielding energy savings while simultaneously preserving system reliability and timing constraints. For the static version of the problem, the shared-recovery based scheme is shown to provide better energy savings than the individual-recovery based scheme, by virtue of its ability to leave more slack for DVFS. Moreover, by reclaiming the dynamic slack generated at runtime, online G-RAPM schemes are shown to yield better energy savings.
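The energy/reliability trade-off that RAPM manages can be seen in a small back-of-the-envelope sketch: a selected task is slowed down only as far as the remaining slack still accommodates a full-speed recovery block, so a transient fault can be masked by re-execution. The cubic dynamic-power model, the static-power constant, and the frequency floor below are assumptions for illustration, not values or formulas taken from the paper.

```java
/** Illustrative energy bookkeeping behind RAPM: scale a managed task to a reduced
 *  normalized frequency f while reserving a full-speed recovery block of length C. */
public class RapmEnergySketch {
    static final double STATIC_POWER = 0.1;              // frequency-independent power (assumed)

    /** Energy of running a job of full-speed WCET c at normalized frequency f. */
    static double energy(double c, double f) {
        double dynamicPower = Math.pow(f, 3);            // assumed cubic dynamic-power model
        return (STATIC_POWER + dynamicPower) * (c / f);
    }

    public static void main(String[] args) {
        double c = 2.0, slack = 3.0;                      // WCET and available static slack
        // RAPM: keep c units of slack for a full-speed recovery, scale into the rest.
        double usableSlack = slack - c;
        double f = Math.max(c / (c + usableSlack), 0.4);  // assumed lower bound on frequency
        double managed = energy(c, f);                    // expected energy (recovery rarely runs)
        double unmanaged = energy(c, 1.0);
        System.out.printf("f = %.2f, managed = %.3f, unmanaged = %.3f%n", f, managed, unmanaged);
    }
}
```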

5.
Memory resources are a serious bottleneck in many real-time multicore systems. Previous work has shown that, in the worst case, the execution time of memory-intensive tasks can grow linearly with the number of cores in the system. To improve hard real-time utilization, a real-time multicore system should be scheduled according to a memory-centric scheduling approach if its workload is dominated by memory-intensive tasks. In this work, a memory-centric scheduling technique is proposed where (a) core isolation is provided through a coarse-grained (high-level) Time Division Multiple Access (TDMA) memory schedule; and (b) the scheduling policy of each core "promotes" the priority of its memory-intensive computations above CPU-only computations when memory access is permitted by the high-level schedule. Our evaluation reveals that under high memory demand, our scheduling approach can improve hard real-time task utilization significantly compared to traditional multicore scheduling.
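A minimal sketch of the two mechanisms named above, under assumed parameters (slot length, core count, and the size of the priority boost are all illustrative): a coarse-grained TDMA membership test decides whether a core currently owns the memory slot, and the core's scheduler promotes memory-intensive work only while that is the case.

```java
/** Illustrative coarse-grained TDMA memory schedule with per-core priority promotion. */
public class MemoryCentricTdmaSketch {
    static final int CORES = 4;
    static final long SLOT_LEN_US = 500;                  // assumed slot length
    static final long ROUND_LEN_US = CORES * SLOT_LEN_US;

    /** True if 'core' holds the memory slot at time 'nowUs'. */
    static boolean memoryAccessPermitted(int core, long nowUs) {
        long posInRound = nowUs % ROUND_LEN_US;
        return posInRound / SLOT_LEN_US == core;
    }

    /** Effective priority: memory-intensive work is promoted only inside the core's slot. */
    static int effectivePriority(int basePriority, boolean memoryIntensive,
                                 int core, long nowUs) {
        final int PROMOTION_BOOST = 1000;                 // assumed boost above any base priority
        if (memoryIntensive && memoryAccessPermitted(core, nowUs)) {
            return basePriority + PROMOTION_BOOST;
        }
        return basePriority;
    }

    public static void main(String[] args) {
        long now = 1_200;                                  // microseconds since round start
        System.out.println("core 2 may access memory: " + memoryAccessPermitted(2, now));
        System.out.println("promoted priority: " + effectivePriority(10, true, 2, now));
    }
}
```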

6.
Next-generation hard real-time systems will require new, flexible functionality and guaranteed, predictable performance. This paper describes the UMass Spring threads package, designed specifically for multiprocessing in dynamic, hard real-time environments. This package is unique because of its support for new thread semantics for real-time processing. Predictable creation and execution of threads is achieved thanks to an underlying predictable kernel, the UMass Spring kernel. Design decisions and lessons learned while implementing the threads package are presented. Measurements affirm the predictability of this implementation on a representative multiprocessor platform. The adoption of the threads package in the UMass Spring kernel results in additional performance improvements, including reduced context-switching overhead and reduced average-case memory access durations.

7.
The STRESS environment is a collection of CASE tools for analysing and simulating the behaviour of hard real-time, safety-critical applications. It is primarily intended as a means by which various scheduling and resource management algorithms can be evaluated, but can also be used to study the general behaviour of applications and real-time kernels. This paper describes the structure of the STRESS language and its environment, and gives examples of its use.

8.
Process scheduling, an important issue in the design and maintenance of hard real-time systems, is discussed. A pre-run-time scheduling algorithm that addresses the problem of process sequencing is presented. The algorithm is designed for multiprocessor applications with preemptable processes having release times, computation times, deadlines, and arbitrary precedence and exclusion constraints. The algorithm uses a branch-and-bound implicit enumeration technique to generate a feasible schedule for each processor. The set of feasible schedules ensures that the timing specifications of the processes are observed and that all the precedence and exclusion constraints between pairs of processes are satisfied. The algorithm was tested using a model derived from the F-18 mission computer operational flight program.
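As a simplified illustration of the constraints such a pre-run-time schedule must satisfy (not the branch-and-bound enumeration itself), the sketch below checks a candidate non-preemptive ordering on one processor against release times, deadlines, and precedence constraints. The task set, record fields, and method names are assumptions for the example; the paper's algorithm additionally handles preemption, exclusion constraints, and multiple processors.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified feasibility check for one candidate non-preemptive schedule. */
public class PreRunTimeScheduleCheck {
    record Proc(String name, int release, int wcet, int deadline) {}

    /** order: candidate execution sequence; precedences: pairs (before, after) by task name. */
    static boolean feasible(List<Proc> order, List<String[]> precedences) {
        int t = 0;
        Map<String, Integer> position = new HashMap<>();
        for (int i = 0; i < order.size(); i++) {
            Proc p = order.get(i);
            position.put(p.name(), i);
            t = Math.max(t, p.release()) + p.wcet();   // earliest finish in this ordering
            if (t > p.deadline()) return false;        // timing specification violated
        }
        for (String[] pre : precedences) {             // pre[0] must run before pre[1]
            if (position.get(pre[0]) > position.get(pre[1])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<Proc> order = List.of(new Proc("sense", 0, 2, 5),
                                   new Proc("compute", 1, 3, 9),
                                   new Proc("actuate", 0, 1, 10));
        List<String[]> precedences = List.of(new String[]{"sense", "compute"},
                                             new String[]{"compute", "actuate"});
        System.out.println("candidate schedule feasible: " + feasible(order, precedences));
    }
}
```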

9.
Distributed hard real-time systems are characterized by communication messages associated with timing constraints, typically in the form of deadlines: a message should be received at its destination before its deadline expires. Carrier sense multiple access with collision detection (CSMA/CD) is one of the most common network access schemes that can be used in distributed hard real-time systems. In this paper, we propose a new real-time network access protocol based on the CSMA/CD scheme. The protocol classifies messages into two classes, 'critical' and 'noncritical'. Messages close to their deadlines are considered critical. A critical message is given the right to access the network by preempting a noncritical message in transmission. Extensive simulation experiments have been conducted to evaluate the performance of the protocol. It is shown that the protocol can provide considerable improvement over the virtual-time CSMA/CD protocol proposed for hard real-time communication by Zhao et al.
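A hedged sketch of the classification idea: a message is treated as critical when its laxity (time to deadline minus transmission time) drops below a threshold, and a critical arrival may preempt a noncritical transmission. The threshold value, message fields, and method names are assumptions for illustration, not parameters taken from the paper.

```java
/** Illustrative deadline-based message classification for a CSMA/CD-style protocol. */
public class DeadlineClassificationSketch {
    record Message(String id, long deadlineUs, long txTimeUs, boolean critical) {}

    static final long CRITICAL_LAXITY_US = 200;           // assumed threshold

    static boolean isCritical(long nowUs, long deadlineUs, long txTimeUs) {
        long laxity = deadlineUs - nowUs - txTimeUs;      // slack left after transmission
        return laxity <= CRITICAL_LAXITY_US;
    }

    /** May 'arriving' preempt 'inTransmission' for the channel? */
    static boolean mayPreempt(Message arriving, Message inTransmission) {
        return arriving.critical() && !inTransmission.critical();
    }

    public static void main(String[] args) {
        long now = 1_000;
        Message urgent  = new Message("m1", 1_350, 180, isCritical(now, 1_350, 180));
        Message routine = new Message("m2", 9_000, 180, isCritical(now, 9_000, 180));
        System.out.println("m1 critical: " + urgent.critical());
        System.out.println("m1 may preempt m2: " + mayPreempt(urgent, routine));
    }
}
```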

10.
Modern embedded systems, integrated as multiprocessor systems-on-chip, are often characterized by complex behaviors of, and dependencies between, system components. Different events that trigger such systems may cause different execution demands, depending on their event type as well as on the task by which they are processed, leading to complex workload correlations. For example, in data-processing systems, the size of an event's payload data will typically determine its execution demand on most or all system components, leading to highly correlated workloads. Performance analysis of such systems is often difficult, and conventional analysis methods have no means of capturing the possible existence of workload correlations. This leads to overly pessimistic performance analysis results, and thus to expensive system designs with considerable performance reserves. We propose an abstract model to characterize and capture the workload correlations present in a system architecture, and we show how the captured additional system information can be incorporated into an existing framework for modular performance analysis of embedded systems.
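A toy Java example of why such correlations matter (the event types and demand numbers are invented for illustration): because an event's demand on both pipeline stages is driven by the same attribute, no single event can be worst-case on every stage at once, so a correlation-aware bound for one event is tighter than the sum of per-stage maxima that a correlation-blind analysis would use.

```java
/** Toy illustration of workload correlation across two pipeline stages. */
public class WorkloadCorrelationSketch {
    record Event(String type, long payloadBytes) {}

    static long stage1Demand(Event e) {   // e.g. payload copying dominates here (assumed numbers)
        return e.type().equals("DATA") ? 50 + e.payloadBytes() : 10;
    }
    static long stage2Demand(Event e) {   // e.g. control handling dominates here (assumed numbers)
        return e.type().equals("DATA") ? 30 : 400;
    }

    public static void main(String[] args) {
        Event[] trace = { new Event("DATA", 512), new Event("CTRL", 16) };

        long maxStage1 = 0, maxStage2 = 0, maxPerEventSum = 0;
        for (Event e : trace) {
            maxStage1 = Math.max(maxStage1, stage1Demand(e));
            maxStage2 = Math.max(maxStage2, stage2Demand(e));
            maxPerEventSum = Math.max(maxPerEventSum, stage1Demand(e) + stage2Demand(e));
        }
        System.out.println("correlation-blind bound : " + (maxStage1 + maxStage2)); // 962
        System.out.println("correlation-aware bound : " + maxPerEventSum);          // 592
    }
}
```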
