Similar Documents
 A total of 20 similar documents were found (search time: 31 ms)
1.
Real-time computer systems are often used in harsh environments, such as aerospace and industry. Such systems are subject to many transient faults while in operation. Checkpointing reduces the recovery time from a transient fault by saving intermediate states of a task in reliable storage and then, on detection of a fault, restoring from a previously stored state. The interval between checkpoints affects the execution time of the task. Whereas inserting more checkpoints and reducing the interval between them reduces the reprocessing time after faults, checkpoints have associated execution costs, and inserting extra checkpoints increases the overall task execution time. Thus, a trade-off between the reprocessing time and the checkpointing overhead leads to an optimal checkpoint placement strategy that optimizes certain performance measures. Real-time control systems are characterized by the timely and correct execution of iterative tasks within deadlines. Reliability is the probability that a system functions according to its specification over a period of time. This paper reports on the reliability of a checkpointed real-time control system, where any errors are detected at checkpointing time. Reliability is used as the performance measure to find the optimal checkpointing strategy. For a single-task control system, the reliability equation over a mission time is derived using a Markov model. Detecting errors at checkpointing time makes the reliability jitter with the number of checkpoints, which forces the use of search algorithms to find the optimal number of checkpoints. By exploiting the properties of this reliability jitter, a simple algorithm is provided to find the optimal number of checkpoints effectively. Finally, the reliability model is extended to multiple tasks by a task allocation algorithm.
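Because the reliability curve jitters with the number of checkpoints rather than varying smoothly, a direct search over candidate checkpoint counts is the natural baseline. The sketch below is a minimal Python illustration, assuming Poisson transient faults and a placeholder per-segment single-retry model; the paper's Markov-derived reliability expression would replace task_reliability, and the names and parameters (cp_cost, fault_rate, n_max) are illustrative assumptions.

```python
import math

def task_reliability(n_checkpoints, exec_time, cp_cost, fault_rate):
    """Placeholder reliability of one task instance with n equally spaced
    checkpoints, assuming Poisson transient faults at rate fault_rate and
    at most one re-execution per segment (a stand-in for the paper's
    Markov-derived expression)."""
    n_segments = n_checkpoints + 1
    seg = exec_time / n_segments + cp_cost        # segment plus checkpoint overhead
    p_ok = math.exp(-fault_rate * seg)            # segment finishes fault-free
    # a segment succeeds either directly or after a single fault-free retry
    p_seg = p_ok + (1.0 - p_ok) * p_ok
    return p_seg ** n_segments

def best_checkpoint_count(exec_time, cp_cost, fault_rate, n_max=50):
    """Exhaustive search over candidate checkpoint counts; needed because
    the reliability curve jitters rather than being unimodal."""
    return max(range(n_max + 1),
               key=lambda n: task_reliability(n, exec_time, cp_cost, fault_rate))

if __name__ == "__main__":
    print(best_checkpoint_count(exec_time=100.0, cp_cost=1.5, fault_rate=1e-3))
```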

2.
In a real-time system, a task that completes after its deadline can have catastrophic consequences, so the system must provide both timeliness and reliability guarantees. To improve the system's fault-tolerance capability, this paper proposes a fault-tolerant priority lowering strategy based on the rollback-recovery fault-tolerance model, performs a schedulability analysis of the system under this strategy, and derives a formula for the worst-case response time of each task. To quickly determine an optimal set of fault-tolerant priority lowering configurations, an efficient search algorithm is proposed that reduces the search space of such configurations from O(n!) to O(n^2). Finally, simulation experiments show that the fault-tolerant priority lowering strategy significantly improves the fault-tolerance capability of the system.
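To make the worst-case response time idea concrete, the following is a minimal sketch of a standard fixed-priority response-time recurrence extended with a re-execution term for transient faults spaced at least fault_interval apart. It is a textbook-style formulation, not the paper's exact analysis for the priority lowering strategy; all field names are assumptions.

```python
import math

def wcrt(task, hp_tasks, fault_interval=None, cap=10**6):
    """Iterative worst-case response time for a fixed-priority task.

    task and hp_tasks entries are dicts with 'C' (WCET) and 'T' (period).
    fault_interval, if given, is the minimum spacing between transient
    faults, each recovered by re-executing the largest affected task
    (a textbook-style extension, not the paper's exact formulation)."""
    r = task['C']
    while True:
        interference = sum(math.ceil(r / hp['T']) * hp['C'] for hp in hp_tasks)
        recovery = 0.0
        if fault_interval is not None:
            worst_rerun = max([task['C']] + [hp['C'] for hp in hp_tasks])
            recovery = math.ceil(r / fault_interval) * worst_rerun
        r_next = task['C'] + interference + recovery
        if r_next == r:
            return r                 # fixed point reached
        if r_next > cap:
            return math.inf          # treated as unschedulable within the cap
        r = r_next
```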

3.
To address the time-resource allocation problem in phased-array radar, this paper proposes a task scheduling algorithm based on value optimization. First, task scheduling attribute parameters are established, and the tracking task queue is analyzed for feasibility and filtered to determine the scheduling attributes of the tracking tasks. Second, a dynamic task value function of the actual execution time is built from each task's maximum value and its rate of change, and a value-optimization model for task scheduling is constructed on top of it to assign execution times to the tracking tasks so as to better satisfy the timeliness principle. Finally, the idle time slices between tracking task executions are used to schedule search tasks. Simulation results show that the proposed algorithm effectively reduces the time offset and increases the achieved value ratio.
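One plausible reading of the dynamic value function is a value that peaks at a task's requested execution time and decays linearly with the time offset at the rate given by its value slope. The sketch below shows that reading plus a greedy assignment of execution times over a discretized timeline; the field names (max_value, slope, requested_time, duration) and the greedy policy are assumptions, not the paper's optimization model.

```python
def task_value(max_value, slope, requested_time, actual_start):
    """Hypothetical dynamic value of executing a tracking task at
    actual_start: full value at the requested time, decaying linearly
    with the absolute time offset."""
    return max(0.0, max_value - slope * abs(actual_start - requested_time))

def schedule_by_value(tasks, horizon, dt=1.0):
    """Greedy sketch: visit tasks in decreasing maximum value and give each
    the free slot that maximizes its dynamic value. tasks are dicts with
    'max_value', 'slope', 'requested_time', 'duration' (hypothetical names)."""
    free = [True] * int(horizon / dt)
    plan = {}
    for i, t in sorted(enumerate(tasks), key=lambda kv: -kv[1]['max_value']):
        need = int(round(t['duration'] / dt))
        best, best_v = None, -1.0
        for s in range(len(free) - need + 1):
            if all(free[s:s + need]):
                v = task_value(t['max_value'], t['slope'],
                               t['requested_time'], s * dt)
                if v > best_v:
                    best, best_v = s, v
        if best is not None:
            for k in range(best, best + need):
                free[k] = False          # reserve the chosen time slice
            plan[i] = best * dt
    return plan                          # task index -> assigned start time
```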

4.
The reliability of multiprocessor systems-on-chip (MPSoCs) is nowadays threatened by high chip temperatures, which lead to long-term reliability concerns and short-term functional errors. High chip temperatures may not only cause deadline violations but also increase cooling costs and leakage power. Proactive thermal-aware allocation and scheduling techniques that avoid thermal emergencies are a promising way to reduce the peak temperature of an MPSoC. However, calculating the peak temperature of hundreds of design alternatives during design space exploration is time-consuming, in particular for unknown input patterns and data. In this paper, we address this challenge and present a fast analytic method to calculate a non-trivial upper bound on the maximum temperature of a multi-core real-time system with non-deterministic workload. The considered thermal model is able to address various thermal effects such as heat exchange between neighboring cores and temperature-dependent leakage power. We then integrate the proposed thermal analysis method into a design-space exploration framework to optimize the assignment of tasks to processing components. Finally, we apply the proposed method in various case studies to explore thermal hot spots and to optimize the task-to-processing-component assignment.

5.
This paper presents the design of an improved task scheduler for real-time and safety-critical systems, where real-time requirements and reliability requirements must be handled simultaneously. The proposed scheduler implements the EDF algorithm for optimal scheduling of hard real-time tasks, which is essential for real-time operating systems. The scheduler allows removing any task from the queue by task ID, regardless of the task's actual position within the queue, which is important for the flexibility of the scheduler and its future extensions. Both scheduler operations, task adding and task killing, always take constant time (two clock cycles) to execute, regardless of the actual or maximum number of tasks within the scheduler. The scheduler was verified using a simplified version of UVM, applying millions of instructions with randomly generated sort values. The scheduler, implemented in the form of a coprocessor, was synthesized into an Intel FPGA Cyclone V with a 100 MHz clock frequency. Two improvements are proposed that significantly reduce the resource costs of the scheduler, achieved by replacing static deadlines with dynamic deadlines and by using a new Rocket Queue architecture for sorting tasks according to their deadline values. When both improvements are applied simultaneously, the total ALM cost savings range from 42.59% to 60.18% and the total number of registers is reduced by 73.74% to 74.87%, depending on the scheduler capacity. The saved resources are then used to implement two different variations of TMR in order to increase the fault tolerance of the scheduler. The resource cost reductions also indirectly increase the reliability of such a scheduler because of the reduced probability that a fault occurs.
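For readers who want the behavioral contract of such a scheduler without the hardware, here is a small software analogue: an EDF ready queue supporting insertion and removal by task ID. The coprocessor described above performs both operations in two clock cycles; this sketch instead uses a binary heap with lazy deletion and is purely illustrative.

```python
import heapq
import itertools

class EDFQueue:
    """Software analogue of an EDF ready queue with removal by task ID.
    Not the paper's Rocket Queue hardware; a heap with lazy deletion."""

    def __init__(self):
        self._heap = []              # entries: (absolute_deadline, seq, task_id)
        self._live = {}              # task_id -> seq of its current (live) entry
        self._seq = itertools.count()

    def add(self, task_id, deadline):
        seq = next(self._seq)
        self._live[task_id] = seq
        heapq.heappush(self._heap, (deadline, seq, task_id))

    def kill(self, task_id):
        # Removing the live mapping makes any heap entry for this ID stale.
        self._live.pop(task_id, None)

    def pop_earliest(self):
        # Discard stale entries until a live one (earliest deadline) is found.
        while self._heap:
            deadline, seq, task_id = heapq.heappop(self._heap)
            if self._live.get(task_id) == seq:
                del self._live[task_id]
                return task_id, deadline
        return None
```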

6.
A task model is established for the hybrid controlling task system (HCTS), a hard real-time IoT control system containing both feedback and non-feedback tasks, in order to describe comprehensively and accurately the structure, interaction patterns, and run-time characteristics of the different tasks in the system. A new response-time analysis method is then proposed to verify whether the system meets its real-time requirements. Experimental results show that tasks in an HCTS have a small average worst-case response time, but that a large number of feedback rounds adversely affects task timeliness, which provides guidance for the optimization of HCTS.

7.
A real-time task scheduling system structure and task model are proposed for the network real-time scheduling problem. A task's degree of urgency is defined from its deadline, execution time, and the interval between jobs. A degree of tightness, based on service-level assurance, is defined according to the functional importance of the different tasks in the scheduling system. Task priorities are dynamically regulated by the degrees of urgency and tightness, subject to a thrashing limit that prevents overly frequent task switching, which guarantees the task execution success rate and client execution utilization. Simulation results show that the multi-feature dynamic priority scheduling strategy improves the task scheduling success rate and shortens the average response time, giving it a clear advantage over the BE and EDF scheduling algorithms.
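The abstract does not give the exact formulas, so the snippet below is only a hedged illustration of how a priority could combine the two named features: an urgency term derived from remaining slack and a tightness term derived from service-level importance. The weights and formulas are assumptions, not the paper's definitions.

```python
def dynamic_priority(now, deadline, remaining_exec, importance,
                     w_urgency=0.6, w_tightness=0.4):
    """Illustrative combined priority; lower return value = scheduled first.
    importance is a service-level weight in [0, 1]."""
    slack = deadline - now - remaining_exec
    urgency = 1.0 / (1.0 + max(slack, 0.0))   # closer deadline -> higher urgency
    tightness = importance
    return -(w_urgency * urgency + w_tightness * tightness)
```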

8.
This paper presents a quantitative reliability analysis of a system designed to tolerate both hardware and software faults. The system achieves integrated fault tolerance by implementing N-version programming (NVP) on redundant hardware. The analysis considers unrelated software faults, related software faults, transient hardware faults, permanent hardware faults, and imperfect coverage. The overall model is a Markov model whose states represent the long-term evolution of the system structure. For each operational configuration, a fault-tree model captures the effects of software faults and transient hardware faults on the task computation. The software fault model is parameterized using experimental data from a recent implementation of an NVP system using the current design paradigm. The hardware model is parameterized with typical failure rates associated with hardware faults and with coverage parameters. The results show that it is important to consider both hardware and software faults in the reliability analysis of an NVP system, since these estimates vary with time. Moreover, the error detection and recovery function is extremely important to fault-tolerant software: several orders of magnitude reduction in system unreliability can be observed if this function is provided promptly.
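As a simplified worked example of the kind of quantity such an analysis produces, the function below combines independent version/processor success probabilities with a related-fault term for one iteration of a 3-version NVP computation with majority voting. It is a textbook-style combination under independence assumptions, not the paper's full Markov plus fault-tree model; all parameter names are illustrative.

```python
def nvp_task_reliability(p_version_fail, p_hw_transient=0.0, p_related_fail=0.0):
    """Probability that one iteration of a 3-version NVP computation on
    triplex hardware delivers a correct majority result: no related fault
    affects all versions, and at least 2 of the 3 independent
    version/processor pairs succeed."""
    p_v = (1.0 - p_version_fail) * (1.0 - p_hw_transient)   # one version on one processor
    p_majority = p_v**3 + 3 * p_v**2 * (1 - p_v)            # 2-out-of-3 voting
    return (1.0 - p_related_fail) * p_majority

# Example: 1% independent version failures, 0.1% transients, 0.01% related faults.
print(nvp_task_reliability(0.01, 0.001, 0.0001))
```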

9.
A Preemption Model for Real-Time Performance Optimization of Embedded Systems   (cited 2 times: 1 self-citation, 1 other citation)
温涛, 王济勇, 王晓霞, 邹翔. 《通信学报》 (Journal on Communications), 2005, 26(9): 129-134
By analyzing the preemption behavior of real-time embedded systems that use the RM scheduling policy, a preemption model for periodic real-time task sets is established. It quantitatively characterizes the relationship between the preemption-induced overhead and the attributes of each real-time task, as well as the schedulability of the whole task set. Based on this model, and borrowing the idea of parasitism from biology, a performance optimization method based on evolutionary programming is proposed: by adjusting task release times, it reduces the number of preemptions or changes the preemption relationships, lowering system overhead and improving real-time performance. Finally, experiments verify the effectiveness of this preemption-model-based performance optimization method for embedded systems.
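The quantity the evolutionary search manipulates is the number of preemptions as a function of the task release offsets. A discrete-time simulation like the sketch below can serve as the fitness function; the RM priority order, integer time step, and field names ('C', 'T', 'offset') are assumptions for illustration, not the paper's model.

```python
from math import gcd
from functools import reduce

def count_preemptions(tasks, horizon=None, dt=1):
    """Count preemptions in a fixed-priority (RM) schedule of periodic
    tasks, each a dict with integer 'C' (WCET), 'T' (period) and
    'offset' (release phase). Illustrative fitness function only."""
    tasks = sorted(tasks, key=lambda t: t['T'])     # RM: shorter period = higher priority
    if horizon is None:                             # default: one hyperperiod
        horizon = reduce(lambda a, b: a * b // gcd(a, b), (t['T'] for t in tasks))
    remaining = [0] * len(tasks)
    running_prev, preemptions = None, 0
    for t in range(0, horizon, dt):
        prev_unfinished = running_prev is not None and remaining[running_prev] > 0
        for i, tk in enumerate(tasks):              # release new jobs
            if t >= tk['offset'] and (t - tk['offset']) % tk['T'] == 0:
                remaining[i] += tk['C']
        ready = [i for i, r in enumerate(remaining) if r > 0]
        running = min(ready) if ready else None     # highest-priority ready task
        if prev_unfinished and running != running_prev:
            preemptions += 1                        # unfinished job displaced
        if running is not None:
            remaining[running] -= dt
        running_prev = running
    return preemptions
```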

10.
Stack Size Minimization for Embedded Real-Time Systems-on-a-Chip   (cited 1 time: 0 self-citations, 1 other citation)
The primary goal of real-time kernel software for single- and multiple-processor systems-on-a-chip is to support the design of timely and cost-effective systems. The kernel must provide time guarantees, in order to predict the timely behavior of the application; an extremely fast response time, in order not to waste computing power outside of the application cycles; and it must save as much RAM as possible in order to reduce the overall cost of the chip. Research on real-time software systems has produced algorithms that effectively schedule system resources while guaranteeing the deadlines of the application, and that group tasks into a very small number of non-preemptive sets which require much less RAM for stack space. Unfortunately, until now the research focus has been on time guarantees rather than on the optimization of RAM usage. Furthermore, these techniques do not apply to multiprocessor architectures, which are likely to be widely used in future microcontrollers. This paper presents innovative scheduling and optimization algorithms that guarantee schedulability with extremely little operating system overhead while minimizing RAM usage. We developed a fast and simple algorithm for sharing resources in multiprocessor systems, together with an innovative procedure for assigning a preemption threshold to tasks. These allow the use of a single user stack. The experimental part shows the effectiveness of a simulated-annealing-based tool that finds a schedulable system configuration starting from the selection of a near-optimal task allocation. When used in conjunction with our preemption threshold assignment algorithm, the tool further reduces RAM usage in multiprocessor systems.
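The stack-sharing idea rests on grouping tasks that can never preempt one another once preemption thresholds are assigned: task j may preempt task i only if j's priority exceeds i's threshold. The sketch below shows only that grouping step (not the threshold assignment or the simulated-annealing search), with hypothetical 'prio' and 'threshold' fields where larger numbers mean higher priority and threshold >= prio.

```python
def mutually_nonpreemptive(a, b):
    """Neither task can preempt the other under preemption thresholds."""
    return a['prio'] <= b['threshold'] and b['prio'] <= a['threshold']

def preemption_groups(tasks):
    """Greedily pack tasks into mutually non-preemptive groups; each group
    can share one stack, so the number of groups bounds the stacks needed."""
    groups = []
    for t in sorted(tasks, key=lambda x: -x['prio']):
        for g in groups:
            if all(mutually_nonpreemptive(t, other) for other in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups
```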

11.
This paper addresses embedded multiprocessor implementation of iterative, real-time applications, such as digital signal and image processing, that are specified as dataflow graphs. Scheduling dataflow graphs on multiple processors involves assigning tasks to processors (processor assignment), ordering the execution of tasks within each processor (task ordering), and determining when each task must commence execution. We consider three scheduling strategies: fully-static, self-timed and ordered transactions, all of which perform the assignment and ordering steps at compile time. Run time costs are small for the fully-static strategy; however it is not robust with respect to changes or uncertainty in task execution times. The self-timed approach is tolerant of variations in task execution times, but pays the penalty of high run time costs, because processors need to explicitly synchronize whenever they communicate. The ordered transactions approach lies between the fully-static and self-timed strategies; in this approach the order in which processors communicate is determined at compile time and enforced at run time. The ordered transactions strategy retains some of the flexibility of self-timed schedules and at the same time has lower run time costs than the self-timed approach. In this paper we determine an order of processor transactions that is nearly optimal given information about task execution times at compile time, and for a given processor assignment and task ordering. The criterion for optimality is the average throughput achieved by the schedule. Our main result is that it is possible to choose a transaction order such that the resulting ordered transactions schedule incurs no performance penalty compared to the more flexible self-timed strategy, even when the higher run time costs implied by the self-timed strategy are ignored.

12.
Static Security Optimization for Real-Time Systems   (cited 2 times: 0 self-citations, 2 other citations)
An increasing number of real-time applications, such as railway signaling control systems and medical electronics systems, require a high quality of security to assure the confidentiality and integrity of information. It is therefore desirable and essential to fulfill security requirements in security-critical real-time systems. This paper addresses the issue of optimizing the quality of security in real-time systems. To meet the wide variety of security requirements imposed by real-time systems, a group-based security service model is used, in which security services are partitioned into several groups by security type. While services within the same group provide the same type of security service, they can achieve different quality of security, and services from several groups can be combined to deliver better overall quality of security. In this study, we seamlessly integrate the group-based security model with a traditional real-time scheduling algorithm, earliest deadline first (EDF), and design and develop a security-aware EDF schedulability test. Given a set of real-time tasks with chosen security services, our scheduling scheme aims to maximize the combined security value of the selected services while guaranteeing the schedulability of the real-time tasks. We study two approaches to solve this security-aware optimization problem. Experimental results show that the combined security values are substantially higher than those achieved by the alternatives for real-time tasks, without violating real-time constraints.
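A minimal sketch of the optimization flavor described here, under simplifying assumptions: implicit-deadline periodic tasks, an EDF utilization-bound test, and a greedy upgrade rule that repeatedly buys the security-level increase with the best value gain per unit of added utilization. The greedy rule is an illustration, not necessarily either of the paper's two approaches; the per-task lists 'C' and 'value' indexed by security level are assumed data structures.

```python
def edf_schedulable(tasks, levels):
    """EDF utilization test for implicit-deadline periodic tasks whose
    execution time 'C' grows with the chosen security level."""
    return sum(t['C'][lvl] / t['T'] for t, lvl in zip(tasks, levels)) <= 1.0

def maximize_security(tasks):
    """Greedy security-level selection under the EDF utilization bound.
    Each task dict holds parallel lists 'C' (WCET per level) and 'value'
    (security value per level), plus period 'T'. Returns chosen levels."""
    levels = [0] * len(tasks)
    if not edf_schedulable(tasks, levels):
        return None                                  # infeasible even at minimum security
    while True:
        best, best_ratio = None, 0.0
        for i, t in enumerate(tasks):
            nxt = levels[i] + 1
            if nxt >= len(t['C']):
                continue                             # already at the highest level
            trial = levels[:]
            trial[i] = nxt
            if not edf_schedulable(tasks, trial):
                continue
            gain = t['value'][nxt] - t['value'][levels[i]]
            cost = (t['C'][nxt] - t['C'][levels[i]]) / t['T']
            ratio = gain / cost if cost > 0 else float('inf')
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return levels                            # no further feasible upgrade
        levels[best] += 1
```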

13.
Applying system-level fault-tolerant techniques such as active redundancy is a promising way to enhance system reliability for safety-related applications. Embedded system design using active redundancy is a challenging task that involves solving two major problems: finding the optimal redundancy configuration, and mapping/scheduling the application (including the redundant components) onto the platform under timing and reliability constraints. This paper presents a framework for the automatic synthesis of fault-tolerant designs on multiprocessor platforms. The core of the framework consists of (1) a reliability analysis that computes the system-level reliability in the presence of spatial and temporal redundancy, and (2) an optimization approach for reliability-aware design space exploration. The proposed approach considers both transient and permanent faults and is among the first to support system design using imperfect fault detectors. The framework takes an application model, a platform model, and a set of application requirements as input, and generates the recommended design parameters, including the task-to-processor binding, the task schedule, and the selection and placement of redundancy. The effectiveness of our approach is illustrated using several case studies.

14.
The problem of estimating software fault content and reliability is considered, with emphasis on estimates that can be used to improve the control of software projects. A model for estimating the number of software faults is presented, which takes into account both the development process and the developed software product. A model for predicting, before testing has started, the occurrences of faults during the testing stage is also presented. These models, together with software reliability growth models, have been evaluated on a number of software projects.

15.
The author proposes a software reliability model for a large real-time telecommunications software architecture. Some simple examples of the critical components of the software architecture and their dependencies are described. The component dependencies permit the propagation of faults from the component in which a fault originates to other components, and this propagation can cause failures in the chain (or tree) of components. Detection of failures depends on the tests executed or on the number and type of customer requests. An error can occur in any component; it can be caused by a fault that propagated from another component or by a fault that originates in that component. The error can be traced through the component-dependency chain (or tree) to repair all the faults associated with that error. The software reliability model guides the design of the software architecture.

16.
An efficient task scheduling approach is a promising way to achieve better resource utilization in cloud computing. Various task scheduling approaches based on optimization and decision-making techniques have been proposed, but they ignore scheduling conflicts among similar tasks, and such conflicts often lead to missed task deadlines. This work studies the use of MCDM (multi-criteria decision-making) techniques in a backfilling algorithm to execute deadline-based tasks in cloud computing. In general, some tasks are selected as backfill tasks, whose role is to provide idle resources to the other tasks in the backfilling approach; selecting the backfill task is challenging when there are similar tasks, because it creates a scheduling conflict. In cloud computing, deadline-based tasks have multiple parameters such as arrival time, number of VMs (virtual machines), start time, execution duration, and deadline. In this work, we formulate deadline-based task scheduling as an MCDM problem and discuss the MCDM techniques AHP (Analytic Hierarchy Process), VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje), and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) to avoid scheduling conflicts among similar tasks. We simulate the backfilling algorithm together with the three MCDM mechanisms, using synthetic workloads to study the performance of the proposed scheduling algorithm. The mechanism provides efficient VM allocation and utilization for deadline-based tasks in the cloud environment.
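Of the three MCDM techniques named, TOPSIS is the most compact to sketch. The function below ranks candidate backfill tasks from a decision matrix of their parameters (e.g., arrival time, requested VMs, duration, deadline) and a weight per criterion; it is an illustration of standard TOPSIS for tie-breaking among similar tasks, not the paper's full scheduler, and the criteria and weights are assumptions.

```python
import numpy as np

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives with TOPSIS. matrix[i][j] is criterion j of task i;
    benefit[j] is True if larger values of criterion j are better.
    Returns indices ordered from best to worst."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)            # vector normalization per criterion
    v = norm * w                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)        # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)
    return list(np.argsort(-closeness))

# Hypothetical candidates: [arrival, VMs requested, duration, deadline]
candidates = [[5, 4, 30, 100], [5, 2, 25, 90], [6, 4, 30, 80]]
print(topsis_rank(candidates, weights=[0.2, 0.3, 0.2, 0.3],
                  benefit=[False, False, False, False]))
```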

17.
A Dynamic Scheduling Algorithm Based on Task Synchronization and Energy Saving for Hard Real-Time Systems   (cited 1 time: 0 self-citations, 1 other citation)
A dynamic real-time scheduling algorithm based on task synchronization and energy saving, HDSA (hybrid dynamic scheduling algorithm), is proposed to effectively solve the combined problem of task synchronization and energy saving. HDSA combines the RM and EDF algorithms and applies DVFS to save energy while satisfying the real-time schedulability and task synchronization constraints. HDSA consists of a static part and a dynamic part. The static algorithm computes the static speed of each task offline. At run time, the dynamic algorithm fixes the execution speed of critical sections and fully reclaims and reuses the slack execution time of tasks to adjust the processor speed, effectively reducing energy consumption while preserving real-time schedulability. It also avoids the frequent processor voltage switching that would otherwise occur when a critical section inherits the speed of a blocked high-priority task, thereby effectively reducing the cost of real-time task scheduling. Experiments show that HDSA clearly outperforms the best known algorithms in scheduling performance.
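As a rough illustration of the two-step structure (and not HDSA itself), the sketch below computes a static speed from total utilization against an EDF-style bound, and a dynamic speed that stretches the remaining work of the current job over reclaimed slack. It ignores blocking from shared critical sections and the fixed critical-section speed, which are central to the actual algorithm; all names and bounds are assumptions.

```python
def static_speed(tasks, bound=1.0, s_min=0.1):
    """Static (offline) step sketch: lowest normalized processor speed at
    which the task set stays schedulable under the EDF bound U <= bound,
    where U is the utilization measured at full speed."""
    u_full_speed = sum(t['C'] / t['T'] for t in tasks)
    return min(1.0, max(s_min, u_full_speed / bound))

def reclaimed_speed(static_s, remaining_wcet, slack, s_min=0.1):
    """Dynamic step sketch: stretch the remaining work of the current job
    over the time it would take at the static speed plus reclaimed slack,
    i.e. speed = work / (work / static_s + slack)."""
    if remaining_wcet <= 0:
        return static_s
    s = static_s * remaining_wcet / (remaining_wcet + slack * static_s)
    return min(1.0, max(s_min, s))
```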

18.
An assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the (unknown) number of faults remaining. This implies that all faults contribute the same amount to the failure rate of the program. The assumption is challenged and an alternative proposed. The suggested model results in earlier fault fixes having a greater effect than later ones (the faults that make the greatest contribution to the overall failure rate tend to show themselves earlier, and so are fixed earlier), and in the DFR (decreasing failure rate) property holding between fault fixes (assurance about the program increases during periods of failure-free operation, as well as at fault fixes). The model is tractable and allows a variety of reliability measures to be calculated. Predictions of the total execution time to achieve a target reliability, and of the total number of fault fixes needed to reach a target reliability, are obtained. The model might also apply to hardware reliability growth resulting from the elimination of design errors.
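The model's key departure from the constant-multiple assumption is that per-fault failure rates differ, so the largest contributors tend to be found and fixed first, and successive fixes remove less and less of the program failure rate. The Monte-Carlo sketch below illustrates that behavior with i.i.d. exponential per-fault rates and perfect fixes; the distribution choice and parameters are assumptions for illustration, not the paper's model.

```python
import random

def simulate_reliability_growth(n_faults=20, mean_rate=0.05, seed=1):
    """Simulate a program whose failure rate is the sum of distinct
    per-fault rates. Faults surface in proportion to their rates, so the
    biggest contributors are fixed first and later fixes matter less.
    Returns a list of (failure_time, rate_before_fix, rate_removed)."""
    rng = random.Random(seed)
    rates = [rng.expovariate(1.0 / mean_rate) for _ in range(n_faults)]
    t, history = 0.0, []
    while rates:
        total = sum(rates)
        t += rng.expovariate(total)                       # time to the next failure
        # the fault that caused the failure is picked proportionally to its rate
        pick = rng.choices(range(len(rates)), weights=rates)[0]
        history.append((t, total, rates[pick]))
        rates.pop(pick)                                   # perfect fix removes that fault
    return history

for step, (when, rate, removed) in enumerate(simulate_reliability_growth(), 1):
    print(f"fix {step:2d}: t={when:8.1f}  program rate={rate:.4f}  removed={removed:.4f}")
```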

19.
The co-synthesis of hardware-software systems for complex embedded applications has been studied extensively, with a focus on qualitative system objectives such as high-speed performance and low power dissipation. One of the main challenges in constructing multiprocessor systems for complex real-time applications is to provide high levels of system availability that satisfy the users' expectations. Even though hardware-software co-synthesis has been studied extensively in the recent past, the issues that specifically relate to design exploration for highly available architectures need to be addressed more systematically and in a manner that supports active user participation. In this paper, we propose a user-centric co-synthesis mechanism for generating gracefully degrading, heterogeneous multiprocessor architectures that fulfills the dual objectives of achieving real-time performance and ensuring high levels of system availability at acceptable cost. A flexible interface allows the user to specify rules that effectively capture the user's perceived availability expectations under different working conditions. We propose an algorithm to map these user requirements to the importance attached to the subset of services provided during any functional state. System availability is evaluated on the basis of these user-driven importance values and a CTMC model of the underlying fail-repair process. We employ a stochastic timing model in which all the relevant performance parameters, such as task execution times, data arrival times, and data communication times, are taken to be random variables. A stochastic scheduling algorithm assigns start and completion time distributions to tasks, and a hierarchical genetic algorithm optimizes the selection of resources, i.e., processors and busses, and the task allocations. We report the results of a number of experiments performed with representative task graphs. Analysis shows that the co-synthesis tool we have developed is effectively driven by the user's availability requirements as well as by the topological characteristics of the task graph, yielding high-quality architectures. We experimentally demonstrate the edge provided by a stochastic timing model in terms of performance assessment, resource utilization, system availability, and cost.

20.
Semi-partitioned real-time scheduling algorithms extend partitioned ones by allowing a (usually small) subset of tasks to migrate. The first such algorithm to be proposed was directed at soft real-time (SRT) sporadic task systems where bounded deadline tardiness is acceptable. That algorithm, called EDF-fm, has the desirable property that migrations are boundary-limited, i.e., they can only occur at job boundaries. However, it is not optimal because per-task utilization restrictions are required. In this paper, a new optimal semi-partitioned scheduling algorithm for SRT sporadic task systems is proposed that eliminates such restrictions. This algorithm, called EDF-os, preserves the boundary-limited property. In overhead-aware schedulability experiments presented herein, EDF-os proved to be better than all other tested alternatives in terms of schedulability in almost all considered scenarios. It also proved capable of ensuring very low tardiness bounds, which were near zero in most considered scenarios.
