Similar documents
Found 20 similar documents (search time: 46 ms)
1.
Most present-day reliability schemes using redundancy to mask the failure of individual logic modules employ majority voting with the assumption that the replicated modules have symmetrical failure characteristics. An analysis is presented of such schemes when the modules exhibit asymmetrical failure modes; that is, the probability that a module fails with a 0 output is not equal to the probability that it fails with a 1 output. A general expression is presented which gives the reliability of a network consisting of n identical modules feeding a k-out-of-n voter. It is shown that a simple majority element does not always represent the optimal choice. Plots illustrating the results are included.
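The kind of expression described can be sketched as follows: a minimal model (names hypothetical) assuming i.i.d. modules, where a module works with probability `p_work`, fails with a 0 output with probability `q0`, and fails with a 1 output with probability `q1`. A module stuck at 1 still looks correct when the true output happens to be 1, and likewise for 0, which is why asymmetric failure modes change the optimal voter threshold.

```python
from math import comb

def k_of_n(n, k, p_ok):
    """P(at least k of n i.i.d. modules present the correct value)."""
    return sum(comb(n, i) * p_ok**i * (1 - p_ok)**(n - i)
               for i in range(k, n + 1))

def asymmetric_reliability(n, k, p_work, q0, q1, p_one=0.5):
    """Network reliability under asymmetric failures; p_one is the
    fraction of time the true output is 1 (an assumed parameter)."""
    assert abs(p_work + q0 + q1 - 1.0) < 1e-12
    return (p_one * k_of_n(n, k, p_work + q1)          # stuck-at-1 helps here
            + (1 - p_one) * k_of_n(n, k, p_work + q0)) # stuck-at-0 helps here
```

In the symmetric case q0 == q1 this collapses to the classical majority-voting formula; scanning k for q0 != q1 shows why a simple majority element is not always optimal.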

2.
A Critical Review of Human Performance Reliability Predictive Methods
Similarities and differences among 22 methods of quantitatively predicting operator and technician performance are described. Emphasis is given to the eight methods that are most fully developed and most likely to be used by system engineers. Two general techniques are employed: analysis of historical data and computer simulation of behavioral processes. No general-purpose methodology is available; each method deals with some types of tasks and systems more efficiently than others. In general, simulation-based methods are more powerful than nonsimulation methods. Most methods output probability estimates of successful task/system performance and completion time, but are relatively insensitive to equipment design parameters, manpower selection, and training needs. With only one exception, no operability method utilizes a formal database as input, and in most cases the parameters these input data describe are not specifically indicated. For most methods, validation and/or system application data are either lacking or incomplete.

3.
TG (task graphs) are used to describe the execution of several tasks under precedence constraints. Direct evaluation of a TG provides the average completion time of the overall job, assuming no limit on the number of processing units and no regard for allocation schemes. This paper presents a systematic approach for evaluating TG of jobs executed under predetermined allocation constraints. This extension of TG relies on GSPN (generalized stochastic Petri nets). A systematic mapping of a TG into a GSPN model is discussed. This GSPN model is extended to incorporate information about the static allocation of the set of tasks in the TG. An algorithm is implemented to evaluate static allocation schemes with or without task replication. For task replication, however, a homogeneous system is assumed, because the execution time of those tasks does not change when allocated to various processing units. Also, under this assumption, task execution rates are modified by adding the communication costs involved in sending data required by the next task in turn to execute. Thus, using a single model, TG are evaluated with constraints not only on where replicated and nonreplicated tasks are to be executed, but also on the number of processing units available, task allocation constraints, and the communication costs involved when tasks are remotely located.
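As a baseline for what direct TG evaluation computes, a minimal sketch (function and parameter names hypothetical) of job completion time with unlimited processing units and fixed task durations is just a longest-path calculation over the precedence DAG:

```python
def completion_time(durations, preds):
    """Completion time of a task graph with unlimited processors:
    finish(t) = duration(t) + max finish over t's predecessors."""
    finish = {}
    def f(t):
        if t not in finish:
            finish[t] = durations[t] + max((f(p) for p in preds.get(t, ())),
                                           default=0.0)
        return finish[t]
    return max(f(t) for t in durations)
```

The paper's GSPN extension replaces these fixed durations with stochastic firing rates and layers allocation and communication constraints on top of the same precedence structure.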

4.
The transmitted data rate is a function of the channel state and is varied in such a way as to keep the probability of bit error approximately constant. The performance measure of the system is the probability of successful message completion in a given time span TD. Optimum operational signal-to-noise ratios are found, as well as practical limits on the maximum and minimum transmission data rates. It is shown that the adaptive data rate feature provides a significant improvement in system performance compared to a system transmitting at a fixed data rate. The performance improvement which can be obtained by the use of forward error correction coding is also analyzed. The codes considered are Reed-Solomon codes with rates of 1/3, 1/2, and 2/3. A much simpler expression for the probability of successful completion of a message is derived and used in the optimization search.

5.
The Internet of vehicles (IoV) comprises connected vehicles and connected autonomous vehicles and offers numerous benefits for traffic and safety management. Several IoV applications are delay-sensitive and need computation and data-storage resources that the vehicles themselves cannot provide. Therefore, these tasks are offloaded to more powerful nodes, namely fog nodes, which bring resources nearer to the network edge, reducing both traffic congestion and load. However, the mechanism for offloading tasks to fog nodes in terms of delay, computing power, and completion time remains an open concern. Hence, an efficient task offloading strategy, named the Aquila Student Psychology Optimization Algorithm (ASPOA), is developed for offloading IoV tasks in a fog setting with respect to objectives such as delay, computing power, and completion time. The devised optimization algorithm, ASPOA, is an incorporation of the Aquila Optimizer (AO) and Student Psychology Based Optimization (SPBO). Task offloading in the IoV-fog system selects suitable resources for executing the vehicles' tasks by considering several constraints and parameters to satisfy user requirements. The simulation outcomes show that the devised ASPOA-based task offloading method achieves better performance, with a minimum delay of 0.0009 s, minimum computing power of 8.884 W, and minimum completion time of 0.441 s.

6.
This paper addresses online scheduling for integrated single-wafer processing tools with temporal constraints. An integrated single-wafer processing tool is an integrated processing system consisting of single-wafer processing modules and transfer modules. Certain chemical processes require that the wafer flow satisfy temporal constraints, especially postprocessing residency constraints. This paper proposes an online scheduling method that guarantees both logical and temporal correctness for integrated single-wafer processing tools. First, a mathematical formulation of the scheduling problem using temporal constraint sets is presented. Then, an online, noncyclic scheduling algorithm with polynomial complexity is developed. The proposed scheduling algorithm consists of two subalgorithms: FEASIBLE_SCHED_SPACE and OPTIMAL_SCHED. The former computes the feasible solution space in the continuous time domain, and the latter computes the optimal solution that minimizes the completion time of the last operation of a newly inserted wafer.

7.
The selection of an optimal checkpointing strategy has most often been considered in the transaction-processing environment, where systems are allowed unlimited repairs. In this environment, an optimal strategy maximizes the time spent in the normal operating state and consequently the rate of transaction processing. This paper seeks a checkpoint strategy which maximizes the probability of critical-task completion on a system with limited repairs. Such systems can undergo failure and repair only until a repair time exceeds a specified threshold, at which point the system is deemed to have failed completely. For such systems, a model is derived which yields the probability of completing the critical task when each checkpoint operation has a fixed cost. The optimal number of checkpoints can increase as system reliability improves. The model is extended to include a constraint which enforces timely completion of the critical task.
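A Monte Carlo sketch of this kind of model (parameter names hypothetical, distributions assumed exponential for illustration): checkpoints split the task into equal segments, each segment plus its checkpoint must run failure-free, and a repair longer than `repair_limit` is a complete system failure.

```python
import random

def p_completion(task_len, n_ckpt, ckpt_cost, fail_rate,
                 repair_rate, repair_limit, trials=20000, seed=1):
    """Estimate P(critical task completes) under limited repairs.
    Failures are memoryless, so the failure clock restarts per attempt."""
    seg_time = task_len / (n_ckpt + 1) + ckpt_cost   # work + checkpoint cost
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        segs_left, alive = n_ckpt + 1, True
        while alive and segs_left:
            if random.expovariate(fail_rate) >= seg_time:
                segs_left -= 1          # segment committed at its checkpoint
            elif random.expovariate(repair_rate) > repair_limit:
                alive = False           # repair exceeds bound: total failure
        ok += alive
    return ok / trials
```

Sweeping `n_ckpt` in such a simulation exhibits the trade-off the paper optimizes analytically: more checkpoints shrink the exposed segment but add fixed cost per segment.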

8.
In a personal communication service (PCS) network, the call completion probability and the effective call holding times for both complete and incomplete calls are central parameters in the network cost/performance evaluation. These quantities depend on the distributions of call holding times and cell residence times. The classical assumption made in the past, that call holding times and cell residence times are exponentially distributed, is not appropriate for emerging PCS networks. This paper presents systematic results on the probability of call completion and on the effective call holding time distributions for complete and incomplete calls with general cell residence times and call holding times following various distributions such as gamma, Erlang, hyperexponential, hyper-Erlang, and other staged distributions. These results provide a set of alternatives for PCS network modeling, which can be chosen to accommodate measured data from PCS field trials. The application of these results to billing-rate planning is also discussed.
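For the classical exponential case that the paper generalizes, the call completion probability has a simple closed form via the memoryless renewal argument (a sketch; `p_fail` is an assumed per-handoff failure probability, not a quantity from the paper):

```python
def call_completion_prob(mu, eta, p_fail):
    """P(call completes) with Exp(mu) call holding time, Exp(eta) cell
    residence time, and independent handoff failure probability p_fail.
    By memorylessness, after each successful handoff the process renews:
    P = m + h*(1 - p_fail)*P, solved for P."""
    m = mu / (mu + eta)    # call ends before reaching the cell boundary
    h = eta / (mu + eta)   # call survives to a handoff attempt
    return m / (1 - h * (1 - p_fail))
```

The paper's contribution is precisely that this renewal argument breaks down for gamma, Erlang, and other staged residence-time distributions, which require the more general machinery it develops.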

9.
For an embedded real-time process-control system incorporating artificial-intelligence programs, the system reliability is determined by both the software-driven response computation time and the hardware-driven response execution time. A general model, based on the probability that the system can accomplish its mission under a time constraint without incurring failure, is proposed to estimate the software/hardware reliability of such a system. The factors which influence the proposed reliability measure are identified, and the effects of mission time, heuristics, and real-time constraints on the system reliability with artificial-intelligence planning procedures are illustrated. An optimal search procedure might not always yield a higher reliability than a nonoptimal search procedure. Hence, design parameters and conditions under which one search procedure is preferred over another, in terms of improved software/hardware reliability, are identified.
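One way to sketch such a time-constrained reliability measure (a simplified model of my own, not the paper's exact formulation): draw response-computation times from the search procedure, and count a run as successful only if computation plus execution meets the deadline and no hardware failure (exponential, rate `fail_rate`) occurs during the active time.

```python
import math

def mission_reliability(comp_time_samples, t_exec, deadline, fail_rate):
    """Average over sampled computation times: a sample contributes
    exp(-fail_rate * active_time) if it meets the deadline, else 0."""
    ok = sum(math.exp(-fail_rate * (t + t_exec))
             for t in comp_time_samples if t + t_exec <= deadline)
    return ok / len(comp_time_samples)
```

Under this model, an optimal search with occasional long computation times can score lower than a nonoptimal but predictable one, matching the paper's observation.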

10.
Reliability is the probability that a system functions according to specifications over a given period of time. During this period, system specifications may allow failures and repairs to occur. This paper considers systems with specifications which limit the repair process. Such systems place a limitation on either the repair duration or the number of repairs. For example, a system controlling a real-time process may go down, be repaired, and continue proper control as long as the repair duration does not exceed a specified bound. Otherwise, the system fails. We model and analyze systems with three different types of limited repairs: 1) bounded repair time, 2) bounded cumulative repair time, and 3) bounded number of repairs. Examples of such models exist in real-time process control, shock models, transaction processing, and maintenance models. For each of the three types of systems with limited repairs, we derive the distributions and the mean values of the system lifetime, the cumulative operational time, and the longest continuous operational time before a complete system failure. We also consider the execution of a task on such systems. The task is preempted upon the occurrence of a failure, and is resumed or repeated after repair. The probability of completion of a task with a given work requirement in the three limited-downtime scenarios is derived. We study the effect of preemptive-resume versus preemptive-repeat failures on the probability of task completion.
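For case 1) with exponential up-times (rate `lam`) and exponential repair times (rate `mu`), the mean cumulative operational time has a simple closed form that a simulation can check (a sketch under these specific distributional assumptions, not the paper's general derivation): each repair is fatal with probability exp(-mu*tau), so the number of operating periods is geometric and Wald's identity gives the mean.

```python
import math, random

def mean_cumulative_uptime(lam, mu, tau):
    """E[total operational time] before some repair exceeds bound tau:
    (mean up-time) * (mean number of up periods) = (1/lam) * exp(mu*tau)."""
    return (1.0 / lam) / math.exp(-mu * tau)

def simulate_uptime(lam, mu, tau, trials=20000, seed=7):
    """Monte Carlo check: accumulate up-times until a repair draw > tau."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        while True:
            total += random.expovariate(lam)     # operating period
            if random.expovariate(mu) > tau:     # repair exceeds bound
                break
    return total / trials
```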

11.
To handle the dynamic and unstable nodes encountered when scheduling tasks across an unmanned aerial vehicle (UAV) swarm, this paper proposes a fault-tolerant task scheduling method for multiple computing nodes that avoids task interruption as far as possible. The method first constructs a task allocation strategy over the computing nodes whose optimization objective is to minimize the average task completion time. Then, based on the probability distributions of the task completion time and the survival time of the edge computing nodes, the execution risk of a task on a node is quantified as an extra overhead time. Finally, a risk-aware task allocation strategy is designed by replacing the original completion time with the sum of the completion time and the extra overhead time. The proposed scheduling method was compared with three baseline schedulers in a simulation environment; the results show that it effectively reduces the average task response time, the average number of task executions, and the deadline miss rate. This demonstrates that the method reduces the extra overhead caused by task rescheduling and re-execution, can schedule distributed collaborative computing tasks, and provides new technical support for UAV swarm networks in complex scenarios.

12.
An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality. It is further called a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address an important HW/SW partitioning problem: HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimal total system cost of allocating/mapping processing resources to functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem is closely related to the degree to which the potential parallelism among module executions is utilized. However, due to the inherently large search space of the parallelism, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques which solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications, which aims to solve subproblems 1, 2, and 3 effectively in an integrated fashion. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications (i.e., to find a globally optimized allocation/mapping of processing resources with a feasible execution schedule of modules).
From experiments with a set of real-life applications, it is shown that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.

13.
A Multiprocessor System-on-Chip (MPSoC) may contain hundreds of processing elements (PEs) and thousands of tasks, but design productivity is lagging behind the evolution of HW platforms. One problem is application task mapping, which tries to find a placement of tasks onto PEs that optimizes several criteria such as application runtime, intertask communication, memory usage, energy consumption, and real-time constraints, as well as area in case PE selection or buffer sizing is combined with the mapping procedure. Among optimization algorithms for task mapping, we focus in this paper on Simulated Annealing (SA) heuristics. We present a literature survey and 5 general recommendations for reporting heuristics that should allow disciplined comparisons and reproduction by other researchers. Most importantly, we present our findings about SA parameter selection and 7 guidelines for obtaining a good trade-off between solution quality and the algorithm's execution time. Notably, SA is compared against the global optimum. Thorough experiments were performed with 2–8 PEs, 11–32 tasks, 10 graphs per system, and 1000 independent runs, totaling over 500 CPU days of computation. Results show that SA offers a 4–6 orders of magnitude reduction in optimization time compared to brute force while achieving high-quality solutions. In fact, the globally optimal solution was achieved with a 1.6–90% probability when the problem size is around 1e9–4e9 possibilities. There is approximately a 90% probability of finding a solution that is at most 18% worse than the optimum.
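A minimal SA task-mapping sketch in the spirit of the surveyed setups (the cost function here, load balance plus inter-PE communication, and all names are illustrative, not the paper's benchmark): single-task moves, Metropolis acceptance, geometric cooling.

```python
import math, random

def sa_map(exec_cost, edges, n_pe, iters=5000, t0=10.0, alpha=0.999, seed=3):
    """Map tasks to PEs minimizing a toy cost: max PE load plus the
    communication cost of edges that cross PE boundaries.
    exec_cost[t] = task time; edges = [(u, v, comm_cost), ...]."""
    rnd = random.Random(seed)
    n = len(exec_cost)

    def cost(m):
        loads = [0.0] * n_pe
        for t, pe in enumerate(m):
            loads[pe] += exec_cost[t]
        comm = sum(c for u, v, c in edges if m[u] != m[v])
        return max(loads) + comm

    cur = [rnd.randrange(n_pe) for _ in range(n)]
    cur_c = cost(cur)
    best, best_c, temp = cur[:], cur_c, t0
    for _ in range(iters):
        cand = cur[:]
        cand[rnd.randrange(n)] = rnd.randrange(n_pe)   # move one task
        c = cost(cand)
        if c < cur_c or rnd.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c                       # Metropolis accept
            if c < best_c:
                best, best_c = cand[:], c
        temp *= alpha                                  # geometric cooling
    return best, best_c
```

The paper's guidelines concern exactly the knobs exposed here: initial temperature `t0`, cooling factor `alpha`, iteration budget, and the move set.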

14.
In this paper, we analyze the performance impact of JobTracker failure in Hadoop. A JobTracker failure is a serious problem that affects overall job processing performance. We describe the cause of failure and the system behavior resulting from failed job processing in Hadoop. On the basis of this analysis, we build a job completion time model that reflects failure effects. Our model is based on a stochastic process with a node crash probability. With our model, we simulate the performance impact using credible failure data from the USENIX Computer Failure Data Repository, collected over the past 9 years. The results show that the performance impact is severe: the job completion time typically increases about four times and, in the worst case, increases up to 68 times. Copyright © 2014 John Wiley & Sons, Ltd.

15.
This paper discusses two models of two-unit standby redundant systems in which the switchover time is a random variable and the repair facility is not available for a random time immediately after each repair completion. In model I, the probability distributions of the lifetime of the online unit and of the switchover time are general, while all other distributions are exponential. Model II is a cold standby system in which the probability distribution of the "preparation time" of the repair facility is exponential and all other distributions are general. Using the regeneration point technique, the availability functions of the two systems are determined. Several special cases are also discussed.

16.
A class of redundant cascaded chains with i.i.d. modules is considered in which recovery from a failure takes place by replacing the faulty module with a spare module. The complexity of the reconfiguration process depends upon the location of spare modules in the cascade. This paper deals with the question of optimally placing the spare modules in order to minimize the s-expected recovery time (down time) of the system. Exact analysis is carried out for cascades with one and two spare modules, and an approximate analysis is given for three or more spares. Even though exact analysis does not seem to be practical in the general case, the symmetry of spare module positions in the special cases discussed here and the linearity of the system suggest that one might expect the optimal positions to be symmetric in general. Because of this symmetry, one can reduce the number of variables to be considered in the general case; however, some inaccuracies might be introduced.

17.
In this paper, we extend the analysis of multipath routing presented in our previous work, so that the basic restrictions on the evaluation and optimization of that scheme can be dropped (e.g., disjoint paths and paths identical in failure probability). In that work, we employed diversity coding in order to provide increased protection against frequent route failures by splitting data packets and distributing them over multiple disjoint paths. Motivated by the high increase in the packet delivery ratio, we study the increase we can achieve through the use of multiple paths in the general case, where the paths are not necessarily independent and their failure probabilities vary. For this purpose, a function that measures the probability of successful transmission is derived as a tight approximation of the evaluation function P_succ. Given the failure probabilities of the available paths and their correlation, we are able to find in polynomial time the set of paths that maximizes the probability of reconstructing the original information at the destination.
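With diversity coding, the original packet is recoverable when at least k of the m shares arrive. For the special case of independent paths with unequal failure probabilities (the paper also handles correlated paths), the evaluation function is a Poisson-binomial tail, computable exactly by a small dynamic program (a sketch; names are mine):

```python
def p_reconstruct(fail_probs, k):
    """P(at least k shares arrive) over independent paths with per-path
    failure probabilities fail_probs, via the Poisson-binomial DP."""
    dist = [1.0]                       # dist[j] = P(j shares arrived so far)
    for f in fail_probs:
        new = [0.0] * (len(dist) + 1)
        for j, p in enumerate(dist):
            new[j] += p * f            # this path's share is lost
            new[j + 1] += p * (1 - f)  # this path's share is delivered
        dist = new
    return sum(dist[k:])
```

This DP is what makes path-set selection tractable: each candidate set is scored in O(m^2) rather than by enumerating 2^m outcomes.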

18.
A ternary-state circular sequential k-out-of-n congestion (TSCSknC) system is presented. The system is an extension of the circular sequential k-out-of-n congestion (CSknC) system, which considers two connection states: a) congestion (server busy), and b) success. In contrast, a TSCSknC system considers three connection states: i) congestion, ii) break-down, and iii) success. It finds application in reliable systems that must prevent single-point failures, such as (k,n) secret key sharing systems. The system assumes that each of the n servers has known connection probabilities for the congestion, break-down, and success states. The n servers are arranged in a circle, and connection attempts are made sequentially, round after round. If a server is not congested, the connection is either successful or a break-down. Previously attempted servers are blocked from reconnection if they ended in state ii) or iii). Congested servers are attempted repeatedly until k servers are connected successfully, or (n-k+1) servers have break-down status. In other words, the system succeeds when k servers are successfully connected, and fails when (n-k+1) servers are in the break-down state. In this paper, we present recursive and marginal formulas for the system success probability, the system failure probability, and the average stop length, i.e., the number of connection attempts needed to bring the system to a success or failure state, along with the computational complexity.
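The protocol is easy to simulate, which gives a cross-check for the recursive formulas (a Monte Carlo sketch with identical per-server probabilities; the paper allows them to differ per server):

```python
import random

def tscsknc_sim(n, k, p_cong, p_break, trials=20000, seed=5):
    """Estimate (system success probability, average stop length) for a
    TSCSknC system: retry congested servers round after round until
    k successes or (n-k+1) break-downs."""
    random.seed(seed)
    succ = attempts = 0
    for _ in range(trials):
        state = ['open'] * n           # open / success / broken
        n_s = n_b = 0
        while n_s < k and n_b < n - k + 1:
            for i in range(n):         # one circular round
                if state[i] != 'open':
                    continue           # blocked: already success or broken
                attempts += 1
                r = random.random()
                if r < p_cong:
                    continue           # congested: retry next round
                if r < p_cong + p_break:
                    state[i] = 'broken'; n_b += 1
                else:
                    state[i] = 'success'; n_s += 1
                if n_s >= k or n_b >= n - k + 1:
                    break              # terminal state reached mid-round
        succ += n_s >= k
    return succ / trials, attempts / trials
```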

19.
This paper proposes a cross-reference method of nonlinear time series analysis, combining the tasks of dynamical-system parameter estimation and noise reduction, which were previously carried out separately. Through the positive interaction between the two processing modules, the method gains an advantage over treating the tasks in isolation. Some prior works can be viewed as special cases of this general framework, and effective new algorithms may be devised according to it. Two examples of chaotic time series analysis are given to show the applicability of the proposed method.

20.
Hardware-software co-synthesis starts with an embedded-system specification and results in an architecture consisting of hardware and software modules meeting performance, power, and cost goals. Embedded systems are generally specified in terms of a set of acyclic task graphs. In this paper, we present a co-synthesis algorithm, COSYN, which starts with periodic task graphs with real-time constraints and produces a low-cost heterogeneous distributed embedded-system architecture meeting these constraints. It supports both concurrent and sequential modes of communication and computation. It employs a combination of preemptive and nonpreemptive static scheduling. It allows task graphs in which different tasks have different deadlines. It introduces the concept of an association array to tackle the problem of multirate systems. It uses a new task-clustering technique which takes the changing nature of the critical path in the task graph into account. It supports pipelining of task graphs and a mix of various technologies to meet embedded-system constraints and minimize power dissipation. In general, embedded-system tasks are reused across multiple functions; COSYN uses the concept of architectural hints and reuse to exploit this fact. Finally, if desired, it also optimizes the architecture for power consumption. COSYN produces optimal results for the examples from the literature while providing several orders of magnitude advantage in central processing unit time over an existing optimal algorithm. The efficacy of COSYN and its low-power extension, COSYN-LP, is also established through their application to very large task graphs (with over 1000 tasks).


Copyright©北京勤云科技发展有限公司  京ICP备09084417号