Similar Documents
1.
Synchronous languages like Esterel have been widely adopted for designing reactive systems in safety-critical domains such as avionics. Specifications written in Esterel are based on the underlying "synchrony hypothesis", which needs to be validated when Esterel specifications get compiled to real implementations (such as C code). In this work, we present a model-driven and architecture-aware timing analysis framework for C code generated from Esterel and executed on general-purpose processors. By integrating model-level information into the traditional timing analysis, we can efficiently compute accurate time estimates by systematically eliminating a large number of infeasible paths in the generated code. Experimental results show that with our proposed intermediate-representation-level infeasible path analysis in the model compilation, we obtain up to 16.1% tighter WCET estimates compared to traditional assembly-code-level infeasible path detection, with substantially less analysis time. Furthermore, by maintaining traceability links between Esterel specifications and the generated C code, we are able to map the time-critical computations at the C level back to the Esterel level.
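To make the infeasible-path idea concrete, the following is a minimal sketch of WCET estimation in the implicit path enumeration style, with one model-level infeasible-path constraint, written in Python against the Z3 solver's optimizer; the block names, cycle costs, loop bound, and the Esterel-derived fact are all invented for illustration and are not the paper's actual analysis.

```python
# Maximize total cycles over basic-block execution counts (hypothetical
# CFG: a 10-iteration loop whose body takes either arm B or arm C).
from z3 import Int, Optimize, Sum, sat

costs = {"hdr": 2, "B": 40, "C": 5}              # cycles per block (made up)
x = {b: Int("x_" + b) for b in costs}            # per-block execution counts

opt = Optimize()
opt.add([x[b] >= 0 for b in costs])
opt.add(x["hdr"] == 10)                          # loop bound: 10 iterations
opt.add(x["B"] + x["C"] == x["hdr"])             # each iteration takes one arm
# Hypothetical model-level fact propagated down from the Esterel level:
# the expensive arm B can be taken in at most 2 of the 10 iterations.
opt.add(x["B"] <= 2)

wcet = opt.maximize(Sum([costs[b] * x[b] for b in costs]))
assert opt.check() == sat
print("WCET bound (cycles):", opt.upper(wcet))   # 2*10 + 40*2 + 5*8 = 140
```

Without the model-level constraint the same encoding would report 420 cycles, which illustrates how eliminating infeasible paths tightens the estimate.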

2.
Bounded model checking of software using SMT solvers instead of SAT solvers
C bounded model checking (cbmc) has proved to be a successful approach to automatic software analysis. The key idea is to (i) build a propositional formula whose models correspond to program traces (of bounded length) that violate some given property and (ii) use state-of-the-art SAT solvers to check the resulting formulae for satisfiability. In this paper, we propose a generalisation of the cbmc approach based on an encoding into richer (but still decidable) theories than propositional logic. We show that our approach may lead to considerably more compact formulae than those obtained with cbmc. We have built a prototype implementation of our technique that uses a satisfiability modulo theories (SMT) solver to solve the resulting formulae. Computer experiments indicate that our approach compares favourably with, and on some significant problems outperforms, cbmc.
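As a minimal sketch of the encoding idea, here is a bounded unrolling of a made-up two-iteration loop in single static assignment form, checked with Z3's Python API over linear integer arithmetic (one of the "richer but still decidable" theories) instead of a bit-level propositional formula; the program and property are invented, not taken from the paper.

```python
# Hypothetical program:  x = 0; repeat twice: x = x + y;  assert x <= 1
# Each assignment gets a fresh SSA name; a model of the formula is a
# bounded trace that violates the assertion.
from z3 import Int, Solver, sat

y = Int("y")
x0, x1, x2 = Int("x0"), Int("x1"), Int("x2")

s = Solver()
s.add(x0 == 0)            # initialization
s.add(x1 == x0 + y)       # iteration 1, unrolled
s.add(x2 == x1 + y)       # iteration 2, unrolled
s.add(y >= 1)             # input assumption
s.add(x2 > 1)             # negation of the property x <= 1
if s.check() == sat:
    print("counterexample:", s.model())   # e.g. y = 1 yields x2 = 2
```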

3.
This paper describes an integration of Satisfiability Modulo Theories (SMT) solvers with the HOL4 theorem prover. Proof obligations are passed from the interactive HOL4 prover to the SMT solver, which can often prove them automatically. This makes state-of-the-art SMT solving techniques available to users of the HOL4 system, thereby increasing the degree of automation for a substantial fragment of its logic. We compare a translation to Yices’s native input format with a translation to SMT-LIB format.

4.
Buffer overflow detection using static analysis can provide a powerful tool for software programmers to find difficult bugs in C programs. Sound static analysis based on abstract interpretation, however, often suffers from the false-alarm problem. Although more precise abstraction can reduce the number of false alarms in general, the cost of such analysis is often too high to be practical for large software. On the other hand, less precise abstraction is likely to be scalable, in exchange for an increased number of false alarms. To attain both precision and scalability, we present a method that first applies a less precise abstraction to find buffer overflow alarms quickly, and then selectively applies a more precise analysis only to the limited areas of code around the potential false alarms. To make this alarm-filtering analysis precise for large C programs, we perform symbolic execution over the potential alarms found by the preceding abstract-interpretation-based analysis. Taking advantage of a state-of-the-art SMT solver, our precise analysis efficiently filters out a substantial number of false alarms. Our experiments with test cases from three open source programs show that our filtering method removes about 68% of false alarms on average.
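A minimal sketch of the filtering step, assuming a Z3-style SMT interface in Python: if the path condition gathered by symbolic execution is unsatisfiable together with the overflow condition, the alarm is discharged as a false positive. The variable names, bounds, and buffer size are hypothetical.

```python
from z3 import Int, Solver, And, unsat

i, n = Int("i"), Int("n")
BUF_LEN = 16                                           # size of buf (made up)

path_condition = And(n >= 0, n <= 10, i >= 0, i < n)   # from symbolic execution
overflow = i >= BUF_LEN                                # access buf[i] out of bounds

s = Solver()
s.add(path_condition, overflow)
if s.check() == unsat:
    print("alarm filtered: the overflow is infeasible on this path")
```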

5.
Parallel languages allow the programmer to express parallelism at a high level. The management of parallelism and the generation of interprocessor communication is left to the compiler and the runtime system. This approach to parallel programming is particularly attractive if a suitable widely accepted parallel language is available. High Performance Fortran (HPF) has emerged as the first popular machine independent parallel language, and remarkable progress has been made towards compiling HPF efficiently. However, the performance of HPF programs is often poor and unpredictable, and obtaining adequate performance is a major stumbling block that must be overcome if HPF is to gain widespread acceptance. The programmer is often in the dark about how to improve the performance of an HPF program since poor performance can be attributed to a variety of reasons, including poor choice of algorithm, limited use of parallelism, or an inefficient data mapping. This paper presents a profiling tool that allows the programmer to identify the regions of the program that execute inefficiently, and to focus on the potential causes of poor performance. The central idea is to distinguish the code that is executing efficiently from the code that is executing poorly. Efficient code uses all processors of a parallel system to make progress, while inefficient code causes processors to wait, execute replicated code, idle, communicate, or perform compiler bookkeeping. We designate the latter code as non-scalable, since adding more processors generally does not lead to improved performance for such code. By analogy, the former code is called scalable. The tool presented here separates a program into scalable and non-scalable components and identifies the causes of non-scalability of different components. We show that compiler information is the key to dividing the execution times into logical categories that are meaningful to the programmer. We present the design and implementation of a profiler that is integrated with Fx, a compiler for a variant of HPF. The paper includes two examples that demonstrate how the data reported by the profiler are used to identify and resolve performance bugs in parallel programs. © 1997 John Wiley & Sons, Ltd.

6.
Reverse debugging is a software development technique that helps fix bugs arising from nondeterministic program behavior: it allows one to examine past states of the program without rerunning it. An implementation of reverse debugging based on deterministic replay in the QEMU 2.0 emulator is described. A number of techniques that improve debugging performance by reducing the amount of saved data, optimizing the storage of system snapshots, and indexing and compressing the event log are proposed. The emulator can work together with the interactive GDB debugger, which makes it possible to use the reverse-continue, reverse-nexti, reverse-stepi and reverse-finish commands in the course of debugging. The execution time of these commands depends on the frequency at which snapshots of the system state are recorded. An estimate of the optimal frequency for the reverse-continue command is obtained.
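The abstract does not give the estimate itself; the following is only a plausible back-of-the-envelope model of the trade-off, not the paper's derivation. With snapshot period T, recording pays a per-snapshot cost across the whole run, while a reverse-continue must on average replay half a period.

```python
import math

def optimal_snapshot_period(snapshot_cost, run_length, replay_rate):
    """Toy model (hypothetical, not from the paper):
       total(T) = snapshot_cost * run_length / T   (recording overhead)
                + replay_rate * T / 2              (mean replay per reverse command)
    Setting d(total)/dT = 0 gives T* = sqrt(2 * snapshot_cost * run_length / replay_rate)."""
    return math.sqrt(2.0 * snapshot_cost * run_length / replay_rate)
```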

7.
In this paper we consider the integration of SMT solvers with filtering algorithms for the finite-domain alldifferent constraint. Such integration makes SMT solvers suitable for solving constraint satisfaction problems that involve the alldifferent constraint. First, we present a novel algorithm for explaining inconsistencies and propagations in the alldifferent constraint. We compare it to Katsirelos’ algorithm and to the flow-based algorithms commonly used for that purpose. Then we describe our DPLL(T)-compliant SMT theory solver for constraint satisfaction problems that include alldifferent constraints. We also provide an experimental evaluation of our approach.
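For readers unfamiliar with alldifferent filtering, the sketch below shows the matching-based consistency check that flow-based algorithms (and the explanations discussed in the paper) start from: the constraint is consistent iff a maximum bipartite matching covers all variables. This is a generic illustration, not the paper's explanation algorithm.

```python
def alldifferent_feasible(domains):
    """Consistency check for alldifferent via Kuhn's augmenting-path
    maximum bipartite matching between variables and domain values."""
    match = {}  # value -> index of the variable currently holding it

    def augment(var, seen):
        for val in domains[var]:
            if val not in seen:
                seen.add(val)
                if val not in match or augment(match[val], seen):
                    match[val] = var
                    return True
        return False

    return all(augment(v, set()) for v in range(len(domains)))

print(alldifferent_feasible([{1, 2}, {1, 2}, {2, 3}]))  # True
print(alldifferent_feasible([{1, 2}, {1, 2}, {1, 2}]))  # False: 3 vars, 2 values
```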

8.
Automatic performance debugging of parallel applications includes two main steps: locating performance bottlenecks and uncovering their root causes for performance optimization. Previous work falls short of resolving this challenging issue in two ways: first, several efforts automate locating bottlenecks, but present results in a confined way that only identifies performance problems with a priori knowledge; second, several tools use exploratory or confirmatory data analysis to automatically discover relevant performance data relationships, but these efforts do not focus on locating performance bottlenecks or uncovering their root causes. The single program, multiple data (SPMD) programming model is widely used for both high performance computing and Cloud computing. In this paper, we design and implement an innovative system, AutoAnalyzer, that automates the process of debugging performance problems of SPMD-style parallel programs, including data collection, performance behavior analysis, locating bottlenecks, and uncovering their root causes. AutoAnalyzer is unique in terms of two features: first, without any prior knowledge, it automatically locates bottlenecks and uncovers their root causes for performance optimization; second, it is lightweight in terms of the size of the performance data to be collected and analyzed. Our contributions are three-fold: first, we propose two effective clustering algorithms to investigate the existence of performance bottlenecks that cause process behavior dissimilarity or code region behavior disparity, respectively, together with two searching algorithms to locate bottlenecks; second, on the basis of rough set theory, we propose an innovative approach to automatically uncovering the root causes of bottlenecks; third, on cluster systems with two different configurations, we use two production applications written in Fortran 77 and one open source code, MPIBZIP2 (http://compression.ca/mpibzip2/), written in C++, to verify the effectiveness and correctness of our methods. For the three applications, we also propose an experimental approach to investigating the effects of different metrics on locating bottlenecks.
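As a rough illustration of the process-dissimilarity idea (not AutoAnalyzer's actual algorithms), one can cluster per-process performance vectors and flag the minority cluster; all data and parameters below are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

perf = np.array([[1.0, 2.1, 0.9],    # process 0: per-code-region times
                 [1.1, 2.0, 1.0],    # process 1
                 [1.0, 2.2, 0.8],    # process 2
                 [3.9, 2.1, 4.2]])   # process 3 behaves differently

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(perf)
minority = min(set(labels), key=list(labels).count)
print("suspect processes:", np.where(labels == minority)[0])   # -> [3]
```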

9.
Performance-related problems play a key role in the software development process (SDP). In particular, early integration of performance specifications into the SDP has been recognized in recent years as an effective approach to speeding up the production of high quality and reliable software. In this context we defined and implemented a technique for automatically evaluating performance aspects of UML software architectures. To achieve this goal, the initial UML model of the software architecture under examination is mapped into a performance domain, which is then analyzed. The resulting performance indices are inserted back into the UML model using the OMG Profile for Schedulability, Performance and Time Specification standard. However, to fully automate the process, it was necessary to fix semantic rules for the UML software representation. The goal of this paper is the formalization of this model representation, characterizing the syntax and semantics through which performance requirements and behaviors are specified in UML models so as to be compliant with the technique we implemented.

10.
Simulation is a very important technique in computer system design. Oracle studies can determine the best or worst case of the problem under investigation, providing useful auxiliary information for regular research. However, commonly used simultaneous multithreading (SMT) simulators do not provide the information needed to support Oracle studies. Building on an existing simulator, this paper provides a new simulation platform that supports Oracle studies. The original SMT simulator supported only the ICOUNT instruction fetch policy; on top of it, this paper adds two further general-purpose fetch policies, BRCOUNT and MISSCOUNT.
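The three policies named above come from the simultaneous multithreading literature: each cycle, instructions are fetched from the thread that minimizes some per-thread counter. A schematic sketch follows; the counter field names are hypothetical.

```python
def pick_fetch_thread(threads, policy):
    """Select the thread to fetch from under one of three fetch policies."""
    key = {
        "ICOUNT":    lambda t: t["insts_in_pipeline"],    # fewest in-flight instructions
        "BRCOUNT":   lambda t: t["unresolved_branches"],  # fewest pending branches
        "MISSCOUNT": lambda t: t["outstanding_misses"],   # fewest cache misses in flight
    }[policy]
    return min(threads, key=key)

threads = [
    {"id": 0, "insts_in_pipeline": 12, "unresolved_branches": 1, "outstanding_misses": 3},
    {"id": 1, "insts_in_pipeline": 5,  "unresolved_branches": 4, "outstanding_misses": 0},
]
print(pick_fetch_thread(threads, "ICOUNT")["id"])   # -> 1
```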

11.
This article examined the effects of product aesthetics on several outcome variables in usability tests. Employing a computer simulation of a mobile phone, 60 adolescents (14–17 years) were asked to complete a number of typical tasks of mobile phone users. Two functionally identical mobile phones were manipulated with regard to their visual appearance (highly appealing vs. not appealing) to determine the influence of appearance on perceived usability, performance measures and perceived attractiveness. The results showed that participants using the highly appealing phone rated their appliance as more usable than participants operating the unappealing model. Furthermore, the visual appearance of the phone had a positive effect on performance, leading to reduced task completion times for the attractive model. The study discusses the implications for the use of adolescents in ergonomic research.

12.
The authors describe Mtool, a software tool for analyzing performance losses in shared-memory parallel programs. Mtool augments a program with low-overhead instrumentation that perturbs the program's execution as little as possible while generating enough information to isolate memory and synchronization bottlenecks. After running the instrumented version of the parallel program, the programmer can use Mtool's window-based user interface to view compute time, memory, and synchronization objects. The authors describe Mtool's low-overhead instrumentation methods, memory bottleneck detection technique, and attention-focusing mechanisms, contrast Mtool with other approaches, and offer a case study to demonstrate its effectiveness.

13.
We analyze and compare two solvers for Boolean optimization problems: WMaxSatz, a solver for Partial MaxSAT, and MinSatz, a solver for Partial MinSAT. MaxSAT and MinSAT are similar problems, but previous results indicate that when solving optimization problems with the two solvers, performance can differ considerably in some cases. To gain insight into these differences, we analyze the solvers' behaviour on 2SAT-MaxOnes problem instances, given that 2SAT-MaxOnes is probably the simplest NP-hard optimization problem we can solve with them. The analysis is based first on a study of the bounds computed by both algorithms on particular 2SAT-MaxOnes instances characterized by the presence of certain structures. We find that the fraction of positive literals in the clauses is an important factor in the quality of the bounds computed by the algorithms. We then study the importance of this factor for the typical-case complexity of Random-p 2SAT-MaxOnes, a variant of the problem where instances are randomly generated with a probability p of having positive literals in the clauses. For the case p=0, the performance results indicate a clear advantage of MinSatz over WMaxSatz, but as we consider positive values of p, WMaxSatz starts to show better performance, although at the same time the typical complexity of Random-p 2SAT-MaxOnes decreases as p increases. We also study the typical value of the bound computed by the two algorithms on these sets of instances, showing that the behaviour is consistent with our analysis of the bounds computed on the particular instances we studied first.
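A plausible reading of the Random-p instance model is sketched below (the paper's exact generator may differ in detail): binary clauses over randomly chosen distinct variables, with each literal positive with probability p; the MaxOnes objective, not encoded here, is to maximize the number of true variables over satisfying assignments.

```python
import random

def random_p_2sat_maxones(n_vars, n_clauses, p, seed=0):
    """Generate the clause set of a Random-p 2SAT-MaxOnes instance."""
    rng = random.Random(seed)
    lit = lambda v: v if rng.random() < p else -v   # positive with probability p
    return [tuple(lit(v) for v in rng.sample(range(1, n_vars + 1), 2))
            for _ in range(n_clauses)]

print(random_p_2sat_maxones(5, 4, p=0.0))   # p=0: all literals negative
```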

14.
A new method is presented for defining the bounds on achievable robust performance for stable SISO systems that can be described by approximate linear models. The method provides a tutorial presentation on the use of the H∞ formulation to define performance specifications that can be guaranteed despite the degree of uncertainty in the model. The parametric uncertainty associated with the linear process model is used to construct a region of achievable performance in performance-weight parameter space, which can be used to characterize bandwidth and underdampedness in the achievable closed-loop response. This region is bounded by loci of ‘best performance guaranteeing robustness’ and ‘worst necessary performance’, and also by robust stability constraints. The position and size of the region depend greatly on the level of model uncertainty. Using the performance bounds thus generated, the designer can pick a performance weight that will guarantee robust performance while incorporating his own preferences regarding controller type and response characteristics.
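For reference, the textbook SISO robust performance test that such a region construction relates to, with performance weight w_P and multiplicative-uncertainty weight w_I (this is the standard mixed-sensitivity condition, not necessarily the paper's exact formulation):

```latex
\[
  S = \frac{1}{1+L}, \qquad T = \frac{L}{1+L},
\]
\[
  \text{robust performance} \iff
  \max_{\omega}\Bigl( \lvert w_P(j\omega)\,S(j\omega)\rvert
                    + \lvert w_I(j\omega)\,T(j\omega)\rvert \Bigr) < 1 .
\]
```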

15.
A recent series of experiments with a group of state-of-the-art SAT solvers and several well-defined classes of problem instances reports statistically significant performance variability for the solvers. A systematic analysis of the observed performance data, all openly archived on the Web, reveals distributions which we classify into three broad categories: (1) readily characterized with a simple χ²-test, (2) requiring more in-depth analysis by a statistician, (3) incomplete, due to the time-out limit reached by specific solvers. The first category includes two well-known distributions, normal and exponential; we use simple first-order criteria to decide the second category and label the distributions as near-normal, near-exponential and heavy-tail. We expect that good models for some if not most of these may be found with parameters that fit either generalized gamma, Weibull, or Pareto distributions. Our experiments show that most SAT solvers exhibit either normal or exponential distribution of execution time (runtime) on many equivalence classes of problem instances. This finding suggests that the basic mathematical framework for these experiments may well be the same as the one used to test the reliability or lifetime of hardware components such as lightbulbs, A/C units, etc. A batch of N replicated hardware components represents an equivalence class of N problem instances in SAT, a controlled operating environment A represents a SAT solver A, and the survival function R_A(x) (where x represents the lifetime) is the complement of the solvability function S_A(x) = 1 − R_A(x), where x may represent runtime, implications, backtracks, etc. As demonstrated in the paper, a set of unrelated benchmarks or randomly generated SAT instances available today cannot measure the performance of SAT solvers reliably, since there is no control on their ‘hardness’. However, equivalence class instances as defined in this paper are, in effect, replicated instances of a specific reference instance. The proposed method not only provides a common platform for a systematic study and a reliable improvement of deterministic and stochastic SAT solvers alike but also supports the introduction and validation of new problem instance classes.
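Making the survival analogy explicit, with the Weibull family mentioned above as one candidate model (shape k, scale λ; k = 1 recovers the exponential case):

```latex
\[
  S_A(x) = 1 - R_A(x),
  \qquad
  R_A(x) = \exp\!\left[-\left(\frac{x}{\lambda}\right)^{k}\right],
\]
% S_A(x) is the probability that solver A finishes an instance of the
% class within budget x (runtime, implications, or backtracks).
```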

16.
This paper is concerned with robust eigenstructure assignment for multivariable systems. It combines time-domain performance specifications provided by eigenstructure assignment with robust performance specifications in the frequency domain, as considered in H∞ control, to realize joint optimal robust control design. A unified parametric solution for state-feedback eigenstructure assignment is derived both for the case where the sets of closed- and open-loop eigenvalues do not intersect and for the case where these sets do intersect; it is based on a set of free parameters. All complex operations are converted into the real field so that the algorithm developed for the controller design can be easily implemented on computers. It uses a robustness index defined in the frequency domain as the cost function to be optimized, and the analytical gradient of the cost function with respect to the free parameters is given. Using gradient-based optimization, the robustness index is minimized by making full use of the freedom provided by eigenstructure assignment. © 1998 John Wiley & Sons, Ltd.
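For orientation, the state-feedback eigenstructure assignment conditions that such a parametric solution solves are the standard ones: for a system dx/dt = Ax + Bu under feedback u = Kx, each desired closed-loop eigenpair must satisfy

```latex
\[
  (A + BK)\,v_i = \lambda_i v_i, \qquad i = 1, \dots, n,
\]
```

and the free parameters referred to in the abstract parameterize the admissible choices of the closed-loop eigenvectors v_i.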

17.
This paper investigates the trajectory tracking problem of rigid robot manipulators with unknown dynamics and actuator failures. The goal is to achieve desirable tracking performance with a simple and low-cost control strategy. By introducing a new form of parameter estimation error, together with an error transformation, a robust adaptive and fault-tolerant control scheme is developed without the need for fault information or a precise robotic mathematical model. It is shown that, with the proposed control, the tracking error is ensured to converge to an adjustable residual set within a prescribed finite time at a user pre-assignable decay rate. The appealing feature of the developed control also lies in its simplicity of structure (i.e. PID form) and its effectiveness in dealing with modelling uncertainties as well as actuation faults.

18.
The problem of the direct design of the closed-loop transfer function matrix is addressed for multivariable discrete systems. The limitations imposed by unstable zeros, time delays and the structure associated with these are quantified. A design procedure is formulated that provides the designer with quantitative measures for evaluating the tradeoffs between different closed-loop interaction structures and durations. The problem of intersample rippling is also considered. The procedure requires only linear-algebra operations, includes the eventual construction of the feedback controller in state space, and is presented in a way that allows its straightforward computer implementation.

19.
This paper presents research resulting in a neural network model relating product design specifications and performance testing results, using data from a sanitary ware manufacturer. The main constraint of the work was the limited availability of actual data for neural network training and testing, a situation often encountered in practice, where a priori product knowledge is limited during the product design phase. The authors used two training techniques, the standard hold-back and the leave-k-out, to leverage the sparseness of the data when building the neural network model. The neural network results are compared and contrasted with statistical models relating product design and performance. This work is an exploration of the value of neural network models in assisting interactive product design.
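A minimal sketch of leave-k-out evaluation in the small-data regime the paper targets, using scikit-learn; the data, network size, and choice of k=2 are hypothetical stand-ins, not the paper's model.

```python
import numpy as np
from sklearn.model_selection import LeavePOut
from sklearn.neural_network import MLPRegressor

X = np.random.RandomState(0).rand(12, 4)     # 12 products, 4 design specs (made up)
y = X @ np.array([0.5, -1.0, 2.0, 0.1])      # stand-in performance measure

errors = []
for train, test in LeavePOut(p=2).split(X):  # every size-2 hold-out set
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=0).fit(X[train], y[train])
    errors.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
print("leave-2-out MSE:", np.mean(errors))
```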
