Similar Literature
20 similar documents found (search time: 109 ms)
1.
Automatic verification of programs and computer systems with input variables represents a significant and well-motivated challenge. The case of Simulink diagrams is especially difficult, because there the inputs are read iteratively and the number of input variables is, in theory, unbounded. We apply explicit model checking techniques to account for the temporal (control) aspects of verification and use a set-based representation of data, thus handling both sources of non-determinism present in the verification. Two different set representations are evaluated for scalability with respect to the range of input variables. Explicit (enumerating) sets are very fast for small ranges but fail to scale. Symbolic sets, represented as first-order formulas in the bit-vector theory and compared using satisfiability modulo theories solvers, scale well to arbitrary (though still bounded) ranges of input variables. To leverage the combined strengths of the explicit and symbolic representations, we have designed a hybrid representation, which we show outperforms both pure representations. The proposed method thus allows fully automatic verification without the need to limit the non-determinism of the input. Moreover, the principle underlying the hybrid representation entails inferring knowledge about the system under verification that the developers did not explicitly include in the system, and this knowledge can significantly accelerate the verification process.
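As a concrete illustration of the hybrid idea, the sketch below (not the authors' implementation) enumerates values explicitly while a set is small and falls back to a symbolic form once it grows past a threshold. Here a simple interval over-approximation stands in for the paper's exact bit-vector formulas, and the switch-over limit is an invented parameter.

```python
# Minimal sketch of a hybrid value-set: explicit enumeration for small
# sets, a symbolic (here: interval) abstraction once the set grows.
EXPLICIT_LIMIT = 1024  # assumed switch-over point, not from the paper

class HybridSet:
    def __init__(self, values=()):
        self.explicit = set(values)   # enumerated representation
        self.interval = None          # symbolic representation: (lo, hi)

    def _promote(self):
        """Switch to the symbolic form when enumeration gets too large."""
        if self.interval is None and len(self.explicit) > EXPLICIT_LIMIT:
            self.interval = (min(self.explicit), max(self.explicit))
            self.explicit = None

    def add(self, v):
        if self.interval is not None:
            lo, hi = self.interval
            self.interval = (min(lo, v), max(hi, v))
        else:
            self.explicit.add(v)
            self._promote()

    def __contains__(self, v):
        if self.interval is not None:
            lo, hi = self.interval
            return lo <= v <= hi   # over-approximation: may admit extra values
        return v in self.explicit
```

Unlike the interval used here, the paper's symbolic sets are exact (bit-vector formulas compared with an SMT solver), so no precision is lost on promotion.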

2.
3.
Scheduling a task graph with communication delays on a set of identical processors so as to minimize its overall execution time is of great significance in both the theory and practice of parallel-computing scheduling. For this problem, an improved SMT (satisfiability modulo theories) method is proposed. First, constraints such as processor mapping and task execution order are encoded, transforming the task-graph scheduling problem into an SMT problem; an SMT solver is then invoked to search the feasible solution space for the optimal solution. In the constraint-encoding stage, integer variables are used to represent the mapping between tasks and processors, reducing the complexity of the processor-constraint encoding; in the solver-invocation stage, constraints on independent tasks are added to shrink the solver's search space, further improving the efficiency of finding the optimal solution. Experimental results show that, compared with the original SMT method, the improved SMT method reduces the average solving time by 65.9% and 53.8% in the 20 s and 1 min timeout experiments, respectively, with a larger efficiency advantage as the number of processors grows. The improved SMT method can effectively solve task-graph scheduling with communication delays and is especially suitable for scheduling scenarios with many processors.
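To make the encoding concrete, here is a minimal sketch in the style the abstract describes, using the z3 SMT solver's Python bindings: an integer variable maps each task to a processor, and a successor pays the communication delay only when it runs on a different processor than its predecessor. The task graph, durations, and delay value are illustrative assumptions, not the paper's benchmarks.

```python
from z3 import Int, Optimize, If, Or, sat

num_procs = 2
duration = {"A": 2, "B": 3, "C": 1}
edges = [("A", "B"), ("A", "C")]   # precedence edges of the task graph
comm_delay = 2                      # delay paid only across processors

opt = Optimize()
start = {t: Int(f"start_{t}") for t in duration}
proc = {t: Int(f"proc_{t}") for t in duration}   # integer task->processor map
makespan = Int("makespan")

for t in duration:
    opt.add(start[t] >= 0, proc[t] >= 0, proc[t] < num_procs)
    opt.add(makespan >= start[t] + duration[t])

# Precedence: a successor starts after its predecessor finishes, plus the
# communication delay if the two tasks run on different processors.
for u, v in edges:
    delay = If(proc[u] == proc[v], 0, comm_delay)
    opt.add(start[v] >= start[u] + duration[u] + delay)

# No two tasks may overlap on the same processor.
tasks = list(duration)
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        u, v = tasks[i], tasks[j]
        opt.add(Or(proc[u] != proc[v],
                   start[u] + duration[u] <= start[v],
                   start[v] + duration[v] <= start[u]))

opt.minimize(makespan)
if opt.check() == sat:
    print(opt.model())
```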

4.
Web Service Business Process Execution Language (WS-BPEL) is one of the most popular service-oriented workflow languages. The unique features of WS-BPEL applications (e.g. dead-path-elimination semantics and the correlation mechanism) pose substantial problems for test case generation, especially in unit testing. Existing studies mainly assume that each path in the control flow graph corresponding to a WS-BPEL application is feasible, which often yields imprecise test cases or complicates testing results. The current study tackles this problem using satisfiability modulo theories (SMT) solvers. First, a new coverage criterion is proposed to measure the quality of test sets for testing WS-BPEL applications. Second, decomposition algorithms are presented to obtain test paths that meet the proposed coverage criterion. Finally, each test path is symbolically encoded with constraints that capture the unique features of WS-BPEL. These constraints are solved, and the test cases (test paths and test data) are obtained with the help of SMT solvers to test WS-BPEL applications effectively. Experiments are conducted using our approach and other typical approaches (e.g. the message-sequence-generation-based approach and the concurrent path analysis approach) on 10 WS-BPEL applications. Experimental results demonstrate that the test cases generated by our approach avoid instantiating idle instances and expose more faults. Copyright © 2015 John Wiley & Sons, Ltd.
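A hedged sketch of the final step described above: once a test path is selected, its branch conditions are conjoined and handed to an SMT solver, whose model supplies concrete test data. The path condition below is an invented example, not taken from a real WS-BPEL process.

```python
from z3 import Int, Solver, And, sat

amount = Int("amount")
retries = Int("retries")

# Path condition collected along one control-flow path.
path_condition = And(amount > 1000, retries <= 3, retries >= 0)

s = Solver()
s.add(path_condition)
if s.check() == sat:
    m = s.model()
    print("test data:", {d.name(): m[d] for d in m.decls()})
else:
    print("path is infeasible -- no test case generated")
```

An unsatisfiable path condition identifies an infeasible path up front, which is exactly how this style of generation avoids the imprecise test cases that the feasibility assumption produces.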

5.
The satisfiability problem is a core NP-complete problem. In recent years, many heuristic algorithms have been developed to solve it, and many experiments have evaluated and compared the performance of different heuristics. However, rigorous theoretical analysis and comparison are rare. This paper analyzes and compares the expected runtime of three basic heuristic algorithms: RandomWalk, the (1+1) EA, and a hybrid algorithm. The runtime analysis of these algorithms on two 2-SAT instances shows that their expected runtime can be exponential or polynomial, and that each algorithm has its own advantages and disadvantages on different SAT instances. The paper also demonstrates that the expected runtime of RandomWalk on arbitrary k-SAT (k ≥ 3) is bounded above by O(n^(k−1)), and presents a k-SAT instance with a Θ(n^(k−1)) expected runtime bound.
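For reference, a minimal sketch of the RandomWalk heuristic being analysed: start from a random assignment and, while some clause is unsatisfied, flip a random variable of a random unsatisfied clause. The DIMACS-style instance format (clauses as lists of signed integers) is an assumption for illustration.

```python
import random

def random_walk(clauses, n_vars, max_flips=100000):
    # index 0 is unused; variables are 1..n_vars
    assign = [random.choice([False, True]) for _ in range(n_vars + 1)]

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                          # satisfying assignment found
        lit = random.choice(random.choice(unsat))  # random literal of a random unsat clause
        assign[abs(lit)] = not assign[abs(lit)]    # flip that variable
    return None                                    # gave up within the flip budget

# (x1 or x2) and (not x1 or x2): satisfiable with x2 = True
print(random_walk([[1, 2], [-1, 2]], n_vars=2) is not None)
```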

6.
Different methods can be used for learning, and they can be compared in several respects, especially those related to learning outcomes. In this paper, we present a study comparing the learning effectiveness and satisfaction of children using an iPhone game for learning the water cycle versus a traditional classroom lesson. The iPhone game includes multiple forms of interaction and combines augmented reality (AR) mini-games with non-AR mini-games. The traditional classroom lesson had the same learning content as the iPhone game. Thirty-eight children participated in the study. The analyses showed that the children made significant learning gains about the water cycle regardless of the method used. Although the iPhone method yielded higher knowledge scores than the traditional classroom lesson, the difference between the two was not statistically significant. When analysing the motivational outcomes, the results showed that the children found the iPhone game more satisfying than the classroom lessons. Since the iPhone game achieved similar learning results and a higher motivational effect than the classroom lesson, games of this kind could be used as a tool in primary schools to reinforce students' lessons.

7.
A field experiment was conducted to analyze the process and contents of group discussions. Groups solved a case study either orally or through an asynchronous computer-mediated communication system. Findings show that asynchronous groups had broader discussions and submitted more complete reports than their face-to-face counterparts. However, there was no difference in the ability to transfer information from the discussion to the report; under both conditions, about 15% of the issues mentioned in the discussion were omitted from the final report. In terms of coordination, face-to-face teams covered the case study questions sequentially, while asynchronous groups were more focused on solving their general disagreements.

8.
Collective communication operations (CCOs) are one of the most powerful tools for parallel processing on distributed memory architectures. From the theoretical viewpoint there has been a major effort in the design of optimal algorithms for these operations, especially for massive parallel processors (MPPs). However, in spite of the increasing availability of MPPs, there are just a few limited experimental checks of the different theories, so the assessment of their real value is not easy. The aim of the present paper is to address such issues for the most common CCOs, considering practical algorithms that can be included in a generic communication library. The main result is a new algorithm for building a quasi-optimal broadcast tree that is much simpler than, and as efficient as, previously available algorithms. To investigate the advantages and drawbacks of the proposed algorithms, a large set of experimental data has been collected on an IBM SP2 parallel system. The data demonstrate the efficiency of our approach in a number of interesting cases. Finally, all the experimental results have been related to the model used in designing the algorithms. © 1998 John Wiley & Sons, Ltd.
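The kind of schedule such a library computes can be pictured with the classical binomial broadcast tree, sketched below. This is a textbook construction shown purely for illustration, not the paper's quasi-optimal tree, whose machine-dependent cost model is not reproduced here. In round r, every rank that already holds the message forwards it to the rank 2**r away, so p processes are covered in ceil(log2(p)) rounds.

```python
def binomial_broadcast_schedule(p):
    """(sender, receiver) pairs per round for a broadcast from rank 0."""
    rounds, have, r = [], {0}, 0
    while len(have) < p:
        step = [(s, s + 2 ** r) for s in sorted(have) if s + 2 ** r < p]
        have |= {dst for _, dst in step}   # receivers now hold the message
        rounds.append(step)
        r += 1
    return rounds

print(binomial_broadcast_schedule(6))
# round 0: [(0, 1)]; round 1: [(0, 2), (1, 3)]; round 2: [(0, 4), (1, 5)]
```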

9.
GASAT: a genetic local search algorithm for the satisfiability problem (total citations: 1; self-citations: 0; citations by others: 1)
This paper presents GASAT, a hybrid algorithm for the satisfiability problem (SAT). The main feature of GASAT is that it includes a recombination stage based on a specific crossover and a tabu search stage. We have conducted experiments to evaluate the different components of GASAT and to compare its overall performance with state-of-the-art SAT algorithms. These experiments show that GASAT provides very competitive results.
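A loose sketch of the two-stage hybrid scheme described above: recombine two parent assignments, then improve the child by tabu search. The uniform crossover and the simple flip-tenure rule below are generic stand-ins, not GASAT's specific operators, and population management is omitted.

```python
import random

def num_unsat(clauses, assign):
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def crossover(parent_a, parent_b):
    # uniform crossover over variable assignments (index 0 is unused)
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def tabu_search(clauses, assign, n_vars, steps=200, tenure=5):
    tabu, best = {}, list(assign)
    for t in range(steps):
        scores = {}
        for v in range(1, n_vars + 1):
            if tabu.get(v, -1) >= t:        # variable is still frozen
                continue
            assign[v] = not assign[v]       # evaluate the flip...
            scores[v] = num_unsat(clauses, assign)
            assign[v] = not assign[v]       # ...and undo it
        if not scores:
            continue
        v = min(scores, key=scores.get)     # best admissible flip
        assign[v] = not assign[v]
        tabu[v] = t + tenure                # freeze the flipped variable
        if num_unsat(clauses, assign) < num_unsat(clauses, best):
            best = list(assign)
    return best

clauses = [[1, 2], [-1, 3], [-2, -3]]
a = [False] + [random.random() < 0.5 for _ in range(3)]
b = [False] + [random.random() < 0.5 for _ in range(3)]
child = tabu_search(clauses, crossover(a, b), n_vars=3)
print(num_unsat(clauses, child))
```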

10.
Simplification in a satisfiability checker for VLSI applications (total citations: 1; self-citations: 0; citations by others: 1)
INSTEP is a satisfiability checker originally designed to solve a specific target set of problems in the formal verification of VLSI circuits. These are real-world problems concerning a sequential circuit that is part of a commercial chip manufactured by Texas Instruments. The program has succeeded in solving these problems, which require satisfiability checking for combinational representations containing up to around 10 000 variables and a graphical representation of around 17 000 nodes. It has also been successfully applied to a number of standard benchmark problems in combinational circuit verification. Results on these benchmarks are overall competitive with those for the widely used method based on binary decision diagrams, and for the first time demonstrate the solution in polynomial time of certain benchmarks involving combinational multipliers.

A central part of the INSTEP algorithm is simplification. Most simplifications in previous tautology checkers replace a formula with a shorter, logically equivalent formula. Most simplifications in INSTEP replace a formula with another formula that is not logically equivalent, but such that satisfiability is nevertheless preserved. These new simplifications depend on the pattern of occurrence of one or more variables, and particularly on their polarity.

The simplifications used by INSTEP rest on several new theorems in an area of propositional calculus (or Boolean algebra) that is crucial to the general theory of effective simplification of propositional formulas. The primary purpose of the present paper is to demonstrate these theorems and explain the simplifications that depend on them.

For the present paper we have tried INSTEP on the well-known pigeonhole problem. As far as we know, INSTEP is the first implemented program to produce proofs of polynomial length for pigeonhole problems. It also produces these proofs in polynomial time.
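The simplest well-known example of a polarity-based, satisfiability-preserving (but not equivalence-preserving) simplification is the pure-literal rule, sketched below for illustration; INSTEP's own simplifications rest on more general theorems than this one.

```python
from collections import defaultdict

def pure_literal_simplify(clauses):
    """Drop every clause containing a variable that occurs with one polarity.

    The result is NOT logically equivalent to the input, but it is
    equisatisfiable: a pure literal can always be set to satisfy its clauses.
    """
    polarity = defaultdict(set)
    for c in clauses:
        for lit in c:
            polarity[abs(lit)].add(lit > 0)
    pure = {v for v, pols in polarity.items() if len(pols) == 1}
    return [c for c in clauses if not any(abs(l) in pure for l in c)]

# x2 occurs only positively, so both clauses mentioning it disappear:
print(pure_literal_simplify([[1, 2], [-1, 2], [1, -3], [-1, 3]]))
# -> [[1, -3], [-1, 3]]
```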

11.
12.
The purpose of this paper is to address some of the questions on the notion of agent and agency in relation to property and personhood. I argue that following the Kantian criticism of Aristotelian metaphysics, contemporary biotechnology and information and communication technologies bring about a new challenge—this time, with regard to the Kantian moral subject understood in the subject’s unique metaphysical qualities of dignity and autonomy. The concept of human dignity underlies the foundation of many democratic systems, particularly in Europe as well as of international treaties, including the Universal Declaration of Human Rights. Digital agents, artificial organisms as well as new capabilities of the human agents related to their embeddedness in digital and biotechnological environments bring about an important transformation of the human self-appraisal. A critical comparative reflection of this transformation is important because of its ethical implications. I deal first with the concept of agent within the framework of Aristotelian philosophy, which is the basis for further theories in accordance with and/or in opposition to it, particularly since modernity. In the second part of this paper, I deal with the concept of personhood in Kantian philosophy, which supersedes the Aristotelian metaphysics of substance and builds the basis of a metaphysics of the moral human subject. In the third part, I discuss the question of artificial agents arising from modern biology and ICT. Blurring the difference between the human and the natural and/or artificial opens a “new space” for philosophical reflection as well as for debate in law and practical policy.

13.
Information Fusion, 2009, 10(2): 183-197
Dempster’s rule of combination in evidence theory is a powerful tool for reasoning under uncertainty. Since Zadeh highlighted the counter-intuitive behaviour of Dempster’s rule, a plethora of alternative combination rules have been proposed. In this paper, we propose a general formulation for combination rules in evidence theory as a weighted sum of the conjunctive and disjunctive rules. Moreover, with the aim of automatically accounting for the reliability of sources of information, we propose a class of robust combination rules (RCRs) in which the weights are a function of the conflict between two pieces of information. The weight of conflict between two basic probability assignments (BPAs) is interpreted as an indicator of the relative reliability of the sources: if the conflict is low, then both sources are reliable, and if the conflict is high, then at least one source is unreliable. We show some interesting properties satisfied by the RCRs, such as positive belief reinforcement or the neutral impact of vacuous belief, and establish links with other classes of rules. The behaviour of the RCRs over non-exhaustive frames of discernment is also studied, as the RCRs implicitly perform a kind of automatic deconditioning through the simple use of the disjunctive operator. We focus our study on two special cases: (1) RCR-S, a rule with symmetric coefficients that is proved to be unique, and (2) RCR-L, a rule with asymmetric coefficients based on a logarithmic function. Their behaviours are then compared with some classical combination rules from the literature, on a few examples and in Monte Carlo simulations.
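To make the weighted-sum form concrete, the sketch below combines two mass functions conjunctively and disjunctively and mixes the two results with weights driven by the conflict K. The specific weight scheme and the final normalization are illustrative assumptions, not the paper's RCR-S or RCR-L coefficients.

```python
def combine(m1, m2, op):
    """Pointwise product of two mass functions under a set operation."""
    out = {}
    for a, x in m1.items():
        for b, y in m2.items():
            c = op(a, b)                       # intersection or union
            out[c] = out.get(c, 0.0) + x * y
    return out

def rcr(m1, m2):
    conj = combine(m1, m2, frozenset.intersection)
    disj = combine(m1, m2, frozenset.union)
    K = conj.pop(frozenset(), 0.0)   # conflict: conjunctive mass on the empty set
    out = {}
    for f, v in conj.items():
        out[f] = out.get(f, 0.0) + (1.0 - K) * v   # trust conjunction when conflict is low
    for f, v in disj.items():
        out[f] = out.get(f, 0.0) + K * v           # trust disjunction when conflict is high
    total = sum(out.values())                      # renormalize (illustrative choice)
    return {f: round(v / total, 4) for f, v in out.items()}

A, B = frozenset("a"), frozenset("b")
m1 = {A: 0.9, frozenset("ab"): 0.1}
m2 = {B: 0.9, frozenset("ab"): 0.1}
print(rcr(m1, m2))   # high conflict: most mass moves to the disjunctive union {a, b}
```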

14.
Bharati, P. Computer, 2005, 38(1): 71-75
To remain globally competitive, multinational corporations are outsourcing their IT services and relocating their labor-intensive operations overseas to low-wage countries. In the past decade, India's IT services industry emerged as an important player in this global market. While worldwide IT services revenue increased less than 2 percent from 2000 to 2003, India's IT services industry experienced a 22 percent revenue growth - a pace comparable to the rise in Hong Kong's electronics industry during the 1970s.

15.
One of the most critical phases of software engineering is requirements elicitation and analysis. Success in a software project is influenced by the quality of the requirements and their analysis, since their outputs feed higher-level design and verification decisions. Real-time software systems are event-driven and contain temporal and resource-limitation constraints. Natural-language-based specification and analysis of such systems are therefore limited to identifying functional and non-functional elements only. In order to design an architecture, or to test and verify these systems, a comprehensive understanding of dependencies, concurrency, response times, and resource usage is necessary. Scenario-based analysis techniques provide a way to decompose requirements so as to understand these attributes of real-time systems, but they are in themselves inadequate to support all real-time attributes. This paper discusses and evaluates the suitability of certain scenario-based models in a real-time software environment and then proposes an approach, based on timed automata, that constructs a formalised view of scenarios and generates timed specifications. This approach represents the operational view of scenarios with the support of a formal representation that is needed for real-time systems. Our results indicate that models with notational and semantic support for representing the temporal and resource usage of scenarios provide a better analysis domain.

16.
《Artificial Intelligence》2006,170(12-13):1031-1080
We address two aspects of constructing plans efficiently by means of satisfiability testing: efficiently encoding, in propositional logic, the existence of a plan with a given number t of time points, and strategies for finding plans given these formulae for different values of t.

For the first problem we consider three semantics for plans with parallel operator application, in order to make the search for plans more efficient. The standard semantics requires that parallel operators are independent and can therefore be executed in any order. We consider a more relaxed definition of parallel plans, first proposed by Dimopoulos et al., as well as a normal form for parallel plans that requires every operator to be executed as early as possible. We formalize the semantics of the parallel plans emerging in this setting and present translations of these semantics into propositional logic. The sizes of the translations are asymptotically optimal. Each semantics is constructed so that a plan following it exists exactly when a sequential plan exists; moreover, the existence of a parallel plan implies the existence of a sequential plan with as many operators as the parallel one.

For the second problem we consider strategies based on testing the satisfiability of several formulae, representing plans of n time steps for several values of n, concurrently by several processes. We show that substantial efficiency gains can be obtained in comparison with the standard strategy of sequentially testing the satisfiability of formulae for an increasing number of time steps.
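The strategy in the last paragraph can be pictured with the small sketch below: instead of testing plan horizons t = 1, 2, 3, ... strictly in sequence, several horizons are tested concurrently and the smallest satisfiable one wins. Here plan_exists(t) is a hypothetical placeholder for "encode a t-step plan and call a SAT solver"; the concurrent-evaluation skeleton is the point, not the encoding.

```python
from concurrent.futures import ProcessPoolExecutor

def plan_exists(t):                 # hypothetical SAT call, for illustration
    return t >= 7                   # pretend the shortest plan has 7 steps

def shortest_plan_horizon(max_t=20, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # several horizon lengths are evaluated concurrently; results
        # come back in horizon order, so the first True is the minimum
        for t, found in enumerate(pool.map(plan_exists, range(1, max_t + 1)),
                                  start=1):
            if found:
                return t
    return None

if __name__ == "__main__":
    print(shortest_plan_horizon())  # -> 7
```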

17.
The researchers investigated the comparative effects of individually constructed and collaboratively constructed computer-based concept mapping on middle school science concept learning. One hundred sixty-one students completed the entire study. Using prior science performance scores to assure equivalence of student achievement across groups, students were assigned to three groups: a self-selected study strategy group, an individual concept mapping group, and a collaborative-pairs concept mapping group. Collaboratively and individually constructing computer-based concept maps had equally positive effects on seventh-grade middle school science concept learning as measured on a comprehension test. However, the students who collaboratively constructed concept maps created significantly higher-quality concept maps than those who individually constructed them, indicating deeper conceptual understanding.

18.
Sharing cyber security information helps firms decrease cyber security risks, prevent attacks, and increase their overall resilience; it thus helps reduce the social cost of security. Although cyber security information sharing was previously performed in an informal and ad hoc manner, the development of information sharing and analysis centers (ISACs) has made it more structured, regular, and frequent. Nevertheless, privacy risk and information-disclosure concerns remain major challenges for ISACs and act as barriers to realizing their potential impact.

This paper provides insights on decisions about security investments and information sharing in consideration of privacy risk and security knowledge growth. By the latter concept, i.e. security knowledge growth, we mean fusing the collected security information, adding prior knowledge, and performing extra analyses to enrich the shared information. The impact of this concept on increasing firms' motivation to voluntarily share their sensitive information with authorities such as ISACs is analytically studied for the first time in this paper. We propose a differential game model that employs a linear fusion model to characterize the process of knowledge growth via the ISAC. The Nash equilibrium of the proposed game, including the optimal security investment and the data-sharing thresholds as a function of the price of privacy, is highlighted. We analytically find the threshold at which the gain achieved by sharing sensitive information outweighs the privacy risks, so that firms have a natural incentive to share their security information. Moreover, since the data-sharing threshold and security investment levels chosen in Nash equilibrium may be lower than the social optimum, we design mechanisms that encourage the firms and lead to a socially optimal outcome. The results bear directly on how ISACs can convince firms to share their security information with them.

19.
In a previous work, the authors proposed an analog Hopfield-type neural network that identified the K largest components of a list of real numbers. In this work, we identify computable restrictions on the parameters, in order that the network can repeatedly process lists, one after the other, at a given rate. A complete mathematical analysis gives analytical bounds for the time required in terms of circuit parameters, the length of the lists, and the relative separation of list elements. This allows practical setting of circuit parameters for required clocking times. The emphasis is on high gain functioning of each neuron. Numerical investigations show the accuracy of the theoretical predictions, and study the influence of various parameters on performance.
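A simplified discrete-time sketch of the K-winners-take-all behaviour, with a steep sigmoid standing in for the high-gain neurons and bisection on a shared inhibition term; this is an illustration only, not the authors' analog circuit or their parameter bounds.

```python
import math

def kwta(inputs, K, gain=50.0, iters=60):
    lo, hi = min(inputs), max(inputs)
    for _ in range(iters):                 # bisect on the shared threshold
        theta = (lo + hi) / 2
        outputs = [1 / (1 + math.exp(-gain * (a - theta))) for a in inputs]
        if sum(outputs) > K:
            lo = theta                     # too many winners: raise the threshold
        else:
            hi = theta                     # too few winners: lower it
    return [round(v) for v in outputs]     # ~1 for the K largest, ~0 otherwise

print(kwta([0.3, 0.9, 0.1, 0.7, 0.5], K=2))   # -> [0, 1, 0, 1, 0]
```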

20.
Kernel functions in convolution surfaces: a comparative analysis (total citations: 5; self-citations: 0; citations by others: 0)

