20 similar documents found.
1.
This paper presents the results of a study of the software complexity characteristics of a large real-time signal processing system for which there is a 6-yr maintenance history. The objective of the study was to compare values generated by software metrics to the maintenance history in order to determine which software complexity metrics would be most useful for estimating maintenance effort. The metrics that were analyzed were program size measures, software science measures, and control flow measures. During the course of the study two new software metrics were defined. The new metrics, maximum knot depth and knots per jump ratio, are both extensions of the knot count metric. When comparing the metrics to the maintenance data, the control flow measures showed the strongest positive correlation.
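The knot-based metrics mentioned above lend themselves to a compact illustration. Below is a minimal sketch, in Python, of counting control-flow knots from a list of jumps, where a knot is a pair of jumps whose line ranges cross without nesting; the function name, the (source, target) encoding, and the toy data are assumptions for illustration, not the paper's tooling.

```python
# Hedged sketch: counting control-flow "knots" from a list of jumps.
# A jump is a (source_line, target_line) pair; two jumps form a knot when
# their line ranges interleave (cross without nesting). The knots-per-jump
# ratio mentioned in the abstract is then knots / number_of_jumps.

def knot_count(jumps):
    """Count pairwise crossings among jumps, each given as (src, dst)."""
    knots = 0
    ranges = [tuple(sorted(j)) for j in jumps]
    for i in range(len(ranges)):
        a1, a2 = ranges[i]
        for j in range(i + 1, len(ranges)):
            b1, b2 = ranges[j]
            # Ranges interleave if exactly one endpoint of one jump
            # lies strictly inside the other's range.
            if (a1 < b1 < a2 < b2) or (b1 < a1 < b2 < a2):
                knots += 1
    return knots

jumps = [(1, 10), (5, 15), (12, 20)]   # toy example
k = knot_count(jumps)
print(k, k / len(jumps))               # knot count and knots-per-jump ratio
```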
2.
Roma G.-C., Gamble R.F., Ball W.E. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 19(3): 277-296
It is shown that a combination of specification and program refinement may be applied to deriving efficient concurrent rule-based programs. Specification refinement is used to generate an initial rule-based program that is refined into a program which is highly concurrent and efficient. This program derivation strategy is divided into two major tasks. The first task relies on specification refinement. Techniques similar to those employed in the derivation of UNITY programs are used to produce a correct rule-based program having a static knowledge base. The second task involves program refinement and is specific to the development of concurrent rule-based programs. It relies heavily on the availability of a computational model, such as Swarm, that has the ability to dynamically restructure the knowledge base. The ways in which a Swarm program can be translated to OPS5 specifically, given some restrictions, while maintaining the correctness criteria are discussed.
3.
4.
A new class of algorithms for the computation of bilinear forms has been recently introduced [1, 3]. These algorithms approximate the result with an arbitrarily small error. Such approximate algorithms may have a multiplicative complexity smaller than exact ones. On the other hand, any comparison between approximate and exact algorithms has to take into account the complexity-stability relations. In this paper some complexity measures for matrix multiplication algorithms are discussed and applied to the evaluation of exact and approximate algorithms. Multiplicative complexity is shown to remain a valid comparison test and the cost of approximation appears to be only a logarithmic factor.
5.
We present a generic scheme for the declarative debugging of programs that are written in rewriting-based languages that are equipped with narrowing. Our aim is to provide an integrated development environment in which it is possible to debug a program and then correct it automatically. Our methodology is based on the combination (in a single framework) of a semantics-based diagnoser that identifies those parts of the code that contain errors and an inductive learner that tries to repair them, once the bugs have been located in the program. We develop our methodology in several steps. First, we associate with our programs a semantics that is based on a (continuous) immediate consequence operator, T_R, which models the answers computed by narrowing and is parametric w.r.t. the evaluation strategy, which can be eager or lazy. Then, we show that, given the intended specification of a program R, it is possible to check the correctness of R by a single step of T_R. In order to develop an effective debugging method, we approximate the computed answers semantics of R and derive a finitely terminating bottom-up abstract diagnosis method, which can be used statically. Finally, a bug-correction program synthesis methodology attempts to correct the erroneous components of the wrong code. We propose a hybrid, top-down (unfolding-based) as well as bottom-up (induction-based), correction approach that is driven by a set of evidence examples which are automatically produced as an outcome by the diagnoser. The resulting program is proven to be correct and complete w.r.t. the considered example sets. Our debugging framework does not require the user to provide error symptoms in advance or to answer difficult questions concerning program correctness. An implementation of our debugging system has been undertaken which demonstrates the workability of our approach.
6.
Rwo-Hsi Wang, Mok A.K. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995, 21(7): 605-614
A key index of the performance of a rule-based program used in real-time monitoring and control is its response time, defined by the longest program execution time before a fixed point of the program is reached from a start state. Previous work in computing the response time bounds for rule-based programs effectively assumes that all rules take the same amount of firing time. It is also assumed that if two rules are enabled, then either one of them may be scheduled first for firing. These assumptions can result in loose bounds, especially when programmers choose to impose a priority structure on the set of rules. We remove the uniform firing cost assumption and discuss how to get tighter bounds by taking rule priority information into account. We show that the rule suppression relation we previously introduced can be extended to incorporate rule priority information. A bound derivation algorithm for programs whose potential trigger relations satisfy an acyclicity condition is presented, followed by its correctness proof and an analysis example.
7.
Complexity measures of supervised classification problems
Tin Kam Ho, Basu M. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(3): 289-300
We studied a number of measures that characterize the difficulty of a classification problem, focusing on the geometrical complexity of the class boundary. We compared a set of real-world problems to random labelings of points and found that real problems contain structures in this measurement space that are significantly different from the random sets. Distributions of problems in this space show that there exist at least two independent factors affecting a problem's difficulty. We suggest using this space to describe a classifier's domain of competence. This can guide static and dynamic selection of classifiers for specific problems as well as subproblems formed by confinement, projection, and transformations of the feature vectors.
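As a concrete example of the kind of geometrical measure studied in this line of work, here is a minimal sketch of the maximum Fisher discriminant ratio over features for a two-class problem (often labelled F1); the function name, NumPy-based layout, and synthetic data are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch: a simple data-complexity measure, the maximum Fisher
# discriminant ratio over features. Higher values suggest an easier,
# more linearly separable two-class problem.
import numpy as np

def fisher_ratio(X, y):
    """Max over features of (mu0 - mu1)^2 / (var0 + var1) for two classes."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12   # avoid division by zero
    return float(np.max(num / den))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
print(fisher_ratio(X, y))
```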
8.
V. Wiktor Marek, Arcot Rajasekar, Mirosław Truszczyński. Annals of Mathematics and Artificial Intelligence, 1995, 15(3-4): 357-378
In this paper we introduce the notion of an F-program, where F is a collection of formulas. We then study the complexity of computing with F-programs. F-programs can be regarded as a generalization of standard logic programs. Clauses (or rules) of F-programs are built of formulas from F. In particular, formulas other than atoms are allowed as building blocks of F-program rules. Typical examples of F are the set of all atoms (in which case the class of ordinary logic programs is obtained), the set of all literals (in this case, we get the class of logic programs with classical negation [9]), the set of all Horn clauses, the set of all clauses, the set of all clauses with at most two literals, the set of all clauses with at least three literals, etc. The notions of minimal and stable models [16, 1, 7] of a logic program have natural generalizations to the case of F-programs. The resulting notions are called in this paper minimal and stable answer sets. We study the complexity of reasoning involving these notions. In particular, we establish the complexity of determining the existence of a stable answer set, and the complexity of determining the membership of a formula in some (or all) stable answer sets. We study the complexity of the existence of minimal answer sets, and that of determining the membership of a formula in all minimal answer sets. We also list several open problems. This work was partially supported by National Science Foundation under grant IRI-9012902. This work was partially supported by National Science Foundation under grant CCR-9110721.
9.
The complexity of evaluating integers and polynomials is studied. A new model is proposed for studying such complexities. This model differs from previous models by requiring the construction of constants to be used in the computation. This construction is given a cost which is dependent upon the size of the constant. Previous models used a uniform cost, of either 0 or 1, for operations involving constants. Using this model, proper hierarchies are shown to exist for both integers and polynomials with respect to evaluation cost. Furthermore, it is shown that almost all integers (polynomials) are as difficult to evaluate as the hardest integer (polynomial). These results remain true even if the underlying basis of binary operations which the algorithm performs is varied.
10.
Horatiu Cirstea, Claude Kirchner, Radu Kopetz, Pierre-Etienne Moreau. Journal of Symbolic Computation, 2010
Negation is intrinsic to human thinking and most of the time when searching for something, we base our patterns on both positive and negative conditions. This should be naturally reflected in software that provides pattern-based searches. We would like, for example, to specify that we search for white cars that are not station wagons, or that we search for a list of objects that does not contain two identical elements.
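To make the running example concrete, here is a minimal sketch of a query mixing a positive and a negative condition, in the spirit of "white cars that are not station wagons"; the plain-Python predicate encoding is an illustrative assumption and not the rewriting-based pattern syntax the paper develops.

```python
# Hedged sketch: filtering with a positive condition (white) and a
# negative one (not a station wagon). Data and field names are made up.
cars = [
    {"color": "white", "body": "sedan"},
    {"color": "white", "body": "station wagon"},
    {"color": "red",   "body": "sedan"},
]

def matches(car):
    # positive condition AND negated condition
    return car["color"] == "white" and car["body"] != "station wagon"

print([c for c in cars if matches(c)])   # only the white sedan remains
```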
11.
12.
The complexities of the possible rendezvous and the lockout problems for propositional concurrent programs are investigated in detail. We develop a unified strategy, based on domino tiling, to show that the above two problems with respect to a variety of propositional concurrent programs are complete for a broad spectrum of complexity classes, ranging from NLOGSPACE, PTIME, NP, PSPACE to EXPTIME. Our technique is novel in the sense that it demonstrates how two seemingly unrelated models, namely, propositional concurrent programs and dominoes, can be linked together in a natural and elegant fashion.
13.
Interpretability of linguistic fuzzy rule-based systems: An overview of interpretability measures
Linguistic fuzzy modelling, developed by linguistic fuzzy rule-based systems, allows us to deal with the modelling of systems by building a linguistic model which could become interpretable by human beings. Linguistic fuzzy modelling comes with two contradictory requirements: interpretability and accuracy. In recent years the interest of researchers in obtaining more interpretable linguistic fuzzy models has grown. Whereas the measures of accuracy are straightforward and well known, interpretability measures are difficult to define since interpretability depends on several factors, mainly the model structure, the number of rules, the number of features, the number of linguistic terms, the shape of the fuzzy sets, etc. Moreover, due to the subjectivity of the concept, the choice of appropriate interpretability measures is still an open problem. In this paper, we present an overview of the proposed interpretability measures and techniques for obtaining more interpretable linguistic fuzzy rule-based systems. To this end, we will propose a taxonomy based on a double axis: "complexity versus semantic interpretability", considering the two main kinds of measures; and "rule base versus fuzzy partitions", considering the different components of the knowledge base to which both kinds of measures can be applied. The main aim is to provide a well-established framework in order to facilitate a better understanding of the topic and well-founded future works.
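Two of the simplest complexity-side measures mentioned above (number of rules and total rule length) can be counted directly from a rule base. The sketch below assumes a rule base represented as a list of rules, each rule a list of (feature, linguistic term) antecedents; the data layout and the toy rules are illustrative assumptions, not a taxonomy from the paper.

```python
# Hedged sketch: counting basic complexity measures of a fuzzy rule base.
rule_base = [
    [("temperature", "high"), ("humidity", "low")],
    [("temperature", "low")],
    [("humidity", "high"), ("wind", "strong"), ("temperature", "medium")],
]

number_of_rules = len(rule_base)
total_rule_length = sum(len(rule) for rule in rule_base)   # antecedents overall
average_rule_length = total_rule_length / number_of_rules

print(number_of_rules, total_rule_length, average_rule_length)   # 3 6 2.0
```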
14.
IEEE Transactions on Fuzzy Systems, 1994, 2(1): 82-88
The authors present a method for learning fuzzy logic membership functions and rules to approximate a numerical function from a set of examples of the function's independent variables and the resulting function value. This method uses a three-step approach to building a complete function approximation system: first, learning the membership functions and creating a cell-based rule representation; second, simplifying the cell-based rules using an information-theoretic approach for induction of rules from discrete-valued data; and, finally, constructing a computational (neural) network to compute the function value given its independent variables. This function approximation system is demonstrated with a simple control example: learning the truck and trailer backer-upper control system.
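For readers unfamiliar with fuzzy function approximation, here is a minimal sketch of evaluating a tiny fuzzy rule base over one input with triangular membership functions; the term shapes, rules, and weighted-average defuzzification are illustrative assumptions, not the paper's learned system.

```python
# Hedged sketch: one-input fuzzy inference with triangular membership
# functions and a weighted average of crisp rule consequents.

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (shoulders allowed)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Three linguistic terms for the input and a crisp output value per rule.
terms = {"low": (0.0, 0.0, 0.5), "mid": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
rules = {"low": 0.1, "mid": 0.6, "high": 0.9}   # term -> consequent

def infer(x):
    weights = {t: tri(x, *p) for t, p in terms.items()}
    total = sum(weights.values())
    return sum(weights[t] * rules[t] for t in rules) / total if total else 0.0

print(infer(0.3))   # blends the "low" and "mid" rules
```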
15.
Artificial Intelligence in Engineering, 2001, 15(1): 37-46
The use of event–condition–action (ECA) rules has transformed database systems from passive query-based data repositories to active sources of information delivery. In a similar fashion, ECA rules can be used to benefit workflow systems. In this paper, a software framework known as the STEP workflow management facility is proposed in order to manage collaborative and distributed workflows and to provide interfaces to Object Management Group-compliant product data management systems. Issues related to implementation using open standards such as CORBA are discussed. A key point underlying the framework is the flexibility it affords to users to re-configure the system according to evolving needs in collaborative product development.
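The ECA idea itself is compact enough to show in a few lines. The sketch below is a toy in-memory dispatcher; the rule fields, event names, and payload are illustrative assumptions and bear no relation to the CORBA-based STEP workflow facility proposed in the paper.

```python
# Hedged sketch: a minimal event-condition-action (ECA) rule and dispatcher.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event: str                          # event type that triggers the rule
    condition: Callable[[dict], bool]   # predicate over the event payload
    action: Callable[[dict], None]      # side effect to run when it fires

rules = [
    EcaRule("task_completed",
            lambda ev: ev.get("task") == "design_review",
            lambda ev: print("notify manufacturing of", ev["part"])),
]

def dispatch(event_type: str, payload: dict) -> None:
    for r in rules:
        if r.event == event_type and r.condition(payload):
            r.action(payload)

dispatch("task_completed", {"task": "design_review", "part": "bracket-17"})
```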
16.
The complexity of the verification problem for the behavior (dynamical properties) of systems of interacting intelligent agents is considered. This paper is a continuation of our publications [1–3], in which this problem was mainly considered as applied to deterministic and nondeterministic systems, and largely focuses on asynchronous systems.
17.
Zizette Boufriche-Boufaïda. Expert Systems, 1999, 16(1): 11-18
Our research originates from a study of the possibilities of integrating rules and objects in knowledge-based systems. In the present work, we are interested in the interactionist perspective of an object. The stepwise reasoning of a diagnostic expert system, possibly involving subgoaling and interactions with the environment, can be easily codified by means of production rules over proposition literals. This set of rules can be graphically represented in a network manner denoting the relations between the rules. The individual nodes in the network can be expressed by means of autonomous objects and their relations, interpreted as possible communications between them. The objects are given a structure and a proper behaviour and cooperate for performing logical reasoning by means of forward and backward chaining inference processes. Therefore, designing this system implies addressing several basic issues such as inter-object communications and their synchronization. The problem here is not necessarily to develop a great intelligence locally but to develop strong networks of good communicators. This approach belongs to the interactionist representation current, where objects are called actors. In principle, the actors may carry out computation in parallel and provide a conceptual foundation for massively concurrent object-oriented paradigms. From this point of view, a system allowing for the simultaneous investigation of several rules and premises in the forward or the backward chaining would be significantly more efficient.
18.
This paper proposes an approach to investigate norm-governed learning agents which combines a logic-based formalism with an equation-based counterpart. This dual formalism enables us to describe the reasoning of such agents and their interactions using argumentation, and, at the same time, to capture systemic features using equations. The approach is applied to norm emergence and internalisation in systems of learning agents. The logical formalism is rooted in a probabilistic defeasible logic instantiating Dung's argumentation framework. Rules of this logic are attached with probabilities to describe the agents' minds and behaviours as well as uncertain environments. Then, the equation-based model for reinforcement learning, defined over this probability distribution, allows agents to adapt to their environment and self-organise.
19.
Evolutionary rule-based systems for imbalanced data sets
Albert Orriols-Puig, Ester Bernadó-Mansilla. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2009, 13(3): 213-225
This paper investigates the capabilities of evolutionary on-line rule-based systems, also called learning classifier systems (LCSs), for extracting knowledge from imbalanced data. While some learners may suffer from class imbalances and instances sparsely distributed around the feature space, we show that LCSs are flexible methods that can be adapted to detect such cases and find suitable models. Results on artificial data sets specifically designed for testing the capabilities of LCSs in imbalanced data show that LCSs are able to extract knowledge from highly imbalanced domains. When LCSs are used with real-world problems, they prove to be among the most robust methods compared with instance-based learners, decision trees, and support vector machines. Moreover, all the learners benefit from re-sampling techniques. Although no re-sampling technique performs best across all data sets and learners, those based on over-sampling seem to perform better on average. The paper adapts and analyzes LCSs for challenging imbalanced data sets and establishes the basis for further study of which combination of re-sampling technique and learner is best suited to a specific kind of problem.
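Random over-sampling is the simplest member of the re-sampling family discussed above, and it is easy to sketch; the function name, the NumPy layout, and the toy data below are illustrative assumptions, while the paper also covers more elaborate schemes.

```python
# Hedged sketch: random over-sampling of the minority class until the two
# classes have equal counts.
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate randomly chosen minority-class rows to balance the classes."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    need = counts.max() - counts.min()
    idx = rng.choice(np.flatnonzero(y == minority), size=need, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))   # both classes now have 8 examples
```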
20.
Priority rules are a powerful approach to large-scale resource-constrained project scheduling problems (Resource-Constrained Project Scheduling Problem, RCPSP), but a single priority rule typically performs well only on certain specific problems. Building on particle swarm optimization, this paper proposes a priority-rule-based particle swarm optimization algorithm (Priority Rule based Particle Swarm Optimization, PRPSO) for solving the resource-constrained project scheduling problem. The method uses particle swarm search to explore combinations of priority rules and schedule generation schemes. Problem instances with different numbers of activities and resource strengths are analyzed for PRPSO under the serial, parallel, and hybrid schedule generation schemes. Tests on PSPLIB show that, compared with other priority-rule-based heuristics, the method achieves lower deviation rates and thus better performance.
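As a rough illustration of the building block PRPSO searches over, here is a minimal sketch of a serial schedule generation scheme driven by a single priority rule (shortest duration first) on a toy single-resource RCPSP instance; the PSO layer is omitted, and the rule choice, data, and function names are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: serial schedule generation scheme with one renewable resource.
# Activities are scheduled one by one, each at the earliest precedence- and
# resource-feasible start time, in the order given by a priority rule.

def serial_sgs(durations, demands, preds, capacity, priority):
    n = len(durations)
    start, usage = {}, {}              # activity -> start time, time -> load
    scheduled = set()
    while len(scheduled) < n:
        # eligible = unscheduled activities whose predecessors are all done
        eligible = [a for a in range(n)
                    if a not in scheduled and set(preds[a]) <= scheduled]
        a = min(eligible, key=priority)          # apply the priority rule
        est = max([start[p] + durations[p] for p in preds[a]], default=0)
        t = est
        while any(usage.get(tt, 0) + demands[a] > capacity
                  for tt in range(t, t + durations[a])):
            t += 1                               # shift right until feasible
        for tt in range(t, t + durations[a]):
            usage[tt] = usage.get(tt, 0) + demands[a]
        start[a] = t
        scheduled.add(a)
    return start

durations = [3, 2, 4, 1]
demands   = [2, 3, 2, 1]
preds     = [[], [0], [0], [1, 2]]
print(serial_sgs(durations, demands, preds, capacity=4,
                 priority=lambda a: durations[a]))
```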