Similar Documents
20 similar documents found (search time: 171 ms)
1.
Software Quality Evaluation Based on Expert Judgement   Cited by: 3 (self-citations: 0, citations by others: 3)
A method using expert judgement for the evaluation of software quality is presented. The underlying principle of the approach is the encoding of experts' tacit knowledge into probabilistic measures associated with the achievement level of software quality attributes. An aggregated quality measure is obtained based on preference statements related to the quality attributes. The technical objectives of the paper are: to develop a generic and operationally feasible measurement technique that transforms the tacit knowledge of a software expert into a probability distribution depicting his/her uncertainty about the level of achievement of a quality attribute; to develop rules for the construction of a consensus probability measure from the expert-specific probability measures; and to derive a framework for specifying a software quality strategy and for evaluating the acceptance of software produced in a software development process. These technical developments are used to support group decision-making regarding the launch or implementation decision for a software version and the allocation of resources during the software development process.
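As a rough illustration of how expert-specific probability distributions over the achievement levels of a quality attribute might be combined into a consensus measure, the sketch below uses a simple linear opinion pool; the attribute scale, the experts' distributions, and the weights are hypothetical, and the paper's actual consensus-construction rules may differ.

    # Hypothetical sketch: linear opinion pool over expert distributions for
    # the achievement level of one software quality attribute.
    levels = ["low", "medium", "high"]              # achievement levels (made up)

    expert_distributions = [                        # one distribution per expert (made up)
        {"low": 0.2, "medium": 0.5, "high": 0.3},
        {"low": 0.1, "medium": 0.4, "high": 0.5},
    ]
    expert_weights = [0.6, 0.4]                     # normalized weights on the experts

    consensus = {
        level: sum(w * d[level] for w, d in zip(expert_weights, expert_distributions))
        for level in levels
    }
    print(consensus)   # approximately {'low': 0.16, 'medium': 0.46, 'high': 0.38}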

2.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast. On a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to a binder. The program can compile an Automath text into one big Automath single-line-style λ-term, and it outputs such a term using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.
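The graph representation mentioned above can be sketched in a few lines (in Python here, purely for illustration; the actual aut checker is written in C and its data structures are not reproduced): a variable occurrence is nothing but a pointer to the node that binds it, so neither variable names nor de Bruijn indices are needed, and alpha-equivalence becomes a direct graph comparison.

    # Hypothetical sketch of binder-pointer term graphs; all names are illustrative only.
    class Binder:
        # A binder node; the variable it binds is identified by pointing at this node.
        def __init__(self, body=None):
            self.body = body

    class Var:
        # A variable occurrence: just a reference to its binder.
        def __init__(self, binder):
            self.binder = binder

    class App:
        # An application node.
        def __init__(self, fun, arg):
            self.fun, self.arg = fun, arg

    def alpha_equal(s, t, pairing=None):
        # Structural comparison; bound variables match when their binders correspond,
        # so no renaming is ever performed.
        pairing = pairing or {}
        if isinstance(s, Binder) and isinstance(t, Binder):
            return alpha_equal(s.body, t.body, {**pairing, id(s): id(t)})
        if isinstance(s, Var) and isinstance(t, Var):
            return pairing.get(id(s.binder)) == id(t.binder)
        if isinstance(s, App) and isinstance(t, App):
            return alpha_equal(s.fun, t.fun, pairing) and alpha_equal(s.arg, t.arg, pairing)
        return False

    # The identity function \x. x: the variable occurrence points directly at its binder.
    identity = Binder()
    identity.body = Var(identity)
    other = Binder(); other.body = Var(other)
    print(alpha_equal(identity, other))   # True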

3.
We provide techniques to integrate resolution logic with equality in type theory. The results may be rendered as follows: a clausification procedure in type theory, equipped with a correctness proof, all encoded using higher-order primitive recursion; a novel representation of clauses in minimal logic such that the λ-representation of resolution steps is linear in the size of the premisses; a translation of resolution proofs into lambda terms, yielding a verification procedure for those proofs; and availability of the power of resolution theorem provers in interactive proof construction systems based on type theory.

4.
5.
We study path integration on a quantum computer that performs quantum summation. We assume that the measure of path integration is Gaussian, with the eigenvalues of its covariance operator of order j^{-k} with k > 1. For the Wiener measure occurring in many applications we have k = 2. We want to compute an ε-approximation to path integrals whose integrands are at least Lipschitz. We prove: Path integration on a quantum computer is tractable. Path integration on a quantum computer can be solved roughly ε^{-1} times faster than on a classical computer using randomization, and exponentially faster than on a classical computer with a worst case assurance. The number of quantum queries needed to solve path integration is roughly the square root of the number of function values needed on a classical computer using randomization. More precisely, the number of quantum queries is at most 4.46 ε^{-1}. Furthermore, a lower bound is obtained for the minimal number of quantum queries which shows that this bound cannot be significantly improved. The number of qubits is polynomial in ε^{-1}. Furthermore, for the Wiener measure the degree is 2 for Lipschitz functions, and the degree is 1 for smoother integrands. PACS: 03.67.Lx; 31.15.Kb; 31.15.-p; 02.70.-c
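Writing ε for the target accuracy, the speedup and cost statements above can be summarized as follows (a compact restatement of the bounds quoted in the abstract, not additional results):

    \[
      n_{\mathrm{quantum\ queries}}(\varepsilon) \;\approx\; \sqrt{\,n_{\mathrm{classical\ randomized\ evaluations}}(\varepsilon)\,},
      \qquad
      n_{\mathrm{quantum\ queries}}(\varepsilon) \;\le\; 4.46\,\varepsilon^{-1},
      \qquad
      \#\,\mathrm{qubits} \;=\; O\!\bigl(\varepsilon^{-d}\bigr),
    \]

with d = 2 for Lipschitz integrands under the Wiener measure and d = 1 for smoother integrands.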

6.
In this paper the problem of routing messages along shortest paths in a distributed network without using complete routing tables is considered. In particular, the complexity of deriving minimum (in terms of number of intervals) interval routing schemes is analyzed under different requirements. For all the cases considered NP-hardness proofs are given, while some approximability results are provided. Moreover, relations among the different cases considered are studied. This work was supported by the EEC ESPRIT II Basic Research Action Program under Contract No. 7141 "Algorithms and Complexity II", by the EEC Human Capital and Mobility MAP project, and by the Italian MURST 40% project "Algoritmi, Modelli di Calcolo e Strutture Informative".

7.
We develop a theory of communication within branching programs that provides exponential lower bounds on the size of branching programs that are bounded alternating. Our theory is based on the algebraic concept of ω-branching programs, where ω is a semiring homomorphism, which generalizes ordinary branching programs, ⊕-branching programs [M2], and MOD_p-branching programs [DKMW]. Due to certain exponential lower and polynomial upper bounds on the size of bounded alternating ω-branching programs we are able to separate the corresponding complexity classes N_ba, co-N_ba, ⊕_ba, and MOD_p-ba, p prime, from each other, and from the classes corresponding to oblivious linear length-bounded branching programs investigated in the past.

8.
Blum, Avrim; Burch, Carl. Machine Learning, 2000, 39(1): 35–58
The problem of combining expert advice, studied extensively in the Computational Learning Theory literature, and the Metrical Task System (MTS) problem, studied extensively in the area of On-line Algorithms, contain a number of interesting similarities. In this paper we explore the relationship between these problems and show how algorithms designed for each can be used to achieve good bounds and new approaches for solving the other. Specific contributions of this paper include: an analysis of how two recent algorithms for the MTS problem can be applied to the problem of tracking the best expert in the decision-theoretic setting, providing good bounds and an approach of a much different flavor from the well-known multiplicative-update algorithms; an analysis showing how the standard randomized Weighted Majority (or Hedge) algorithm can be used for the problem of combining on-line algorithms on-line, giving much stronger guarantees than the results of Azar, Broder, and Manasse (Proc. ACM-SIAM Symposium on Discrete Algorithms, 1993, pp. 432–440) when the algorithms being combined occupy a state space of bounded diameter; and a generalization of the above, showing how (a simplified version of) Herbster and Warmuth's weight-sharing algorithm can be applied to give a finely competitive bound for the uniform-space Metrical Task System problem. We also give a new, simpler algorithm for tracking experts, which unfortunately does not carry over to the MTS problem. Finally, we present an experimental comparison of how these algorithms perform on a process migration problem, a problem that combines aspects of both the experts-tracking and MTS formalisms.
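For reference, the multiplicative-update scheme behind the randomized Weighted Majority / Hedge algorithm mentioned above has the following generic shape (a textbook-style sketch, not the paper's exact formulation or parameter choices): each expert's weight is multiplied by beta raised to that expert's loss, and the learner follows an expert drawn with probability proportional to the current weights.

    import random

    def hedge(loss_sequence, beta=0.9):
        # loss_sequence: list of per-round loss vectors, one entry per expert, losses in [0, 1].
        # Returns the index of the expert followed in each round.
        num_experts = len(loss_sequence[0])
        weights = [1.0] * num_experts
        followed = []
        for losses in loss_sequence:
            total = sum(weights)
            probs = [w / total for w in weights]                          # follow experts in proportion to weight
            followed.append(random.choices(range(num_experts), probs)[0])
            weights = [w * beta ** l for w, l in zip(weights, losses)]    # multiplicative update
        return followed

    # Toy run: expert 1 incurs less loss, so its weight (and selection probability) grows.
    print(hedge([[1.0, 0.0], [1.0, 0.0], [0.8, 0.1], [1.0, 0.0]]))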

9.
Adaptive control is considered for a two-dimensional linear discrete-time plant with randomly drifting parameters. The certainty-equivalent minimum variance control law is used together with a projection-like identification algorithm. The stability of the parameter estimates and the exponential stability of the closed-loop system are proved in the absence of any persistent excitation assumption.

10.
The adaptiveness of agents is one of the basic conditions for their autonomy. This paper describes an approach to adaptiveness for Monitoring Cognitive Agents based on the notion of generic spaces. This notion allows the definition of virtual generic processes, so that any particular actual process is then a simple configuration of the generic process, that is to say, a set of parameter values. Consequently, a generic domain ontology containing the generic knowledge for solving problems concerning the generic process can be developed. This leads to the design of the Generic Monitoring Cognitive Agent, a class of agent in which the whole knowledge corpus is generic. In other words, modeling a process within a generic space becomes configuring a generic process, and adaptiveness becomes genericity, that is to say, independence with regard to technology. In this paper, we present an application of this approach to Sachem, a Generic Monitoring Cognitive Agent designed to help operators run a blast furnace. Specifically, the NeuroGaz module of Sachem is used to present the notion of a generic blast furnace. The adaptiveness of Sachem can then be seen in the low cost of deploying a Sachem instance on different blast furnaces and in the ability of NeuroGaz to solve problems and learn from various top-gas instrumentation.

11.
The term F-cardinality of ℱ (= F-card(ℱ)) is introduced, where F: ℝ^n → ℝ^n is a partial function and ℱ is a set of partial functions f: ℝ^n → ℝ^n. The F-cardinality yields a lower bound for the worst-case complexity of computing F if only functions f ∈ ℱ can be evaluated by the underlying abstract automaton without conditional jumps. This complexity bound is independent of the oracles available for the abstract machine. Thus it is shown that any automaton which can only apply the four basic arithmetic operations needs Ω(n log n) worst-case time to sort n numbers; this result is even true if conditional jumps with arbitrary conditions are possible. The main result of this paper is the following: Given a total function F: ℝ^n → ℝ^n and a natural number k, it is almost always possible to construct a set ℱ such that its F-cardinality has the value k; in addition, ℱ can be required to be closed under composition of functions f, g ∈ ℱ. Moreover, if F is continuous, then ℱ consists of continuous functions.

12.
Deterministic global optimization with interval analysis involves using interval enclosures for ranges of the constraints, objective, and gradient to reject infeasible regions, regions without global optima, and regions without critical points, and using interval Newton methods to converge on optimum-containing regions and to verify global optima. There are certain problems for which interval dependency leads to overestimation in the enclosures of the individual components, causing the optimization search to become prohibitively inefficient. As Hansen observed earlier, in other problems there is no overestimation in the individual components, but overestimation is introduced in the preconditioning in the interval Newton method. We examine these issues for a particular nonlinear systems problem that, to date, has defied numerical solution. To reduce overestimation, we use Taylor models. The Taylor models sometimes reduce individual overestimation but, consistent with Hansen's observations, especially reduce the overestimation due to preconditioning. From numerical experiments, we conclude that, in certain instances, Taylor models can greatly reduce both the number of subregions necessary to complete an exhaustive search and the total computational effort.
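The dependency problem referred to above is easy to see on a toy example: evaluated naively in interval arithmetic, x − x over the interval [0, 1] treats the two occurrences of x as independent and returns [−1, 1], even though the true range is {0}. A minimal sketch (plain tuples standing in for intervals; not the paper's code or a Taylor-model implementation):

    # Naive interval subtraction: [a, b] - [c, d] = [a - d, b - c].
    def interval_sub(x, y):
        return (x[0] - y[1], x[1] - y[0])

    x = (0.0, 1.0)
    print(interval_sub(x, x))   # (-1.0, 1.0): overestimates the true range {0} of x - x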

13.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) to assign to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f_λ was sampled from a space whose probability is determined by λ, and to average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: Computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
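In symbols, and staying with the one-variable setting of the abstract, the quantities discussed above can be written roughly as follows (the smoothness term shown is the standard second-derivative penalty; the paper's exact functional may differ in such details):

    \[
      E_{\lambda}(f) \;=\; \sum_i \bigl(f(x_i) - d_i\bigr)^2 \;+\; \lambda \int f_{uu}^{2}(u)\,du,
      \qquad
      \Pr(f \mid \lambda) \;\propto\; \exp\Bigl(-\lambda \int f_{uu}^{2}(u)\,du\Bigr),
    \]
    \[
      f_{\mathrm{MAP}} \;=\; \arg\max_{f} \Pr(f \mid D),
      \qquad
      f_{\mathrm{MSE}}(x) \;=\; \int f(x)\,\Pr(f \mid D)\,\mathcal{D}f
      \;=\; \int \Pr(\lambda \mid D)\, \mathbb{E}\bigl[f(x) \mid D, \lambda\bigr]\, d\lambda,
    \]

so the unknown weight λ (and likewise the unknown noise variance) is averaged out rather than fixed at a single value.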

14.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.

15.
Reduction, equality, and unification are studied for a family of simply typed λ-calculi with subtypes. The subtype relation is required to relate base types only to base types and to satisfy some order-theoretic conditions. Constants are required to have a least type, that is, no overloading. We define the usual and a subtype-dependent β-reduction. These are related to a typed equality relation and shown to be confluent in a certain sense. We present a generic algorithm for preunification modulo βη-conversion and an arbitrary subtype relation. Furthermore it is shown that unification with respect to any subtype relation is universal.

16.
Summary. For a family of languages ℒ, CAL(ℒ) is defined as the family of images of ℒ under nondeterministic two-way finite state transducers, while FINITE·VISIT(ℒ) is the closure of ℒ under deterministic two-way finite state transducers; CAL⁰(ℒ) = ℒ and, for n ≥ 0, CAL^{n+1}(ℒ) = CAL^n(CAL(ℒ)). For any semiAFL ℒ, if FINITE·VISIT(ℒ) ⊆ CAL(ℒ), then CAL^n(ℒ) forms a proper hierarchy and, for every n ≥ 0, FINITE·VISIT(CAL^n(ℒ)) ⊆ CAL^{n+1}(ℒ) ⊆ FINITE·VISIT(CAL^{n+1}(ℒ)). If ℒ is a SLIP semiAFL or a weakly k-iterative full semiAFL or a semiAFL contained in any full bounded AFL, then FINITE·VISIT(ℒ) ⊆ CAL(ℒ) and, in the last two cases, ℒ ⊊ FINITE·VISIT(ℒ). If ℒ is a substitution closed full principal semiAFL and ℒ ⊊ FINITE·VISIT(ℒ), then FINITE·VISIT(ℒ) ⊆ CAL(ℒ). If ℒ is a substitution closed full principal semiAFL generated by a language without an infinite regular set and ℒ₁ is a full semiAFL, then ℒ is contained in CAL^m(ℒ₁) if and only if it is contained in ℒ₁. Among the applications of these results are the following. For the following families ℒ, CAL^n(ℒ) forms a proper hierarchy: ℒ = INDEXED, ℒ = ETOL, and any semiAFL ℒ contained in CF. The family CF is incomparable with CAL^m(NESA), where NESA is the family of one-way nonerasing stack languages, and INDEXED is incomparable with CAL^m(STACK), where STACK is the family of one-way stack languages. This work was supported in part by the National Science Foundation under Grants No. DCR74-15091 and MCS-78-04725.

17.
One major task in requirements specification is to capture the rules relevant to the problem at hand. Declarative, rule-based approaches have been suggested by many researchers in the field. However, when it comes to modeling large systems of rules, not only for the behavior of the computer system but also for the organizational environment surrounding it, current approaches suffer from limited expressiveness and flexibility and from poor comprehensibility. Hence, rule-based approaches may benefit from improvements in two directions: (1) improvement of the rule languages themselves and (2) better integration with other, complementary modeling approaches. In this article, both issues are addressed in an integrated manner. The proposal is presented in the context of the Tempora project on rule-based information systems development, but has also been integrated with PPP. Tempora has provided a rule language based on an executable temporal logic working on top of a temporal database. The rule language is integrated with static (ER-like) and dynamic (SA/RT-like) modeling approaches. In the current proposal, the integration with complementary modeling approaches is extended by including organization modeling (actors, roles), and the expressiveness of the rule language is increased by introducing deontic operators and rule hierarchies. The main contribution of the article is not seen as any one of the above-mentioned extensions, but as the resulting comprehensive modeling support. The approach is illustrated by examples taken from an industrial case study done in connection with Tempora. C. List of Symbols: set relations (subset of set, not subset of set, element of set, not element of set, equivalent to, not equivalent to); logical connectives (¬ negation, logical and, logical or, implication); temporal operators (sometime in past, sometime in future, always in past, always in future, just before, just after, until, since); rule components (trigger, condition, state condition, consequence, action, state); organizational concepts (role, actor); deontic operators (general deontic operator, O obligatory, R recommended, P permitted, D discouraged, F forbidden); general rule; t_R real time; t_M model time.

18.
Given a finite set E ⊂ ℝ^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.

19.
The notion of the rational closure of a positive knowledge base K of conditional assertions θ |∼ φ (standing for "if θ then normally φ") was first introduced by Lehmann (1989) and developed by Lehmann and Magidor (1992). Following those authors we would also argue that the rational closure is, in a strong sense, the minimal information, or simplest, rational consequence relation satisfying K. In practice, however, one might expect a knowledge base to consist not just of positive conditional assertions θ |∼ φ, but also negative conditional assertions θ |≁ φ (standing for "not: if θ then normally φ"). Restricting ourselves to a finite language we show that the rational closure still exists for satisfiable knowledge bases containing both positive and negative conditional assertions and has similar properties to those exhibited in Lehmann and Magidor (1992). In particular an algorithm in Lehmann and Magidor (1992) which constructs the rational closure can be adapted to this case and yields, in turn, completeness theorems for the conditional assertions entailed by such a mixed knowledge base.

20.
Lockheed Martin InVision provides software renovation and sustainment services, including analyzing systems for interesting features, transforming systems to new environments, and recasting systems to new architectures and languages. We seek an optimal blend of effort by automating the straightforward parts of a reengineering task under human control. We achieve this automation through a judicious combination of artificial intelligence and compiler-compiler techniques. This paper describes the InVision tool set and reengineering process and presents some examples of the applications of this technology.

