Similar Literature
20 similar documents found; search took 46 ms.
1.
The study reported in this paper suggests that in order to achieve optimal benefits from implementing process improvement programs, organisations must move towards becoming what is termed a learning organisation. Software process assessment leads to the identification and selection of key activities for improvement and the continuous application of improvements to match business needs (ISO/IEC 1996). Continuous improvement requires a commitment to learning on the part of the organisation (Garvin 1993). A model to help identify evidence of learning, the Organisational Learning Evaluation Cycle (OLEC), has been developed and empirically tested in the study. We have found evidence to suggest that the case study organisation had not moved through all three of Garvin's (1993) overlapping phases of organisational learning, and as a result the firm's improvement program did not achieve optimal benefits for the organisation. The paper concludes by discussing why significant improvement in performance was not achieved.

2.
This paper deals with the problem of modelling the dynamics of articulation for a parameterised talking head based on phonetic input. Four different models are implemented and trained to reproduce the articulatory patterns of a real speaker, based on a corpus of optical measurements. Two of the models (Cohen-Massaro and Öhman) are based on coarticulation models from speech production theory, and two are based on artificial neural networks, one of which is specially intended for streaming real-time applications. The different models are evaluated by comparing predicted and measured trajectories, which shows that the Cohen-Massaro model produces the trajectories that best match the measurements. A perceptual intelligibility experiment is also carried out, in which the four data-driven models are compared against a rule-based model as well as an audio-alone condition. Results show that all models give significantly increased speech intelligibility over the audio-alone case, with the rule-based model yielding the highest intelligibility score.
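The trajectory comparison described in this abstract can be illustrated with a small sketch. The RMSE and per-parameter correlation used below are generic choices for comparing predicted and measured articulatory trajectories, not necessarily the exact measures used in the paper.

```python
import numpy as np

def trajectory_error(predicted, measured):
    """Compare a predicted articulatory trajectory against measurements.

    Both inputs have shape (frames, parameters). Returns the root-mean-square
    error and the mean per-parameter correlation -- two generic measures for
    this kind of evaluation (the paper's exact metrics may differ).
    """
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    # Correlation computed per articulatory parameter, then averaged.
    corrs = [np.corrcoef(predicted[:, i], measured[:, i])[0, 1]
             for i in range(measured.shape[1])]
    return rmse, float(np.mean(corrs))

if __name__ == "__main__":
    t = np.linspace(0, 1, 100)
    measured = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
    predicted = measured + 0.05 * np.random.default_rng(0).normal(size=measured.shape)
    print(trajectory_error(predicted, measured))
```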

3.
This article is a report on research in progress into the structure of finite diagrams of intuitionistic propositional logic, with the aid of automated reasoning systems for the larger calculations. A fragment of a propositional logic is the set of formulae built up from a finite number of propositional variables by means of a number of connectives of the logic, among them possibly non-standard ones (such as ¬¬) which are studied here. The diagram of that fragment is the set of equivalence classes of its formulae partially ordered by the derivability relation. N.G. de Bruijn's concept of exact model has been used to construct subdiagrams of the [p, q, , , ¬]-fragment.

4.
A key managerial challenge, of interest to academics and practitioners alike, is the assessment and management of customer satisfaction. In this paper, we examine the underlying processes of consumer satisfaction and switching patterns among ISPs using different satisfaction models, including the expectations-disconfirmation model, the attribution model, and an affective model. Our results indicate that the satisfaction levels of ISP consumers are generally relatively low, despite the fact that consumer expectations of ISPs are also low, reflecting mediocrity in the marketplace. In addition, consumers attribute their dissatisfaction to ISP indifference and believe that managing dissatisfaction is within the control of the ISP. Moreover, affective factors play an important role in satisfaction processes and switching behavior. Customer service, including technical support and the responsiveness of service staff, is an important determinant in ISP selection. We suggest that as the ISP market matures, service providers that pay attention to affective factors and to building relationships with their customers will have a competitive advantage in the marketplace of the future.

5.
This paper presents algorithms for multiterminal net channel routing where multiple interconnect layers are available. Major improvements are possible if wires are able to overlap, and our generalized main algorithm allows overlap, but only on every Kth (K ≥ 2) layer. Our algorithm will, for a problem with density d on L layers, L ≥ K + 3, provably use at most three tracks more than optimal: (d + 1)/⌊L/K⌋ + 2 tracks, compared with the lower bound of d/⌊L/K⌋. Our algorithm is simple, has few vias, tends to minimize wire length, and could be used if different layers have different grid sizes. Finally, we extend our algorithm in order to obtain improved results for adjacent (K = 1) overlap: (d + 2)/⌊2L/3⌋ + 5 for L ≥ 7. This work was supported by the Semiconductor Research Corporation under Contract 83-01-035, by a grant from the General Electric Corporation, and by a grant at the University of the Saarland.
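The density d appearing in the track bounds above can be computed directly from the nets' horizontal spans. The sketch below uses an illustrative net representation (a list of column intervals), not the paper's data structures: for each column it counts how many nets cross it and takes the maximum.

```python
def channel_density(nets):
    """Return the density d of a channel routing problem.

    `nets` is a list of (leftmost_column, rightmost_column) spans; the density
    is the maximum, over all columns, of the number of nets whose span covers
    that column. This is the quantity d used in the track bounds quoted above.
    """
    if not nets:
        return 0
    lo = min(left for left, _ in nets)
    hi = max(right for _, right in nets)
    return max(sum(1 for left, right in nets if left <= col <= right)
               for col in range(lo, hi + 1))

# Example: three nets, two of which overlap at columns 3-4, so d = 2.
print(channel_density([(0, 4), (3, 7), (8, 9)]))
```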

6.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term model is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of model in the area of Software Engineering from a philosophical point of view, an important topic since the confusion that arises directly affects other debates in which model is a key concept. We also suggest that the need for a philosophical discussion of the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

7.
Summary. The complements of an AFL form an AFL if and only if the AFL is closed under length-preserving universal quantification. The complements of the context-sensitive languages form a principal AFL with a hardest set L1. The context-sensitive languages are closed under complementation if and only if L1 is context-sensitive. This research was supported in part by the National Science Foundation under Grants MCS76-10076 and DCR74-15091.

8.
Learning to Play Chess Using Temporal Differences
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex. Machine Learning, 2000, 40(3): 243–263.
In this paper we present TDLeaf(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLeaf(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play rather than self-play. We also investigate whether TDLeaf(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
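A minimal sketch of the idea behind TDLeaf(λ): the evaluation function is applied to the leaf of the principal variation returned by the game-tree search, and its parameters are moved along the gradient weighted by λ-discounted future temporal differences. The linear evaluation and the toy features below are placeholders, not KnightCap's actual implementation.

```python
import numpy as np

def tdleaf_update(weights, leaf_features, alpha=0.01, lam=0.7):
    """One TDLeaf(lambda) style update over a finished game.

    `leaf_features` holds, for each position visited in the game, the feature
    vector of the principal-variation leaf found by the search (here the
    evaluation is linear: J(x) = w . phi(x)). The update is
        w <- w + alpha * sum_t grad J(x_t) * sum_{j>=t} lam^(j-t) * d_j,
    with temporal differences d_t = J(x_{t+1}) - J(x_t).
    """
    values = np.array([weights @ phi for phi in leaf_features])
    diffs = values[1:] - values[:-1]          # temporal differences d_t
    n = len(diffs)
    new_w = weights.copy()
    for t in range(n):
        discounted = sum(lam ** (j - t) * diffs[j] for j in range(t, n))
        grad = leaf_features[t]               # gradient of the linear J
        new_w += alpha * discounted * grad
    return new_w

# Toy usage with random "principal-variation leaf" feature vectors.
rng = np.random.default_rng(0)
w = np.zeros(8)
game = [rng.normal(size=8) for _ in range(20)]
w = tdleaf_update(w, game)
```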

9.
This paper gives PAC guarantees for Bayesian algorithms—algorithms that optimize risk minimization expressions involving a prior probability and a likelihood for the training data. PAC-Bayesian algorithms are motivated by a desire to provide an informative prior encoding information about the expected experimental setting but still having PAC performance guarantees over all IID settings. The PAC-Bayesian theorems given here apply to an arbitrary prior measure on an arbitrary concept space. These theorems provide an alternative to the use of VC dimension in proving PAC bounds for parameterized concepts.
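As a hedged illustration of the kind of statement meant here, one commonly quoted form of a PAC-Bayesian bound (not necessarily the exact theorem of this paper) reads: for a prior $P$, with probability at least $1-\delta$ over an i.i.d. sample of size $m$, simultaneously for all posteriors $Q$,

```latex
\mathbb{E}_{c \sim Q}\!\left[\mathrm{err}_D(c)\right]
  \;\le\;
\mathbb{E}_{c \sim Q}\!\left[\widehat{\mathrm{err}}_S(c)\right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta} + 2}{2m - 1}}
```

The key point matching the abstract is that the complexity term is the KL divergence from the prior rather than a VC-dimension term.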

10.
The adaptiveness of agents is one of the basic conditions for autonomy. This paper describes an approach to adaptiveness for Monitoring Cognitive Agents based on the notion of generic spaces. This notion allows the definition of virtual generic processes, so that any particular actual process is then simply a configuration of the generic process, that is to say a set of parameter values. Consequently, a generic domain ontology containing the generic knowledge for solving problems concerning the generic process can be developed. This leads to the design of the Generic Monitoring Cognitive Agent, a class of agent in which the whole knowledge corpus is generic. In other words, modeling a process within a generic space becomes configuring a generic process, and adaptiveness becomes genericity, that is to say independence with regard to technology. In this paper, we present an application of this approach in Sachem, a Generic Monitoring Cognitive Agent designed to help operators in operating a blast furnace. Specifically, the NeuroGaz module of Sachem is used to present the notion of a generic blast furnace. The adaptiveness of Sachem can then be seen in the low cost of deploying a Sachem instance on different blast furnaces and in the ability of NeuroGaz to solve problems and learn from various top-gas instrumentation.

11.
In this paper, a novel neural network approach to real-time collision-free path planning of robot manipulators in a nonstationary environment is proposed, based on a biologically inspired neural network model for dynamic trajectory generation of a point mobile robot. The state space of the proposed neural network is the joint space of the robot manipulators, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The real-time robot path is planned through the varying neural activity landscape that represents the dynamic environment. The proposed model for robot path planning with safety consideration is capable of planning a real-time comfortable path without suffering from either the "too close" or the "too far" problem. The model algorithm is computationally efficient: the computational complexity is linearly dependent on the neural network size. The effectiveness and efficiency are demonstrated through simulation studies.
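The shunting dynamics mentioned above can be sketched numerically. The equation below is the standard Grossberg-style shunting model; the constants, grid size, and inputs are illustrative, and the lateral propagation that spreads target activity across the joint-space grid in the actual planner is omitted for brevity.

```python
import numpy as np

def shunting_step(x, excitation, inhibition, A=10.0, B=1.0, D=1.0, dt=0.01):
    """One Euler step of the shunting equation

        dx_i/dt = -A*x_i + (B - x_i)*E_i - (D + x_i)*I_i

    where E_i / I_i are the excitatory / inhibitory inputs to neuron i.
    Activities stay bounded in [-D, B], which is what makes the resulting
    neural activity landscape usable for path planning.
    """
    return x + dt * (-A * x + (B - x) * excitation - (D + x) * inhibition)

# Toy 1-D joint space: the target excites neuron 8, an obstacle inhibits neuron 3.
x = np.zeros(10)
excitation = np.zeros(10); excitation[8] = 100.0
inhibition = np.zeros(10); inhibition[3] = 100.0
for _ in range(1000):
    x = shunting_step(x, excitation, inhibition)
print(np.round(x, 2))   # activity peaks at the target, stays negative at the obstacle
```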

12.
In recent years, constraint satisfaction techniques have been successfully applied to disjunctive scheduling problems, i.e., scheduling problems where each resource can execute at most one activity at a time. Less significant and less generally applicable results have been obtained in the area of cumulative scheduling. Multiple constraint propagation algorithms have been developed for cumulative resources, but they tend to be less uniformly effective than their disjunctive counterparts. Different problems in the cumulative scheduling class seem to have different characteristics that make them either easy or hard to solve with a given technique. The aim of this paper is to investigate one particular dimension along which problems differ. Within the cumulative scheduling class, we distinguish between highly disjunctive and highly cumulative problems: a problem is highly disjunctive when many pairs of activities cannot execute in parallel, e.g., because many activities require more than half of the capacity of a resource; on the contrary, a problem is highly cumulative if many activities can effectively execute in parallel. New constraint propagation and problem decomposition techniques are introduced with this distinction in mind. This includes an O(n²) edge-finding algorithm for cumulative resources (where n is the number of activities requiring the same resource) and a problem decomposition scheme which applies well to highly disjunctive project scheduling problems. Experimental results confirm that the impact of these techniques varies from highly disjunctive to highly cumulative problems. In the end, we also propose a refined version of the edge-finding algorithm for cumulative resources which, despite its worst-case complexity in O(n³), performs very well on highly cumulative instances.
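A flavour of the propagation reasoning on a cumulative resource is the energetic overload check sketched below. This is a simplified O(n³) feasibility test, not the paper's O(n²) edge-finding algorithm; the activity representation is a made-up dictionary format used only for illustration.

```python
def overloaded(activities, capacity):
    """Simplified energetic overload check for a cumulative resource.

    Each activity is a dict with earliest start `est`, latest completion `lct`,
    `duration` and resource `demand`. For every window [est(a), lct(b)] defined
    by a pair of activities, the total energy (duration * demand) of the
    activities that must lie entirely inside the window may not exceed
    capacity * window length. Real edge-finding derives stronger deductions
    (tightened time bounds) in lower complexity, but the energy argument is
    the same.
    """
    for a in activities:
        for b in activities:
            lo, hi = a["est"], b["lct"]
            if hi <= lo:
                continue
            inside = [c for c in activities if c["est"] >= lo and c["lct"] <= hi]
            energy = sum(c["duration"] * c["demand"] for c in inside)
            if energy > capacity * (hi - lo):
                return True
    return False

acts = [
    {"est": 0, "lct": 4, "duration": 3, "demand": 2},
    {"est": 0, "lct": 4, "duration": 3, "demand": 2},
    {"est": 0, "lct": 10, "duration": 2, "demand": 1},
]
# The first two activities need energy 12 inside [0, 4], but only 2*4 = 8 fits.
print(overloaded(acts, capacity=2))   # True
```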

13.
General Convergence Results for Linear Discriminant Updates
The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of quasi-additive algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge. Our measure of progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing algorithms. When applied to known algorithms, our method automatically produces close variants of existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
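The two extreme members of the quasi-additive family can be sketched side by side: both are mistake-driven linear-discriminant updates, differing only in whether weights are adjusted additively or multiplicatively. Learning rates, the Winnow threshold, and the toy target concept below are illustrative choices, not values from the paper.

```python
import numpy as np

def perceptron_update(w, x, y, eta=1.0):
    """Additive mistake-driven update: change w only when x is misclassified."""
    if y * (w @ x) <= 0:
        w = w + eta * y * x
    return w

def winnow_update(w, x, y, eta=1.0, threshold=1.0):
    """Multiplicative mistake-driven update (x in {0,1}^n, y in {-1,+1},
    weights kept positive)."""
    prediction = 1 if w @ x >= threshold else -1
    if prediction != y:
        w = w * np.exp(eta * y * x)
    return w

# Toy run: the target concept is "feature 0 OR feature 2" on 0/1 vectors.
rng = np.random.default_rng(1)
wp = np.zeros(5)
ww = np.ones(5)
for _ in range(200):
    x = rng.integers(0, 2, size=5).astype(float)
    y = 1 if (x[0] or x[2]) else -1
    wp = perceptron_update(wp, x, y)
    ww = winnow_update(ww, x, y)
print(np.round(wp, 2), np.round(ww, 2))
```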

14.
In this paper an analytical framework similar to a robust control problem is developed for the one-state, one-control-variable model in order to examine the response of the control to changes in the free parameter. However, in contrast to Gonzalez and Rodriguez (2003), the sign multiplying the free parameter in the criterion function of the min–max problem is positive. We find that this setup corresponds to the case where nature is benevolent, while the problem posed by Gonzalez and Rodriguez (2003) corresponds to a malevolent nature. We show that for the benevolent case the solution is a minimum, giving way to an ordinary control problem. In addition, the left side of the discontinuity in Gonzalez and Rodriguez (2003) corresponds to the benevolent case. The opinions contained in this note are exclusively those of the authors and do not represent either those of the Sociedad Hipotecaria Federal or the Bank of Mexico.

15.
Exact upper bounds are obtained for the probability F(v) − F(u), 0 < u < v < ∞, on the set of distribution functions F(x) of nonnegative random variables with a unimodal density with an arbitrary mode m ≥ 0 and one or two fixed first moments. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 72–83, September–October 2004.

16.
Indecomposable local maps of one-dimensional tessellation automata are studied. The main results of this paper are the following. (1) For any alphabet containing two or more symbols and for any n ≥ 1, there exist indecomposable scope-n local maps over that alphabet. (2) If the alphabet is a finite field of prime order, then a linear scope-n local map over it is indecomposable if and only if its associated polynomial is an irreducible polynomial of degree n − 1 over the field, except for a trivial case. (3) Result (2) is no longer true if the alphabet is a finite field whose order is not prime.
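The criterion in result (2) can be made concrete with a small irreducibility check over a prime field. The brute-force test below is only an illustration of "irreducible polynomial over GF(p)"; the dense coefficient-list representation is an assumption, not the paper's notation for the polynomial associated with a linear local map.

```python
def is_irreducible_mod_p(coeffs, p):
    """Brute-force irreducibility test over GF(p), for small degrees.

    `coeffs` lists the coefficients c_0, ..., c_d of c_0 + c_1 x + ... + c_d x^d
    with c_d != 0 (mod p). The polynomial is irreducible iff it has no monic
    divisor of degree 1..d-1. Exponential in the degree, but fine for small n.
    """
    from itertools import product

    def remainder(a, b):
        # Remainder of a divided by b (b monic), all arithmetic mod p.
        a = a[:]
        while len(a) >= len(b) and any(a):
            if a[-1] == 0:
                a.pop(); continue
            factor = a[-1] % p
            shift = len(a) - len(b)
            for i, bi in enumerate(b):
                a[shift + i] = (a[shift + i] - factor * bi) % p
            a.pop()
        return [c % p for c in a]

    d = len(coeffs) - 1
    for deg in range(1, d):
        for tail in product(range(p), repeat=deg):
            divisor = list(tail) + [1]        # monic candidate divisor
            if not any(remainder(coeffs, divisor)):
                return False
    return True

# x^2 + x + 1 is irreducible over GF(2); x^2 + 1 = (x + 1)^2 is not.
print(is_irreducible_mod_p([1, 1, 1], 2), is_irreducible_mod_p([1, 0, 1], 2))
```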

17.
Multimedia synchronization involving independent sources is a challenging issue posed by distributed multimedia applications. In our work, this issue is studied by investigating the teleorchestra application (remote multimedia presentation). In teleorchestration, relative and uncertain temporal requirements may be involved among the data objects to be presented, so fuzzy presentation scenarios are generated. In this paper, we describe a temporal model that can handle these fuzzy scenarios, which contain imprecise synchronization constraints such as unknown object presentation durations and relative event occurrence times. The model supports a distributed synchronization algorithm that can schedule the independent sources for multimedia teleorchestration.

18.
In this paper, we study the complexity of computing better solutions to optimization problems given other solutions. We use a model of computation suitable for this purpose, the counterexample computation model. We first prove that, unless the polynomial hierarchy collapses to its third level, polynomial-time transducers cannot compute optimal solutions for many problems, even given n^(1−ε) non-trivial solutions, for any ε > 0. These results are then used to establish sharp lower bounds for several problems in the counterexample model. We extend the model by defining probabilistic counterexample computations and show that our results hold even in the presence of randomness.

19.
Summary. The current proposal for applying the so-called fast O(N log^a N) algorithms to multivariate polynomials is that the univariate methods be applied recursively, much in the way more conventional algorithms are used. Since the problem sizes for which a fast algorithm is more efficient than a classical one are rather large, the recursive approach compounds this size completely out of any practical range. The degree homomorphism is proposed here as an alternative to this recursive approach. It is argued that methods based on the degree homomorphism and a fast algorithm may be viable alternatives to more conventional algorithms for certain multivariate problems in the setting of algebraic manipulation. Several such problems are discussed, including polynomial multiplication, powering, division (both exact and with remainder), greatest common divisors, and factoring. This research was supported by NRC Grant A9284.
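The idea of mapping a multivariate problem into a single univariate one, so that a fast univariate algorithm is applied once rather than recursively, can be sketched with a Kronecker-style degree substitution. This is only an illustration of a degree-based homomorphism; the paper's exact construction may differ, and the bivariate dictionary representation and `bound` parameter below are assumptions made for the example.

```python
def to_univariate(poly, bound):
    """Map a bivariate polynomial {(i, j): coeff} to a univariate coefficient
    list via x -> y, y -> y**bound, where `bound` exceeds the largest degree in
    x that can appear in the result, so exponents never collide."""
    out = [0] * (bound * bound)
    for (i, j), c in poly.items():
        out[i + bound * j] += c
    return out

def from_univariate(coeffs, bound):
    """Invert the substitution: exponent e encodes (e % bound, e // bound)."""
    return {(e % bound, e // bound): c for e, c in enumerate(coeffs) if c}

def multiply_univariate(a, b):
    """Schoolbook univariate product; a fast O(N log^a N) method would be
    dropped in here in the setting the abstract describes."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 + x*y) * (x + y); degrees in x stay below bound = 4 in the product.
p = {(0, 0): 1, (1, 1): 1}
q = {(1, 0): 1, (0, 1): 1}
bound = 4
product = from_univariate(multiply_univariate(to_univariate(p, bound),
                                              to_univariate(q, bound)), bound)
print(product)   # {(1, 0): 1, (0, 1): 1, (2, 1): 1, (1, 2): 1}
```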

20.
In this paper the problem of routing messages along shortest paths in a distributed network without using complete routing tables is considered. In particular, the complexity of deriving minimum (in terms of number of intervals) interval routing schemes is analyzed under different requirements. For all the cases considered NP-hardness proofs are given, while some approximability results are provided. Moreover, relations among the different cases considered are studied. This work was supported by the EEC ESPRIT II Basic Research Action Program under Contract No. 7141 "Algorithms and Complexity II", by the EEC Human Capital and Mobility MAP project, and by the Italian MURST 40% project "Algoritmi, Modelli di Calcolo e Strutture Informative".
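To make the abstract's terminology concrete, here is a toy interval routing scheme for the simple case of a ring network, with one interval per edge and shortest-path routes. This construction is only illustrative; deriving minimum schemes for general graphs is exactly the NP-hard problem the paper studies.

```python
def ring_interval_scheme(n):
    """Interval labels for a ring of n nodes numbered 0..n-1.

    Each node labels its clockwise edge with the (cyclic) interval of the
    roughly n/2 destinations that are nearer clockwise, and its counter-
    clockwise edge with the remaining destinations. One interval per edge
    suffices, and the induced routes are shortest paths.
    """
    half = n // 2
    scheme = {}
    for u in range(n):
        clockwise = {(u + k) % n for k in range(1, half + 1)}
        counter = {(u - k) % n for k in range(1, n - half)}
        scheme[u] = {"cw": clockwise, "ccw": counter}
    return scheme

def route(scheme, n, source, destination):
    """Follow the interval labels hop by hop from source to destination."""
    path = [source]
    node = source
    while node != destination:
        step = 1 if destination in scheme[node]["cw"] else -1
        node = (node + step) % n
        path.append(node)
    return path

scheme = ring_interval_scheme(8)
print(route(scheme, 8, 0, 6))   # counter-clockwise shortest path: [0, 7, 6]
```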
