Similar Documents
20 similar documents found (search time: 41 ms)
1.
Workflow Management Systems (WFMSs) are often used in the context of B2B integration as a base technology for implementing business-to-business (B2B) integration processes across enterprises. In this context the notion of distributed inter-organizational workflows is introduced to indicate the collaboration of enterprises on a process level. This notion requires the thorough examination presented in this article, since WFMSs were not designed with inter-enterprise distribution as one of their design goals. At a closer look, the proposed use of WFMSs in the context of B2B integration is often naïve and inappropriate, and consequently does not address the real requirements found in enterprises. Enterprises do not share common workflow definitions, let alone common workflow instance execution state, and have no intent to do so because they protect their competitive knowledge. Furthermore, trading-partner-specific business rules within enterprises are not accounted for, leading to an unwanted explosion of workflow definitions. This article clarifies the notion of distributed inter-organizational workflows as well as private and public processes. Based on this definition, the appropriate use of WFMSs is shown in the context of an overall B2B integration solution that allows enterprises to protect their competitive knowledge while participating in B2B integration.

2.
The language of standard propositional modal logic has one operator (□ or ◇) that can be thought of as being determined by a quantifier (∀ or ∃, respectively): for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized-quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
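For readers unfamiliar with the setup, the following is a schematic rendering of the semantics and of the two quantifier conditions the abstract names; the notation ([Q], R, Q_M) is assumed here, not taken verbatim from the paper.

```latex
% Operator [Q] induced by a generalized quantifier Q, evaluated over the
% successor set of a point s:
\[
  M, s \models [Q]\varphi
  \iff
  Q\bigl(\{t : s\,R\,t\},\; \{t : M, t \models \varphi\}\bigr)
\]
% The two generalized-quantifier conditions mentioned in the abstract:
\[
  \text{(conservativity)}\quad Q(A, B) \iff Q(A, A \cap B)
  \qquad
  \text{(extension)}\quad A, B \subseteq M \subseteq M' \implies
  \bigl(Q_M(A,B) \iff Q_{M'}(A,B)\bigr)
\]
```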

3.
A nonlinear stochastic integral equation of the Hammerstein type in the form x(t; ω) = h(t, x(t; ω)) + ∫_S k(t, s; ω) f(s, x(s; ω); ω) dμ(s) is studied, where t ∈ S, a measure space with certain properties, ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P), and the integral is a Bochner integral. A random solution of the equation is defined to be an almost surely continuous m-dimensional vector-valued stochastic process on S which is bounded with probability one for each t ∈ S and which satisfies the equation almost surely. Several theorems are proved which give conditions such that a unique random solution exists. AMS (MOS) subject classifications (1970): Primary: 60H20, 45G99. Secondary: 60G99.
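A cleaned-up rendering of the equation, with the symbols restored from the abstract's own definitions (ω the random parameter in Ω, μ the measure on S):

```latex
\[
  x(t;\omega) \;=\; h\bigl(t,\, x(t;\omega)\bigr)
  \;+\; \int_{S} k(t,s;\omega)\, f\bigl(s,\, x(s;\omega);\,\omega\bigr)\, d\mu(s),
  \qquad t \in S,\; \omega \in \Omega .
\]
```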

4.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty set Ā − A̲ (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers, whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(αA)] dα, where h(A) is the largest membership value obtained by any element of A and μ(αA) is the measure of the α-cut of A, defined by the Lebesgue integral of its characteristic function.
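A minimal numeric sketch of the U-uncertainty formula quoted above, for a triangular fuzzy number with support [a, c] and peak b. The triangular shape, the grid resolution, and the base-2 logarithm (the conventional Klir choice; the abstract just says "log") are illustrative assumptions.

```python
import numpy as np

def alpha_cut_measure(a: float, b: float, c: float, alpha: float) -> float:
    """Length of the alpha-cut [a + alpha(b-a), c - alpha(c-b)] of a
    triangular fuzzy number with support [a, c] and peak b."""
    return (c - alpha * (c - b)) - (a + alpha * (b - a))

def u_uncertainty(a: float, b: float, c: float, steps: int = 1000) -> float:
    """U(A) = (1/h(A)) * integral_0^h(A) log2(1 + mu(alpha-cut)) d(alpha).
    h(A) = 1 here because a triangular fuzzy number is normal."""
    h = 1.0
    alphas = np.linspace(0.0, h, steps, endpoint=False)
    lengths = np.array([alpha_cut_measure(a, b, c, al) for al in alphas])
    # Riemann-sum approximation of the integral, then divide by h(A).
    return float(np.sum(np.log2(1.0 + lengths)) * (h / steps) / h)

# Example: a fuzzy "about 10 units" measurement with support [8, 12].
print(u_uncertainty(8.0, 10.0, 12.0))
```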

5.
Software engineering programmes are not computer science programmes
Programmes in Software Engineering have become a source of contention in many universities. Some Computer Science departments, many of which have used that phrase to describe individual courses for decades, claim software engineering as part of their discipline. However, Engineering faculties claim Software Engineering as a new speciality in the family of engineering disciplines. This paper discusses the differences between traditional computer science programmes and most engineering programmes and argues that we need programmes that follow the traditional engineering approach to professional education and educate engineers whose speciality within engineering is software construction. One such programme is described.

6.
The reuse of business-process and information specifications can reduce the cost of business design by helping to alleviate bottlenecks in the development of workflow systems. Using a new concept we call the workflow design pattern, we create a library of standard business-process and information specifications to be used as workflow business templates. Business design and system design are done by customizing the standard specifications, and new support tools are used to implement the design result. With this proposed method, the cost of system development can be reduced.
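A hypothetical illustration of the "workflow business template" idea: a standard process specification that is customized rather than rebuilt per project. All names and fields here are invented for this sketch, not taken from the paper.

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class WorkflowTemplate:
    name: str
    steps: tuple = ()                            # ordered activity names
    roles: dict = field(default_factory=dict)    # step -> responsible role

    def customize(self, **overrides) -> "WorkflowTemplate":
        """Derive a project-specific workflow from the standard template."""
        return replace(self, **overrides)

# Library of standard specifications, reused across business designs.
ORDER_TO_CASH = WorkflowTemplate(
    name="order-to-cash",
    steps=("receive_order", "check_credit", "ship", "invoice"),
)
# Customization instead of redesign from scratch.
express = ORDER_TO_CASH.customize(
    name="order-to-cash-express",
    steps=("receive_order", "ship", "invoice"),
)
```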

7.
The processes of constructing meaning in digital database environments entail a paradigm shift from previous models of audio-visual communication. Media emerging from the electro-mechanical era (film/TV/video) present fixed spatio-temporal linearity and material conditions which objectify viewer and process and render them passive. The problematic aspects of cinematic communication were addressed by Latin American filmmakers of the Third Cinema movement. Their concerns and approach presaged and assisted an understanding of the radical redefinition of audio-visual communication possible with digital databases. The conceptual and aesthetic aspirations of Third Cinema artists such as Julio Garcia Espinosa and Fernando Solanas were ultimately contradictory to linear media and find their fitting medium in digital modular construction. The materiality of database expression lacks an intrinsic temporal or spatial state and permits a more dynamic and multidirectional set of power relationships between author/s, piece, and viewer/s. Other important referents for contextualising database art are modern art practitioners who rejected linear representational space and fractured the centrality of authorship. The author's own work, ...two, three, many Guevaras, an exploratory database environment, embraces the redefinition of process as artistic expression and the empowerment of interacting generative forces, and serves to illustrate the revolutionary potential of the new media.

8.
Exception Handling in Workflow Systems
In this paper, defeasible workflow is proposed as a framework to support exception handling in workflow management. By using justified ECA rules to capture more context in workflow modeling, defeasible workflow uses context-dependent reasoning to enhance the exception-handling capability of workflow management systems. In particular, this limits the alternative exception-handler candidates considered in exceptional situations. Furthermore, a case-based reasoning (CBR) mechanism with integrated human involvement is used to improve exception-handling capabilities further. This involves collecting cases to capture experiences in handling exceptions, retrieving similar prior exception-handling cases, and reusing the exception-handling experiences captured in those cases in new situations.
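A schematic sketch of the flow the abstract describes: context-aware ("justified") ECA rules first, case-based retrieval as the fallback, with human escalation. Every name here (matches, justified, similarity, adapt, and so on) is an illustrative assumption, not the paper's API.

```python
def handle_exception(event, context, rules, case_base):
    # 1. Justified ECA rules: fire only when both the rule condition and
    #    its justification (the modeled workflow context) hold. This is
    #    what narrows the set of candidate exception handlers.
    candidates = [r for r in rules
                  if r.matches(event) and r.condition(context) and r.justified(context)]
    if candidates:
        return candidates[0].action(context)

    # 2. Case-based reasoning fallback: retrieve the most similar prior
    #    exception case and reuse (possibly adapting) its handler.
    case = max(case_base, key=lambda c: c.similarity(event, context), default=None)
    if case is not None:
        # Capture the new experience so future retrieval improves.
        case_base.append(case.adapt(event, context))
        return case.handler(context)

    # 3. Integrated human involvement for genuinely novel situations.
    raise RuntimeError("no rule or prior case applies; escalate to a human")
```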

9.
Personalization and adaptation techniques are an interesting opportunity for designing new services on board vehicles. In this context, in fact, the need of an individual user to receive the right service at the right time and in the right way is more critical than in other settings where personalization and adaptation have already shown interesting advantages. At the same time, this context of application can provide new insights for user modeling and adaptation. In the paper we present an architecture for providing personalized services on board vehicles and discuss an application to tourist information. We focus on the choices we made to design an on-board system that is as unobtrusive and undistracting as possible and that can adapt its recommendations, the way it presents them, and its own behavior to the user's preferences/interests and to the context of interaction (especially the driving conditions).

10.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
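For reference, one common statement of the Klee-Minty family (reconstructed from the standard literature, not restated in this abstract): a distorted n-cube on which the simplex method with a naive pivot rule visits all 2^n vertices.

```latex
\[
  \max \sum_{j=1}^{n} 2^{\,n-j}\, x_j
  \quad\text{s.t.}\quad
  \sum_{i=1}^{j-1} 2^{\,j-i+1}\, x_i + x_j \;\le\; 5^{\,j},
  \qquad x_j \ge 0,\qquad j = 1,\dots,n .
\]
% For n = 3 this gives: x1 <= 5;  4x1 + x2 <= 25;  8x1 + 4x2 + x3 <= 125.
```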

11.
The number of virtual connections in the nodal space of an ATM network of arbitrary structure and topology is computed by a method based on a new concept: a covering domain with a concrete physical meaning. The method rests on a "network information sources – boundary switches" model developed for an ATM transfer network by the entropy approach. Computations involve the solution of systems of linear equations. The optimization model used to compute the number of virtual connections for many-category traffic in an ATM network component is useful in estimating the resources of nodal equipment and communication channels. The variable parameters of the model are the transmission bands for the different traffic categories.

12.
Summary Many reductions among combinatorial problems are known in the context of NP-completeness. These reductions preserve the optimality of solutions but may change the relative error of approximate solutions dramatically. In this paper, we apply a new type of reduction, called continuous reduction. When one problem is continuously reduced to another, any approximation algorithm for the latter problem can be transformed into an approximation algorithm for the former, and the performance ratio is preserved up to a constant factor. We relate the problem Minimum Number of Inverters in CMOS-Circuits, which arises in the context of logic synthesis, to several classical combinatorial problems such as Maximum Independent Set and Deletion of a Minimum Number of Vertices (Edges) in Order to Obtain a Bipartite (Partial) Subgraph.
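A schematic shape of such an approximation-preserving reduction; the notation (f, g, R, c) is assumed here, and the paper's precise definition of continuity may differ.

```latex
% f maps instances of A to instances of B; g maps solutions of f(x) back
% to solutions of x; R denotes the performance ratio of a solution.
\[
  R_A\bigl(x,\; g(x, y)\bigr) \;\le\; c \cdot R_B\bigl(f(x),\; y\bigr)
  \qquad
  \text{for every instance } x \text{ of } A
  \text{ and every solution } y \text{ of } f(x),
\]
% so any approximation algorithm for B yields one for A with the
% performance ratio degraded by at most the constant factor c.
```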

13.
Data structures are interpreted as sets which may contain repeating elements (multisets). An algebraic system close to the integer ring is constructed on these sets. Translated from Kibernetika, No. 4, pp. 82–88, July–August, 1990.
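Sets with repeating elements are multisets, and Python's Counter makes the abstract's objects concrete. Casting Counter's built-in operations as the paper's ring-like structure is an illustrative assumption; the sketch only shows how multiplicities behave arithmetically.

```python
from collections import Counter

a = Counter({"x": 2, "y": 1})   # the multiset {x, x, y}
b = Counter({"x": 1, "z": 3})   # the multiset {x, z, z, z}

print(a + b)   # multiset sum: x:3, y:1, z:3 -- multiplicities add
print(a - b)   # truncated difference: x:1, y:1 -- never drops below zero
print(a & b)   # multiset intersection: x:1 -- elementwise minimum
print(a | b)   # multiset union: x:2, y:1, z:3 -- elementwise maximum
```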

14.
Summary A. N. Habermann recently published some "Critical comments on the programming language Pascal". His reproaches are principally that numerous constructs are ill-defined, that there is confusion among ranges, types and structures, and that the goto statement should have been abolished. The present reply successively deals with points that are clearly refutable, those which are debatable, and those which constitute valid criticism. Its principal aim is to encourage the reader to form his own opinion.

15.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) to be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum-likelihood criterion or the minimal-square-error criterion. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: (1) Computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (2) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. (3) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
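A schematic 1-D illustration of the central point: each weight λ induces its own regularized solution f_λ, and the final estimate averages the f_λ by the evidence Pr(D|λ) instead of committing to one weight. The discrete second-difference smoothness term, the known noise level σ (the paper also integrates over it), and the null-space handling are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)
sigma = 0.3                                    # noise std, assumed known here
data = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)

D = np.diff(np.eye(n), n=2, axis=0)            # discrete second derivative
K = D.T @ D                                    # smoothness quadratic form

def f_lambda(lam: float) -> np.ndarray:
    """Single-weight solution: argmin ||f - data||^2 + lam * ||D f||^2."""
    return np.linalg.solve(np.eye(n) + lam * K, data)

# Evidence Pr(D | lambda) under a Gaussian reading of the prior
# exp(-lam * integral f_uu^2): work in the eigenbasis of K and skip its
# null space (constant and linear modes), where the prior is flat.
evals, V = np.linalg.eigh(K)
dt = V.T @ data
mask = evals > 1e-10

def log_evidence(lam: float) -> float:
    c = sigma**2 + 1.0 / (lam * evals[mask])   # per-mode predictive variance
    return float(np.sum(-0.5 * dt[mask]**2 / c - 0.5 * np.log(c)))

lams = np.logspace(-2, 6, 40)
logw = np.array([log_evidence(l) for l in lams])
w = np.exp(logw - logw.max())
w /= w.sum()

sols = np.array([f_lambda(l) for l in lams])
f_avg = w @ sols                       # average over weights, per the abstract
f_one = f_lambda(lams[np.argmax(w)])   # commit to the single best weight
```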

16.
Mutual convertibility of bound entangled states under local quantum operations and classical communication (LOCC) is studied. We focus on states associated with unextendible product bases (UPBs) in a system of three qubits. A complete classification of such UPBs is suggested. We prove that for any pair of UPBs S and T the associated bound entangled states ρ_S and ρ_T cannot be converted to each other by LOCC, unless S and T coincide up to local unitaries. More specifically, there exists a finite precision ε(S,T) > 0 such that for any LOCC protocol mapping ρ_S into a probabilistic ensemble (p_k, ρ_k), the fidelity between ρ_T and any possible final state satisfies F(ρ_T, ρ_k) ≤ 1 − ε(S,T). PACS: 03.65.Bz; 03.67.-a; 89.70.+c.
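A concrete instance of the objects discussed, quoted for illustration: the well-known "Shifts" unextendible product basis for three qubits (Bennett et al.) and the standard bound entangled state built from a UPB. The paper classifies all such UPBs; this is just the best-known member of the family.

```latex
\[
  S_{\mathrm{shifts}} = \bigl\{\,
    |0,1,+\rangle,\; |1,+,0\rangle,\; |+,0,1\rangle,\; |-,-,-\rangle
  \,\bigr\},
  \qquad
  |\pm\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle \pm |1\rangle\bigr).
\]
% The associated bound entangled state is the normalized projector onto
% the orthogonal complement of span(S): dimension 8, |S| = 4, so
\[
  \rho_S = \frac{1}{4}\Bigl(\mathbb{1} -
           \sum_{\psi \in S} |\psi\rangle\langle\psi|\Bigr).
\]
```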

17.
We formalize natural deduction for first-order logic in the proof assistant Coq, using de Bruijn indices for variable binding. The main judgment we model is of the form Γ ⊢ d [:] φ, stating that d is a proof term of formula φ under hypotheses Γ; it can be viewed as a typing relation via the Curry–Howard isomorphism. This relation is proved sound with respect to Coq's native logic and is amenable to the manipulation of formulas and of derivations. As an illustration, we define a reduction relation on proof terms with permutative conversions and prove the property of subject reduction.
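A minimal sketch of the de Bruijn-index machinery the abstract refers to, transplanted from Coq to Python for illustration: variables are numbers counting binders, so terms need a "lift" operation when they move under a new binder. The term datatype is the standard lambda-calculus one, not the paper's full first-order syntax.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    idx: int                 # de Bruijn index: binders between use and binding

@dataclass(frozen=True)
class App:
    fn: "Term"
    arg: "Term"

@dataclass(frozen=True)
class Lam:
    body: "Term"             # the binder itself needs no variable name

Term = Union[Var, App, Lam]

def lift(t: Term, k: int = 0) -> Term:
    """Shift free variables (index >= k) up by one, for use under a binder."""
    if isinstance(t, Var):
        return Var(t.idx + 1) if t.idx >= k else t
    if isinstance(t, App):
        return App(lift(t.fn, k), lift(t.arg, k))
    return Lam(lift(t.body, k + 1))   # indices below k+1 are bound: untouched

# \x. \y. x is Lam(Lam(Var(1))): the index counts binders between the
# occurrence and its binding site, the same discipline as in the Coq model.
```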

18.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.
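A schematic rendering of the cost structure the summary describes, with all notation assumed (the paper's actual definition is more refined): an encoding ε maps the guest graph G into the host graph H, and its cost combines the host's implementation costs with the guest's intended usage pattern.

```latex
\[
  \mathrm{COST}(\varepsilon)
  \;=\; \sum_{(u,v)\,\in\,\mathrm{Usage}(G)}
        \mathrm{freq}(u,v)\cdot
        \mathrm{cost}_H\bigl(\mathrm{path}_{\varepsilon}(u,v)\bigr),
\]
% freq comes from the externally specified usage patterns of the guest,
% cost_H from the externally specified costs of implementing the host.
```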

19.
A multicriterion optimization method is proposed for complex systems with parameters ranked by descending importance. The method requires only weak expert estimates: an optimal alternative is chosen from the set of equally good solutions by formally specifying a functional dependence between the ranked parameter weights. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 167–170, November–December, 1991.
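A sketch of one standard way to realize "parameters ranked by descending importance": a functional dependence between rank and weight, here geometric, which guarantees w1 > w2 > ... without eliciting exact weights from experts. The geometric form is an assumption for illustration, not the paper's specific choice.

```python
def ranked_weights(n: int, q: float = 0.5) -> list[float]:
    """Normalized weights w_i proportional to q**i, strictly decreasing for 0 < q < 1."""
    raw = [q ** i for i in range(n)]
    s = sum(raw)
    return [r / s for r in raw]

def best_alternative(alternatives: list[list[float]], q: float = 0.5) -> int:
    """Index of the alternative maximizing the rank-weighted score."""
    w = ranked_weights(len(alternatives[0]), q)
    scores = [sum(wi * xi for wi, xi in zip(w, alt)) for alt in alternatives]
    return max(range(len(scores)), key=scores.__getitem__)

# Two Pareto-incomparable alternatives; with a small q the ranking lets
# the most important first criterion dominate the choice.
print(best_alternative([[0.9, 0.1, 0.1], [0.3, 1.0, 1.0]], q=0.2))  # -> 0
```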

20.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee; they cannot overlook any concrete equation implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem; they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input which never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that, if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such a high precision that it must contain a component that would better be called a quantitative, rather than qualitative reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.
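A small illustration of why purely qualitative simulation admits spurious behaviors in the first place: qualitative (sign) arithmetic is sound but loses magnitude information, so some combinations are irreducibly ambiguous and every consistent branch must be generated. The sign-algebra encoding below is a standard textbook device, not this paper's construction.

```python
def sign_add(a: str, b: str) -> str:
    """Sound sign addition over {'-', '0', '+', '?'}: never wrong,
    but sometimes uninformative."""
    if "?" in (a, b):
        return "?"
    if a == "0":
        return b
    if b == "0" or a == b:
        return a
    return "?"   # '+' plus '-': the magnitudes decide, and signs can't see them

# Net flow x' = inflow - outflow, both positive:
print(sign_add("+", "-"))  # '?': rise, steady, and fall must all be branched
                           # on, including behaviors the real system never shows
```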
