Similar Documents
20 similar documents found (search time: 15 ms)
1.
This article gives a new approach to aggregation, assuming that an indistinguishability operator or similarity is defined on the universe of discourse. The very simple idea is that when we want to aggregate two values a and b, we look for a value λ that is as similar to a as it is to b or, in more logical language, whose degrees of equivalence with a and with b coincide. Interesting aggregation operators on the unit interval are obtained from the natural indistinguishability operators associated with t-norms that are ordinal sums. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 857–873, 2006.
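
The idea admits a small numeric illustration. The sketch below (a minimal illustration, not the article's ordinal-sum construction) uses the natural indistinguishability operators of the Łukasiewicz and product t-norms, which reduce to 1 - |x - y| and min(x, y)/max(x, y) respectively, and solves E(λ, a) = E(λ, b) by bisection; the function names and the tolerance are illustrative choices.

    # Aggregate two values a, b by finding lambda whose similarity to a equals
    # its similarity to b, where similarity is the natural indistinguishability
    # of a t-norm on [0, 1].  Illustrative sketch only.

    def E_lukasiewicz(x, y):
        # Natural indistinguishability of the Lukasiewicz t-norm
        return 1.0 - abs(x - y)

    def E_product(x, y):
        # Natural indistinguishability of the product t-norm
        if x == y:
            return 1.0
        return min(x, y) / max(x, y)

    def aggregate(a, b, E, tol=1e-9):
        """Find lambda in [min(a,b), max(a,b)] with E(lambda, a) = E(lambda, b)."""
        lo, hi = min(a, b), max(a, b)
        f = lambda t: E(t, a) - E(t, b)
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if abs(f(mid)) < tol:
                return mid
            if f(lo) * f(mid) <= 0:   # root lies in [lo, mid]
                hi = mid
            else:                      # root lies in [mid, hi]
                lo = mid
        return (lo + hi) / 2.0

    print(aggregate(0.2, 0.8, E_lukasiewicz))   # ~0.5, the arithmetic mean
    print(aggregate(0.2, 0.8, E_product))       # ~0.4, the geometric mean

For the Łukasiewicz operator the resulting aggregation is the arithmetic mean; for the product operator it is the geometric mean.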

2.
In this paper, some geometric aspects of indistinguishability operators are studied by using the concept of morphism between them. Among all possible types of morphisms, the paper focuses on the following cases: maps that transform a T-indistinguishability operator into another such operator with respect to the same t-norm T, and maps that transform a T-indistinguishability operator into another such operator with respect to a different t-norm T′. The group of isometries of a given T-indistinguishability operator is also studied and is determined for the case of one-dimensional operators, in particular for the natural indistinguishability operators E_T on [0, 1]. Finally, the indistinguishability operators invariant under translations on the real line are characterized.

3.
J. Recasens, Information Sciences, 2008, 178(21): 4094–4104
Decomposable fuzzy relations are studied. Symmetric fuzzy relations are proved to be generated by a single fuzzy subset. For Archimedean t-norms, decomposable indistinguishability operators generate special kinds of betweenness relations that characterize them. A new way to generate indistinguishability operators coherent with the underlying ordering structure of the real line is developed, in the sense that this structure is compatible with the betweenness relation generated by the operator.
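
A minimal sketch of the one-dimensional construction, assuming the usual definitions (an operator generated by a single fuzzy subset μ via the biresiduum, and y lying between x and z when E(x, z) = T(E(x, y), E(y, z))); the generating fuzzy subset and the Łukasiewicz t-norm are illustrative choices, not the paper's examples.

    # Sketch under the usual one-dimensional construction (not necessarily the
    # paper's): an indistinguishability operator generated by a single fuzzy
    # subset mu via the Lukasiewicz biresiduum, plus the induced betweenness test.

    def t_luk(x, y):
        # Lukasiewicz t-norm
        return max(0.0, x + y - 1.0)

    def E_from_mu(mu):
        # E_mu(x, y) = 1 - |mu(x) - mu(y)|  (Lukasiewicz biresiduum of mu(x), mu(y))
        return lambda x, y: 1.0 - abs(mu(x) - mu(y))

    def between(E, x, y, z, eps=1e-12):
        # y lies between x and z when E(x, z) = T(E(x, y), E(y, z))
        return abs(E(x, z) - t_luk(E(x, y), E(y, z))) < eps

    mu = lambda x: min(1.0, max(0.0, x / 10.0))   # illustrative generating fuzzy subset on [0, 10]
    E = E_from_mu(mu)
    print(between(E, 1.0, 4.0, 7.0))   # True: 4 lies between 1 and 7
    print(between(E, 1.0, 9.0, 7.0))   # False: 9 does not lie between 1 and 7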

4.
Learning indistinguishability from data
In this paper we revisit the idea of interpreting fuzzy sets as representations of vague values. In this context a fuzzy set is induced by a crisp value, and the membership degree of an element is understood as the similarity degree between this element and the crisp value that determines the fuzzy set. Similarity is assumed to be a notion of distance. This means that fuzzy sets are induced by crisp values and an appropriate distance function, which can be described in terms of scaling the ordinary distance between real numbers. With this interpretation in mind, the task of designing a fuzzy system corresponds to determining suitable crisp values and appropriate scaling functions for the distance. When we want to generate a fuzzy model from data, these parameters have to be fitted to the data. This leads to an optimisation problem that is very similar to the one solved in objective-function-based clustering. We borrow ideas from the alternating optimisation schemes applied in fuzzy clustering to develop a new technique for determining our set of parameters from data, supporting the interpretability of the fuzzy system.
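
The alternating scheme alluded to above can be sketched with fuzzy c-means, the classical objective-function-based clustering template; the data, the number of clusters and the fuzzifier are illustrative assumptions, and the authors' actual update rules for the scaling functions are not reproduced.

    # Sketch in the spirit of the paper: memberships induced by crisp centres and
    # a distance, with the parameters fitted by alternating optimisation as in
    # fuzzy c-means.  Data, cluster count and fuzzifier m are illustrative.
    import numpy as np

    def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, size=c, replace=False).astype(float)
        for _ in range(iters):
            # distances of every point to every centre (kept strictly positive)
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            # membership update: u_ij proportional to d_ij^(-2/(m-1)), rows normalised
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)
            # centre update: weighted means with weights u_ij^m
            w = u ** m
            centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        return centers, u

    x = np.array([0.9, 1.1, 1.0, 4.8, 5.2, 5.0])
    centers, u = fuzzy_c_means(x)
    print(np.round(np.sort(centers), 2))   # roughly [1.0, 5.0]: the crisp values inducing the fuzzy sets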

5.
As a continuation of the work initiated by Demirci, the main subject of this paper is the problem of constructing indistinguishability operators in terms of probability distribution functions and probability density functions, as addressed in Demirci [“Indistinguishability operators in measurement theory, Part I: Conversions of indistinguishability operators with respect to scales”, Int. J. General Systems (2003f), to appear]. In detail, two different approaches are developed for the solution of this construction problem.
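
One standard construction of this flavour (a hedged sketch, not necessarily either of the paper's two approaches) takes a cumulative distribution function F and sets E(x, y) = 1 - |F(x) - F(y)|, an indistinguishability operator with respect to the Łukasiewicz t-norm.

    # Sketch of one standard construction (not necessarily the paper's): given a
    # cumulative distribution function F, E(x, y) = 1 - |F(x) - F(y)| is a
    # Lukasiewicz-indistinguishability operator; points in high-density regions
    # are separated more sharply than points in the tails.
    import math

    def normal_cdf(x, mu=0.0, sigma=1.0):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def E_from_cdf(F):
        return lambda x, y: 1.0 - abs(F(x) - F(y))

    E = E_from_cdf(normal_cdf)
    print(round(E(0.0, 0.5), 3))   # near the mode: clearly distinguishable (~0.809)
    print(round(E(3.0, 3.5), 3))   # in the tail: almost indistinguishable (~0.999)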

6.
For two given ordinal scales in a measurement process, the present paper investigates how an indistinguishability operator evaluated according to one of these ordinal scales can be converted to another indistinguishability operator w.r.t. the other ordinal scale, and establishes the mathematical basis of these conversions within the framework of measurement theory [Krantz, D.H., Luce, R.D., Suppes, P., Tversky, A. (1971) Foundations of Measurement, Vol. 1 (Academic Press, San Diego)]. Additionally, this work exposes the rudimentary facts behind the studies in [“Fuzzy Numbers and Equality Relations”, Proc. FUZZ-IEEE 93 (1993) 1298–1301; “Fuzzy Sets and Vague Environments”, Fuzzy Sets and Systems 66 (1994) 207–221; “Fuzzy Control on the Basis of Equality Relations, with an Example from Idle Speed Control”, IEEE Transactions on Fuzzy Systems 3 (1995) 336–350; and “T-partitions of the Real Line Generated by Idempotent Shapes”, Fuzzy Sets and Systems 91 (1997) 177–184], and points out the measurement-theoretic derivations of the results in these studies.

7.
So far, very little is known about the local indistinguishability of multipartite orthogonal product bases except in some special cases. We first give a method to construct an orthogonal product basis with n parties each holding a \(\frac{1}{2}(n+1)\)-dimensional system, where \(n\ge 5\) and n is odd. The proof of the local indistinguishability of the basis shows the following sufficient condition for the local indistinguishability of an orthogonal multipartite product basis: every positive operator-valued measure element of each party that keeps further discrimination feasible must be proportional to the identity operator. Then, we construct a set of n-partite product states that contains only 2n members and cannot be perfectly distinguished by local operations and classical communication. All the results lead to a better understanding of the phenomenon of quantum nonlocality without entanglement in multipartite and high-dimensional quantum systems.
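
Orthogonality of the whole set is a prerequisite for such constructions and is easy to verify numerically. The sketch below uses illustrative three-qubit product states, not the paper's \(\frac{1}{2}(n+1)\)-dimensional construction, and simply checks pairwise orthogonality of a candidate set.

    # Minimal sanity check (not the paper's construction): verify that a set of
    # multipartite product states is pairwise orthogonal, a prerequisite for an
    # orthogonal product basis.
    import numpy as np
    from itertools import combinations

    def ket(i, d):
        v = np.zeros(d, dtype=complex)
        v[i] = 1.0
        return v

    def product_state(*parts):
        state = parts[0]
        for p in parts[1:]:
            state = np.kron(state, p)
        return state

    plus  = (ket(0, 2) + ket(1, 2)) / np.sqrt(2)
    minus = (ket(0, 2) - ket(1, 2)) / np.sqrt(2)

    candidates = [
        product_state(ket(0, 2), ket(0, 2), plus),
        product_state(ket(0, 2), ket(1, 2), minus),
        product_state(ket(1, 2), plus,  ket(0, 2)),
        product_state(ket(1, 2), minus, ket(1, 2)),
    ]

    ok = all(abs(np.vdot(a, b)) < 1e-12 for a, b in combinations(candidates, 2))
    print("pairwise orthogonal:", ok)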

8.
In this letter, we mainly study the local indistinguishability of mutually orthogonal maximally entangled states in canonical form. First, we present a feasible necessary and sufficient condition for distinguishing such states by one-way local operations and classical communication (LOCC). Second, as an application of this condition, we exhibit one class of maximally entangled states that can be locally distinguished with certainty. Furthermore, sets of $d-1$ maximally entangled states that are indistinguishable by one-way LOCC are demonstrated in $d \otimes d$ (for $d=7, 8, 9, 10$). Interestingly, we discover that there exist sets of $d-2$ such states in $d \otimes d$ (for $d=8, 9, 10$) which are not perfectly distinguishable by one-way LOCC. Finally, we conjecture that there exist $d-1$ or fewer maximally entangled states in $d \otimes d$ ($d \ge 5$) that are indistinguishable by one-way LOCC.
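
The canonical form referred to here is the family of generalized Bell states |Ψ_{mn}⟩ = d^{-1/2} Σ_j ω^{jn} |j⟩|j ⊕ m⟩. Below is a short sketch constructing them and checking mutual orthogonality; the one-way LOCC analysis itself is not reproduced.

    # Sketch of the canonical form: generalized Bell states in d x d, with a
    # check that all d^2 of them are mutually orthogonal.
    import numpy as np

    def canonical_mes(d, m, n):
        omega = np.exp(2j * np.pi / d)
        psi = np.zeros(d * d, dtype=complex)
        for j in range(d):
            psi[j * d + (j + m) % d] += omega ** (j * n)
        return psi / np.sqrt(d)

    d = 4
    states = [canonical_mes(d, m, n) for m in range(d) for n in range(d)]
    gram = np.array([[np.vdot(a, b) for b in states] for a in states])
    print(np.allclose(gram, np.eye(d * d)))   # True: d^2 mutually orthogonal MES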

9.
As robots become more pervasive and ubiquitous, they become increasingly involved in every aspect of human life. People expect robots to take on tasks that simplify our lives, working with humans just as other humans do in ordinary organizations and societies. This labor specialization by ubiquitous robots allows humans more comfort, time, or focus to concentrate on higher-level desires or tasks. To further this unification of relationships, the line between humans and robots must become somewhat indistinct. This ever-increasing degree of indistinguishability means that we care less about who or what executes a task or solves a goal, as long as that entity is capable and available. In this paper, we propose a model and a simple example implementation that minimizes the strict line between humans, software agents, robots, machines and sensors (HARMS) and reduces the distinguishability between these actors.

10.
Contour crafting utilizes computer-aided ancient sculpting techniques for the fabrication of large components. The article presents the essentials of the contour crafting process, the status of research and development of the process, experiments with thermoplastic and ceramic materials, engineering analysis of certain aspects of the technology, and its potential application areas.

11.
Recognition by prototypes
A scheme for recognizing 3D objects from single 2D images under orthographic projection is introduced. The scheme proceeds in two stages. In the first stage, the categorization stage, the image is compared to prototype objects. For each prototype, the view that most resembles the image is recovered, and, if the view is found to be similar to the image, the class identity of the object is determined. In the second stage, the identification stage, the observed object is compared to the individual models of its class, where classes are expected to contain objects with relatively similar shapes. For each model, a view that matches the image is sought. If such a view is found, the object's specific identity is determined. The advantage of categorizing the object before it is identified is twofold. First, the image is compared to a smaller number of models, since only models that belong to the object's class need to be considered. Second, the cost of comparing the image to each model in a class is very low, because correspondence is computed once for the whole class. More specifically, the correspondence and object pose computed in the categorization stage to align the prototype with the image are reused in the identification stage to align the individual models with the image. As a result, identification is reduced to a series of simple template comparisons. The paper concludes with an algorithm for constructing optimal prototypes for classes of objects.
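
A toy sketch of the two-stage idea, using a 2-D least-squares similarity transform in place of the paper's 3-D-to-2-D alignment; the shapes, class names and residual measure are made up for illustration.

    # Illustrative sketch: categorise with a prototype, then reuse the recovered
    # alignment to compare the class's individual models cheaply.
    import numpy as np

    def fit_similarity(src, dst):
        """Least-squares scale/rotation/translation mapping src points onto dst (2-D)."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        zs = (src - mu_s) @ np.array([1.0, 1j])    # points as complex numbers
        zd = (dst - mu_d) @ np.array([1.0, 1j])
        a = np.vdot(zs, zd) / np.vdot(zs, zs)      # scale * rotation
        def apply(p):
            w = a * ((p - mu_s) @ np.array([1.0, 1j])) + (mu_d[0] + 1j * mu_d[1])
            return np.column_stack((w.real, w.imag))
        return apply

    def residual(a, b):
        return np.linalg.norm(a - b) / len(a)

    image = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])           # observed shape
    prototypes = {"box":   np.array([[0., 0.], [1., 0.], [1., .5], [0., .5]]),
                  "wedge": np.array([[0., 0.], [1., 0.], [.5, 1.], [0., .5]])}

    # Stage 1: categorisation -- align each prototype to the image once.
    fits = {c: fit_similarity(p, image) for c, p in prototypes.items()}
    best = min(prototypes, key=lambda c: residual(fits[c](prototypes[c]), image))

    # Stage 2: identification -- reuse the class alignment for its individual models.
    models = {"box-a": prototypes["box"],
              "box-b": np.array([[0., 0.], [1., 0.], [1., .45], [0., .5]])}
    scores = {name: residual(fits[best](m), image) for name, m in models.items()}
    print(best, min(scores, key=scores.get))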

12.
This paper describes a technique for improving the semantic consensus of conceptual database designs. Semantic consensus is a condition where there is pragmatic agreement among database designers and all of the users about which aspect of reality is being represented by a particular database element, and how that representation is being coded. The technique, called semantic database prototyping (SDP), involves a prototype that has been designed and constructed purely as a consequence of the semantic data model. The purpose of the semantic database prototype is to promote direct user validation during the conceptual database design phase of information systems analysis and design. Its distinguishing characteristic is its capture of data element occurrences within the context of the database design. The research method was action research, and the project is also briefly described.

13.
There is a disparity between the multitude of apparently successful expert system prototypes and the scarcity of expert systems in real everyday use. Modern tools make it deceptively easy to build reasonable prototypes, but these prototypes are seldom subjected to serious evaluation. Instead the development team confronts their product with a set of cases, and the primary evaluation criterion is the percentage of correct answers: we are faced with a 95% syndrome. Other aspects related to the use of the system are almost ignored. There is still a long way to go from a promising prototype to a final system. It is maintained in the article that a useful test must be performed by future users in a situation that is as realistic as possible. If this is not done, claims of usefulness cannot be justified. It is also stated that prototyping does not make traditional analysis and design obsolete, although the contents of these activities will change. In order to discuss the effects of using such systems, a distinction between expert systems as media, tools and experts is proposed.

14.
A technique for the automated synthesis of FSMs (finite state machines) from sets of interworkings (synchronous sequence charts) is described. This is useful for obtaining feedback from a set of scenarios during a system's definition phase or test phase. It is sound in the sense that the generated FSM only exhibits traces that correspond to one of the interworkings from the given set. It preserves deadlock freedom in the sense that no behaviours are lost. The concrete syntax of SDL is used to represent the resulting FSMs.
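
The soundness property can be illustrated with a toy synthesis that builds an FSM as a prefix tree of the given traces, so the machine exhibits the given scenarios and nothing else; this is only a sketch of the property, not the paper's interworking-based algorithm or its SDL output.

    # Toy illustration of soundness: the synthesized FSM accepts exactly the
    # given scenario traces and invents no additional behaviour.
    def synthesize_fsm(scenarios):
        transitions = {}          # (state, event) -> state
        accepting = set()
        fresh = 1                 # state 0 is the initial state
        for trace in scenarios:
            state = 0
            for event in trace:
                if (state, event) not in transitions:
                    transitions[(state, event)] = fresh
                    fresh += 1
                state = transitions[(state, event)]
            accepting.add(state)
        return transitions, accepting

    def accepts(fsm, trace):
        transitions, accepting = fsm
        state = 0
        for event in trace:
            if (state, event) not in transitions:
                return False
            state = transitions[(state, event)]
        return state in accepting

    scenarios = [("req", "ack", "data"), ("req", "nack")]
    fsm = synthesize_fsm(scenarios)
    print(accepts(fsm, ("req", "ack", "data")))   # True  -- a given scenario
    print(accepts(fsm, ("req", "data")))          # False -- no invented behaviour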

15.
16.
The many faces of programming and systems development demand an immense amount of mechanical routine work. The present paper tries to identify some areas where automation of many tasks may be of great help. One special area, where progress seems to lag behind unduly, is the debugging, testing, and diagnosing of systems. Here we attempted to generate programs automatically, by a software system specially designed for this purpose, from a definition of a problem and the characteristics of programs for its solution. It is indicated how the ideas underlying this project may be applied successfully to other areas.

17.
This paper describes an approach for generating graphical, structure-oriented software engineering tools from graph-based specifications. The approach is based on the formal meta modeling of visual languages using graph rewriting systems. Besides the syntactic and semantic rules of the language, these meta models include knowledge from the application domains. This enables the resulting tools to provide the user with high-level operations for editing, analysis and execution of models. Tools are constructed by generating source code from the meta model of the visual language, which is written in the very high level programming language PROGRES. The source code is integrated into a framework that is responsible for the invocation of commands and the visualization of graphs. As a case study, a visual language for modeling development processes is introduced together with its formal meta model. The paper shows how a process management tool based on this meta model is generated and reports on our experiences with this approach.
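
A toy illustration of the graph-rewriting flavour of such meta models (purely illustrative, not PROGRES): a rule matches nodes of a process graph and rewrites their attributes, the kind of high-level operation a generated process management tool would offer.

    # Purely illustrative rewriting rule over a small attributed "process graph".
    def apply_rule(graph, match, rewrite):
        """Apply a node-rewriting rule to every matching node of the graph."""
        return {node: (rewrite(attrs) if match(attrs) else attrs)
                for node, attrs in graph.items()}

    process = {
        "design":  {"kind": "task", "state": "planned"},
        "review":  {"kind": "task", "state": "planned"},
        "release": {"kind": "milestone", "state": "pending"},
    }

    started = apply_rule(process,
                         match=lambda a: a["kind"] == "task" and a["state"] == "planned",
                         rewrite=lambda a: {**a, "state": "active"})
    print(started["design"]["state"])   # active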

18.
Prototype classifiers are a type of pattern classifier in which a number of prototypes are designed for each class so that they act as representatives of the patterns of that class. Prototype classifiers are considered among the simplest and best-performing approaches to classification problems. However, they need careful positioning of the prototypes to capture the distribution of each class region and/or to define the class boundaries. Standard methods, such as learning vector quantization (LVQ), are sensitive to the initial choice of the number and locations of the prototypes and to the learning rate. In this article, a new prototype classification method is proposed, namely self-generating prototypes (SGP). The main advantage of this method is that both the number of prototypes and their locations are learned from the training set without much human intervention. The proposed method is compared with other prototype classifiers such as LVQ, the self-generating neural tree (SGNT) and K-nearest neighbor (K-NN), as well as Gaussian mixture model (GMM) classifiers. In our experiments, SGP achieved the best performance on many measures, such as training speed and test or classification speed. Concerning the number of prototypes and test classification accuracy, it was considerably better than the other methods and about equal on average to the GMM classifiers. We also ran the SGP method on the well-known STATLOG benchmark, where it beat all 21 other methods (prototype and non-prototype methods) in classification accuracy.
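
For contrast with SGP, the baseline idea of positioning prototypes can be sketched with an LVQ1-style update and nearest-prototype classification; the data, learning rate and initial prototypes are illustrative, and this is not the SGP algorithm.

    # Baseline sketch (LVQ1-style, not SGP): prototypes are pulled towards
    # correctly classified training points and pushed away from misclassified
    # ones; classification is by the nearest prototype.
    import numpy as np

    def lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        P = prototypes.copy()
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                k = np.argmin(np.linalg.norm(P - X[i], axis=1))   # winning prototype
                sign = 1.0 if proto_labels[k] == y[i] else -1.0
                P[k] += sign * lr * (X[i] - P[k])
        return P

    def predict(P, proto_labels, x):
        return proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]

    X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
    y = np.array([0, 0, 1, 1])
    labels = np.array([0, 1])
    P = lvq1(X, y, prototypes=np.array([[0.5, 0.0], [0.5, 1.0]]), proto_labels=labels)
    print(predict(P, labels, np.array([0.1, 0.0])))   # 0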

19.
Nonlocality is an important resource for quantum information processing. Tripartite nonlocality is more difficult to produce in experiments than bipartite nonlocality. In this paper, we analyze a simple setting for generating tripartite nonlocality from two classes of bipartite resources, namely two-qubit entangled pure states and Werner states. Upper bounds on the tripartite nonlocality, characterized by the maximal violation of Svetlichny inequalities, are given, and the optimal measurements achieving these bounds are provided.

20.
Generating Instances of the Propositional Satisfiability Problem from First-Order Logic Formulas
黄拙, 张健 (Huang Zhuo, Zhang Jian). 《软件学报》 (Journal of Software), 2005, 16(3): 327–335
The propositional satisfiability (SAT) problem is an important problem in computer science. In recent years many researchers have studied it extensively and proposed a number of effective algorithms. However, many practical problems are more naturally described by a set of first-order logic formulas. When the domain of interpretation is a finite set of fixed size, the satisfiability problem for first-order logic formulas can be equivalently reduced to a SAT problem. To take advantage of existing high-performance SAT tools, this paper proposes an algorithm for generating SAT instances from first-order logic formulas, describes an automatic translation tool, and reports the corresponding experimental results. Some methods for eliminating isomorphism by adding formulas, and thereby reducing the search space, are also discussed. Experiments show that the algorithm is effective and can be used to solve many problems arising in mathematical research and practical applications.
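
A minimal grounding sketch of the reduction described here: over a finite domain of size n, a universally quantified formula becomes a conjunction over all instantiations and an existential becomes a disjunctive clause, with DIMACS-style variable numbers for the ground atoms. The formula ∀x∃y R(x, y) and the variable encoding are illustrative, not the paper's tool.

    # Illustrative grounding of forall x exists y R(x, y) over a finite domain
    # of size n into propositional clauses in DIMACS format.
    def var(x, y, n):
        return x * n + y + 1                 # propositional variable for atom R(x, y)

    def ground_forall_exists(n):
        return [[var(x, y, n) for y in range(n)] for x in range(n)]

    def to_dimacs(clauses, num_vars):
        lines = [f"p cnf {num_vars} {len(clauses)}"]
        lines += [" ".join(map(str, c)) + " 0" for c in clauses]
        return "\n".join(lines)

    n = 3
    clauses = ground_forall_exists(n)
    print(to_dimacs(clauses, n * n))
    # p cnf 9 3
    # 1 2 3 0
    # 4 5 6 0
    # 7 8 9 0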
