Similar Documents
20 similar documents found.
1.
In this paper, two new approaches for handling multiple tasks in redundant manipulators based on predefined allocated priorities are proposed. The first approach is an integrated scheme which employs null-space base vectors for handling prioritized tasks. Its basic features are a clear task and null-space representation, better execution of the lower-priority tasks, and an intuitive formulation. The second approach aims to improve the performance of all the prioritized tasks, especially near algorithmic singularities, while retaining a clear representation of the null-space dynamics. This approach can be considered a modification and extension of the Reverse Priority (RP) algorithm at the acceleration level. The commands related to each task are added to each other following the reverse order of priorities and through suitable projectors. The projector definition is given using a minimal representation of the null-space. The clear null-space dynamics in the proposed methods facilitate the stability analysis. A detailed evaluation by means of computer simulation in various cases is reported. Task accomplishment using the proposed approaches is compared with the classic method. The results, in general, show higher performance and accuracy of the tasks by the proposed approaches.
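As a rough illustration of the kind of task-priority resolution discussed here, the sketch below implements the classic velocity-level null-space projection scheme that the paper compares against, not the authors' RP-based method; the Jacobians, task dimensions, and damping value are made-up placeholders.

```python
import numpy as np

def prioritized_joint_velocities(J1, dx1, J2, dx2, damping=1e-6):
    """Classic two-task priority scheme (velocity level):
    the secondary task acts only in the null space of the primary one."""
    J1_pinv = np.linalg.pinv(J1, rcond=damping)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector of task 1
    # Secondary task solved in the restricted space (Maciejewski-Klein form)
    J2_bar = J2 @ N1
    dq = J1_pinv @ dx1 + np.linalg.pinv(J2_bar, rcond=damping) @ (dx2 - J2 @ J1_pinv @ dx1)
    return dq

# Made-up example: a 4-DOF arm, a 2-D primary task and a 1-D secondary task
rng = np.random.default_rng(0)
J1 = rng.normal(size=(2, 4))
J2 = rng.normal(size=(1, 4))
dq = prioritized_joint_velocities(J1, np.array([0.1, -0.2]), J2, np.array([0.05]))
print(dq)
```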

2.
Although many techniques for merging conflicting propositional knowledge bases have already been proposed, most existing work is based on the idea that inconsistency results from the presence of incorrect pieces of information, which should be identified and removed. In contrast, we take the view in this paper that conflicts are often caused by statements that are inaccurate rather than completely false, suggesting to restore consistency by interpreting certain statements in a flexible way, rather than ignoring them completely. In accordance with this view, we propose a novel approach to merging which exploits extra-logical background information about the semantic relatedness of atomic propositions. Several merging operators are presented, which are based on different formalizations of this background knowledge, ranging from purely qualitative approaches, related to possibilistic logic, to quantitative approaches with a probabilistic flavor. Both syntactic and semantic characterizations are provided for each merging operator, and the computational complexity is analyzed.

3.
In this paper, a fairly general framework for reasoning from inconsistent propositional bases is defined. Variable forgetting is used as a basic operation for weakening pieces of information so as to restore consistency. The key notion is that of recoveries, which are sets of variables whose forgetting enables restoring consistency. Several criteria for defining preferred recoveries are proposed, depending on whether the focus is laid on the relative relevance of the atoms or the relative entrenchment of the pieces of information (or both). Our framework encompasses several previous approaches as specific cases, including reasoning from preferred consistent subsets, and some forms of information merging. Interestingly, the gain in flexibility and generality offered by our framework does not imply a complexity shift compared to these specific cases.
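To make the basic operation concrete: forgetting a variable p in a formula φ yields φ[p/⊤] ∨ φ[p/⊥]. The toy sketch below shows only this semantics using sympy, not the paper's recovery-selection machinery; the example base is hypothetical.

```python
from sympy import symbols
from sympy.logic.boolalg import Or, simplify_logic

def forget(phi, p):
    """Forget variable p in phi: phi[p/True] | phi[p/False]."""
    return simplify_logic(Or(phi.subs(p, True), phi.subs(p, False)))

p, q = symbols('p q')
# An inconsistent pair of beliefs: (p & q) together with (~p & q)
base = (p & q) & (~p & q)
print(simplify_logic(base))                   # False: the base is inconsistent
# Forgetting p in each piece restores consistency, keeping what is known about q
print(forget(p & q, p) & forget(~p & q, p))  # q
```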

4.
We propose an adaptive approach to merging possibilistic knowledge bases that deploys multiple operators instead of a single operator in the merging process. The merging approach consists of two steps: the splitting step and the combination step. The splitting step splits each knowledge base into two subbases and then in the second step, different classes of subbases are combined using different operators. Our merging approach is applied to knowledge bases which are self-consistent and results in a knowledge base which is also consistent. Two operators are proposed based on two different splitting methods. Both operators result in a possibilistic knowledge base which contains more information than that obtained by the t-conorm (such as the maximum) based merging methods. In the flat case, one of the operators provides a good alternative to syntax-based merging operators in classical logic. This paper is a revised and extended version of [36].

5.
This paper discusses an automated process of merging conflicting information from disparate sources into a combined knowledge base. The algorithm provided generates a mathematically consistent, majority-rule merging by assigning weights to the various sources. The sources may be either conflicting portions of a single knowledge base or multiple knowledge bases. Particular attention is paid to maintaining the original rule format of the knowledge while ensuring logical equivalence. This preservation of rule format keeps the knowledge in a more intuitive implication form, as opposed to a collection of clauses with many possible logical roots. It also facilitates tracking the support for each deductive result, so that final knowledge in rule form can be ascribed back to the original experts. As the approach is fairly involved mathematically, an automated procedure is developed.
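A minimal sketch of weighted majority-rule merging in the spirit described above, assuming a simplified setting where each source assigns truth values to atoms and carries a reliability weight; the sources, weights, and atom names are invented, and the paper's actual algorithm (which preserves rule format) is more involved.

```python
from collections import defaultdict

def majority_merge(sources):
    """Weighted majority vote per atom.
    sources: list of (weight, {atom: truth_value}) pairs from different experts."""
    tally = defaultdict(float)
    for weight, beliefs in sources:
        for atom, value in beliefs.items():
            tally[atom] += weight if value else -weight
    # Positive tally -> keep atom true; negative -> false; zero -> undecided
    return {atom: (s > 0) for atom, s in tally.items() if s != 0}

# Hypothetical experts with reliability weights, conflicting on one statement
sources = [
    (0.5, {'fever_indicates_flu': True,  'rash_indicates_measles': True}),
    (0.3, {'fever_indicates_flu': False}),
    (0.4, {'fever_indicates_flu': True}),
]
print(majority_merge(sources))  # fever_indicates_flu kept: weight 0.9 vs 0.3
```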

6.
There are relatively few proposals for inconsistency measures for propositional belief bases. However inconsistency measures are potentially as important as information measures for artificial intelligence, and more generally for computer science. In particular, they can be useful to define various operators for belief revision, belief merging, and negotiation. The measures that have been proposed so far can be split into two classes. The first class of measures takes into account the number of formulae required to produce an inconsistency: the more formulae required to produce an inconsistency, the less inconsistent the base. The second class takes into account the proportion of the language that is affected by the inconsistency: the more propositional variables affected, the more inconsistent the base. Both approaches are sensible, but there is no proposal for combining them. We address this need in this paper: our proposal takes into account both the number of variables affected by the inconsistency and the distribution of the inconsistency among the formulae of the base. Our idea is to use existing inconsistency measures in order to define a game in coalitional form, and then to use the Shapley value to obtain an inconsistency measure that indicates the responsibility/contribution of each formula to the overall inconsistency in the base. This allows us to provide a more reliable image of the belief base and of the inconsistency in it.
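The construction can be sketched concretely. Below is a brute-force Shapley-style inconsistency value, using as the underlying coalitional game the drastic measure v(S) = 1 iff the subset S is inconsistent; the choice of base measure and the toy formulas are assumptions, not the paper's exact setup.

```python
import math
from itertools import permutations, product

def consistent(formulas, variables):
    """Brute-force satisfiability check over the listed propositional variables."""
    return any(all(f(dict(zip(variables, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(variables)))

def shapley_inconsistency(formulas, variables):
    """Shapley value of the game v(S) = 1 iff the subset S is inconsistent."""
    n = len(formulas)
    value = [0.0] * n
    for perm in permutations(range(n)):
        coalition = []
        for i in perm:
            before = 0 if consistent([formulas[j] for j in coalition], variables) else 1
            coalition.append(i)
            after = 0 if consistent([formulas[j] for j in coalition], variables) else 1
            value[i] += after - before  # marginal contribution to inconsistency
    return [v / math.factorial(n) for v in value]

# Toy base {a, ~a, b}: the blame should fall on a and ~a only
formulas = [lambda m: m['a'], lambda m: not m['a'], lambda m: m['b']]
print(shapley_inconsistency(formulas, ['a', 'b']))  # [0.5, 0.5, 0.0]
```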

7.
The need to merge multiple sources of uncertain information is an important issue in many application areas, particularly when there is potential for contradictions between sources. Possibility theory offers a flexible framework to represent, and reason with, uncertain information, and there is a range of merging operators, such as the conjunctive and disjunctive operators, for combining information. However, with the proposals to date, the context of the information to be merged is largely ignored during the process of selecting which merging operators to use. To address this shortcoming, in this paper, we propose an adaptive merging algorithm which selects largely partially maximal consistent subsets of sources, which can be merged through the relaxation of the conjunctive operator, by assessing the coherence of the information in each subset. In this way, a fusion process can integrate both conjunctive and disjunctive operators in a more flexible manner and thereby be more context dependent. A comparison with related merging methods shows how our algorithm can produce a more consensual result.

8.
The general context of this work is the problem of merging data provided by several sources which can be contradictory. Focusing on the case when the information sources do not contain any disjunction, this paper first defines a propositional modal logic for reasoning with data obtained by merging several information sources according to a majority approach. It then defines a theorem prover to automatically deduce these merged data. Finally, it shows how to use this prover to implement a query evaluator which answers queries addressed to several databases. This evaluator is such that the answer to a query is the one that could be computed by a classical evaluator if the query were addressed to the merged databases. The databases we consider are made of an extensional part, i.e. a set of positive or negative ground literals, and an intensional part, i.e. a set of first-order function-free clauses. A restriction is imposed on these databases in order to avoid disjunctive data.

9.
An Information Retrieval Method Based on Data Fusion and Relevance Feedback
Wang Fei (王非). Journal of Computer Applications (《计算机应用》), 2008, 28(9): 2321-2323
Data fusion and relevance-feedback-based query expansion are two effective techniques for optimizing the retrieval process. The former improves retrieval performance by integrating multiple retrieval result lists; the latter runs the query several times, modifying or expanding the user query according to the previous results so as to better reflect the user's information need. Building on a combination of data fusion and query expansion, a retrieval optimization method called HQD is proposed: multiple alternative queries are generated from the relevance feedback results, these alternative queries are then run, and the final retrieval result is produced using a sum-of-cosines method. The HQD method can effectively improve retrieval performance.
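A small sketch of the fusion step as described: documents are scored against several alternative queries and the cosine similarities are summed, CombSUM-style. The term-frequency vectors and query variants are made up, and the full HQD pipeline (feedback-driven query generation) is not reproduced.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sum_of_cosines_rank(doc_vectors, alt_queries):
    """Score each document by the sum of its cosine similarities to every
    alternative query, then rank (CombSUM-style fusion)."""
    scores = [sum(cosine(d, q) for q in alt_queries) for d in doc_vectors]
    order = sorted(range(len(doc_vectors)), key=lambda i: scores[i], reverse=True)
    return order, scores

# Made-up term-frequency vectors over a 4-term vocabulary
docs = np.array([[2, 0, 1, 0], [0, 3, 0, 1], [1, 1, 1, 1]], dtype=float)
# Original query plus two feedback-expanded variants (hypothetical)
queries = np.array([[1, 0, 1, 0], [1, 0, 1, 1], [0, 0, 1, 1]], dtype=float)
ranking, scores = sum_of_cosines_rank(docs, queries)
print(ranking, [round(s, 3) for s in scores])
```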

10.
When conjunctively merging two belief functions concerning a single variable but coming from different sources, Dempster's rule of combination is justified only when the information sources can be considered independent. When dependencies between sources are ill-known, it is usual to require the property of idempotence for the merging of belief functions, as this property captures the possible redundancy of dependent sources. To study idempotent merging, different strategies can be followed. One strategy is to rely on idempotent rules used in either more general or more specific frameworks and to study, respectively, their particularization or extension to belief functions. In this paper, we study the feasibility of extending the idempotent fusion rule of possibility theory (the minimum) to belief functions. We first investigate how comparisons of information content, in the form of inclusion and least commitment, can be exploited to relate idempotent merging in possibility theory to evidence theory. We reach the conclusion that unless we accept the idea that the result of the fusion process can be a family of belief functions, such an extension is not always possible. As handling such families seems impractical, we then turn our attention to a more quantitative criterion and consider those combinations that maximize the expected cardinality of the joint belief function, among the least committed ones, taking advantage of the fact that the expected cardinality of a belief function depends only on its contour function.
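The closing observation is easy to check numerically: the expected cardinality Σ_A m(A)|A| of a belief function equals the sum of the values of its contour function pl({ω}). A minimal sketch on a made-up mass function:

```python
from itertools import chain

def contour(mass):
    """Contour function: pl({w}) = sum of masses of focal sets containing w."""
    frame = set(chain.from_iterable(mass))
    return {w: sum(m for A, m in mass.items() if w in A) for w in frame}

def expected_cardinality(mass):
    """Expected cardinality: sum over focal sets of m(A) * |A|."""
    return sum(m * len(A) for A, m in mass.items())

# A made-up mass function on the frame {a, b, c}; focal sets as frozensets
m = {frozenset('ab'): 0.5, frozenset('c'): 0.3, frozenset('abc'): 0.2}
pl = contour(m)
print(pl)                       # {'a': 0.7, 'b': 0.7, 'c': 0.5}
print(expected_cardinality(m))  # 1.9
print(sum(pl.values()))         # 1.9 -- same value, as the abstract states
```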

11.
One of the important topics in knowledge base revision is to introduce an efficient implementation algorithm. Algebraic approaches have good characteristics and implementation methods, and may be a suitable choice for this problem. This paper presents an algebraic approach to revising propositional rule-based knowledge bases. First, a way is introduced to transform a propositional rule-based knowledge base into a Petri net: the knowledge base is represented by the net, and the facts are represented by the initial marking. The consistency check of a knowledge base is thus equivalent to the reachability problem of Petri nets. The reachability of a Petri net can be decided by whether its state equation has a solution; hence the consistency check can also be implemented algebraically. Furthermore, algorithms are introduced to revise a propositional rule-based knowledge base, as well as extended logic programs. Compared with related works, the algorithms presented in the paper are efficient, and their time complexities are polynomial.
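A toy sketch of the reduction described above, under a deliberately simplistic encoding in which each rule p → q becomes a transition that consumes a token from place p and produces one in place q (real encodings typically keep the premise via self-loops). The state equation M' = M0 + C·x then gives a necessary condition for reachability, checked here by brute force over small firing-count vectors.

```python
import numpy as np
from itertools import product

# Hypothetical rule base {p -> q, q -> r} over places [p, q, r].
# Incidence matrix C[place, transition] = produced - consumed.
C = np.array([[-1,  0],    # p: consumed by rule 1
              [ 1, -1],    # q: produced by rule 1, consumed by rule 2
              [ 0,  1]])   # r: produced by rule 2

def state_equation_reachable(M0, M_goal, C, max_firings=3):
    """Necessary condition for reachability: M_goal = M0 + C @ x for some
    nonnegative integer firing-count vector x (found here by brute force)."""
    n_transitions = C.shape[1]
    for x in product(range(max_firings + 1), repeat=n_transitions):
        if np.array_equal(M0 + C @ np.array(x), M_goal):
            return True, x
    return False, None

M0 = np.array([1, 0, 0])   # fact: p holds initially
ok, x = state_equation_reachable(M0, np.array([0, 0, 1]), C)
print(ok, x)               # True, (1, 1): fire both rules once to derive r
```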

12.
Fusion is a popular practice to combine multiple sources of biometric information to achieve systems with greater performance and flexibility. In this paper various approaches to fusion within a multibiometrics context are considered and an application to the fusion of 2D and 3D face information is discussed. An optimal method for fusing the accept/reject decisions of individual biometric sources by means of simple logical rules is presented. Experimental results on the FRGC 2D and 3D face data show that the proposed technique performs effectively without the need for score normalization.
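A hedged sketch of decision-level fusion by simple logical rules: every boolean fusion rule over two matchers' accept/reject decisions is enumerated, and the one minimizing a weighted error on labeled samples is kept. The samples and costs are invented, and the paper's optimality criterion may differ.

```python
from itertools import product

def optimal_decision_rule(samples, far_cost=1.0, frr_cost=1.0):
    """Pick the boolean fusion rule over two matchers' accept/reject decisions
    that minimizes a weighted error on labeled samples.
    samples: list of ((d1, d2), is_genuine) pairs."""
    best_rule, best_cost = None, float('inf')
    inputs = list(product([False, True], repeat=2))
    # A rule is any truth table over the four input combinations (16 rules)
    for outputs in product([False, True], repeat=4):
        rule = dict(zip(inputs, outputs))
        cost = 0.0
        for (d1, d2), genuine in samples:
            accept = rule[(d1, d2)]
            if accept and not genuine:
                cost += far_cost      # false accept
            elif not accept and genuine:
                cost += frr_cost      # false reject
        if cost < best_cost:
            best_rule, best_cost = rule, cost
    return best_rule, best_cost

# Made-up labeled decisions from a 2D and a 3D face matcher
samples = [((True, True), True), ((True, False), True),
           ((False, True), False), ((False, False), False)]
rule, cost = optimal_decision_rule(samples, far_cost=2.0)
print(rule, cost)
```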

13.
In real-world applications, information is often provided by multiple sources having different priority levels, reflecting for instance their reliability. This paper investigates "Prioritized Removed Sets Revision" (PRSR) for revising stratified DL-Lite knowledge bases when a new sure piece of information, called the input, is added. The revision strategy is based on inconsistency minimization and consists in determining the smallest subsets of assertions (prioritized removed sets) that should be dropped from the current stratified knowledge base in order to restore consistency and accept the input. We consider different forms of input: a membership assertion, or a positive or negative inclusion axiom. To characterize our revision approach, we first rephrase Hansson's postulates for belief base revision within a DL-Lite setting; we then give logical properties of the PRSR operators. In some situations, the revision process leads to several possible revised knowledge bases, where defining a selection function is required to keep results within the DL-Lite fragment. The last part of the paper shows how to use the notion of hitting set to compute the PRSR outcome. We also study the complexity of PRSR operators and show that, in some cases, the result can be computed in polynomial time.
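The hitting-set computation mentioned at the end can be sketched as follows; the conflicts are given here as made-up sets of assertions, and the prioritized (stratified) bookkeeping of PRSR is omitted.

```python
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    """Brute force: smallest sets of assertions intersecting every conflict."""
    universe = sorted(set(chain.from_iterable(conflicts)))
    for size in range(len(universe) + 1):
        hits = [set(c) for c in combinations(universe, size)
                if all(set(c) & conflict for conflict in conflicts)]
        if hits:
            return hits   # all minimum-cardinality hitting sets
    return []

# Hypothetical assertional conflicts detected against the new input
conflicts = [{'A(a)', 'B(a)'}, {'B(a)', 'C(b)'}]
print(minimal_hitting_sets(conflicts))  # [{'B(a)'}]: drop B(a) to restore consistency
```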

14.
Multibiometric systems fuse information from different sources to compensate for the limitations in performance of individual matchers. We propose a framework for the optimal combination of match scores that is based on the likelihood ratio test. The distributions of genuine and impostor match scores are modeled as finite Gaussian mixture models. The proposed fusion approach is general in its ability to handle 1) discrete values in biometric match score distributions, 2) arbitrary scales and distributions of match scores, 3) correlation between the scores of multiple matchers, and 4) sample quality of multiple biometric sources. Experiments on three multibiometric databases indicate that the proposed fusion framework achieves consistently high performance compared to commonly used score fusion techniques based on score transformation and classification.
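A minimal sketch of likelihood-ratio score fusion with finite Gaussian mixtures, using scikit-learn; the synthetic score distributions, mixture sizes, and acceptance threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-matcher score vectors (rows: samples, columns: matchers)
genuine  = rng.normal(loc=[0.8, 0.7], scale=0.1,  size=(500, 2))
impostor = rng.normal(loc=[0.3, 0.4], scale=0.15, size=(500, 2))

# Model each class's joint score density with a finite Gaussian mixture
gmm_gen = GaussianMixture(n_components=2, random_state=0).fit(genuine)
gmm_imp = GaussianMixture(n_components=2, random_state=0).fit(impostor)

def log_likelihood_ratio(scores):
    """Fused score: log p(scores | genuine) - log p(scores | impostor)."""
    return gmm_gen.score_samples(scores) - gmm_imp.score_samples(scores)

probe = np.array([[0.75, 0.65], [0.35, 0.45]])
llr = log_likelihood_ratio(probe)
print(llr > 0.0)   # accept when the log-likelihood ratio exceeds the threshold
```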

15.
A multimodal biometric system that alleviates the limitations of unimodal biometric systems by fusing information from the respective biometric sources is developed. A general approach is proposed for fusion at the score level, combining the scores from multiple biometrics using triangular norms (t-norms) due to Hamacher, Yager, Frank, and Schweizer and Sklar, as well as the Einstein product. This study aims at tapping the potential of t-norms for multimodal biometrics. The proposed approach renders very good performance: it is computationally fast and outperforms score-level fusion using combination approaches (min, mean, and sum) and classification approaches such as SVM, logistic linear regression, and MLP. The experimental evaluation on three databases confirms the effectiveness of score-level fusion using t-norms.
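For illustration, a few of the named t-norms are easy to implement directly; the formulas below are the standard Hamacher product, Einstein product, and Frank family, applied to two made-up normalized match scores.

```python
import math

def hamacher(a, b):
    """Hamacher product t-norm: ab / (a + b - ab), with T(0, 0) = 0."""
    return 0.0 if a == b == 0 else a * b / (a + b - a * b)

def einstein(a, b):
    """Einstein product t-norm: ab / (2 - (a + b - ab))."""
    return a * b / (2 - (a + b - a * b))

def frank(a, b, s=2.0):
    """Frank t-norm with parameter s > 0, s != 1."""
    return math.log(1 + (s**a - 1) * (s**b - 1) / (s - 1), s)

# Normalized match scores from two hypothetical modalities
a, b = 0.9, 0.6
for name, t in [('Hamacher', hamacher), ('Einstein', einstein), ('Frank', frank)]:
    print(f"{name:8s} fused score: {t(a, b):.3f}")
```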

16.
Belief merging has been an active research field with many important applications. The major approaches to belief merging problems, considered as arbitration processes, are based on the construction of total pre-orders of alternatives using distance functions and aggregation functions. However, these approaches require that all belief bases be provided explicitly, and the role of the agents who provide the belief bases is not adequately considered. The results are therefore merely ideal and difficult to apply in multi-agent systems. In this paper, we approach the merging problem from another point of view: we treat a belief merging problem as a game, in which rational agents participate in a negotiation process to find a jointly consistent consensus, trying to preserve as many important original beliefs as possible. To this end, a model of negotiation for belief merging is presented, a set of rational and intuitive postulates to characterize the belief merging operators is proposed, and a representation theorem is given.

17.
The number of clinical trial reports is increasing rapidly due to the large number of clinical trials being conducted; this raises an urgent need to utilize the clinical knowledge contained in these reports. In this paper, we focus on qualitative rather than quantitative knowledge. More precisely, we aim to model and reason with qualitative comparison (QC) relations, which consider qualitatively how strongly one drug/therapy is preferred to another from a clinical point of view. To this end, we first formalize the QC relations and introduce the notions of QC language, QC base, and QC profile; second, we propose a set of induction rules for the QC relations, provide grading interpretations for the QC bases, and show how to determine whether a QC base is consistent. Furthermore, when a QC base is inconsistent, we analyze how to measure inconsistencies among QC bases, and we propose different approaches to merging multiple QC bases. Finally, a case study on lowering intraocular pressure is conducted to illustrate our approaches.

18.
19.
An Update Algorithm for a Class of Propositional Knowledge Bases
This paper introduces the basic concepts of knowledge base updating and the current state of research on the complexity of updating propositional knowledge bases. In recent years, scholars have proposed many methods for updating propositional knowledge bases: one class is formula-based and another is model-based, but all of these methods are in general intractable. Motivated by practical applications, this paper proposes an update method for a special case and gives a corresponding polynomial-time algorithm.

20.